Artificial intelligence is considered a key enabler for realizing a more efficient future air traffic management system. As the automation designed to support us grows more sophisticated and complex, however, our understanding of it tends to suffer. Recent research has addressed this issue in two ways: through increased personalization or through increased automation transparency. This paper overviews recent work in these two areas: strategic conformance (i.e., personalization) and automation transparency (e.g., explainable artificial intelligence and machine learning interpretability). We discuss how to achieve, and how to balance, conformance and transparency in the context of a machine learning system for conflict detection and resolution in air traffic control. In the MAHALO project, we aim to build and empirically evaluate a personalized and transparent decision support system by combining supervised and reinforcement learning approaches. We believe that such a system can strive for optimal performance while accommodating individual differences. By knowing the individual's preferences, the system would be able to afford transparency by explaining both why it suggests a solution that deviates from the individual's and why that solution is considered better.