Adaptive and personalized systems have become pervasive technologies that play an increasingly important role in our daily lives. Indeed, we are now used to interacting every day with algorithms that help us in several scenarios, ranging from services that suggest music to listen to or movies to watch, to personal assistants able to proactively support us in complex decision-making tasks. As the importance of such technologies in our everyday lives grows, it is fundamental that the internal mechanisms guiding these algorithms are as clear as possible. Unfortunately, current research tends to go in the opposite direction: most approaches try to maximize the effectiveness of the personalization strategy (e.g., recommendation accuracy) at the expense of the explainability and transparency of the model. The main research question that arises from this scenario is simple and straightforward: how can we deal with this dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? The workshop aims to provide a forum for discussing such problems, challenges, and innovative research approaches in the area, by investigating the role of transparency and explainability in recent methodologies for building user models and for developing personalized and adaptive systems.