Effective explanations of recommendations
2007, Proceedings of the 2007 ACM conference on Recommender systems
https://doi.org/10.1145/1297231.1297259
4 pages
Abstract
This paper characterizes general properties of useful, or effective, explanations of recommendations. It describes a methodology based on focus groups, in which we elicit what helps moviegoers decide whether or not they would like a movie. Our results highlight the importance of personalizing explanations to the individual user, as well as considering the source of recommendations, user mood, the effects of group viewing, and the effect of explanations on user expectations.
Related papers
2013
Recommender systems are software tools that supply users with suggestions for items to buy. However, many recommender systems function as black boxes, providing no transparency or information about how their internal parts work. Explanations are therefore used to show why a specific recommendation was made. The importance of explanations has been demonstrated in a number of fields, such as expert systems, decision support systems, intelligent tutoring systems and data explanation systems. Failing to generate a suitable explanation may degrade the performance of recommender systems, their applicability and, eventually, their value for monetization. Our goal in this paper is to provide a comprehensive review of the main research fields of explanations in recommender systems, along with suitable examples from the literature. Open challenges in the field are also identified. The results show that most work in the field focuses on the set of characteristics that can be associated with explanations: transparency, validity, scrutability, trust, relevance, persuasiveness, comprehensibility, effectiveness, efficiency, satisfaction and education. All of these characteristics can increase the system's trustworthiness. Other research areas include explanation interfaces, over- and underestimation, and decision making.
2021
In this paper, we shed light on two important design choices in explainable recommender systems (RS), namely explanation focus and explanation level of detail. We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations of the input (user model) and output (recommendations), with three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end-users. We conducted a within-subject study to investigate the relationship between explanation focus and explanation level of detail, and the effects of these two variables on the perception of the explainable RS with regard to different explanation aims. Our results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation focus. Consequently, we provide some suggestions to support the effective design of explanations in RS.
Proceedings of the fourth ACM conference on Recommender systems - RecSys '10, 2010
Recommender systems are intended to assist consumers by making choices from a large scope of items. While most recommender research focuses on improving the accuracy of recommender algorithms, this paper stresses the role of explanations for recommended items for gaining acceptance and trust. Specifically, we present a method which is capable of providing detailed explanations of recommendations while exhibiting reasonable prediction accuracy. The method models the users' ratings as a function of their utility part-worths for those item attributes which influence the users' evaluation behavior, with part-worth being estimated through a set of auxiliary regressions and constrained optimization of their results. We provide evidence that under certain conditions the proposed method is superior to established recommender approaches not only regarding its ability to provide detailed explanations but also in terms of prediction accuracy. We further show that a hybrid recommendation algorithm can rely on the content-based component for a majority of the users, switching to collaborative recommendation only for about one third of the user base.
Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization
In this paper, we shed light on explaining user models for transparent recommendation while considering user personal characteristics. To this end, we developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides interactive, layered explanations of the user model with three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end-users. We conducted a within-subject study (N=31) to investigate the relationship between personal characteristics and the explanation level of detail, and the effects of these two variables on the perception of the explainable recommender system with regard to different explanation goals. Based on the study results, we provided some suggestions to support the effective design of user model explanations for transparent recommendation. CCS CONCEPTS • Human-centered computing → Interactive systems and tools; • Computing methodologies → Artificial intelligence.
2017
This report discusses explanations in the domain of recommender systems: a review of the research papers in the domain, the different explanation interfaces and evaluation criteria, our vision for this domain, and its application to the e-learning project "METAL".
User Modeling and User-Adapted Interaction, 2012
Research on recommender systems typically focuses on the accuracy of prediction algorithms. Because accuracy only partially constitutes the user experience of a recommender system, this paper proposes a framework that takes a user-centric approach to recommender system evaluation. The framework links objective system aspects to objective user behavior through a series of perceptual and evaluative constructs (called subjective system aspects and experience, respectively). Furthermore, it incorporates the influence of personal and situational characteristics on the user experience. This paper reviews how current literature maps to the framework and identifies several gaps in existing work. Consequently, the framework is validated using Structural Equation Modeling. The results of these studies show that subjective system aspects and experience variables are invaluable in explaining why and how the user experience of recommender systems comes about. In all studies we observe that perceptions of recommendation quality and/or variety are important mediators in predicting the effects of objective system aspects on the three components of user experience: process (e.g. perceived effort, difficulty), system (e.g. perceived system effectiveness) and outcome (e.g. choice satisfaction). Furthermore, we find that these subjective aspects have strong and sometimes interesting behavioral correlates (e.g. reduced browsing indicates higher system effectiveness). They also show several tradeoffs between system aspects and personal and situational characteristics (e.g. the amount of preference feedback users provide is a tradeoff between perceived system usefulness and privacy concerns). These results, as well as the validated framework itself, provide a platform for future research on the user-centric evaluation of recommender systems.
Proceedings of the 24th International Conference on Intelligent User Interfaces, 2019
Recommender systems have become pervasive on the web, shaping the way users see information and thus the decisions they make. As these systems get more complex, there is a growing need for transparency. In this paper, we study the problem of generating and visualizing personalized explanations for hybrid recommender systems, which incorporate many different data sources. We build upon a hybrid probabilistic graphical model and develop an approach to generate real-time recommendations along with personalized explanations. To study the benefits of explanations for hybrid recommender systems, we conduct a crowd-sourced user study where our system generates personalized recommendations and explanations for real users of the last.fm music platform. We experiment with 1) different explanation styles (e.g., user-based, item-based), 2) manipulating the number of explanation styles presented, and 3) manipulating the presentation format (e.g., textual vs. visual). We apply a mixed model statistical analysis to consider user personality traits as a control variable and demonstrate the usefulness of our approach in creating personalized hybrid explanations with different style, number, and format. CCS CONCEPTS • Information systems → Decision support systems; Collaborative filtering; • Human-centered computing → Social networking sites; Empirical studies in visualization.
ACM Transactions on Interactive Intelligent Systems, 2020
Recommender systems are ubiquitous and shape the way users access information and make decisions. As these systems become more complex, there is a growing need for transparency and interpretability. In this article, we study the problem of generating and visualizing personalized explanations for recommender systems that incorporate signals from many different data sources. We use a flexible, extendable probabilistic programming approach and show how we can generate real-time personalized recommendations. We then turn these personalized recommendations into explanations. We perform an extensive user study to evaluate the benefits of explanations for hybrid recommender systems. We conduct a crowd-sourced user study where our system generates personalized recommendations and explanations for real users of the last.fm music platform. First, we evaluate the performance of the recommendations in terms of perceived accuracy and novelty. Next, we experiment with (1) different explanation styles…
References (13)
- Van Barneveld, J. and Van Setten, M. Personalized digital television, chapter 10, 259-285. Kluwer, 2004.
- Bilgic, M. and Mooney, R.J. Explaining recommendations: Satisfaction vs. promotion. Beyond Personalization Workshop, IUI, 2005.
- Chen, L. and Pu, P. Trust building in recommender agents. WPRSIUI workshop, 2002.
- Carenini, G. and Moore, D.J. An empirical study of the influence of user tailoring on evaluative argument effectiveness. IJCAI, 2001.
- Czarkowski, M. A Scrutable Adaptive Hypertext. PhD thesis, University of Sydney, 2006.
- Herlocker, J. L., Konstan, J. A. and Riedl, J. Explaining collaborative filtering recommendations. In CSCW, 2000.
- Hingston, M. User friendly recommender systems. Honours thesis, University of Sydney, 2006.
- McNee, S.M., Lam, S.K., Konstan, J.A. and Riedl, J. Interfaces for eliciting new user preferences in recommender systems. User Modeling, 178-187, 2003.
- McNee, S.M., Riedl, J. and Konstan, J.A. Being accurate is not enough: How accuracy metrics have hurt recommender systems. Extended Abstracts, CHI, 2006.
- Swearingen, K. and Sinha, R. Interaction design for recommender systems. Designing Interactive Systems, 2002.
- Tintarev, N. and Masthoff, J. Survey of explanations in recommender systems. WPRSIUI, 2007.
- Ziegler, C., McNee, S.M., Konstan, J.A. and Lausen, G. Improving recommendation lists through topic diversification. WWW, 2005.