Key research themes
1. How can evaluation methodologies effectively measure open-domain conversational agent performance?
Evaluating conversational agents, especially those designed for open-domain or social conversation, remains challenging because conversational quality is subjective, there is no objective task-success criterion, and automatic metrics correlate only weakly with human judgment. This research theme focuses on developing comprehensive evaluation frameworks that combine human judgment with automatically computable metrics reflecting engagement, coherence, topical diversity, and dialogue depth. Better evaluation approaches matter both for guiding the development of more human-like, engaging conversational systems and for benchmarking progress.
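As a concrete illustration of an "automatically computable metric," the sketch below computes distinct-n, a common proxy for the topical diversity of system responses (the ratio of unique n-grams to total n-grams). This is a generic example, not a metric attributed to any specific paper in this theme.

```python
def distinct_n(responses, n=2):
    """Fraction of unique n-grams across a list of response strings.

    Higher values suggest more lexically diverse (less repetitive) output;
    this is a rough proxy for diversity, not a full quality measure.
    """
    ngrams = []
    for response in responses:
        tokens = response.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)


# Example: a repetitive system response lowers the score.
responses = [
    "i like talking about movies",
    "i like talking about movies",
    "tell me more about that film",
]
print(round(distinct_n(responses, n=2), 3))  # → 0.692
```

In practice such surface metrics are typically reported alongside human ratings of engagement and coherence, since a system can be diverse yet incoherent.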
2. What design principles and interaction models enhance human-like qualities and user engagement in conversational agents?
Creating conversational agents that feel human-like and engaging requires carefully designed interaction models, incorporating anthropomorphic features, dialogue management strategies, multi-party interaction coherence, and the ability to sustain long-term, meaningful conversations. This theme explores how design choices in narrative, embodiment, dialogue strategy, and social cues affect user perceptions, engagement, usability, and system effectiveness across application domains including education, health, and legal advice.
3. How can conversational agents be effectively applied and adapted for specialized domains such as healthcare and legal advice?
This research focuses on the development and deployment of conversational agents tailored to specialized domains that demand domain knowledge, precise guidance, and user trust, including healthcare, legal dispute resolution, and education. Key challenges include representing domain-specific knowledge, designing user-friendly interfaces, and maintaining ethical and credible communication. Advancing such domain-adapted agents improves accessibility, efficiency, and outcomes in these critical areas.