Key research themes
1. How can automated evaluation enhance programming education through precise, fair assessment and feedback?
This research area investigates the development and implementation of automated tools to accurately assess programming assignments, aiming to reduce manual grading errors, improve efficiency, and standardize evaluation. It focuses on both correctness and qualitative assessment dimensions, such as code structure, style, and performance, and addresses the challenge of providing detailed, consistent feedback to learners.
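The core mechanism in this theme is running a submission against a battery of test cases and turning the outcome into a score plus per-case feedback. A minimal sketch, assuming submissions are plain Python functions; the `grade` helper, the buggy `student_abs` submission, and the test cases are all hypothetical examples, not taken from any system described here.

```python
def grade(submission, test_cases):
    """Run a submission against (args, expected) pairs; return (score, feedback)."""
    feedback = []
    passed = 0
    for args, expected in test_cases:
        try:
            result = submission(*args)
        except Exception as exc:  # a crash counts as a failed case, with a note
            feedback.append(f"{args}: raised {type(exc).__name__}")
            continue
        if result == expected:
            passed += 1
        else:
            feedback.append(f"{args}: got {result!r}, expected {expected!r}")
    return passed / len(test_cases), feedback

# Hypothetical buggy submission: absolute value that forgets to negate.
def student_abs(x):
    return x

score, notes = grade(student_abs, [((3,), 3), ((-4,), 4), ((0,), 0)])
# score is the pass fraction; notes lists the failing case with got/expected,
# illustrating the "detailed, consistent feedback" goal beyond a bare mark.
```

Real autograders add sandboxing, time limits, and style or structure checks on top of this correctness core, but the score-plus-diagnostics shape stays the same.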
2. How can implicit user behavior and interaction data be utilized for automated evaluation of intelligent assistants' effectiveness?
This research theme examines methods for automatically evaluating voice-activated intelligent assistants by leveraging implicit user feedback such as interaction patterns, satisfaction metrics, and acoustic signals. The goal is to create consistent, scalable, and task-agnostic evaluation frameworks that overcome the challenges posed by diverse, evolving tasks and reduce reliance on costly, manual ground-truth annotations.
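One common shape for this kind of evaluation is mapping implicit interaction signals to an estimated satisfaction probability, so sessions can be scored without ground-truth labels. A minimal sketch using a hand-set logistic model; the signal names, weights, and bias below are illustrative assumptions, not values from any real assistant.

```python
import math

# Hypothetical binary interaction signals and hand-set weights.
# In practice these weights would be learned from labeled sessions.
WEIGHTS = {
    "query_reformulated": -1.5,  # user rephrased the request: likely dissatisfied
    "barged_in": -1.0,           # user interrupted the spoken response
    "task_completed": 2.0,       # a downstream action succeeded
}
BIAS = 0.5

def satisfaction_probability(signals):
    """Map a dict of binary interaction signals to an estimated P(satisfied)."""
    z = BIAS + sum(WEIGHTS[name] * float(value) for name, value in signals.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing to (0, 1)

p = satisfaction_probability({
    "query_reformulated": True,
    "barged_in": False,
    "task_completed": True,
})
```

Because the features are task-agnostic behaviors rather than task-specific answers, the same scorer applies across diverse and evolving tasks, which is the scalability argument this theme makes.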
3. What are the challenges and solutions in automating subjective evaluation and feedback for written responses and complex open-ended answers?
This theme focuses on automating the evaluation of subjective, open-ended responses—such as essays, summaries, and diagrams—using natural language processing, semantic similarity measures, and diagrammatic analyses. It addresses the tension between capturing writing quality, providing instructional feedback, and ensuring fairness, in particular the risk that indirect measures reward superficial text features rather than genuine content quality.