Automatic Item Generation (AIG) is a process in educational assessment that utilizes algorithms and computational techniques to create test items or questions automatically. This approach aims to enhance the efficiency, scalability, and diversity of assessment materials while maintaining alignment with learning objectives and content standards.
Several attempts have already been made to automate the generation of assessment questions. These attempts were mainly technical and lacked theoretical backing. We explore psychological and educational theories to support the development of principled methods for generating questions and controlling their properties. We present a similarity-based theory for controlling the difficulty of multiple-choice questions and demonstrate its practicality and its consistency with the psychological and educational theories.
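To make the similarity-based theory concrete, here is a minimal Python sketch of the core mechanism: distractors that are more similar to the correct answer are harder to rule out, so a target difficulty can be approximated by selecting distractors whose similarity to the key is close to that target. The `similarity` function, the [0, 1] difficulty scale, and the selection rule are illustrative assumptions, not the paper's actual formulation.

```python
# A minimal sketch of similarity-based difficulty control for MCQs.
# Assumption: `similarity` is a hypothetical function returning a score
# in [0, 1]; the difficulty scale and selection rule are also assumed.

from typing import Callable, List


def select_distractors(
    key: str,
    candidates: List[str],
    similarity: Callable[[str, str], float],
    difficulty: float,
    n: int = 3,
) -> List[str]:
    """Pick distractors whose similarity to the key matches the target
    difficulty: more similar distractors are harder to eliminate, so a
    higher `difficulty` (in [0, 1]) selects candidates closer to the key."""
    ranked = sorted(candidates, key=lambda c: abs(similarity(key, c) - difficulty))
    return ranked[:n]
```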
Multiple-choice questions (MCQs) are considered highly useful (being easy to take and to mark) but quite difficult to create, and large numbers are needed to form valid exams and the associated practice materials. The idea of reusing an existing ontology to generate MCQs almost suggests itself and has been explored in various projects. In this project, we apply suitable educational theory regarding assessments, and related methods for their evaluation, to ontology-based MCQ generation. In particular, we investigate whether the similarity of concepts in an ontology can be measured with sufficient reliability for this measure to be used to control the difficulty of the generated MCQs. In this report, we provide an overview of the background to this research and describe the main steps taken and insights gained.
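As an illustration of the kind of concept-similarity measure investigated, the sketch below computes a simple structural similarity over an ontology's subsumption hierarchy: the Jaccard overlap of two concepts' ancestor sets. The toy single-inheritance taxonomy, the class names, and the choice of Jaccard are assumptions made for the example; the report's actual measure may differ.

```python
# A toy illustration (not the report's actual measure) of computing
# concept similarity from an ontology's subsumption hierarchy, using
# Jaccard overlap of ancestor sets. The taxonomy below is invented.

from typing import Dict, Set


def ancestors(concept: str, parent_of: Dict[str, str]) -> Set[str]:
    """Collect a concept and all of its superclasses up to the root."""
    seen = {concept}
    while concept in parent_of:
        concept = parent_of[concept]
        seen.add(concept)
    return seen


def similarity(a: str, b: str, parent_of: Dict[str, str]) -> float:
    """Jaccard overlap of ancestor sets: 1.0 for identical concepts,
    approaching 0.0 for concepts that share only the root."""
    sa, sb = ancestors(a, parent_of), ancestors(b, parent_of)
    return len(sa & sb) / len(sa | sb)


# Invented mini-taxonomy: each class maps to its direct superclass.
taxonomy = {
    "Dog": "Mammal", "Cat": "Mammal", "Mammal": "Animal",
    "Trout": "Fish", "Fish": "Animal",
}

print(similarity("Dog", "Cat", taxonomy))    # 0.5: siblings -> a hard distractor
print(similarity("Dog", "Trout", taxonomy))  # 0.2: distant -> an easy distractor
```

Under this measure, sibling concepts such as Dog and Cat score high and would yield difficult distractors for one another, while distant concepts such as Dog and Trout score low and would yield easy ones.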