Modifying Multiple-Choice Questions in Computer-Based Instruction
1990
Abstract
Research has shown that multiple-choice questions formed by transforming or paraphrasing a reading passage provide a measure of student comprehension. It is argued that similar transformation and paraphrasing of lesson questions is an appropriate way to form parallel multiple-choice items to be used as a posttest measure of student comprehension. Four parallel items may be derived from a lesson: (1) an item that is neither transformed nor paraphrased, thus testing simple rote memory; (2) an item that is ...
Related papers
Proceedings of the HLT-NAACL 03 Workshop on Building Educational Applications Using Natural Language Processing, 2003
This paper describes a novel computer-aided procedure for generating multiple-choice tests from electronic instructional documents. In addition to employing various NLP techniques including term extraction and shallow parsing, the program makes use of language resources such as a corpus and WordNet. The system generates test questions and distractors, offering the user the option to post-edit the test items.
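A minimal sketch may help make the WordNet-based distractor step concrete. This is not the procedure described in the paper above (which also relies on term extraction, shallow parsing, and a corpus); it only illustrates one common way to propose distractors from WordNet coordinate terms using NLTK, and the helper name wordnet_distractors is hypothetical.

```python
# Sketch only: propose multiple-choice distractors for a key term by taking
# its WordNet "sister" terms (other hyponyms of the same hypernym).
# Requires: pip install nltk, then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn


def wordnet_distractors(term: str, max_distractors: int = 3) -> list[str]:
    """Return coordinate terms of `term` as candidate distractors."""
    candidates: list[str] = []
    for synset in wn.synsets(term):
        for hypernym in synset.hypernyms():
            for sister in hypernym.hyponyms():
                if sister == synset:
                    continue  # skip synonyms of the correct answer itself
                for name in sister.lemma_names():
                    option = name.replace("_", " ")
                    if option.lower() != term.lower() and option not in candidates:
                        candidates.append(option)
    return candidates[:max_distractors]


if __name__ == "__main__":
    # E.g., a stem such as "Which instrument measures atmospheric pressure?"
    # with the correct answer "barometer" and distractors drawn from its
    # coordinate terms (other measuring instruments in WordNet).
    print(wordnet_distractors("barometer"))
```

As in the system described above, automatically proposed distractors of this kind would normally be post-edited by the test author before use.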
Applied Measurement in Education, 2003
The cognitive equivalence of computerized and paper-and-pencil reading comprehension tests was investigated using verbal protocol analysis. It was hypothesized that participants taking the computerized tests would have a greater load on their working memory, which would affect their cognitive processes and test-taking strategies. The results indicated that the only significant difference between the computerized and paper-and-pencil tests was in the frequency of identifying important information in the passage. There was no evidence of any differences in search strategies or in overall test-taking strategies on the computerized and paper-and-pencil tests. The results suggest that computerized and paper-and-pencil reading comprehension tests may be more cognitively similar than originally thought. In fact, some of the findings indicate that computerized tests may encourage more construct-relevant behaviors than paper-and-pencil tests.
1971
Two separate studies were conducted: (1) one examining, for sixth-grade subjects (N=113), the effect of relevant questions occurring shortly after reading textual material on posttraining tests, compared to a control condition not receiving the questions; and (2) one replicating it and also examining learning in small groups (individual-like situations) as well as intact classrooms, and comparing the performance of sixth graders (N=96) and college students (N=74) on the same content. Data for the first study consisted of the number of correct responses by each student to the three daily 12-question posttests and the 18-question post-posttest; for the second study, the number of correct responses by subjects to a 16-item posttest and a 20-item post-posttest. Results were submitted to tests of means and analysis of variance to determine the effects on performance of class, day, condition, type of administration, and their possible interactions. The study failed to support previous studies: there was no general facilitative effect of interspersed questions (after relevant text material) on incidental learning. No experimental differences were found when sixth graders were treated in intact classroom situations vs. small groups, and no differences were found that could be attributed to days with respect to short-term and delayed retention. If mathemagenic behaviors are generated in children, they do not seem to take the same form as those reported in young adults.
Journal of experimental psychology. Applied, 2010
Students are often encouraged to generate and answer their own questions on to-be-remembered material, because this interactive process is thought to enhance memory. But does this strategy actually work? In three experiments, all participants read the same passage, answered questions, and took a test to get accustomed to the materials in a practice phase. They then read three passages and did one of three tasks on each passage: reread the passage, answered questions set by the experimenter, or generated and answered their own questions. Passages were 575-word (Experiments 1 and 2) or 350-word (Experiment 3) texts on topics such as Venice, the Taj Mahal, and the singer Cesaria Evora. After each task, participants predicted their performance on a later test, which followed the same format as the practice phase test (a short-answer test in Experiments 1 and 2, and a free recall test in Experiment 3). In all experiments, best performance was predicted after generating and answering ques...
2013
This study examined the effect of passage content on multiple-choice (M-C) reading comprehension test performance among Iranian EFL university students at the upper-intermediate level. Sixty participants completed three M-C reading comprehension tests (male-oriented, female-oriented, ...); passage content had a significant effect on the test performance of the EFL readers (p < .05). The results provided evidence that prior knowledge of and interest in the passage content had facilitating effects on the performance of foreign language learners taking M-C reading comprehension tests at the upper-intermediate level.
2000
The comparability of computerized and paper-and-pencil tests was examined from a cognitive perspective, using verbal protocols, rather than psychometric methods, as the primary mode of inquiry. Reading comprehension items from the Graduate Record Examinations were completed by 48 college juniors and seniors, half of whom took the computerized test first followed by the paper-and-pencil version, and half of whom took the paper-and-pencil test before the computerized test. Participants were asked to think aloud as they answered the test questions. The verbal protocols were transcribed and coded for interpretation. There was a greater frequency of reading comprehension utterances during the paper-and-pencil test, but these were largely accounted for by the use of physical aids to identify important information in the passage. Many participants said that they felt disadvantaged during the computerized test by not being able to write on the passage and test questions. The frequently used strategy of marking the test did not seem to produce any cognitive benefits, however. There was slight evidence of a working memory load while answering the questions on the computerized tests, but overall there were few mode differences and the magnitude of differences was very small. Nearly all participants used the ...
Journal of Educational Technology Systems, 1996
This study examines the strategies used in answering a computerized multiple-choice test where the items have been semantically blocked (all questions on a semantic topic grouped together) or unblocked (semantic topics randomly distributed throughout the test). Student subjects had almost total control to navigate the test in any way they chose and also to reorder the organization of the multiple-choice items. The strategies were captured using a non-intrusive computer logging mechanism that records the actions of the subjects. Correlation analysis was used to evaluate the strategies that the subjects employed in completing the test. The findings indicate that students grouped by performance on the test used distinctly different strategies in completing the test. It is proposed that the differences are due to distinct cognitive processes between the groups. Computer-assisted instruction (CAI) has proven to be a powerful tool to teach skills in many subject areas [1-3]. In fact, with the evolution of microcomputer technology, learning through computer use will only accelerate. The control of the interaction between the computer and the user is a central issue associated with CAI [4-6]. The instructional interaction between user and computer can be totally controlled by the computer, giving the user no control over the session, or it can be ...
This study aimed to determine the effects of provided and generated questions on students' reading comprehension. It was a true-experimental study applying a randomized pretest-posttest control group design. A sample of 99 students was selected from the accessible population of the S-1 students taking the Reading Comprehension Course I. The subjects were then randomly divided into three groups (I, II, and III), so each group consisted of 33 students. Two types of data were collected: the students' scores on reading comprehension and the types of questions generated by the students. The findings showed that both provided and generated questions promoted reading comprehension better than reading only. The results also showed that self-questioning was the most effective strategy for comprehending reading selections.
2000
Computer assisted assessment (CAA) can play both formative and summative roles in teaching and learning (e.g., practice questions and exams, respectively). In either case, the creation of questions (and feedback) is often a time-consuming task, and changes in course content or textbooks can necessitate rewriting of questions. One solution to the problem of creating large numbers of multiple-choice questions is provided by textbook publishers, who sometimes provide question banks to teachers as ancillary material when textbooks are prescribed for students in the teacher's course. In some cases in the past decade, computer software has been included for the presentation of material from question banks, but this software has generally been for stand-alone (not networked) computers, and is often not user-friendly.
