Aggregating student peer assessment during capstone projects
2017, International Journal of Engineering Education
Abstract
Student assessment of other students' work has many potential benefits to learning for both the assessor and the assessed. However, peer assessment sources often provide subjective evidence, which can be conflicting, uncertain, and even ignorant. A key element in deriving an overall quality assessment of a student's work from the assessments of his or her peers is an appropriate method for combining, or fusing, these heterogeneous evidence sources. Since the development of belief theory, introduced by Shafer in the 1970s, many combination rules have been proposed in the literature; two main methods are selected here. The first is an evidential reasoning (ER) approach, whose kernel is an ER algorithm developed on the basis of the framework and the evidence combination rule of Dempster-Shafer (DS) theory. It has also been claimed in the literature that Dempster's rule generates counter-intuitive and unacceptable results in pr...
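As a minimal sketch of the Dempster-Shafer combination rule referred to above, the following assumes a toy two-grade frame of discernment {Good, Poor} with invented peer masses; the paper's actual frames, grades, and data are not shown here. Mass on the whole frame models the "ignorant" evidence the abstract mentions.

```python
# Hedged sketch of Dempster's rule of combination.
# Mass functions are dicts mapping frozenset hypotheses to belief mass.

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule:
    m(A) = sum over B∩C=A of m1(B)*m2(C), normalised by 1-K,
    where K is the total mass assigned to conflicting pairs (B∩C empty)."""
    combined = {}
    conflict = 0.0
    for b, mb in m1.items():
        for c, mc in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

GOOD, POOR = frozenset({"Good"}), frozenset({"Poor"})
BOTH = GOOD | POOR  # ignorance: mass assigned to the whole frame

# Two hypothetical peers assess the same work; peer 2 is partly ignorant.
peer1 = {GOOD: 0.8, POOR: 0.2}
peer2 = {GOOD: 0.5, BOTH: 0.5}
fused = dempster_combine(peer1, peer2)
# → {Good}: ~0.889, {Poor}: ~0.111 (conflicting mass K = 0.1 is renormalised away)
```

The renormalisation step is exactly what the claimed counter-intuitive behaviour of Dempster's rule hinges on: under high conflict, 1-K becomes small and minority agreements are amplified.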
Related papers
Studies in Higher Education, 2013
Peer assessment typically requires students to judge peers' work against assessment criteria. We tested an alternative approach in which students judged pairs of scripts against one another in the absence of assessment criteria. First year mathematics undergraduates (N = 194) sat a written test of conceptual understanding of multivariable calculus, then assessed their peers' responses using pairwise comparative judgement. Inter-rater reliability was investigated by randomly assigning the students to two groups and correlating the two groups' assessments. Validity was investigated by correlating the peers' assessments with (i) expert assessments, (ii) novice assessments, and (iii) marks from other module tests. We found high validity and inter-rater reliability, suggesting that the students performed well as peer assessors. We interpret the results in the light of survey and interview feedback, and discuss directions for further research and development into the benefits and drawbacks of peer assessment without assessment criteria.
Towards a new future in engineering education, new scenarios that european alliances of tech universities open up
Summative peer assessment is an assessment method in which a student's work is typically graded by several anonymous peers using predefined criteria. The value of summative peer assessments in higher education stems from the fact that they can provide scalability in assessment for large enrollment classes across a variety of assessment types. The main disadvantages of summative peer assessment are its questionable validity and reliability. In this paper, the first results of using summative peer assessments in a large enrollment professional skills course at the University of Zagreb, Faculty of Electrical Engineering and Computing are reported and discussed. The main research question of this work is how well, under the specific conditions of the conducted summative peer assessments, assignment credits assigned by peers correlate with assignment credits assigned by course lecturers. Data were obtained from four summative peer assessments through the course. A random sa...
… of the 7th Australasian conference on …, 2005
Once the exclusive preserve of small graduate courses, peer assessment is being rediscovered as an effective and efficient learning tool in large undergraduate classes, a transition made possible through the use of electronic assignment submissions and web-based support software.
2011
This paper presents a new classifier combination technique based on the Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a powerful method for combining measures of evidence from different classifiers. However, since each of the available methods that estimates the evidence of classifiers has its own limitations, we propose here a new implementation which adapts to training data so that the overall mean square error is minimized. The proposed technique is shown to outperform most available classifier combination methods when tested on three different classification problems.
Student assessment of other students' work, both formative and summative, has many potential benefits to learning for the assessor and the assessee. It encourages student autonomy and higher order thinking skills. Its weaknesses can be avoided with anonymity, multiple assessors, and tutor moderation. With large numbers of students the management of peer assessment can be assisted by Internet technology.
2008
What is our warrant for saying “Student X deserves a Grade C”? It must be based on evidence, and the only evidence we see is what students produce during the exam. For valid assessment two criteria must be met: the examination must elicit proper evidence of the trait, and we must evaluate the evidence properly. This highlights the importance of ensuring quality in the mark schemes with which we evaluate the evidence as well as in the questions which elicit it. Our recent research shows that improving mark schemes can make more impact on validity than further work on improving questions. In this paper we will outline a procedural model for maximising construct validity: at its heart is the concept of Outcome Space, the range of evidence that students produce. The model aims to ensure that our mark schemes evaluate this evidence properly in terms of the achievement trait we want to assess. This model has been developed in consultation with senior examiners and exam board personnel. ...
This paper is a sequel to an earlier one that examines "the efficacy of two innovative peer-assessment templates (PET and PACT) introduced to enable students provide evidence of their fairness in evaluating peer contributions to group project work" (Onyia, O. P. and Allen, S., 2012). In the present paper, three innovative methods of integrating peer and teacher assessments are introduced and discussed: the equal weighting integration (EWI), the unequal weighting integration (UWI), and the peer modulation integration (PMI) methods. All of these can help a college teacher in any area of business or social science education combine his or her own assigned scores with those from students' peer assessments (PA) of the group work, in order to achieve a fairer final grade for each student in a group coursework assignment (GCA) that involves written reports and/or presentations.
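The equal and unequal weighting ideas can be illustrated with a short arithmetic sketch. The scores, weights, and function names below are hypothetical assumptions for illustration only; the paper's actual EWI/UWI formulas are not reproduced here.

```python
# Hypothetical illustration of equal- vs unequal-weighting integration
# of a teacher score with the mean of several peer scores.

def equal_weighting(teacher_score, peer_scores):
    """EWI-style: teacher and peer-mean contribute equally (assumption)."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    return 0.5 * teacher_score + 0.5 * peer_mean

def unequal_weighting(teacher_score, peer_scores, teacher_weight=0.7):
    """UWI-style: teacher's score carries a larger, configurable weight
    (the 0.7 default is an assumption, not the paper's value)."""
    peer_mean = sum(peer_scores) / len(peer_scores)
    return teacher_weight * teacher_score + (1 - teacher_weight) * peer_mean

teacher = 80.0
peers = [70.0, 90.0, 75.0]                # peer mean ≈ 78.33
ewi = equal_weighting(teacher, peers)     # ≈ 79.17
uwi = unequal_weighting(teacher, peers)   # ≈ 79.50
```

Choosing the teacher weight trades off moderation against student ownership of the grade, which is precisely the fairness question the templates above are designed to address.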
2007
Abstract: Peer assessment (or peer review) is a popular form of reciprocal assessment in which students produce feedback, or grades, for each other's work. Peer assessment activities can be extremely varied, with participants taking different roles at different stages of the process and materials passing between roles in sophisticated patterns. This variety makes designing peer assessment systems very challenging.
Proceedings of the Canadian Engineering Education Association (CEEA)
This paper explores the implementation, outcomes, and student perceptions of the use of an online tool for anonymous peer assessment of student work. Peer assessment, where one student assesses the work of another, provides an opportunity for important skill development, as well as a fully-scalable strategy for rich, timely, and frequent feedback. In first and third year engineering courses at the University of British Columbia, we have begun using an online peer assessment tool (peerScholar). The tool divides the peer assessment process into three phases: a creation phase where the work is written or uploaded, an assessment phase where students are randomly assigned to assess the work of a set number of their peers, and a review phase where students review the feedback they received, with options to revise their work or assess the quality of feedback received. We have successfully used this tool in two large (n = 750) classes and one moderate-sized (n = 130) class, with a wide r...
