The purpose of this study was to examine prewriting, drafting, and revision in a large scale writing assessment. In April 1989, Louisiana administered a graduation exit examination. Written composition comprised one of the three testing components. From the 40,000 tenth grade students who participated in the written composition test, a stratified sample of 1,467 was selected for this study. Using a research design incorporating both quantitative and qualitative assessment procedures, the study examined prewriting, drafting, and revision practices at two levels. In Level I, the first and final drafts of the 1,467 students were analyzed using a scoring model derived from Wisconsin studies conducted in 1981 and 1984. This model permitted a quantitative analysis of first draft characteristics as well as an analysis of revision practices. In Level II, which was subdivided into two parts, 20 students were randomly selected from the stratified sample. Part A, the quantitative portion of Level II, examined the first and final drafts of these 20 students using a modified version of Lillian Bridwell's revision model. In addition to providing an in-depth analysis of these 20 students' revision practices, this portion of the study also examined essay length, revision frequencies, and scoring variance between the first and final drafts. Part B, the qualitative portion of Level II, focused on structured interviews which allowed each of the 20 students to respond to seven questions about prewriting, drafting, and revision.

Results indicate that, though revision did have a positive effect on the quality of the compositions, the average point gain per essay was surprisingly small. Moreover, in many instances the composition scores for the final drafts remained unchanged after the students had revised. The study also found that the majority of revisions were generally cosmetic; prewriting activities such as outlines, notes, or clusters were seldom used; less successful writers made fewer substantive changes to their compositions than did the successful writers; and a knowledge of terminology relative to editing and revision was not a good predictor of student performance.

CHAPTER 1

INTRODUCTION TO THE STUDY

OVERVIEW

Educators are rethinking how writing achievement should be measured. Since the early 1970s, much attention has focused on the assessment of writing through writing samples as opposed to standardized multiple choice tests of writing "skills." This transition proceeds from "the growing belief that writing involves more than the mastery of syntax, usage, and word choice captured by most indirect assessments of writing ability" (Applebee, Langer, & Mullis, 1989, p. 5). Moreover, the assessment of writing through direct means more closely approximates actual classroom practices in that students are evaluated on their ability to write actual compositions in response to given prompts. Though the use of such assessments varies from state to state, the basic questions remain essentially the same. First, how well are students writing? Second, what can be done to improve their writing? With such states as California, Texas, New Jersey, Georgia, and Maryland in the vanguard of the movement, the transition to the direct assessment of writing has attracted a significant number of converts. According to recent surveys, over 30 states have already incorporated writing into their assessment programs, and many more are strongly considering the possibility (Roeber, 1989).
In 1986, the Louisiana Legislature enacted a statute (R.S. 17:24.4) which repealed the state's minimum standards testing program and replaced it with "grade appropriate" criterion-referenced testing. The Louisiana Educational Assessment Program (LEAP), which forms the central infrastructure of this legislation, mandates that students be tested in grades three, five, seven, and at the secondary school level. More importantly, the tests are to be used in both promotion and graduation decisions, hence qualifying them as "high stakes" assessments. Though the term "high stakes" may be interpreted on several levels, the use of such a term from a testing perspective is solely for classification purposes. Applied to programs nationwide, "high stakes" denotes those assessments that use cutoff scores for determining whether students pass or fail a particular grade or subject. Often, "high stakes" examinations are referred to as "gate-keeper" or "exit" examinations, especially when attaining the performance standard will permit a student to graduate. In the case of the Louisiana assessment, there is some reason to believe not all students felt much was really at stake during the year of this study.

Research Questions

Despite the extensive research which has been conducted on prewriting, drafting, and revision and their roles in the writing process, relatively little research has been done on prewriting, drafting, and revision in large scale writing assessments. Especially lacking is research on those assessments where time constraints are operative and where the final draft determines in part a student's eligibility for graduation. What research is available is fully explored in Chapter 2. This study investigates the impact of allowing prewriting and multiple drafting in Louisiana's 1989 writing assessment and focuses on the prewriting, drafting, and revision practices exhibited by students during the assessment. Prewriting as used in this study refers to any visible signs of written activity, such as semantic mapping, word walls, note-making, listing, or outlining, which do not include text. Text is used in this study to mean a grouping of words, phrases, clauses, or sentences which are organized in such a manner as to be viewed as a composition, in whole or in part. Drafting refers to the production of text, and revision refers to the external and internal changes made to that text. Here, external changes are defined as those changes involving modifications in format, spelling, punctuation, capitalization, or legibility. Conversely, internal changes are those involving modifications to the meaning or content of the text.

Historical Background

Direct writing assessment is not new to Louisiana. As early as 1976, the seeds for large scale assessment were planted when a group of Louisiana educators met in Baton Rouge to discuss the language arts curriculum. This Writing Advisory Council, convened by the Louisiana Department of Education, decided that if students should be assessed on how well they met curriculum standards, then an integral part of that assessment should involve a writing sample. In a memo to the Department of Education, co-authored by Cresup Watson and Elizabeth Penfield of the University of New Orleans, the council argued strongly for this writing component, noting that it was "essential" (C. Watson, personal communication, October 7, 1989). In response to the actions taken by the advisory council, efforts at evaluating the progress made by Louisiana students in writing began in 1978 with the development of the Louisiana Minimum Standards for Writing, Grades 1-12.
In the initial phase of this minimum skills program, which later became part of the State Pupil Assessment Program, the focus centered on piloting writing topics which could later be used in a more comprehensive statewide assessment. Using a representative sample of parishes, the Louisiana Department of Education tested approximately 2,520 students at grades 4, 8, and 11 on their ability to "respond in writing to specific questions" (Louisiana Dept. of Education, 1978, p. 3). In the years to follow, the Department of Education would implement other writing assessments under its minimum standards program, but the scale of the assessment would remain relatively small. With the later demise of the minimum standards program in the early 1980s and the emergence of the Louisiana Educational Assessment Program in 1987, the assessment of writing continued but on a much larger, more comprehensive scale. In addition, with the inception of LEAP, the testing program's focus also shifted. Students in grades four, six, and nine were now administered norm-referenced examinations, with criterion-referenced tests being administered to the 3rd, 5th, 7th, 10th, and 11th grade populations. Though the criterion-referenced testing originally called for a written composition at all four of the specified grade levels, the Board of Elementary and Secondary Education decided that in the initial stage of the testing program, the written composition examination would be administered only to 10th graders. Thus, in comparison to previous programs, the statewide assessment of such large student populations as the 40,000 tenth graders tested in the spring of 1989 is unprecedented. Moreover, with the inclusion of the written composition at the fifth and seventh grades in the spring of 1990, the writing component takes on even more significance as the state attempts to measure the writing abilities of its students. Whereas in previous years of testing the pilot studies had been the primary source of data on student writing, Louisiana is now attempting to examine large populations and more accurately determine the strengths and weaknesses of student writing.

Nature of the Examination

In the written composition segment of the examination, students are asked to formulate a written response to a given prompt within a specified time period. Using the English Language Arts Curriculum Guide, Grades 7-12 as a basis for both test and prompt development, the Louisiana Department of Education, with help from local administrators, classroom teachers, and university representatives, derived a series of prompts that could be used in both the 7th and the 10th grade compositions. Since the curriculum guide focused on the development of writing skills in the four traditional modes of discourse (narrative, descriptive, expository, and persuasive), the testing committee decided that tenth graders...