Final Draft (25 Nov 94)

CONTENTS:

Assessments of written literacy should be designed and evaluated by well-informed current or prospective instructors of the students being assessed, for purposes clearly understood by all the participants; should elicit from student writers a variety of pieces, preferably over a period of time; should encourage and reinforce good teaching practices; and should be solidly grounded in the latest research on language learning. These assumptions are explained fully in the first section below; after that, we list the rights and responsibilities generated by these assumptions; and in the third section we provide selected references that furnish a point of departure into the literature of the discipline.

ASSUMPTIONS:

All writing assessment--and thus every policy statement about writing assessment--makes assumptions about the nature of what is being assessed. Our assumptions include the following.

First, language is always learned and used most effectively in environments where it accomplishes something the user wants to accomplish for particular listeners or readers within that environment. The assessment of written literacy must therefore strive to set up writing tasks that identify purposes appropriate to, and appealing to, the particular students being tested. Additionally, assessment must be contextualized in terms of why, where, and for what purpose it is being undertaken; this context must also be clear to the students being assessed and to all others involved (i.e., the stakeholders/participants). Accordingly, there is no test which can be used in all environments for all purposes, and the best "test" for any group of students may well be locally designed. The definition of "local" is also contextual: schools with common goals, similar student populations, and compatible teaching philosophies and outcomes might well form consortia for the design, implementation, and evaluation of assessment instruments even though the schools themselves are geographically separated from one another.

Second, language by definition is social. Assessment that isolates students and forbids discussion and feedback from others conflicts with current cognitive and psychological research about language use and about the benefits of social interaction during the writing process; it is also out of step with much classroom practice.

Third, reading--and thus evaluation, since it is a variety of reading--is as socially contextualized as all other forms of language use. What any reader draws out of a particular text and uses as a basis of evaluation depends on how that reader's own language use has been shaped and on his or her specific purpose for reading. It seems appropriate, therefore, to recognize the individual writing program, institution, consortium, and so forth as a community of interpreters who can function fairly--that is, assess fairly--only within that community.

Fourth, any individual's writing "ability" is a sum of a variety of skills employed in a diversity of contexts, and individual ability fluctuates unevenly among these varieties. Consequently, one piece of writing--even if it is generated under the most desirable conditions--can never serve as an indicator of overall literacy, particularly for high-stakes decisions. Ideally, such literacy must be assessed by more than one piece of writing, in more than one genre, written on different occasions, for different audiences, and evaluated by multiple readers. This realization has led many institutions and programs across the country to use portfolio assessment.
Fifth, assessment is defensible primarily as a means of improving learning. Both teachers and students must have access to the results in order to use them to revise existing curricula and/or plan programs for individual students. And, obviously, if results are to be used to improve the teaching-learning environment, human and financial resources must be in place in advance of the assessment; if resources are not available, institutions should postpone assessment until they are. Furthermore, when assessment is being conducted solely for program evaluation, there is no need to test every student, since a representative sample can provide the desired results. Neither should faculty merit increases hinge on their students' performance on any test.

Sixth, regardless of the best intentions, assessment tends to drive pedagogy. Assessment thus must demonstrate "systemic validity": it must encourage classroom practices that harmonize with what practice and research have demonstrated to be effective ways of teaching writing and of becoming a writer. What is easiest to measure--often by means of a multiple-choice test--may correspond least to good writing, and that in part is the point: choosing a correct response from a set of possible answers is not composing. Just as important, the mere fact that students are asked to write does not make a "test" a "good" one. Essay tests that ask students to form and articulate opinions about some important issue without time to reflect, to talk to others, to read on the subject, to revise, and so forth--that is, without allowing for what good writers need--encourage distorted notions of what writing is. They also encourage poor teaching and little learning. Even teachers who recognize and employ the methods used by real writers in working with students can find their best efforts undercut by assessments such as these.

Seventh, standardized tests, usually developed by large testing organizations, tend to misrepresent disproportionately the skills and abilities of students of color. This imbalance tends to decrease when tests are directly related to specific contexts and purposes, in contrast to tests that purport to differentiate between "good" and "bad" writing in a general sense. Furthermore, standardized tests tend to focus on readily accessed features of the language--on grammatical correctness and stylistic choice--and on error, on what is wrong rather than on what appropriate rhetorical choices have been made. Consequently, the outcome of such assessments is negative: students are said to demonstrate what they do "wrong" with language rather than what they do well.

Eighth, the means used to test students' writing ability shapes what they, too, consider writing to be. If students are asked to produce "good" writing within a given period of time, they often conclude that all good writing is generated within those constraints. If students are asked to select--in a multiple-choice format--the best grammatical and stylistic choices, they will conclude that good writing is "correct" writing. They will see writing, erroneously, as the avoidance of error; they will think that grammar and style exist apart from overall purpose and discourse design.

Ninth, financial resources available for designing and implementing assessment instruments should be used for that purpose, not to pay for assessment instruments developed outside the context within which they are used.
Large amounts of money are currently spent on assessments that have little pedagogical value for students or teachers. Money spent to compensate teachers for involvement in assessment, however, is also money spent on faculty development and curriculum reform, since both inevitably occur when teachers begin to discuss assessment that relates directly to their classrooms and their students.

Tenth, and finally, there is a large and growing body of research on language learning, language use, and language assessment that must be used to improve assessment on a systematic and regular basis. Our assumptions are based on this scholarship. Anyone charged with the responsibility of designing an assessment program must be cognizant of this body of research and must stay abreast of developments in the field. Thus, assessment programs must always be under review and subject to change by well-informed faculty, administrators, and legislators.

Assessment of writing is a valid undertaking. But by its very nature it works against contextualization as it reaches toward what is generalizable, and there are times when re-creating or simulating a context (as in assessment for placement, for instance) is possible only to a limited degree. Even in this case, however, assessment--when conducted sensitively and purposefully--can have a positive impact on teaching, learning, curricular design, and student attitudes. Writing assessment can serve to inform both the individual and the public about the achievements of students and the effectiveness of teaching. On the other hand, poorly designed and poorly implemented assessments can be enormously harmful because of the power of language: personally, for our students as human beings; and academically, for our students as learners, since learning is mediated through language. Students who take pleasure and pride in using written language effectively are increasingly valuable in a world in which communication across space and across a variety of cultures has become routine. Writing assessment that alienates students from writing is counterproductive, and writing assessment that fails to take an accurate and valid measure of their writing is even more so. But writing assessment that encourages students to improve their facility with the written word, to appreciate their power with that word and the responsibilities that accompany such power, and that salutes students' achievements as well as guides them, can serve as a crucially important educational force.

RIGHTS AND RESPONSIBILITIES:

Students should:

1. be informed about the purposes of the assessment they are writing for, the ways the results will be used, and avenues of appeal.

2. demonstrate their accomplishment and/or development in writing by means of composing, preferably in more than one sample written on more than one occasion, with sufficient time to plan, draft, rewrite or revise, and proofread each product or performance.

3. write on prompts developed from the curriculum and grounded in "real-world" practice.

4. have their writing evaluated by more than one reader, particularly in "high-stakes" situations (e.g., those involving major institutional consequences such as getting credit for a course, moving from one context to another, or graduating from college).

5. receive response from readers intended to help them improve as writers attempting to reach multiple kinds of audiences.

Faculty should:
1. play key roles in the design of writing assessments, including creating writing tasks and scoring guides, work for which they should receive support in the form of honoraria and/or release time; they should also appreciate and be responsive to the idea that assessment tasks and procedures must be sensitive to cultural, racial, class, and gender differences, and to disabilities, and must be valid for, and not penalize, any group of students.

2. participate in the reading and evaluation of student work, supported by honoraria and/or release time.

3. assure that assessment is "authentic," i.e., that it measures and supports what is taught in the classroom.

4. make themselves aware of the difficulty of constructing fair and motivating prompts for writing, the need for field testing and revising prompts, the range of appropriate and inappropriate uses of various kinds of writing assessments, and the norming, reliability, and validity standards employed by internal and external test-makers, and share their understanding of these issues with administrators and legislators.

5. help students to prepare for writing assessments and to interpret assessment results.

6. use results from writing assessments to review and (when necessary) revise curriculum.

7. encourage policy makers to take a more qualitative view of assessment, encouraging the use of multiple measures, infrequent large-scale assessment, and large-scale assessment by sampling of a population rather than by individual work whenever appropriate.

8. continue conducting research on writing assessment, particularly as it is used to help students learn and to understand what they have achieved.

Administrators and Higher Education Governing Boards should:

1. educate themselves, and consult with rhetoricians and composition specialists teaching at their own institutions, about the most recent research on teaching and assessing writing and about how it relates to their particular environment and to already established programs and procedures, understanding that student learning is generally best demonstrated by performances assessed over time and sponsored by all faculty members, not just those in English.

2. announce to stakeholders the purposes of all assessments, the results to be obtained, and the ways that results will be used.

3. assure that the assessments serve the needs of students, not just the needs of an institution, and that resources for necessary courses linked to the assessments are therefore available before the assessments are mandated.

4. assure opportunities for teachers to come together to discuss all aspects of assessments: the design of the instruments; the standards to be employed; the interpretation of the results; possible changes in curriculum suggested by the process and results.

5. assure that all decisions are made by more than one reader.

6. never use any assessment results as the primary basis for evaluating the performance of, or rewards due, a teacher; they should recognize that student learning is influenced by many factors--such as cognitive development, personality type, personal motivation, physical and psychological health, emotional upheavals, socio-economic background, and family successes and difficulties--that are neither taught in the classroom nor appropriately measured by writing assessment.

Legislators should:
1. never mandate a specific instrument (test) for use in any assessment; although they may choose to answer their responsibility to the public by mandating assessment in general or at specific points in students' careers, they should allow professional educators to choose the types and ranges of assessments that reflect the educational goals of their curricula and the nature of the student populations they serve.

2. understand that mandating assessments also means providing funding to underwrite those assessments, including resources to assist students and to bring teachers together to design and implement assessments, to review curriculum, and to amend the assessment and/or curriculum when necessary.

3. educate themselves, and consult with rhetoricians and composition specialists engaged in teaching, about the most recent research on the teaching of writing and on assessment.

4. understand that different purposes require different assessments, that qualitative forms of assessment can be more powerful and meaningful for some purposes than quantitative measures are, and that assessment is a means to help students learn better, not a way of unfairly comparing student populations, teachers, or schools.

5. invite teachers to help with the drafting of legislation concerning assessments.

6. recognize that legislation needs to be reviewed continually for possible improvement in light of actual results and ongoing developments in theory and research.

SELECTED REFERENCES:

Belanoff, Pat, and Marcia Dickson, eds. Portfolios: Process and Product. Portsmouth: Boynton/Cook, 1991.

Black, Laurel, Donald Daiker, Jeffrey Sommers, and Gail Stygall, eds. New Directions in Portfolio Assessment. Portsmouth: Boynton/Cook, 1994.

Cooper, Charles, and Lee Odell, eds. Evaluating Writing: Describing, Measuring, Judging. Urbana: NCTE, 1977.

CCCC Committee on Assessment. "A Selected Bibliography on Postsecondary Writing Assessment, 1979-91." College Composition and Communication 43.2 (May 1992): 244-55.

Elbow, Peter. "Ranking, Evaluating, and Liking: Sorting Out Three Forms of Judgment." College English 55.2 (Feb. 1993): 187-206.

Gordon, Barbara. "Another Look: Standardized Tests for Placement in College Composition Courses." WPA: Writing Program Administration 10 (1987): 29-38.

Greenberg, Karen. "Validity and Reliability: Issues in the Direct Assessment of Writing." WPA: Writing Program Administration 16.1-2 (Fall/Winter 1992): 7-22.

Greenberg, Karen, Harvey Wiener, and Richard Donovan, eds. Writing Assessment: Issues and Strategies. New York: Longman, 1986.

Huot, Brian. "Reliability, Validity, and Holistic Scoring: What We Know and What We Need to Know." College Composition and Communication 41.2 (May 1990): 201-13.

Moss, Pamela. "Can There Be Validity without Reliability?" Educational Researcher 23.2 (March 1994): 5-12.

---. "Validity in High Stakes Writing Assessment: Problems and Possibilities." Assessing Writing 1 (Spring 1994): 109-28.

Odell, Lee. "Defining and Assessing Competence in Writing." In Charles Cooper, ed., The Nature and Measurement of Competency in English. Urbana: NCTE, 1981: 95-139.

White, Ed. "Issues and Problems in Writing Assessment." Assessing Writing 1.1 (Spring 1994): 11-29.

---. Teaching and Assessing Writing. 2nd ed. San Francisco: Jossey-Bass, 1994.

Wiggins, Grant. Assessing Students' Performance: Exploring the Purpose and Limits of Testing. San Francisco: Jossey-Bass, 1993.

---. "Assessment, Authenticity, Context, and Validity." Phi Delta Kappan 75.3 (Nov. 1993): 200-14.
Williamson, Michael, and Brian Huot, eds. Validating Holistic Scoring for Writing Assessment. Cresskill: Hampton Press, 1993.

Yancey, Kathleen Blake, ed. Portfolios in the Writing Classroom: An Introduction. Urbana: NCTE, 1992.