Post-secondary Entrance Writing Placement

March 1, 2005

Rich Haswell
Texas A&M University, Corpus Christi
rhaswell@falcon.tamucc.edu

Most colleges and universities in the USA have some system of placement into writing courses at the entry level. WPAs may find their system a boon or a bane, or both at the same time. Placement systems have many stakeholders, often with agendas that clash. Institutions use writing placement to recruit students, commercial firms use it to make money, teachers use it to define their courses, students use it to confirm their self-image. WPAs are often in the middle and in need of advice, resources, and documentation. This document hopes to provide all three.

It is organized in four main segments.

       I   Systems of entry-level writing placement
      II   Problems with existing systems
    III   Solutions, with annotated bibliography
                  A   Reports of research findings
                  B   Descriptions of successful programs
     IV   References

I. Systems of entry-level writing placement

Placement measures a student’s readiness for instruction and on that basis assigns the student a point of entry within a curricular sequence. Post-secondary writing placement operates at various points in a student’s course of studies. Testing may assign students to a basic, regular, or honors course on admission to college (“entry proficiency exam”), or require more coursework at the end of the first writing course (“first-year course exit exam”), at the 60 credit hour point (“rising junior exam”), on acceptance into a major (“qualifying exam”), or as part of degree completion (“graduation exit exam”). This page will discuss only the first, entry placement testing.

For many students writing placement is a high-stakes rite of passage, and they will feel demeaned if it puts them into a “basic” (remedial) writing course. For teachers placement may set limits on their teaching, since the prior knowledge and skills of the students entering their classrooms shape what a teacher can and cannot teach. More than tests of proficiency, which purport to measure a general level of writing capability, or tests of equivalency, which may give a student advance credit for specific writing courses, placement functions as a tense interface: between secondary and tertiary institutions, between measurement of generalized skills and work for specific courses, often between commercial interests and academic objectives. The interface is anything but stable. Often in the middle, trying to hold things together or to find a better system, stands the writing program administrator.

In the USA, matriculation writing-placement systems are of long vintage, some more than a century old, some older than the courses they still support. In 1874 Harvard University for the first time asked students in its entrance examinations to show writing proficiency by composing a short essay (read by teachers), and ten years of poor performance on it finally prompted the establishment of the first freshman composition course as we now know it. Since then many systems of writing placement have been tried.

  1. Essay written by the student and read by English department teachers, sometimes by teachers across campus.
  2. Essay written by the student and read by a teacher of the target course, who sometimes accepts or rejects the student into his or her course.
  3. Essay written by the student and read and scored by a certifying agency or testing firm outside the university, the score then used by the university to place the student.
  4. Essay written by the student and scored by computer software (e.g., Educational Testing Service’s E-rater, ACT’s e-Write, College Board’s WritePlacer), which score is used to place the student.
  5. Folder or portfolio of the student’s high-school writing, submitted by the student and read by faculty to place the student.
  6. Short-answer or bubblesheet test taken by the student, on whose evidence of “verbal skill” (a criterion-referenced score or norm-referenced percentile) the student is placed (“indirect testing”).
  7. Placement by the student herself or himself in the writing sequence, the decision based on information provided by the college, such as high-school GPA in English courses, scores on a writing or verbal examination, or the average success of test groups in various courses (“informed self-placement”).
  8. Placement by the student himself or herself using the same kind of information but also relying on advice from counselors (“directed self-placement”).
  9. Enrollment of all students in the regular writing course, but after a few weeks some students are placed into a more basic course or given the added requirement of hours in a writing center, the decision made by their teacher or by a panel of faculty on evidence of course performance up to that point.
  10. Delay of enrollment in writing courses, with possible requirement of a first-year writing course on recommendation from teachers of other courses taken during the first semester or first quarter.

Variants and combinations of these systems are not unusual.

Where writing-placement systems work well, they protect the academic level of the course, support retention into the second year, and maintain and enrich faculty conversation about writing instruction. But they do not always work well.

II. Problems in entry-level writing placement

In fact problems with entry writing placement are legion, and they partly account for the variety of systems and the frequency with which colleges switch from one system to another. Some problems are general, since they inhere in the difficulties of assessing writing skill at all: difficulties in getting raters to agree, in obtaining a reliable sample of the student’s writing skills, in providing a fair opportunity for the student to demonstrate those skills. There may also be systemic or logistical problems peculiar to placement testing per se.

  • Students are little in the mood for writing if they are tested during summer orientation sessions or just before classes begin in the fall.
  • Teachers are little in the mood for reading if they are placing students in the same circumstances.
  • Topics, modes, criteria, and readers used by testing firms may be far removed from the local writing curriculum.
  • Decisions to test or to use a particular testing system often are not made by the writing faculty, who may not even be consulted.

Other problems adhere to the particular placement system used.

  • The holistic or general-impression method of reading used by commercial testing firms to score essays (3) is simplistic and designed to be cost efficient (e.g., quick, with high rater reliability) rather than course appropriate.
  • Automatic computer scoring (4) merely duplicates that simplistic kind of reading.
  • Portfolios (5) are time consuming to read, have low rater agreement, and seem biased in favor of women writers.
  • Indirect tests of writing (6) may be biased against marginalized or minority groups.
  • Self-placement systems (7 and 8) depend heavily on a doubtful factor, the students’ sense of self-efficacy (how accurately they can judge their own abilities), and on knowledge of a course about which they may know little.
  • Re-enrollment after the beginning of classes (9) can be a logistical nightmare.
  • The judgment of non-composition teachers (10) may be spotty, eccentric, or unattuned to the actual writing instruction or its objectives.

Many of these problems are entangled with the most crucial problem of all, the low predictive validity of writing placement. When research has compared placement decisions with subsequent student performance in the courses, a good match is rarely if ever found. It does not matter whether the performance used as the target criterion is final grade in the course, passing the course, student judgment of course fit or learning, teacher judgment of fit or learning, or academic success in future courses. The correlation between placement test score and performance ranges between .2 and .4. This is an embarrassingly weak correlation. It explains only about a tenth of the variability in student performance; the rest, nine tenths of it, must be accounted for by other factors. Placement by direct testing, preferred by writing teachers, is as poor a predictor as indirect testing.
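
To see why a correlation in that range explains so little, square it: the squared correlation gives the share of the variance in course performance that the placement score accounts for. The arithmetic below is a minimal sketch using illustrative correlation values, not figures from any particular study.

    # Illustrative arithmetic only: squaring a correlation coefficient (r)
    # gives the proportion of variance in course performance accounted for
    # by the placement score. The r values are examples, not study data.
    for r in (0.2, 0.3, 0.4):
        r_squared = r ** 2
        print(f"r = {r:.1f} -> r^2 = {r_squared:.2f} "
              f"({r_squared:.0%} of variance explained, {1 - r_squared:.0%} left to other factors)")

    # Output:
    # r = 0.2 -> r^2 = 0.04 (4% of variance explained, 96% left to other factors)
    # r = 0.3 -> r^2 = 0.09 (9% of variance explained, 91% left to other factors)
    # r = 0.4 -> r^2 = 0.16 (16% of variance explained, 84% left to other factors)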

Such extremely weak predictability of writing placement tests raises a fundamental question. Do we want our new students assigned to a stigmatized course such as basic writing on evidence so inadequate? How many students thus placed in a basic course would have done well enough in the regular course?

Faced with these problems in writing placement, program administrators may have to answer some very tough questions.

  • For reasons of politics or public relations, the administration wishes to install a dysfunctional placement system. How argue against it?
  • Out of mere expediency or cost, the administration wishes to replace a better placement system with a worse. How defend the old system?
  • Teachers may be tired of struggling with a labor-intensive method. How revive their energies?
  • A current system of placement may be forcing changes for the worse in current courses. How keep this from happening?
  • Students, already over-tested in the schools, may object more and more to a writing placement examination. Are there ways to test less or test less objectionably and still place students well?
  • A system with traditional credentials behind it—e.g., The College Board’s Advanced Placement Examinations—does not work well in placing students within a local writing curriculum (it’s an equivalency exam, not a placement exam). How argue against such a well-respected brand name?
  • For whatever reasons, the WPA needs to provide evidence that the current method of placement is working. How do that?
  • The WPA must provide reasons to change to a different system of writing placement, or to eliminate placement entirely. How convince administration and other faculty?

III. Toward solutions

What will it take to revamp writing placement systems already in place, and often unthinkingly in place, in colleges across the land? First, I suppose, is the need to step back, take stock, and engage in some creative rethinking. Peter Elbow (1996, 2003) has led the way in this. There are not many Peter Elbows around, alas, and arguments against dysfunctional systems and routes toward more functional ones will have to rely mainly on (A) research that we can trust and (B) programs that have proved successful.

A. Reports of research. Solid formal research studies exist, both questioning and defending writing-placement methods. Administrators and faculty will listen when research findings are presented. Here is a sampling of studies, arranged in chronological order. More studies can be found annotated in Speck (1998), and the trends in the research are synopsized in Haswell (2004).

Breland, Hunter M. (1977). Correlations between course performance and three kinds of tests: TSWE .51; SAT Verbal .44; essay .47. One of many studies showing that direct testing has little more predictive power than indirect testing.

Olson and Martin (1980). Compared three different methods of placement testing and found that 1,002 (61%) of their entering students would be placed differently by indirect proficiency testing than by teacher rating of an essay, and 1,051 (64%) would be placed differently by that teacher assessment than by the student’s self-assessment.

Meyer, Russell J. (1982). Compared take-home essays, written and revised at the student’s leisure and then read by faculty, with objective tests: 3% of students were placed in a higher course than the objective test would have placed them, and 44% were placed in a lower course. A number of studies concur that faculty tend to be tougher in placement decisions than off-campus commercial scoring.

Shell, Duane F.; Carolyn Colvin Murphy; Roger Bruning (1986). Three types of self-efficacy (students’ estimates of their own writing abilities) correlated at .32, .17, and .13 with a 20-minute essay, scored holistically and analytically.

Stock, William P.; Juan M. Flores; Linnea M. Aycock (1986). At California State University, Fresno, correlation of final course grade in freshman composition was .24 with the CSU English Placement Test; .29 with the SAT Verbal score; and .35 with the Test of Standard Written English.

Bers, Trudy H.; Kerry E. Smith (1990). Thirty-seven percent of the students who failed the regular writing course had been placed into it by a local holistic rating of a placement essay.

Bridgeman, Brent (1991). An essay correlated with an end-of-semester essay at .56 and with course grade at .27; the SAT Verbal correlated with the same criteria at .44 and .22; the TSWE at .51 and .25.

Hughes, Ronald Elliott; Carlene H. Nelson (1991). The correlation of ASSET with passing the freshman course was .23. In head count, 60 of 162 students passed but would have been predicted as not passing. This is an important study because it tests predictability in a way few other studies dare to do: get predictive scores, then put all students regardless of score into the mainstream course and see what happens, then compare what actually happened with what would have happened had the pre-course scores been used.
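
The logic of that comparison is easy to sketch: record everyone’s placement score, enroll everyone in the mainstream course anyway, note who passes, and then count how many of the passing students a cutoff score would have predicted to fail. The following is a minimal illustration with invented scores and a hypothetical cutoff, not Hughes and Nelson’s data.

    # Hypothetical illustration: placement scores and whether each student
    # actually passed the mainstream course after being enrolled regardless of score.
    students = [
        {"score": 14, "passed": True},
        {"score": 22, "passed": True},
        {"score": 11, "passed": False},
        {"score": 18, "passed": True},
        {"score": 9,  "passed": True},
    ]
    CUTOFF = 15  # hypothetical cutoff below which the test predicts failure

    predicted_to_fail = [s for s in students if s["score"] < CUTOFF]
    passed_anyway = [s for s in predicted_to_fail if s["passed"]]

    print(f"Predicted to fail: {len(predicted_to_fail)}")      # 3
    print(f"Of those, actually passed: {len(passed_anyway)}")  # 2
    # In Hughes and Nelson's data the comparable count was 60 of 162.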

McKendy (1992). Reviews twelve recent studies of the predictability of holistic scores of placement essays, as well as an earlier review of indirect tests published in 1954. He shows that the correlation of a single-sample placement essay with course grade typically runs from .2 to .4, and that the best efforts to combine it in multiple regression with other predictors such as high-school GPA can raise the correlation only to around .6. All told, a very strong argument against using a single test or single essay to place a student.

Smith, William (1993). The best and most thorough study of predictive validity and faculty rating of student essays, at the University of Pittsburgh from 1985-1993.

Galbato, Linda; Mimi Markus (1995). On evidence of writing during the first week of class, a third of students were recommended to change the course in which a standardized test (ASSET, SAT, TSWE, or ACT) had placed them.

Larose, Simon; Donald U. Robertson; Roland Roy; Fredric Legault (1998). SAT scores plus high-school rank accounted for only 4.2% of the variance in GPA among high-risk students. One of a number of studies showing that the predictive validity of scores produced by minority students is lower than that of scores produced by majority students.

Leonhardy, Galen; William Condon (2001). Describes the difficult decisions raters face when placing students on the basis of a two-sample direct placement test at Washington State University.

Blakesley, David (2002). Describes directed self-placement at Southern Illinois University Carbondale, with some careful validation of it. Argues that the system is no worse than previous systems that placed students by exam.

Huot, Brian (2002). Describes the portfolio placement system at the University of Louisville, now dismantled (pp. 156-163). During the first five years of the program students who chose the portfolio placement route also took the ACT. If the two placement decisions differed, the student was allowed to pick, and almost always picked the higher placement: “for the most part they were successful in the courses they chose” (p. 161).

Shermis, Mark D.; Jill Burstein (Eds.) (2003). Defends computer scoring programs largely by comparing the machine’s score with a human holistic score. In the entire book there is not one report of a completed study of instructional validity.

Ericsson, Patty; Richard H. Haswell (Eds.) (forthcoming). Contains a number of studies and discussions of research findings showing that automated essay scoring systems  sold by ETS, ACT, and the College Board are no better at predicting success in writing courses than are the standardized tests.

B. Successful programs. There are a number of college writing-placement programs that can claim better-than-usual success. Much can be learned from them, especially if one keeps in mind the axiom that what works in one place may not work in another: even the best placement designs need to be adapted to fit local conditions. But five methods have more generalizability than the others, in that they have been applied with good results across a variety of sites:

  • using teachers of the target courses to read essays
  • gathering multiple samples of writing from students
  • re-assigning students early in the semester on evidence of course performance
  • bringing the student into the decision-making process
  • aligning design and scoring of placement procedures with the content and pedagogy of the courses.

Everywhere writing placement can be bettered, made more adequate to its purposes, if those purposes are kept in mind and if innovation and modification of existing programs are welcomed. Here are a few useful accounts, most with some formal validation.

Ball State University: Hanson, Linda K. (1996). Explains a system where a formula places the great majority of students, but some six percent, mainly non-traditional students, are placed by a portfolio of writings.

Belmont University: Pinter, Robbie; Ellen Sims (2003). With directed self-placement, they found that enrollment rose in an ancillary course to regular composition.

City University of New York, City College: Janger, Matthew (1997). Validates, by means of retention and progress toward the degree, an experiment in mainstreaming in which some students, regardless of how they had been placed, took a two-semester, non-tracked college-level writing sequence.

Colgate University: Howard, Rebecca Moore (2000). Entering students who declared themselves “not yet prepared for college reading and writing” were encouraged to enroll in a summer preparatory course.

De Pauw University: Cornell, Cynthia E.; Robert D. Newton (2003). Includes useful information about student satisfaction with the process of self-placement, including the feelings of at-risk students.

Grand Valley State University: Royer, Daniel J.; Roger Gilles (1998). Students do not write a placement exam, but rather are informed about courses and then choose on their own. For descriptions of applications of directed self-placement at other institutions, see Daniel Royer; Roger Gilles (Eds.) (2003).

Miami University [Ohio]: Daiker, Donald A.; Jeff Sommers; Gail Stygall (1996). One of the best known and best studied applications of placement by portfolio; the portfolios are carefully defined and constructed with the help of high-school teachers. See also Black, Laurel; Donald A. Daiker; Jeffrey Sommers; Gail Stygall (1992).

Missouri Western State College: Frick, Jane; Karen Fulton (1991). Describes modifications to the administration and reading of essays by faculty that improved inter-rater reliability and placement adequacy.

Southern Illinois University Carbondale: Blakesley, David (2002). Careful validation of their version of directed self-placement. See also Blakesley, David; Erin J. Harvey; Erica J. Reynolds (2003).

St. John Fisher College: Nicolay, Theresa Freda (2002). They eliminated basic writing and the use of a standardized placement test, and instead relied upon an essay written in the third week of the semester. At-risk writers were assigned writing-center work and one-on-one conferences with teachers.

State University of New York, Stony Brook: Robertson, Alice (1994). During summer orientation the placement test is run first as a class session, with a teacher, pre-writing exercises, and discussion of the topic with other students.

University of Arizona: Hindman, Jane E. (1994). Once in the course students study the examination that placed them there, looking at the criteria teacher-raters used, even scoring the essays themselves, thus turning a testing process into a learning one. See also McKendy (1990).

University of Louisville: Huot, Brian (1994). Students can submit a portfolio of high-school writings, read by university instructors, but also take the ACT; if placement differs, students choose which one they want. See also  Huot, Brian (2002), pp. 156-163, and Lowe, Teres J; Brian Huot (1997).

University of Michigan: Willard-Traub, Margaret; Emily Decker; Rebecca Reed; Jerome Johnston (1999). One of the first large universities to attempt placement of students through scrutiny of a portfolio of their work. See also Clark, Michael (1983) for an account of the benefits of teachers reading placement essays.

University of Pittsburgh: Smith, William L. (1993). Detailed analysis of the way changes in teacher scoring of placement essays improved the students’ chances in the courses.

University of South Carolina: Grego, Rhonda C.; Nancy S. Thompson (1995). The university phased out basic writing and hence placement, and met the needs of students struggling in regular composition with a “writing studio,” where students meet in groups to share problems and discuss solutions. See also Grego, Rhonda C.; Nancy S. Thompson (1996).

Washington State University: Haswell, Richard H.; Susan Wyche-Smith (1994). Students are placed on the basis of two essays, one of them a post-write piece. Teachers make placement decisions on this abridged portfolio by means of an especially efficient system in which obvious regular-course placements are read only once, and at-risk students are assigned the regular course plus an additional hour in a writing-center tutorial course. See also Haswell, Richard H.; Susan Wyche-Smith (1996).

Yale University: Hackman, Judith D.; Paula Johnson (1981). Sent students their scores and the average scores on national tests and allowed them to place themselves.

IV. References

Bers, Trudy H.; Kerry E. Smith (1990), Assessing assessment programs: The theory and practice of examining reliability and validity of a writing placement test. Community College Review, 18 (3), 17-27.

Black, Laurel; Donald A. Daiker; Jeffrey Sommers; Gail Stygall (1992), Handbook of writing portfolio assessment: A program for college placement. ERIC Document Reproduction Service, ED 350 617.

Blakesley, David (2002), Directed self-placement in the university. WPA: Writing Program Administration 25.2, 9-39.

Blakesley, David; Erin J. Harvey; Erica J. Reynolds (2003), Southern Illinois University Carbondale as an institutional model: The English 100/101 stretch and directed self-placement program. In Daniel Royer; Roger Gilles (Eds.), Directed self-placement: Principles and practices (207-241), Cresskill, NJ: Hampton Press.

Breland, Hunter M. (1977), Group comparisons for the Test of Standard Written English. ERIC Document Reproduction Service, ED 146 228.

Bridgeman, Brent (1991), Essays and multiple-choice tests as predictors of college freshman GPA. Research in Higher Education, 32 (3), 319-332.

Clark, Michael (1983), Evaluating writing in an academic setting. In Patricia L. Stock (Ed.), Fforum: Essays on theory and practice in the teaching of writing (59-79), Upper Montclair, NJ: Boynton/Cook.

Cornell, Cynthia E.; Robert D. Newton (2003), The case of a small liberal arts university: Directed self-placement at De Pauw. In Daniel Royer; Roger Gilles (Eds.), Directed self-placement: Principles and practices (149-178), Cresskill, NJ: Hampton Press.

Daiker, Donald A.; Jeff Sommers; Gail Stygall (1996), The pedagogical implications of a college placement portfolio. In Edward M. White, William D. Lutz, & Sandra Kamusikiri (Eds.), Assessment of writing: Politics, policies, practices (257-270), New York: Modern Language Association of America.

Elbow, Peter (1996), Writing assessment in the twenty-first century: A utopian view. In Lynn Bloom; Donald Daiker; Edward White (Eds.), Composition in the 21st century: Crisis and change (83-100), Cresskill, NJ: Hampton Press.

Elbow, Peter (2003), Directed self-placement in relation to assessment: Shifting the crunch from entrance to exit. In Daniel J. Royer; Roger Gilles (Eds.), Directed self-placement: Principles and practices (15-30), Cresskill, NJ: Hampton Press.

Frick, Jane; Karen Fulton (1991), Promises kept and broken: Holistically scored impromptu writing exams. ERIC Document Reproduction Service, ED 333 449.

Galbato, Linda; Mimi Markus (1995), A comparison study of three methods of evaluating student writing ability for student placement in introductory English courses. Journal of Applied Research in Community College, 2 (2), 153-167.

Grego, Rhonda C.; Nancy S. Thompson (1995), The writing studio program: Reconfiguring basic writing/freshman composition. WPA: Writing Program Administration 19.1-2, 66-79.

Grego, Rhonda C.; Nancy S. Thompson (1996), Repositioning remediation: Renegotiating composition's work in the academy. College Composition and Communication 47.1, 62-84.

Hackman, Judith D.; Paula Johnson (1981), Using standardized test scores for placement in college English courses: A new look at an old problem. Research in the Teaching of English 15.3, 275-279.

Hanson, Linda K. (1996), Placement for success: The writing program placement/credit portfolio. In Trudy W. Banta et al. (Eds.), Assessment in practice: Putting principles to work on college campuses (199-203), San Francisco, CA: Jossey-Bass.

Haswell, Richard H. (November, 2004), Writing placement in college: A research synopsis. http://comppile.tamucc.edu/writingplacementresearch.htm.

Haswell, Richard H.; Susan Wyche-Smith (1994), Adventuring into writing assessment. College Composition and Communication 45.2, 220-236.

Haswell, Richard H.; Susan Wyche-Smith (1996), A two-tier rating procedure for placement essays. In Trudy W. Banta et al. (Eds.), Assessment in practice: Putting principles to work on college campuses (204-207), San Francisco, CA: Jossey-Bass.

Hindman, Jane E. (1994), Letting students in on the secrets of evaluation and placement. ERIC Document Reproduction Service, ED 375 402.

Howard, Rebecca Moore (2000), Assumptions and applications of student self-assessment. In Jane Bowman Smith and Kathleen Blake Yancey (Eds.), Student self-assessment and development in writing: A collaborative inquiry (35-58), Cresskill, NJ: Hampton Press.

Hughes, Ronald Elliott; Carlene H. Nelson (1991), Placement scores and placement practices: An empirical analysis. Community College Review, 19 (1), 42-46.

Huot, Brian (1994), Beyond the classroom: Using portfolios to assess writing. In Black, Laurel; Donald A. Daiker; Jeffrey Sommers; Gail Stygall (Eds.), New directions in portfolio assessment: Reflective practice, critical theory, and large-scale scoring (325-334), Portsmouth, NH: Boynton/Cook.

Huot, Brian (2002), (Re)articulating writing assessment for teaching and learning. Logan, UT: Utah State University Press.

Huot, Brian (1994), A survey of college and university writing placement practices. Writing Program Administration, 17 (3), 49-67.

Janger, Matthew (1997), A statistical analysis of student progress and achievement in the pilot writing project at City College of New York. ERIC Document Reproduction Service, ED 416 805.

Larose, Simon; Donald U. Robertson; Roland Roy; Fredric Legault (1998). Nonintellectual learning factors as determinants for success in college. Research in Higher Education, 39 (3), 275-297.

Leonhardy, Galen; William Condon (2001), Exploring the difficult cases: In the cracks of writing assessment. In Richard H. Haswell (Ed.), Beyond outcomes: Assessment and instruction within a university writing program (65-80), Westport, CT: Ablex.

Lowe, Teres J; Brian Huot (1997), Using KIRIS writing portfolios to place students in first-year composition at the University of Louisville. Kentucky English Bulletin 46, 46-64.

McKendy, Thomas (1992), Locally developed writing tests and the validity of holistic scoring. Research in the Teaching of English, 26 (2), 149-166.

McKendy, Thomas (1990), Legitimizing peer response: A recycling project for placement essays. College Composition and Communication 41.1, 89-91.

Meyer, Russell J. (1982), Take-home placement tests: A preliminary report. College English, 44 (5), 506-510.

Nicolay, Theresa Freda (2002), Placement and instruction in context: Situating writing within a first-year program. WPA: Writing Program Administration 25.3, 41-60.

Pinter, Robbie; Ellen Sims (2003), Directed self-placement at Belmont University. In Daniel Royer; Roger Gilles (Eds.), Directed self-placement: Principles and practices (107-125), Cresskill, NJ: Hampton Press.

Robertson, Alice (1994), Teach, not test: A look at a new writing placement procedure. WPA: Writing Program Administration 18.1-2, 56-63.

Royer, Daniel J.; Roger Gilles (1998), Directed self-placement: An attitude of orientation. College Composition and Communication 50.1, 54-70.

Royer, Daniel J.; Roger Gilles (Eds.) (2003), Directed self-placement: Principles and practices. Cresskill, NJ: Hampton Press.

Shell, Duane F.; Carolyn Colvin Murphy; Roger Bruning (1986), Self-efficacy and outcome expectancy: Motivational aspects of reading and writing performance. ERIC Document Reproduction Service, ED 278 969.

Shermis, Mark D.; Jill Burstein (Eds.) (2003), Automated essay scoring: A cross-disciplinary perspective. Mahwah, NJ: Erlbaum.

Smith, William L. (1993), Assessing the reliability and adequacy of using holistic scoring of essays as a college composition placement technique. In Williamson, Michael M.; Brian Huot (Eds.), Validating holistic scoring for writing assessment: Theoretical and empirical foundations (142-205), Cresskill, NJ: Hampton Press.

Stock, William P.; Juan M. Flores; Linnea M. Aycock (1986), A study of English placement test subscores and their use in assigning CSU, Fresno freshmen to beginning English courses. ERIC Document Reproduction Service, ED 289 453.

Whitman, A. D. (1927), The selective value of the examinations of the College Entrance Examination Board. School and Society, 25 (April), 524-525.

Willard-Traub, Margaret; Emily Decker; Rebecca Reed; Jerome Johnston (1999), The development of large-scale portfolio placement assessment at the University of Michigan: 1992-1998. Assessing Writing 6.1, 41-84.

March 1, 2005 | Copyright, Rich Haswell