The current trend toward machine-scoring of student work,
Ericsson and Haswell argue, has created an emerging issue with
implications for higher education across the disciplines, but with
particular importance for those in English departments and in
administration. The academic community has been largely silent on the
issue (some would say excluded from it), while the commercial entities
that develop essay-scoring software have been very active.
Machine Scoring of Student Essays is the first volume
to seriously consider the educational mechanisms and consequences
of this trend, and it offers important discussions from some of the
leading scholars in writing assessment.
Reading and evaluating student writing is a time-consuming
process, yet it is a vital part of both student placement and
coursework at post-secondary institutions. In recent years,
commercial computer-evaluation programs have been developed to
score student essays in both of these contexts. Two-year colleges
have been especially drawn to these programs, but four-year
institutions are adopting them as well because of the
cost savings they promise. Unfortunately, to a large extent, the
programs have been written, and institutions are installing them,
without attention to their instructional validity or adequacy.
Since education software companies are moving so rapidly
into what they perceive as a promising new market, a wider
discussion of machine scoring is vital if scholars hope to
influence the development and implementation of the programs being
created. What is needed, then, is a critical resource to help
teachers and administrators evaluate programs they might be
considering, and to more fully envision the instructional
consequences of adopting them. And this is the resource that
Ericsson and Haswell are providing here.