5/3/09

Unlearned Lessons: Six Mistakes of American Schools

Six stumbling blocks to our schools' success
by W. James Popham

If there’s any truth in the saying, “Those who don’t learn from their mistakes are destined to repeat them,” why is it that today’s educators seem almost compelled to replicate their predecessors’ blunders?

Having been a public school educator for well over a half-century, I am fed up with watching today's educators repeat precisely the same mistakes I've seen their predecessors make, again and again.

In the following analysis, I will identify a half-dozen mistakes we’ve made in our schools. Some are mistakes I made myself long ago, as a high-school teacher in Oregon or as a teacher educator at state colleges in Kansas and California and at UCLA. Others are mistakes I’ve observed up close and personal in the course of several decades of work in the field of assessment. Some are errors of commission, while others are errors of omission. All of them are mistakes that have a negative impact on a large number of classrooms. All of them diminish the quality of schooling we provide to our students.

Mistake #1: Too Many Curricular Targets

Educational policymakers have laid out an unreasonably large number of curricular aims for teachers to teach—too many to be taught and too many to be tested. These target aims—usually referred to as content standards, objectives, benchmarks, expectancies, or some synonymous descriptors—tend to resemble a curricular “wish list” rather than a realistic set of attainable educational outcomes. In some states, for example, elementary teachers at each grade level are supposed to teach their students to master 300 or more curricular aims by the end of a given year. Any of the 300 aims may appear on the state’s annual accountability assessments.

This problem has plagued us since the 1960s, when the growth of the “behavioral objectives” movement—an effort to identify what learners would be able to do as a result of instruction—resulted in the proliferation of highly detailed objectives (the equivalent of today’s curricular aims). While the movement itself was well intended, it ultimately failed because of this problem.

Because teachers are patently unable to get their students to master so many different aims at any defensible level of depth, what happens in many classrooms is a desperate effort in which teachers try to touch on all the skills and knowledge their students might encounter in the upcoming accountability tests. But content that has only been touched on is unlikely to make a real difference to students. Superficiality rather than meaningful instruction is fostered.

Moreover, because accountability tests can’t possibly include enough items to measure accurately every curricular aim, the people who build those tests are required to sample from the skills and knowledge that are eligible to be tested. As a result, many teachers try to guess which curricular aims will actually be on the test. If they guess wrong, the tests wind up measuring content the teachers didn’t teach and failing to measure content that they did. Thus the state’s annual accountability tests provide a misleading picture of educators’ instructional success.

To fix this problem, state educational leaders must reframe their state’s curricular aims at a more appropriate level of breadth—that is, at a larger “grain size.” Second, they must prioritize the resultant curricular aims so that only the most important can be designated as potentially assessable each year. These broad aims can encompass subsets of lesser skills and knowledge, but must be stated with sufficient clarity so teachers can properly target their instructional activities. Moreover, with fewer curricular aims to assess, the state’s test developers can include enough items to measure each one accurately.

Mistake #2: The Underutilization of Classroom Assessment

For more than a decade we have had access to empirical research showing conclusively that when teachers employ formative assessment in their classrooms, whopping improvements in students’ learning will take place. Yet too many teachers continue to employ classroom assessments exclusively to grade students or to motivate those students to study harder.

Formative assessment is a planned process in which assessment-elicited evidence of students' status is used by teachers to adjust their ongoing instructional procedures or by students to adjust their current learning tactics. Although two seminal articles on formative assessment were published as far back as 1998, one in a prestigious scholarly journal and the other in the widely read Phi Delta Kappan, both authored by British researchers Paul Black and Dylan Wiliam, these insights have yet to be applied in classrooms on a large scale. To the extent that our teachers are not routinely employing classroom formative assessment as part of their regular instructional activities, students are not being as well educated as they could be.

We need to get the word out to the nation’s teachers that formative assessment is capable of triggering big boosts in students’ achievement—the educational equivalent of a cure for the common cold.

For instance, in 2006, the Council of Chief State School Officers (CCSSO) established a standing advisory committee to investigate ways to promote the use of formative assessment. At the same time, CCSSO created a collaborative of about 20 states committed to implementing formative assessment. By 2008, the CCSSO National Conference on Student Assessment was giving substantial attention to formative assessment, in addition to its traditional focus on large-scale assessment. The CCSSO story could obviously be replicated in other associations—associations of teachers and educational administrators, school boards, or other education advocacy groups.

We must also create a variety of mechanisms, such as diverse professional development programs and supportive assessment materials, to help more teachers apply this potent process in their classrooms.

Mistake #3: A Preoccupation with Instructional Process

Many teachers focus almost obsessively on the instructional procedures they use, rather than on the impact those procedures have on students. This overriding attention to “what teachers do in class” instead of “whether students learn” seems to have plagued teachers from time immemorial. In practice, it means that teachers spend too little time evaluating the quality of their instructional activities. If instructional activities have not been evaluated, they may be less effective than teachers believe them to be. As a result, many teachers persist in employing instructional activities that are of limited benefit to students.

Teachers who have a clear grasp of the relationship between educational ends and means are more likely to understand the importance of routinely verifying the quality of their instructional procedures (means) according to the impact those procedures have on students (ends). The nature of this means-ends relationship—and the need to evaluate means according to the ends they produce—must be emphasized in the preservice preparation of teachers and administrators. In addition, the nation’s leading professional organizations should collaborate to urge educators to pay greater attention to the outcomes of instruction rather than the nature of instructional procedures per se.

Mistake #4: The Absence of Affective Assessment

At this moment in our schools, there is a dearth of assessment instruments suitable for measuring students’ affect—that is, students’ attitudes, interests, and values. Although most educators, if pushed, will agree that the promotion of appropriate affective outcomes is as important as—and in some instances even more important than—the promotion of students’ cognitive achievements, almost no systematic attention is given in our nation’s classrooms to the promotion of appropriate affect among students.

This problem dates back at least to the 1950s and 1960s. When Benjamin Bloom and his colleagues published the influential Taxonomy of Educational Objectives in 1956, they separated educational objectives into three major categories: cognitive, affective, and psychomotor. The book’s focus, however, was almost completely on cognitive objectives. A subsequent volume by David Krathwohl and two coauthors, published in 1964, laid out a hierarchical taxonomy of affective objectives. But the book never enjoyed the success of its cognitive cousin.

Because of this overemphasis on cognition, the affective consequences of instruction are unpredictable and sometimes harmful. For instance, a child who dislikes reading or is intimidated by mathematics develops attitudes in school that are almost certain to have a negative impact on his or her life.

A prominent reason that schools pay little attention to students’ affect is the absence of assessment instruments suitable for measuring students’ attitudes, interests, and values. At a fairly modest cost, however, governmental and/or nongovernmental agencies could provide teachers with a wide range of survey instruments to be completed anonymously by students, so that teachers can adjust their instructional activities accordingly. Teachers may also benefit from professional development regarding ways to promote appropriate student affect.

Mistake #5: Instructionally Insensitive Accountability Tests

Almost all the accountability tests being used to evaluate our nation's schools are incapable of doing so. Since the beginning of the educational accountability movement in America, nearly a half-century ago, we've been using the wrong measurement tools to judge the quality of America's schools. Nearly every state-level accountability test tends to measure the composition of a school's student body—what students bring to school in terms of socioeconomic status or inherited academic aptitude—rather than the success with which those students have been taught. As a result, enormous numbers of U.S. schools are currently being inaccurately evaluated. Effective schools are thought to be failing; so-so schools are seen as successful.

These inaccurate evaluations of school quality have both an immediate and long-term harmful impact on our students. For instance, teachers who are doing a good instructional job, but whose students’ test results (inaccurately) indicate otherwise, may abandon effective instructional techniques and adopt less effective ones. Teachers who are doing an inept instructional job, but whose students’ test results (inaccurately) indicate otherwise, are apt to continue using unsound teaching procedures. In both of these scenarios, students end up as the losers.

What we must do—immediately—is replace today’s instructionally insensitive accountability tests with those that can, with accuracy, sort out schools where students are being well taught from schools where students are not. Evaluating tests for instructional sensitivity could follow the same basic strategy as that used to evaluate items for racial, gender, or class bias—a combination of expert judgment and empirical evidence. An overhaul on this scale will be an expensive undertaking, but the cost and effort are justified by the educational damage caused by our current reliance on instructionally insensitive tests.

Mistake #6: Abysmal Assessment Literacy

At a time when test-based accountability dramatically influences what goes on in our schools, far too few educators understand the fundamentals of educational measurement. Yet educational decisions affecting the nation's youth increasingly depend on the results of educational tests. Assessment-dependent educational decisions call for assessment-knowledgeable educators. Teacher education, however, has not changed to accommodate this demand. Preservice and professional development initiatives need to address two key areas: classroom assessment and accountability assessment. It is patently absurd for teachers and administrators not to understand the instruments by which their professional competence is judged, and on which critical educational decisions are based.

Moxie, Not Money

Fortunately, almost all of these mistakes can be solved with moxie—not money.

"Moxie"—the slang synonym for courage or boldness—traces its roots to America's first mass-market soft drink. Distributed in Lowell, Mass., during the mid-1920s, Moxie was a fizzed-up version of an 1884 patent-medicine tonic said to cure "brain and nervous exhaustion, loss of manhood, softening of the brain, and mental imbecility." It is small wonder, then, that the name of this popular New England soda soon became a descriptor for someone with plenty of nerve.

Given the current status of the U.S. economy, most improvement strategies we adopt will need to be based on educators' moxie rather than taxpayers' largesse. Those involved must be committed to the belief that the problem under consideration warrants attention. I personally believe that each of the six deficits identified here could, if fixed, make a dramatic difference in the way we educate our students. What's needed is a clear commitment to remedying them—and sufficient moxie to make that remedy work.

W. James Popham is an emeritus professor at the UCLA Graduate School of Education and Information Studies. This article is adapted from his most recent book, Unlearned Lessons: Six Stumbling Blocks to Our Schools’ Success (Harvard Education Press, 2009).
