Everybody agrees: We need Common Core standards because “many 17-year-olds do not possess the higher order intellectual skills we should expect of them.”
The thing is, that quote doesn’t come from recent work on the Common Core. It comes from A Nation at Risk, the document that launched the standards and accountability movement close to 30 years ago. Thirty years earlier, the same critique launched the curricular reforms of the 1960s, and before that, it spawned the Progressive Era of John Dewey.
We’ve been agreeing about this problem for quite a while. But we’re still not very good at teaching all or most of our students “the higher-order intellectual skills we should expect of them.”
What are we missing?
The Common Core is all about deep understanding. Intuitively, deep understanding sounds like a matter of adding more to what you already know. But the evidence from modern learning science points in a different direction: Deeper understanding typically starts by letting go of something you already “know” so you can reincorporate that knowledge into a deeper, more comprehensive system of explanation.
Consider this example. Despite what all of us were taught in grade school, most of the American population still believes that temperatures get warmer in summer because the Earth moves closer to the sun. That’s a good guess, but bad science. One explanation for why we keep getting this one wrong is that most of us aren’t very smart. A better explanation is that common sense and intuition always trump formal knowledge until there’s a compelling reason to let intuition go.
How does all this relate to the Common Core? For generations, our intuitions have told us that “teaching the curriculum” is mostly about adding skills and information, and filling in gaps left by holes in prior experience. Over the last two decades, the standards and accountability movement has doubled down on those intuitions, first by formalizing them into massive constellations of standards and performance indicators, and second by pairing those constellations up with equally massive systems of commercially-developed assessments.
So we trusted our intuition, we doubled down on our bet, and we lost. Now we have a choice. Do we double down again, or do we let go of some comfortable intuitions and start putting our money on a different horse?
Rethinking our intuitions
Here are four familiar intuitions, each deeply reinforced by existing approaches to assessment, that we need to confront. If we don’t find a way to get past them, they’ll kill the Common Core.
Intuition #1: Mastery of skills and procedures is the main show
James Stigler, James Hiebert and their international research team have spent over 15 years reviewing thousands of hours of videotaped instruction from around the world as part of the Third International Mathematics and Science Study (TIMSS). One of their most unexpected findings was that teaching is more of a “cultural activity” than an individual one. They found that underlying patterns in the way people teach are very different from one country to the next, but are remarkably similar within countries.
The pattern that Stigler and Hiebert found in every single American classroom they studied was that teachers spent large amounts of time reviewing material and practicing mathematical procedures without expecting students to grasp the underlying concepts on which skills and procedures were based.
By contrast, teaching strategies in every one of the world’s higher-achieving countries regularly engaged students in active struggle with core mathematics concepts and procedures. Teachers in some high-achieving countries might lecture while teachers in others might focus more on group problem-solving. The technique wasn’t the key; the active struggle was.
Intuition #2: Commercial test design is objective, precise and scientific
The American bias toward teaching skills and procedures in ways that are divorced from conceptual underpinnings is a familiar target of progressive reform. A major irony of No Child Left Behind is that, far from confronting this bias, NCLB led us to write standards and report test results in ways that reinforced the bias more systematically than ever.
The process began when states and districts formalized standards by reducing them to lengthy lists of discrete skills and procedures. Then commercial publishers were contracted to produce testing systems that matched. On its face, this approach seemed pretty straightforward. But test publishers knew they had a problem. They knew that standardized tests are poorly designed to measure discrete skills and procedures.
Publishers finessed this problem by sorting test questions into a small number of “content strands” that purport to measure mastery of specific standards. They did that knowing full well that standardized test items almost always measure more than one standard at a time, and are less about specific skills than about students’ ability to handle different kinds of academic complexity. In the end, states and test publishers fulfilled their NCLB obligations by putting their stamp of approval on a deeply compromised reporting procedure that is at best ambiguous, and at worst downright misleading. So much for scientific precision.
How this could have happened on a nationwide scale without someone blowing the whistle is not really clear. What is clear is that both content strands on standardized tests, and the “know and be able to do” mantra from which they derive, reflect a skill-based mindset that is out of sync with modern learning science and runs contrary to the goals of the Common Core.
An old adage in systems theory holds that “your system, any system, is perfectly designed to produce the results you’re getting.” In recent years, we’ve done an ever more perfect job of designing our system to reduce what we teach to discrete skills and procedures. Unless we confront that bias, we will continue to assess and report learning in ways that will doom the Common Core.
Intuition #3: The best way to improve assessment at scale is to do that job for teachers so that teachers have more time to “just teach”
Another powerful irony of No Child Left Behind is that the rise of outsourced assessment coincided with strong evidence from the research community that frequent, high-quality classroom assessment produces achievement gains that far exceed those of any other single intervention strategy. “Inside the Black Box,” the now-classic study of classroom assessment practices, reported strong academic gains when those practices were in place. At the top end of the range, the gains were roughly equivalent to the difference between overall averages on state and national tests and the averages posted by our lowest-achieving schools.
Given what we know about the culture of American teaching and the power of high-quality classroom assessment, the troubling thing about current work on Common Core assessment is that we seem to be doubling down again on outsourcing, this time with tests that are being developed for teachers by the PARCC and SMARTER Balanced multi-state consortia. We’re not hearing much yet about how these systems will help teachers learn more about classroom assessment practices like the ones described in “Inside the Black Box.”
What would it take to improve local assessment at scale? Finland confronted the problem in the 1970s and 80s by investing deeply in teacher learning. A generation later, Finland vaulted from the middle of the pack to one of the highest-achieving nations in the world.
During the same period, American schools invested heavily in externally-developed systems and tightened administrative oversight at the school and district level. In 2006, Dennis Shirley and Andy Hargreaves assessed the impact of this approach in an article for Education Week called, “Data Driven to Distraction.” Their conclusion was that teachers were typically left with “little chance to consider how best to respond to the figures in front of them . . . There are few considered, professional judgments . . . just simplistic solutions driven by the scores and the political pressures behind them.”
Intuition #4: Standardized testing is inherently sterile and inauthentic
So why not just get rid of standardized assessment altogether? Grant Wiggins, a longtime advocate of progressive curriculum and assessment reform and a vocal critic of rote teaching and learning, offered up some solid reasons in a recent article for Educational Leadership called “Why We Should Stop Bashing State Tests.”
In it, he writes that test-prep ‘teaching’ and test bashing both have it wrong. Why? His analysis of released test items from Massachusetts, Florida and Ohio shows that “. . . the test items that our students do most poorly on demand interpretation and transfer, not rote learning and recall. Better teaching and (especially) better local testing would raise state test scores.”
The surprising implication of Wiggins’ analysis is that standardized testing doesn’t have to be the Darth Vader of school reform. Released test items and full reports of student responses can actually deepen the way we think about teaching and learning in ways that other forms of assessment cannot. They can also give us better insights about how to improve local assessment practices in ways that directly support the goals of the Common Core.
The Urban Education Leadership Program at the University of Illinois at Chicago has been working at this kind of reporting for a number of years now. The result has been a growing range of protocols that are designed to help grade-, department-, and school-level learning teams get a clearer picture of where students are getting stuck and why. Rather than producing laundry lists of discrete skills that need to be remediated, these protocols help teachers identify patterns of thinking and forms of academic complexity that stump students.
Rather than producing pre-packaged sets of answers about who-should-be-taught-what-tomorrow-morning, the aim of these protocols is to support collective analysis and adult learning. The purpose of that learning is to produce more thoughtful and challenging assignments that can only be created by classroom teachers and collaborative teacher teams.
Better thinking about assessment
In School Reform from the Inside-Out, Richard Elmore writes that, “Improvement at scale is largely a property of organizations, not the pre-existing traits of the individuals who work in them.” One key property that supports improvement at scale is an information system that fosters adult collaboration and accelerates adult learning. In most American schools, systems like that are still in short supply.
Teachers and students both learn best when they can depend on frequent, high-quality feedback about the work they do. For the most part, the feedback we’re getting from outsourced assessment systems is poorly designed to improve either student or adult learning and does not support the goals of the Common Core. This problem has less to do with assessments themselves than with how assessment results are reported. Reporting that emphasizes mastery of discrete skills works against Common Core goals by steering attention away from more complex aspects of teaching and learning. Reporting that pre-packages results for teachers denies teachers access to more nuanced aspects of student thinking that hold the key to deeper learning.
Common Core standards pose the most fundamental challenge to the culture of American teaching since the Progressive Era of John Dewey. To succeed where Dewey and others have failed, we need to build coordinated systems of local and external assessment that work together to support ambitious learning by students and adults.
Insisting on more thoughtful reporting of state and district assessments will be an important first step toward scaling up improvement of local assessment, where the pay-offs can be huge.
Paul Zavitkovsky is a former CPS principal and currently works as a leadership coach and assessment specialist at UIC’s Urban Education Leadership Program.