Is it possible to measure a teacher’s effect on student performance?
Not only does William Sanders think it's possible; the former University of Tennessee professor has created a statistical program that inputs hundreds of test scores for individual students and produces ratings for their teachers.
Analyzing his own results, Sanders has found that individual teachers have an enormous impact on student learning. In a study of teachers in Tennessee, he found that low-performing students who had three good teachers in a row were much more likely to succeed in high school than those who had a succession of three bad teachers.
Adopted by the Tennessee legislature in 1992, Sanders’ so-called Value-Added Assessment System is now being used statewide there and in more than 300 districts across the country.
By tracking individual students’ test scores over time, Sanders, who now manages value-added assessment and research at SAS Institute in Cary, N.C., says his program can identify which teachers raise test scores against the odds and which simply were blessed with easy-to-teach students.
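Sanders' actual computing formulas are proprietary, but the core idea described above can be sketched in miniature: predict each student's expected score gain from their prior score, then credit a teacher with the average amount by which their students beat (or fall short of) that prediction. The sketch below is a deliberately simplified gain-residual illustration with invented students and numbers, not Sanders' model.

```python
# Toy illustration of the value-added idea (NOT Sanders' proprietary model).
# A teacher's estimated effect is the average residual: how much their
# students' year-over-year gains exceed the gain predicted from each
# student's prior score. All names and scores below are invented.

from statistics import mean

# (prior_score, current_score, teacher) for hypothetical students
records = [
    (60, 72, "A"), (55, 70, "A"), (80, 88, "A"),
    (62, 64, "B"), (58, 59, "B"), (78, 80, "B"),
]

# Fit a simple least-squares line: expected gain as a function of prior score
priors = [p for p, _, _ in records]
gains = [c - p for p, c, _ in records]
mp, mg = mean(priors), mean(gains)
slope = sum((p - mp) * (g - mg) for p, g in zip(priors, gains)) / \
        sum((p - mp) ** 2 for p in priors)
intercept = mg - slope * mp

def value_added(teacher):
    """Mean residual (actual gain minus predicted gain) for one teacher."""
    resid = [(c - p) - (intercept + slope * p)
             for p, c, t in records if t == teacher]
    return mean(resid)

for t in ("A", "B"):
    print(t, round(value_added(t), 2))
```

Even this toy version shows the point of the approach: teacher A's students start at roughly the same scores as teacher B's, so the difference in average residuals reflects what happened during the year rather than which students walked in the door. A real system like Sanders' layers many years of scores, multiple subjects, and statistical safeguards on top of this basic logic.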
Joel Giffin, principal of Maryville (Tenn.) Middle School, says Sanders’ data are an ideal way to begin discussions about teacher performance and pinpoint areas of weakness. “We’ve been able to analyze what’s going on, look at curricular elements, and look at teacher performance to make the right changes,” he says.
A few years ago, for instance, the data zeroed in on a group of 20 students who were struggling more than others in math. Teachers decided to add a second math period for those students to get assistance with homework, tutoring and feedback on their work. “We changed the world for those 20 kids,” Giffin says. Without the data, “we wouldn’t have ever found the problems.”
Principals elsewhere share similar experiences. “You talk to some of the principals who have this information, and they say it’s really an eye-opener,” says Kevin Carey, senior policy analyst at the Washington-based Education Trust. “It allows them to design their professional development programs in a way that targets different teachers.”
However, critics of Sanders’ work say the system lacks transparency and provides no insight into why certain schools, departments or teachers are failing.
Most educators support the concept of value-added assessment but are suspicious of Sanders' reluctance to disclose the formulas the system uses to calculate its results, says Rob Weil, deputy director of educational issues for the American Federation of Teachers.
“Our methodology is published for God and everyone to see,” Sanders responds, but he maintains that the computing formulas are proprietary.
Carey backs him up. Value-added assessment is complicated “because it goes to great lengths to be fair,” he says. “There’s a bit of a trade-off in terms of transparency.”
Tom Blanford of the National Education Association says Sanders’ system offers no guidance to help teachers improve. “It just serves a sorting function.”
Giffin agrees but points out that data can help identify where problems are occurring. “It could be the high kids, it could be the low kids, it could be the black kids, it could be the special ed kids, whatever,” he says. “What we do is sit down as a group and dig into that.”
Still, Weil suggests value-added data are better at identifying teachers at the top and bottom of the scale than at measuring differences among teachers in the middle. "The technology does not allow you to cut it that fine," Sanders concedes.
For that reason, critics and some supporters of value-added assessment say principals should be cautious in using the data to make decisions about teachers. Carey of the Education Trust says such decisions should not be based solely on value-added measures.
Blanford of the NEA rules it out altogether. “If the point is to punish and reward, we don’t think that’s of value to the teaching profession,” he says. “If the point is to help teachers identify areas in which they can focus or work on to improve their instructional practice, that’s a different ballgame.”
Charlotte Danielson, who developed a standards-based evaluation tool, suggests value-added data are too "simplistic" to capture teachers' performance. "Politicians like it because it makes intuitive sense."
However, Sanders says that by looking at students' test scores over a number of years, "you're looking at hundreds of pieces of information."