The Oct. 10 luncheon in a Dallas Hyatt Regency ballroom was one part pep rally and one part political convention.
As college fight songs blared from loudspeakers, the guests of honor, 26 school principals, entered through a towering star and strode across a stage festooned with yellow and black balloons.
Cheering them on were Texas Gov. George W. Bush, Dallas Mayor Ron Kirk, County Judge Lee Jackson and executives from dozens of the city’s most prominent businesses.
In a speech broadcast onto huge screens flanking the stage, the governor hailed the 26 schools as “centers of excellence that can be used as examples across the state.”
The occasion was the fourth annual Excellence in Education ceremony honoring winners of the Dallas Independent School District’s School Performance Improvement Awards. But the hoopla was only icing on the cake for principals and teachers at the 26 schools, who already had received $1,000 bonuses. Other staff members got $500, and the schools’ activity accounts got $2,000.
The next 47 schools in the district’s rankings, whose improvement was significant but not as dramatic as the 26, also got financial rewards—$450 for each teacher and principal, $225 for every other staff member and $1,000 for the activity account. In all, the district divvied up $2.5 million at 73 of its 200 schools.
At the other end of the ranking, the lowest-ranked schools will get a year’s worth of extra scrutiny. In the past, principals of such schools have been called before the School Board and reassigned.
The winning schools are a diverse lot, including an elementary school that serves the city’s homeless shelters, a middle school that serves a largely low-income Hispanic community and a magnet high school for gifted and talented youth. At each, students achieved better results—mainly on standardized tests—than the district had predicted for them based on their past performance.
In Las Vegas, they’d call it beating the house—the campuses topped the marks set for them by a sophisticated and sometimes controversial computer-generated analysis that defines effectiveness in the nation’s eighth-largest school district.
The district has fine-tuned the statistical analysis each year it has been used. This year, the ranking will be used to rate the work of administrators and principals, and a variation of it will be used to evaluate teachers.
Schools now focused
After years of stagnation, Dallas’s scores on the Iowa Tests of Basic Skills rose across the board last year; school officials said the gains were the largest in 20 years. And Supt. Chad Woolery and a majority of school board members credit the accountability program.
“Clearly, student achievement has improved,” observes William Webster, the district’s research and evaluation chief and architect of the accountability system. “I’ve heard some people say this [accountability system] is the cause.”
“Well, cause is hard to assign,” he says. “I think what this has done is, it has focused the district on what is important, what it is we want our kids to do.”
For their part, educators at award-winning campuses say that the recognition reinforces their efforts rather than motivates them.
“It affirms that my commitment to my school and my students is worthwhile,” says Eunice Vera, a teacher. “We would not and did not do anything differently to win the award.”
Judy Meyer, a principal, agrees. “What it does is reaffirm what you are doing,” she says.
Critics of the accountability system, including teacher unions and some administrators, say it remains a mystery to most of the people whose work it measures. Critics also are concerned about the heavy emphasis on test scores, which they say skews the curriculum and forces teachers to spend too much time on test preparation.
The system was one of several broad recommendations made in 1991 by a school board-appointed citizens’ Commission on Educational Excellence, which studied school improvement issues for a year before issuing a report. Other recommendations were for more school-based decision making by parents and educators, better-focused staff training and stronger links between schools and social service agencies.
Tying employees’ evaluations to the new system was “the great dream of the commission,” recalls Dallas School Board President Sandy Kress, who chaired the panel before being elected to the board. Under the current evaluations, which rest on classroom observations, nine out of 10 Dallas teachers are rated “exceeding expectations” or “clearly outstanding.”
Until this year, technical problems stood in the way of factoring in student test scores, according to Webster. Changes in the state’s education law and modifications to make the statistical model work at the classroom level now allow test scores to be part of the new evaluation.
While the district does not intend to publish a rank order, it will select teachers at the top and bottom for special treatment: Evaluations will be waived for two years for teachers in the top 40 percent, while teachers in the bottom 10 percent will be required to take extra training to improve instruction in the areas of greatest weakness for their students.
Teacher unions, which do not have collective bargaining rights in Texas, oppose the plan, charging that it fails to account for the level of support teachers get from principals or the central office, such as adequate supplies and manageable class sizes.
For administrators and principals, the school effectiveness ratings have been an implicit part of their own evaluations for several years. Principals in many low-ranked schools have been reassigned, as have several of the area superintendents who oversaw the campuses.
This year, principals and administrators are setting their goals based on each “outcome” that is part of the effectiveness ranking—for example, 3rd-grade reading scores or student attendance. At the end of the year, they will be judged on improvement in those results.
Leveling the field
Listening to Webster explain how the accountability system works is a dizzying lesson in statistical analysis. While focusing on movement in individual student test scores from one year to the next, the system filters out the factors over which schools have no control, such as poverty, ethnicity, parents’ education, school overcrowding and limited proficiency in English. Webster says that 15 percent to 20 percent of the differences in school performance can be traced to the social and economic circumstances of children’s lives.
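The article does not spell out the model’s mechanics, but the approach Webster describes (predicting each student’s score from prior performance and background factors, then crediting schools for gains beyond that prediction) follows the general logic of value-added analysis. A minimal sketch of that idea, using invented data and variable names rather than Dallas’s actual model:

```python
import numpy as np

# Invented student-level data: prior-year score plus background factors
# the school cannot control (poverty, parents' education, and so on).
rng = np.random.default_rng(0)
n = 1000
prior_score = rng.normal(50, 10, n)
low_income = rng.integers(0, 2, n).astype(float)
parent_ed = rng.integers(0, 2, n).astype(float)
school_id = rng.integers(0, 20, n)          # 20 hypothetical schools

# Simulated current-year scores, for illustration only.
current_score = (5 + 0.9 * prior_score - 2.0 * low_income
                 + 1.5 * parent_ed + rng.normal(0, 5, n))

# Step 1: predict each student's expected score from prior performance
# and background factors alone (ordinary least squares with an intercept).
X = np.column_stack([np.ones(n), prior_score, low_income, parent_ed])
coef, *_ = np.linalg.lstsq(X, current_score, rcond=None)
expected = X @ coef

# Step 2: a school's effectiveness is how far its students land, on
# average, above or below their predicted scores.
residual = current_score - expected
school_effect = {s: residual[school_id == s].mean() for s in range(20)}

# Schools are ranked on gains over prediction, not on raw scores.
ranking = sorted(school_effect, key=school_effect.get, reverse=True)
```

Ranking on the gain over prediction rather than on the raw score is what lets a low-scoring school that makes exceptional progress outrank a high-scoring one that stands still.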
Another way the system levels the playing field among schools is by including only those students who are enrolled at a given school for the entire school year. Further, the district ranks schools by type—primary, intermediate, regular elementary, middle, high, and magnet—and allots a proportional amount of incentive money for each group of schools.
The system also places a premium on certain outcomes at each grade level, as determined by an accountability task force that includes parents, teachers, principals and community representatives. This year, for example, the outcome that carries the most weight in elementary and middle schools is the reading section of the Texas Assessment of Academic Skills, which is given in 3rd through 8th and 10th grades. The TAAS is a criterion-referenced test, meaning that student performance is measured against specific achievement standards.
For 9th-graders, the reading and math sections of the Iowa Tests of Basic Skills (ITBS) will outweigh the standardized final exams developed by the school district itself. The ITBS, which Dallas administers in 1st through 9th grades, is a norm-referenced test, which means that student performance is measured against the average performance of all other students who took the test. (Chicago uses the ITBS in 3rd through 8th grades and the counterpart Tests of Academic Proficiency in 9th and 11th.)
With the exception of a TAAS writing sample in 4th, 8th and 10th grades, all the tests are multiple-choice.
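Taken together, those weights mean a school’s standing is a composite of its gains on each outcome, with the task force deciding how much each one counts. A toy illustration of such a weighted composite (both the weights and the gain figures below are invented, not the district’s):

```python
# Invented gain scores for one elementary school, each expressed as
# improvement over the level predicted for its students.
outcome_gains = {
    "taas_reading": 4.2,    # the most heavily weighted outcome at this level
    "taas_math": 3.1,
    "itbs_reading": 1.8,
    "attendance": 0.5,
}

# Illustrative weights; in Dallas the accountability task force sets them.
weights = {
    "taas_reading": 0.40,
    "taas_math": 0.30,
    "itbs_reading": 0.20,
    "attendance": 0.10,
}

composite = sum(weights[k] * outcome_gains[k] for k in outcome_gains)
print(f"Composite improvement index: {composite:.2f}")   # 3.02 here
```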
“One of the things that always surprises me when we run the equations is how much ‘face validity’ they have,” says Webster. “I look at the list, think about who runs a school, and see how it ends up where it is.” Face validity—the idea that the rankings make common sense—”obviously is one of the criteria on which we are judged.”
Whatever acceptance the accountability system now enjoys has been earned over time, particularly as people have come to understand that it puts a premium on improvement, not simply on high scores. Upon seeing the first list of top-ranked schools four years ago, one parent on the accountability task force challenged the rest of the group: “How many of you would send your students to some of the schools being awarded?”
Acknowledging the lack of understanding of the statistics involved, Webster says, “All you have to do as a teacher is improve your kids. If you improve your kids, you’ll do well under this system.”
Part of the confusion arises because the State of Texas put its own school accountability system in place at roughly the same time Dallas did.
The state’s system looks at absolute levels of performance—not improvement—using the percentage of students passing the TAAS to categorize schools as exemplary, recognized, acceptable or low-performing. As a result, a generally low-scoring school that made exceptional progress could fare poorly under the state’s system but well under the district’s.
(Last school year, the state also began reporting categories for certain groups of students within each school—whites, blacks, Hispanics and disadvantaged students.)
Teacher unions have complained about the heavy emphasis on standardized tests in both the state and Dallas systems. Last year, the Texas affiliate of the National Education Association claimed that teachers statewide were spending a third of their class time on TAAS preparation.
When Woolery became superintendent in 1993, he told principals that “unparalleled mastery” of the TAAS was the district’s foremost goal.
But Oscar Rodriguez, a principal and president of the district’s administrators association, warns: “Anytime you assign money to stuff, you begin this almost rabid competition that may not always be positive.”
The district learned that the hard way, through a cheating scandal. Two years ago, an elementary school was dropped from the ranking when investigators found “an alarming relationship” between erasures and correct answers on its 4th-graders’ norm-referenced reading test.
The school caught analysts’ attention during a routine check to detect unusual jumps in results. In this case, the 4th-grade reading score had soared from the 33rd percentile to the 90th in a year’s time. The school’s principal was demoted, but no individual educators were ever linked to the altered answer sheets.
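The article doesn’t say how the routine check works; one plausible sketch is simply flagging any school whose year-over-year jump on a key outcome exceeds a set threshold, then sending its answer sheets for erasure analysis. The threshold and figures below are assumptions for illustration:

```python
# Invented reading percentiles for a few schools: (last year, this year).
scores = {
    "School A": (33, 90),   # the kind of jump described in the article
    "School B": (48, 55),
    "School C": (61, 58),
}

JUMP_THRESHOLD = 25  # percentile points; an assumed cutoff, not the district's

flagged = [name for name, (prev, curr) in scores.items()
           if curr - prev > JUMP_THRESHOLD]
print("Flag for erasure analysis or retesting:", flagged)   # ['School A']
```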
This year, the district has had to retest students in about six schools in an effort to confirm drastically improved scores, Webster reports.
But allegations of cheating have been few, given the size of the school district.
“We’ve always published test scores. The pressure was always there,” says Webster. “What this does is eliminate cheating” by requiring more scrutiny of scores.
Joseph Garcia is an education reporter for the Dallas Morning News.