Bureaucratic teacher evaluations bring no change
Back in April 2011, the Globe editorial page touted “Education Commissioner Mitchell Chester’s proposed regulations linking teacher evaluations to student performance” as “a long-awaited step toward rewarding effective teachers and unmasking incompetent ones.” Many have seen the new evaluation system as a huge step forward, but I’ve always been highly skeptical that it will do anything but create a lot more paper.
In this regard, as I noted at the time, I think the Worcester Telegram & Gazette was the media outlet with the most detailed and accurate view of the new evaluations:
The state’s new regulations for the evaluation of educators… establish that MCAS test results will play some role in teacher evaluations; they state that student and teacher feedback are to be included in the evaluation process, eventually; and they allow for the inclusion of existing measures of progress at individual schools or in districts.
But those points don’t arrive until three-quarters of the way through a 20-page thicket of definitions, standards and indicators, most of which are painfully obvious, vaguely phrased, repetitive, or offer little specific guidance to educators. And the regulations never state exactly how much weight MCAS will have, exactly how teacher and student feedback will be factored into evaluations, and who is to decide whether a district or school’s existing evaluation process is good enough.
In fact, the regulations lay out 16 “indicators” for teacher standards in the areas of Curriculum and Planning, Teaching All Students, Family and Community Engagement, and Professional Culture. There are 20 such “indicators” for administrators, reaching into every conceivable area of day-to-day school management…
It isn’t clear to us how any of this will help districts rid themselves of bad teachers any more quickly, ensure such teachers aren’t passed around within or between systems, or, on the positive side, facilitate the recruitment, promotion and rewarding of excellent teachers.
We were hoping for a far more succinct, specific and clear set of expectations that would promote accountability and excellence. Instead, by virtue of their length, complexity and open-ended language, these new educator evaluation regulations strike us as an excellent way to create more work and worry for administrators and teachers, while ensuring plenty of new grist for the wheels of bureaucracy that revolve at the state Department of Education.
If it were up to us, we’d declare these new regulations “unsatisfactory,” take an eraser to the whole blackboard, and start over.
Of course, the proof will be in what we actually see the education sector do. As would be the case in any sector (business or public), authentic evaluations of performance would translate into the identification of a number of individuals to reward, steward or remove.
Attempts at bureaucratic statements about teacher quality include the “highly qualified teacher” provision of the No Child Left Behind Act (the 2001 reauthorization of the Elementary and Secondary Education Act). As with most of these things, the definition allows most Massachusetts districts to tout that 98 to 100 percent of their teachers are highly qualified.
Thanks to Massachusetts’ unique teacher certification test, which prioritizes content knowledge (aligned with the state academic standards), the Bay State’s teacher corps is better qualified to teach the required academic work than teachers in states that use the so-called PRAXIS test. But 98, 99 or 100 percent of our teachers highly qualified? C’mon.
And yet the federally promoted teacher evaluations, driven by Race to the Top inducements, are showing the very same pattern of overstating teacher effectiveness. Look, Massachusetts has a slightly different take on teacher evaluations than do Michigan, Florida, Tennessee and Georgia, but most elements of the programs are similar, and similarly bureaucratic.
It’s a little like all those states using “A to F” school grading systems, where somehow the great majority of schools fall into the A and B categories. Astounding. If that’s the case, how is it that so many of our kids fail to do well? (Massachusetts is far better served by providing straight student performance data from the MCAS.)
So while we wait to see what the numbers will look like coming out of the Massachusetts evaluation system, let’s see how Michigan, Florida and other states have fared. EdWeek has a piece today noting that:
In Michigan, 98 percent of teachers were rated effective or better under new teacher-evaluation systems recently put in place. In Florida, 97 percent of teachers were deemed effective or better.
Principals in Tennessee judged 98 percent of teachers to be “at expectations” or better last school year, while evaluators in Georgia gave good reviews to 94 percent of teachers taking part in a pilot evaluation program.
Harumph. So predictable. File under: Another in that interminable list of process reforms driven by Race to the Top that supposedly will be game-changers and result in… more paper. Get the shredders ready.
Crossposted at Boston.com’s Rock the Schoolhouse blog. Follow me on Twitter at @jimstergios, or visit Pioneer’s website.