There’s a new report out from the National Council on Teacher Quality that does a relatively deep dive into the “practical realities” of incorporating measures of student growth into teacher evaluations. Many onlookers, says the NCTQ, consider data-driven teacher evaluations to be a “fledgling” enterprise but, in fact, the practice is well-established — it has a “strong foothold” across the country — and only five states (California, Iowa, Montana, Nebraska and Vermont) have no formal policies in place.
So, how’s that going? From the report:
There is a troubling pattern emerging across states with a track record of implementing new performance-based teacher evaluation systems. The vast majority of teachers – almost all – are identified as effective or highly effective.
For example, New Jersey, despite raucous assaults by anti-accountability folk on teacher evaluations that include a small (10%) student growth component, has managed a successful launch. As Education Commissioner David Hespe announced proudly at today’s NJEA annual convention in Atlantic City, 97.2% of teachers (see graph below) were rated either highly effective or effective.
But is this something that reflects well on N.J. teachers or poorly on our evaluation system?
Probably a little of both. On the one hand, it’s great if only 2 or 3 out of every 100 N.J. teachers are partially effective or ineffective. On the other hand, if that were indeed true, then the State is buying minimal evaluative improvement despite spending tons of political capital. And here’s the downside, according to NCTQ: “common sense, student achievement gaps and the research on teacher effectiveness [as well as, NCTQ could have added, ineffectiveness rates in other professions] suggest that not all of our teachers should be rated effective.”
The point here isn’t to try to label more teachers as ineffective. The point is to provide professional development support to teachers who need it but remain unidentified by flawed evaluation systems. And let’s just say that N.J.’s system needs a little fine-tuning.
The clearest indication that the results we are getting don’t reflect teacher performance isn’t the very small number of teachers receiving the lowest rating, but the fact that so few teachers are being identified as in need of improvement. Although this category goes by different names in different states, the majority of states have a rating that is higher than ineffective but falls short of effective. States ought to consider why more teachers aren’t identified as in need of further development. The dearth of teachers in need of improvement simply doesn’t ring true, even based solely on what we know from research about first-year teachers – that they are very much a work in progress and often don’t maximize their effectiveness (in terms of growth in their students’ achievement) until they have three to five years of experience in the classroom.
In other words, the point isn’t to beat the system. The point is to use the system so that teachers are amply supported in their professional growth and, consequently, more students have access to effective teachers.