
Value-added and evaluating teacher preparation programs

The quality of teacher training programs and schools of education is a hot topic. (It’s going to get hotter when the folks at NCTQ release their national rankings.) I’ve thought that one incredibly important piece of objective evidence in evaluating teacher programs ought to be how well the grand-students do. That is, how well do students of teachers trained in one program do compared to students of teachers trained in another?

Missouri researchers Koedel, Parsons, Podgursky, and Ehlert raise big questions about whether there are measurable differences across the state’s teacher training programs. What Koedel et al. did was link student test scores to teachers and teachers to their own training programs. Then they asked whether the average score for students of teachers from the top-ranked program differed much from the average for the bottom-ranked program.
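
For readers who want to picture the linkage step, here’s a minimal sketch in Python, using hypothetical tables and column names of my own invention (the authors’ actual data are surely richer):

```python
# A minimal sketch of the linkage, with made-up tables and column names
# (not the authors' actual data): student test scores link to teachers,
# teachers link to their training programs, and program averages can
# then be compared.
import pandas as pd

scores = pd.DataFrame({
    "student_id": [1, 2, 3, 4],
    "teacher_id": ["T1", "T1", "T2", "T2"],
    "score": [0.3, -0.1, 0.8, 0.5],
})
teachers = pd.DataFrame({
    "teacher_id": ["T1", "T2"],
    "program": ["Program A", "Program B"],
})

linked = scores.merge(teachers, on="teacher_id")
print(linked.groupby("program")["score"].mean())  # average score by program
```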

At first glance, it looks like scores are quite different, implying a big difference in programs. But! Measured differences in programs are statistical estimates, meaning that even if all programs had identical results, some would randomly look high and some would randomly look low. Researchers know this and adjust using estimates of the extent of the randomness. What the Missouri team did differently was to use a much more sophisticated measure of that randomness, one that accounts for the fact that students who share a teacher don’t provide fully independent information. Doing so wiped out the results. (For the techies in the audience, they used clustered standard errors to estimate coefficient variances.)
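
To make the clustering point concrete, here’s a small simulation sketch, entirely my own illustration rather than the paper’s code: every program has the same true effect (zero), but scores are correlated within teachers, so naive standard errors overstate precision while teacher-clustered standard errors do not.

```python
# A simulation sketch, my own illustration (not the paper's code): all 20
# programs have the SAME true effect, but scores share a within-teacher
# shock. Naive OLS standard errors ignore that correlation and make the
# estimates look spuriously precise; clustering by teacher corrects this.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_programs, teachers_per_program, students_per_teacher = 20, 15, 25

rows = []
for p in range(n_programs):
    for t in range(teachers_per_program):
        teacher_shock = rng.normal(0, 0.25)  # shared by all of a teacher's students
        for _ in range(students_per_teacher):
            rows.append({
                "program": f"P{p:02d}",
                "teacher": f"P{p:02d}_T{t:02d}",
                "score": teacher_shock + rng.normal(0, 1),  # no true program effect
            })
df = pd.DataFrame(rows)

naive = smf.ols("score ~ C(program)", data=df).fit()         # assumes i.i.d. errors
clustered = smf.ols("score ~ C(program)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["teacher"]}   # teacher-level clustering
)

print("mean naive standard error:    ", round(naive.bse.iloc[1:].mean(), 4))
print("mean clustered standard error:", round(clustered.bse.iloc[1:].mean(), 4))
# Clustered standard errors come out roughly 50% larger here, so far fewer
# identical-by-construction programs look "significantly" different by chance.
```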

Nothing in this research says that teacher training programs can’t differ. And the authors point out that it would be easier to detect differences for programs that train much larger numbers of students, although that’s not a great help in a world in which most programs don’t.
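
As a hedged back-of-the-envelope illustration (my numbers, not the paper’s), precision improves only with the square root of program size:

```python
# A back-of-the-envelope sketch with assumed numbers (mine, not the
# paper's): the standard error of a program's estimated effect shrinks
# only with the square root of the number of teachers it graduates.
import math

sd_teacher = 0.25  # assumed sd of true teacher effects, in test-score sd units
for n_teachers in [15, 60, 240]:
    se = sd_teacher / math.sqrt(n_teachers)
    print(f"{n_teachers:>4} teachers -> program-mean standard error ~ {se:.3f}")
# Quadrupling program size only halves the standard error.
```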

All in all, Koedel and company have done a good job raining on the parade of those of us hoping to find good objective measures of the quality of teacher training programs. (Next time: somewhat different results for Washington State.)


One Response to Value-added and evaluating teacher preparation programs

  1. Jane Close Conoley says:

    As in any professional training endeavor, the outcomes we get in teacher training are affected by how and whom we select, the experiences we offer during preparation, the situations novice teachers find themselves in during their beginning years, and the ongoing support we offer novices and practicing educators. On the ground, employers notice differences among programs in novice teachers’ readiness to take the helm of a classroom. Translating those differences into snapshots of student achievement, however, is difficult. If teacher skills account for about 40% of the variance in student achievement, and teacher preparation accounts for ??% of the variance in teacher skills, then the actual effect of a program on a single measure is quite likely to be small.
