Predicting future teacher performance: value-added versus principal evaluations

Douglas Harris and Tim Sass use Florida data to ask an awfully important practical question: If you have both teacher value-added numbers and principal evaluations, which does better at predicting future teacher contributions? Do the two methods each have something separate to say? Does either have something to say?

Harris and Sass use data for a lot of students and teachers, although only 30 principals, to forecast future student test-score (and value-added) results. Two caveats before we hit the results: (1) different approaches to principal-based evaluations might give different answers than the ones in this study, and (2) perhaps principals are better at predicting long-run student outcomes than future test scores. (Although in part of the experiment the principals were specifically asked to rate teachers on their ability to raise test scores.)
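For readers new to the jargon, "value-added" in this literature is the teacher effect in a regression along the following lines (a generic sketch of the standard setup, not necessarily the exact Harris/Sass specification):

$$A_{it} = \lambda A_{i,t-1} + X_{it}\beta + \tau_{j(i,t)} + \varepsilon_{it}$$

Here $A_{it}$ is student $i$'s test score in year $t$, $X_{it}$ collects student characteristics, and $\tau_{j(i,t)}$ is the effect of the teacher the student is assigned to, i.e. the "value-added." Conditioning on the lagged score $A_{i,t-1}$ is what's supposed to separate a teacher's contribution from the incoming ability of the students she happens to be handed.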

Now the results:

  • Teacher value-added ratings work pretty well on average for predicting future test-based outcomes. In other words, a teacher whose VAM score is 1 point above average with current students typically comes in something like 1 point above average with future students. The relation isn’t quite one-for-one, but it’s typically not far off. However, you need more than one year’s value-added observation on a teacher for this to work well.
  • Predictions for individual teachers, from current value-added to future value-added, are not so good, in the sense that the correlation is pretty low. One way to think about this is that current VAM scores are good on-average predictors, but the noise level for individual predictions is high. (The simulation sketch after this list shows how both things can be true at once.)
  • Principal evaluations just plain don’t predict anything about the future.
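To see how a roughly one-for-one average relation can coexist with a low teacher-by-teacher correlation, here is a minimal simulation sketch in Python. This is not the Harris/Sass estimator: the variances are made-up illustrative numbers, and the shrinkage step simply mimics what standard empirical-Bayes VAM estimates do.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000  # teachers

    # Persistent teacher effect plus independent year-specific noise.
    # The variances are illustrative assumptions, not estimates from the study.
    sd_true, sd_noise = 1.0, 2.0
    true_effect = rng.normal(0, sd_true, n)
    vam_y1 = true_effect + rng.normal(0, sd_noise, n)  # current-year raw VAM
    vam_y2 = true_effect + rng.normal(0, sd_noise, n)  # future-year raw VAM

    # Shrink the year-1 estimate toward the mean by its reliability,
    # as empirical-Bayes VAM estimators do.
    reliability = sd_true**2 / (sd_true**2 + sd_noise**2)
    vam_y1_shrunk = reliability * vam_y1

    # Slope of future VAM on the shrunken current rating is about 1
    # ("1 point better now predicts about 1 point better later") ...
    slope = np.cov(vam_y1_shrunk, vam_y2)[0, 1] / np.var(vam_y1_shrunk, ddof=1)
    # ... yet the teacher-level correlation stays low.
    corr = np.corrcoef(vam_y1_shrunk, vam_y2)[0, 1]
    print(f"slope ~ {slope:.2f}, correlation ~ {corr:.2f}")  # roughly 1.00 and 0.20

Averaging several years of raw VAM shrinks the noise and raises the reliability, which pulls the teacher-level correlation up. That is exactly the authors' point that one year of value-added is not enough.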

The upshot is that VAM ratings are useful but imperfect. Surprisingly, principal evaluations aren’t useful.

The authors carefully compare their results with previous studies. A critical difference is that the previous studies have looked at future student test scores, rather than future student value-added, as the outcome. Those studies show better predictions both from current value-added and from principal evaluations. (As does the Harris/Sass study when it uses that outcome.) My interpretation is that principals may be good at seeing which teachers have students with good test results, but not so good at making the mental adjustment for whether a teacher got handed students who would score well no matter how they were taught. A teacher whose students arrive at the 80th percentile and leave at the 80th percentile looks great on score levels but is only delivering average value-added.
