Roughly 12 percent of teachers work in private schools. But more than twice that fraction of new teachers, 28 percent, work in private schools. (Numbers are from the Digest of Education Statistics. They’re probably not perfect, but I suspect they’re close.)
Why do private schools have to hire relatively more teachers? It’s not that private schools are growing faster than public schools. It’s not that private school teachers move around more, because a move from one private school to another doesn’t count as being “new” in the government data.
My guess is that being a private school teacher isn’t as attractive as being a public school teacher (on average, obviously lots of exceptions)–so there’s just more turnover. Alternatively, it could be that private schools kick out more teachers and so need to do more hiring, though surely that can’t account for such a huge difference.
There’s no reason why the rate of new teacher hiring should be exactly the same for public and private schools. But a difference this large?
The Department of Education has a “First Look” study out, “Public School Teacher Attrition and Mobility in the First Five Years,” with somewhat surprising numbers on teacher attrition. (Thanks to NCTQ for the link.) The headline number is that only 17 percent of teachers who began in 2007-08 had left teaching by 2011-12. That’s surprisingly low. A comment below on why I’m not sure I believe it. But first, a quick picture relating the numbers to one of my favorite topics–teacher salaries.
So low-paid teachers are much more likely to move out of teaching. By the end of the study the difference was just short of two-to-one.
However, one caveat is called for in all this. About three-quarters of teachers surveyed responded to the questionnaire in the first year. By the fifth wave, the response rate was only 56 percent. Do you think those who didn’t respond were disproportionately those who left teaching? I do. That means the headline 17 percent dropout rate could be way, way off. I’ve presented the graph above on the theory that the survey errors for low-paid and better-paid teachers might not be too different. In other words, I suspect both lines are lower than the truth, but maybe the gap between the two isn’t too far off.
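To see how much the nonresponse could matter, here’s a back-of-the-envelope calculation. The assumed attrition rates for nonrespondents are mine, not the study’s (and the study may already weight for nonresponse):

```python
# Back-of-the-envelope check of the 17 percent headline number.
# The 56% response rate and 17% respondent attrition are from the post;
# the nonrespondent attrition rates below are assumptions for illustration.
response_rate = 0.56
attrition_among_respondents = 0.17

for attrition_among_nonrespondents in (0.17, 0.30, 0.50):
    overall = (response_rate * attrition_among_respondents
               + (1 - response_rate) * attrition_among_nonrespondents)
    print(f"if nonrespondent attrition is {attrition_among_nonrespondents:.0%}, "
          f"overall attrition is {overall:.1%}")
```

If half the nonrespondents actually left teaching, the true five-year attrition rate would be over 30 percent, nearly double the headline number.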
Ladd, Clotfelter, and Holbein offer new measures comparing the performance of charter schools to traditional public schools in North Carolina. Here’s their picture for math.
So charter schools used to be a little behind and now they perform more or less the same as other public schools.
In other words, taken as a group charter schools are neither a disaster nor a panacea.
The number of new teachers hired peaked just before the Great Recession. Hiring has since plummeted 25 percent.
Government projections don’t suggest a huge rebound, and my guess is that the government numbers may be a smidgen high because government projections of the number of new students are likely to be too high.
Should someone be rethinking how many students are being enrolled in ed schools?
We’re all pretty sure that schools in some districts are better than those in other districts. There are resource differences and there are differences in who will be your kid’s companions. But it’s hard to prove that differences are due to the schools. Maybe there are other differences in neighborhoods? Maybe it’s just that some districts attract better students than others.
Peter Bergman has made clever use of a school district lottery to measure the effects of going to a “good” district. A court-ordered desegregation program set aside spaces for K-3 students living in a relatively poor Northern California district to attend one of several surrounding wealthy districts. In the poor district, two-thirds of students had limited English proficiency, 20 percent of families were below the poverty line, students were 90%+ minority, and under half of parents had graduated high school. The wealthy districts–think Stanford University and environs–had almost all English-proficient students, hardly any families below the poverty line or without parental high school diplomas, and were pretty much entirely white and Asian.
Unsurprisingly, transfer slots were way oversubscribed. Bergman compared college enrollment of lottery winners to enrollment of students who applied but didn’t win a transfer ticket. The basic answer is that getting into the better school district increased the probability of attending college by 7 percentage points–a lot compared to the base attendance rate of 32 percent. (Note that lottery winners did not necessarily stay in the better district for their entire school career.)
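To put the size of that effect in perspective (both numbers are from the result above):

```python
# Relative size of the lottery effect: a 7-percentage-point gain
# on a base college attendance rate of 32 percent.
base, gain = 0.32, 0.07
print(f"relative increase in college attendance: {gain / base:.0%}")
```

A 22 percent relative increase in college attendance is a big deal for an intervention that only changed which district a kid attended.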
Bergman also offers some more nuanced results. The improvement in college attendance is concentrated in two-year schools, with no real change in four-year attendance. Most of the action is for boys rather than girls, and more for black students than for Hispanic students.
My take is that parents who work to get their kids into better districts know what they’re doing.
To what extent are students assigned to classrooms randomly (with respect to ability) and to what extent does prior achievement affect assignments and teacher matches? Hedvig Horvath points out that we should consider two separate aspects of this question.
- Tracking: Are students of like ability grouped into the same classrooms?
- Teacher matching: Do some teachers persistently receive assignments with mostly high-achieving versus low-achieving students?
Horvath measures such nonrandom assignment using data from North Carolina elementary schools. She finds:
- 60 percent of schools track.
- Half of schools that track match teachers to the same kind of student year after year.
Horvath also offers a reminder about implications for value-added models. The idea of value added is that we judge a teacher only on her students’ gains from their starting point at the beginning of the year. But if students with relatively low starting points also tend to progress more slowly, then value-added estimates will be biased.
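A toy simulation makes the bias concrete. All the numbers below are assumed for illustration; the point is just that when growth depends on the starting point and teachers are matched to tracked classes, a gain-score model hands the teacher credit (or blame) that belongs to the students:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (all parameters assumed): two tracked classes, and
# neither teacher has any true effect (true teacher effect = 0).
n_per_class = 25
start_low = rng.normal(-1.0, 0.5, n_per_class)    # low-achieving class
start_high = rng.normal(+1.0, 0.5, n_per_class)   # high-achieving class

def gains(start):
    # true growth depends on the starting point (slope 0.3 assumed),
    # plus noise; the teacher contributes nothing
    return 0.3 * start + rng.normal(0, 0.2, start.size)

# A value-added measure based on average gains credits the whole
# mean gain to the teacher
vam_low = gains(start_low).mean()
vam_high = gains(start_high).mean()
print(f"teacher with the low-start class:  VAM ~ {vam_low:+.2f}")
print(f"teacher with the high-start class: VAM ~ {vam_high:+.2f}")
```

Both teachers are identical by construction, yet the teacher matched to high-starting students looks substantially better year after year.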
Back in 2010, the Los Angeles Times published value-added scores for LA teachers. This may have been good journalism, but from the point of view of helping education the appropriate technical term is “dumbass.” Hey, value-added is an important tool. Judging a teacher on nothing but value-added is ridiculous. More importantly, how many employers conduct job evaluations in public? Bad LA Times.
Bad idea or good, it’s useful to look back and see what resulted from the Times decision. Peter Bergman and Matthew Hill have taken a look at the effect of the publication of VAM scores. It turns out that ratings were not published for teachers with fewer than 60 student VAM scores. Presumably, teachers just above the publication cutoff were not systematically different from those just below. So looking at results for teachers with around 60 student scores identifies the effect of publication.
Some of the findings:
- Highly rated teachers subsequently found themselves teaching better students and low-rated teachers got worse students.
- Notably, teachers rated highly compared to other teachers in a school were the ones who subsequently picked up better students.
- No noticeable effect on teachers leaving.
So it looks like the main effect of publication was to let more savvy parents arrange for their kids to get the better teachers.
One caveat: Teachers with around 60 student scores are likely to be relatively inexperienced. The results might be different for more experienced teachers.
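The comparison around the 60-score cutoff can be sketched in a few lines. The data below are simulated and the outcome is hypothetical; only the 60-score publication rule comes from the study:

```python
import numpy as np

# Stylized sketch of the comparison around the publication cutoff
# (simulated data; only the 60-score rule is from the post).
rng = np.random.default_rng(1)
n_scores = rng.integers(40, 81, size=500)   # student scores per teacher
published = n_scores >= 60                  # LA Times publication rule

# hypothetical outcome: change in the quality of incoming students,
# with an assumed 0.1 bump for teachers whose ratings were published
outcome = 0.1 * published + rng.normal(0, 0.3, 500)

# compare teachers in a narrow band on either side of the cutoff
window = (n_scores >= 55) & (n_scores < 65)
effect = (outcome[window & published].mean()
          - outcome[window & ~published].mean())
print(f"estimated publication effect near the cutoff: {effect:+.2f}")
```

The logic is that teachers with 59 scores and teachers with 60 scores should be essentially identical, so any jump in outcomes at the cutoff is attributable to publication itself.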
Suppose you have a school choice program that moves a significant number of students. It wouldn’t be surprising to find that the students who move are the students you most want to have in a school. One could imagine that programs improve things for students who move at the expense of those left behind. Altonji, Huang, and Taber look at this question.
The authors begin with a framework about how to think about the problem. For example, suppose public schools had very homogeneous student bodies. Then everyone left behind would look a lot like those who left and so there wouldn’t be any noticeable harm at the donor school. Or suppose the probability that a student chooses to leave is not in fact correlated with the characteristics that affect school performance–then students moving around would have no effect.
Altonji et al. have to create some new statistical methodology to identify all the pieces of the puzzle. What they find is that the kind of voucher programs we’ve seen in practice appear to have little “cream skimming” effect. Using high school graduation rates as the measure of outcomes, the authors estimate that a program that moved 10 percent of public school students to private schools would reduce the graduation rates of students left behind by about one tenth of one percent.
In other words, whatever your opinion of voucher programs for other reasons, harm to those left behind is not a serious concern.
If you’re a fan of value-added, a new paper by Marianne Bitler, Sean Corcoran, Thurston Domina, and Emily Penner, “Teacher Effects on Student Achievement and Height: A Cautionary Tale,” is going to give you the willies. Since hearing the results may shake you up, I do want to warn you that the research is preliminary. Results could change. I’m fairly sure the “cautionary” word in the title is likely to stick around though.
Here’s the heart of what the authors did: They estimated a standard value-added model on data for New York City 4th and 5th grade teachers. Their results look pretty standard–the difference between a good teacher and a bad teacher is really large. What they did next is the very nonstandard thing. This particular data set had also recorded each student’s height each year. So the authors estimated the usual kind of value-added model replacing math and reading gains with height gains. Now we know that teachers don’t have any effect on how fast 4th and 5th graders grow. The willies-generating result is that the estimated teacher value-added on height is nearly as large as it is on academic subjects!
If value-added models seem to show huge effects when we know there aren’t any (height), why should we believe any value-added estimated effect (like academics)?
Precisely because the current results are so shocking, the authors are continuing their investigation. In some pre-preliminary results (i.e. not even written up yet), the authors are finding that if you estimate value-added averaging over a number of years then the spurious effects for height go away while the academic effects mostly remain. If these newer findings prove robust, then one might conclude that value-added models legitimately indicate that teachers are very important for academic gains while still being nervous about the use of models based on a single year’s data.
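A toy simulation shows both the placebo problem and why averaging over years helps. Every parameter below is assumed, and by construction teachers have zero effect on height–yet single-year classroom averages still spread out, mimicking a “teacher effect”:

```python
import numpy as np

# Toy placebo (all parameters assumed): true teacher effect on height
# gains is zero for everyone, but classes are small, so one-year
# classroom means vary a lot from pure sampling noise.
rng = np.random.default_rng(2)
n_teachers, class_size = 200, 25

# height gains in inches: same distribution for every class
height_gain = rng.normal(2.0, 1.0, size=(n_teachers, class_size))
naive_vam = height_gain.mean(axis=1)          # one year per teacher
print(f"sd of one-year 'teacher effects' on height: {naive_vam.std():.2f}")

# averaging five years of classes shrinks the spurious spread
multi_year = rng.normal(2.0, 1.0, size=(n_teachers, 5, class_size))
avg_vam = multi_year.mean(axis=(1, 2))
print(f"sd after averaging five years of classes:   {avg_vam.std():.2f}")
```

With a class of 25, pure noise produces apparent “teacher effects” with a standard deviation around 0.2 inches; averaging five classes cuts that spread by more than half, which is consistent in spirit with the authors’ pre-preliminary finding that multi-year estimates make the height effect fade.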
Over at Flypaper, Michael Petrilli makes a great point. Birth rates fell precipitously during the Great Recession, as birth rates usually do during hard economic times. In fact, from their 2007 peak births fell by 9 percent. Mike points out that if you know how many kids are born this year, you have a pretty darned good idea how many kids will enter kindergarten five years down the road.
Petrilli is warning everyone that we’re going to need many fewer kindergarten teachers, then fewer 1st grade teachers, and so on. Eventually, birth rates may return to their pre-recession levels, but the number of kindergarten teachers needed isn’t a long-run sort of thing; it depends on the number of five-year-olds right now. Since the adjustment is likely to be large, Petrilli would like policymakers to be thinking ahead.
For fun, I decided to compare the Department of Education’s forecast of public school kindergarten students (from the Digest of Education Statistics) to forecasts from my own super sophisticated, dynamic stochastic statistical model. (I regressed the number of kindergarten students on the number of births five years earlier.) Here’s the picture–numbers from 2012 and after are forecasts.
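The whole “model” fits in a few lines. The numbers below are made up for illustration (the real series are in the Digest of Education Statistics and national birth data), but the mechanics are exactly as described: fit a line of kindergarten enrollment on births five years back, then apply it to recent birth cohorts:

```python
import numpy as np

# Sketch of the forecasting rule described above. All figures are
# illustrative placeholders, in millions, not the actual series.
births_lag5 = np.array([4.02, 4.09, 4.14, 4.27, 4.32, 4.25])   # births in t-5
kindergarten = np.array([3.58, 3.63, 3.69, 3.80, 3.85, 3.79])  # K enrollment in t

# simple OLS: kindergarten_t = slope * births_{t-5} + intercept
slope, intercept = np.polyfit(births_lag5, kindergarten, 1)

# apply the fitted line to (assumed) post-recession birth cohorts
recent_births = np.array([4.00, 3.95, 3.93])
forecast = slope * recent_births + intercept
print("forecast K enrollment (millions):", forecast.round(2))
```

Since post-recession birth cohorts are smaller, the fitted line mechanically forecasts fewer kindergartners–no sophisticated demography required.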
My forecasts (red line) do a mighty fine job of matching the actual data. Looks to me like the government forecasts are overpredicting the future number of kindergarten kids by several hundred thousand.
Anyone care to put down a friendly wager on who’s right?