While this blog is mostly about K-12 education, some of the rhetoric you hear nowadays tends toward the idea that the goal of K-12 is “college readiness.” Regardless of whether you agree with “college readiness” as an overriding goal, I’d like to point out that being ready to start college is an entirely different kettle of fish from being ready to complete college.
Here’s a graph showing the fraction of students who complete a 4-year degree within 5 years of having started a 4-year college, the horizontal axis giving the year in which the students entered college.
You’ll see that the numbers are up. Even so, only about half of entering men, and a moderately higher fraction of women, actually complete their degrees. More than a third of all entrants don’t.
What do you think of the following: High schools should not only report how many of their students they send off to college; they should also report how many complete their degree.
Here’s a factoid that I hadn’t known. Public school students are quite a bit more likely to attend school in a large school district than was true in the past. I’ve made a little chart showing the percentage of students in districts with over 10,000 pupils.
In 1979, 46 percent of students were in large districts. Today (well, 2011, the latest data available), the figure has risen to 55 percent. At the same time, the number of very small school districts–those with fewer than 300 kids–has fallen by a third.
Much of this may just be a consequence of population growth; to some extent we’d expect all districts to have more students now than in the past. But it turns out there’s something more going on: the number of school districts has fallen 15 percent over the same period.
I’m pretty sure that closing very small districts is mostly a good thing. They’re probably too small to be efficient. But is it a good idea to have more students in mega-districts? Or is there a sweet spot somewhere in-between tiny and mega?
I had the pleasure this week of hearing Jonah Rockoff talk about a paper he’s written with several colleagues, “Teacher Applicant Hiring and Teacher Performance: Evidence from DC Public Schools.” My impression is that, in general, school districts do a lousy job of selecting the best future teachers from the available applicant pool. Or maybe it would be more accurate to say that some districts do well at this and others do a terrible job. From a researcher’s vantage, it’s been very hard to identify factors that do a good job of predicting who’s going to be a good teacher. That makes it hard to give advice to school districts.
Rockoff and colleagues were given access to hiring data on a large number of applicants to Washington, DC schools. They were also given data on how well hired applicants performed according to DC’s IMPACT rating system. What they found was that DC had collected data on several factors that did a pretty good job of predicting who would turn out to be a good teacher, but scores on these factors played very little role in deciding who would actually be hired.
Here are some things that do predict teacher success according to the DC data:
- Undergraduate academic performance, including
  - SAT/ACT scores
  - college selectivity
- Application data, including
  - score on a subject-specific written essay
  - interview score
  - teaching audition
But perhaps the most interesting predictor is the score on what’s called the “Haberman test.” A number of years ago, Martin Haberman interviewed good teachers and then created a multiple-choice test intended to pick out applicants whose attitudes matched those of the good teachers. Rockoff and coauthors find that the Haberman test really does do a good job of predicting teacher success. And oh yeah, the test is available on the web and costs all of five dollars to administer.
So it seems that schools could do a better job of picking good teachers with relatively little difficulty. Along those lines, I’ll close with the opening quote from the paper.
> The best means of improving a school system is to improve its teachers. One of the most effective means of improving the teacher corps is by wise selection.
>
> –Ervin Eugene Lewis, Superintendent of Schools, Flint, Michigan, 1925
Roughly 12 percent of teachers work in private schools. But more than twice that fraction of new teachers, 28 percent, work in private schools. (Numbers are from the Digest of Education Statistics. They’re probably not perfect, but I suspect they’re close.)
Why do private schools have to hire relatively more teachers? It’s not that private schools are growing faster than public schools. It’s not that private school teachers move around more, because a move from one private school to another doesn’t count as being “new” in the government data.
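One back-of-the-envelope way to gauge the implied gap: in a rough steady state, a sector’s share of new hires is proportional to its share of teachers times its turnover rate. The 12 and 28 percent figures are from above; the steady-state assumption is mine, so treat this as an illustration rather than an estimate.

```python
# Back-of-the-envelope: if a sector's share of new hires is proportional
# to (its share of teachers) x (its turnover rate), the 12%-vs-28% gap
# implies a turnover ratio we can solve for. Steady state assumed.

private_share_teachers = 0.12   # share of all teachers in private schools
private_share_new_hires = 0.28  # share of all new hires going to private schools

# Hiring intensity: new-hire share per unit of employment share
private_intensity = private_share_new_hires / private_share_teachers            # 0.28 / 0.12
public_intensity = (1 - private_share_new_hires) / (1 - private_share_teachers)  # 0.72 / 0.88

implied_turnover_ratio = private_intensity / public_intensity
print(f"implied private/public turnover ratio: {implied_turnover_ratio:.2f}")
```

Under that (strong) assumption, private school teacher turnover would have to run nearly three times the public school rate to generate the hiring numbers.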
My guess is that being a private school teacher isn’t as attractive as being a public school teacher (on average–obviously there are lots of exceptions), so there’s just more turnover. Alternatively, it could be that private schools kick out more teachers and so need to do more hiring, but surely that can’t account for such a huge difference.
There’s no reason why the rate of new teacher hiring should be exactly the same for public and private schools. But a difference this large?
The Department of Education has a “First Look” study out, “Public School Teacher Attrition and Mobility in the First Five Years,” with somewhat surprising numbers on teacher attrition. (Thanks to NCTQ for the link.) The headline number is that only 17 percent of teachers who began in 2007-08 had left teaching by 2011-12. That’s surprisingly low. A comment below on why I’m not sure I believe it. But first, a quick picture relating the numbers to one of my favorite topics–teacher salaries.
So low-paid teachers are much more likely to move out of teaching. By the end of the study the difference was just short of two-to-one.
However, one caveat is called for in all this. About three-quarters of teachers surveyed responded to the questionnaire in the first year. By the fifth wave, the response rate was only 56 percent. Do you think those who didn’t respond were disproportionately those who left teaching? I do. That means the headline 17 percent attrition rate could be way, way off. I’ve presented the graph above on the theory that the survey errors for low-paid and better-paid teachers might not be too different. In other words, I suspect both lines in the graph are lower than the truth, but maybe the gap between the two isn’t too far off.
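To see how much nonresponse could move the headline number, here’s a hypothetical calculation. The 17 percent attrition figure and the 56 percent response rate are from the study; the attrition rates I plug in for nonrespondents are pure guesses.

```python
# Hypothetical: how far off could the 17% attrition figure be if
# nonrespondents left teaching at a higher rate? The 56% response rate
# and 17% measured attrition are from the study; the assumed rates for
# nonrespondents are made up for illustration.

response_rate = 0.56
attrition_among_respondents = 0.17

for attrition_among_nonrespondents in (0.17, 0.30, 0.50):
    true_attrition = (response_rate * attrition_among_respondents
                      + (1 - response_rate) * attrition_among_nonrespondents)
    print(f"if nonrespondent attrition = {attrition_among_nonrespondents:.0%}, "
          f"overall attrition = {true_attrition:.1%}")
```

If half of the nonrespondents had actually left teaching, true five-year attrition would be over 30 percent, not 17.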
Ladd, Clotfelter, and Holbein offer new measures comparing the performance of charter schools to traditional public schools in North Carolina. Here’s their picture for math.
So charter schools used to be a little behind and now they perform more or less the same as other public schools.
In other words, taken as a group charter schools are neither a disaster nor a panacea.
The number of new teachers hired peaked just before the Great Recession. Hiring has since plummeted 25 percent.
Government projections don’t suggest a huge rebound, and my guess is that the government numbers may be a smidgen high because government projections of the number of new students are likely to be too high.
Should someone be rethinking how many students are being enrolled in ed schools?
We’re all pretty sure that schools in some districts are better than those in other districts. There are resource differences and there are differences in who will be your kid’s companions. But it’s hard to prove that differences are due to the schools. Maybe there are other differences in neighborhoods? Maybe it’s just that some districts attract better students than others.
Peter Bergman has made clever use of a school district lottery to measure the effects of going to a “good” district. A court-ordered desegregation program set aside spaces for K-3 students living in a relatively poor Northern California district to attend one of several surrounding wealthy districts. In the poor district, two-thirds of students had limited English proficiency, 20 percent of families were below the poverty line, students were more than 90 percent minority, and under half of parents had graduated from high school. The wealthy districts–think Stanford University and environs–had almost all English-proficient students, hardly any families below the poverty line or without parental high school diplomas, and were pretty much entirely white and Asian.
Unsurprisingly, transfer slots were way oversubscribed. Bergman compared college enrollment of lottery winners to enrollment of students who applied but didn’t win a transfer ticket. The basic answer is that getting into the better school district increased the probability of attending college by 7 percentage points–a lot compared to the base attendance rate of 32 percent. (Note that lottery winners did not necessarily stay in the better district for their entire school career.)
Bergman also offers some more nuanced results. The improvement in college attendance is concentrated in two-year schools, with no real change in four-year attendance. Most of the action is for boys rather than girls, and more for black students than for Hispanic students.
My take is that parents who work to get their kids into better districts know what they’re doing.
To what extent are students assigned to classrooms randomly (with respect to ability) and to what extent does prior achievement affect assignments and teacher matches? Hedvig Horvath points out that we should consider two separate aspects of this question.
- Tracking: Are students of like ability grouped into the same classrooms?
- Teacher matching: Do some teachers persistently receive assignments with mostly high-achieving versus low-achieving students?
Horvath measures such nonrandom assignment using data from North Carolina elementary schools. She finds:
- 60 percent of schools track.
- Half of schools that track match teachers to the same kind of student year after year.
Horvath also offers a reminder about implications for value-added models. The idea behind value added is that we judge a teacher only on her students’ progress from their starting point at the beginning of the year. But if students with relatively low starting points are also likely to progress more slowly, then value-added estimates will be biased against teachers who are consistently matched with low achievers.
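A tiny simulation makes the worry concrete. All the numbers here are made up: every simulated teacher has exactly the same true effect, but student gains depend partly on where students start, so a naive gain-score comparison penalizes the teacher matched with the low-achieving class.

```python
import random

# Made-up simulation: both teachers have the SAME true effect, but if
# low-starting students also gain less for reasons unrelated to the
# teacher (slope 0.2 below: invented), a naive average-gain "value
# added" differs across teachers purely because of class composition.
random.seed(0)

def simulate_class_gain(start_mean, n_students=500):
    gains = []
    for _ in range(n_students):
        start = random.gauss(start_mean, 1.0)          # student's starting score
        gain = 1.0 + 0.2 * start + random.gauss(0, 0.5)  # same teacher effect (1.0) for all
        gains.append(gain)
    return sum(gains) / len(gains)

low_class_gain = simulate_class_gain(start_mean=-1.0)   # teacher matched with low achievers
high_class_gain = simulate_class_gain(start_mean=+1.0)  # teacher matched with high achievers
print(f"avg gain, low-start class:  {low_class_gain:.2f}")
print(f"avg gain, high-start class: {high_class_gain:.2f}")
# Equally good teachers, different measured "value added."
```

Persistent matching, of the kind Horvath documents, means this gap doesn’t wash out across years.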
Back in 2010, the Los Angeles Times published value-added scores for LA teachers. This may have been good journalism, but from the point of view of helping education the appropriate technical term is “dumbass.” Hey, value-added is an important tool. Judging a teacher on nothing but value-added is ridiculous. More importantly, how many employers conduct job evaluations in public? Bad LA Times.
Bad idea or good, it’s useful to look back and see what resulted from the Times decision. Peter Bergman and Matthew Hill have taken a look at the effect of the publication of VAM scores. It turns out that ratings were not published for teachers with fewer than 60 student VAM scores. Presumably, teachers just above the publication cutoff were not systematically different from those just below. So looking at results for teachers with around 60 student scores identifies the effect of publication.
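The comparison can be sketched in regression-discontinuity style. The 60-score cutoff is from the paper; the data below are entirely synthetic, with a publication “effect” baked in just to show the mechanics of comparing teachers in a narrow window on either side of the cutoff.

```python
import random

# Synthetic sketch of the identification idea: teachers just below the
# 60-score cutoff (ratings unpublished) serve as a comparison group for
# teachers just above it (ratings published). All data are made up,
# including the baked-in 0.3 effect of publication.
random.seed(1)

CUTOFF = 60  # minimum number of student scores for a rating to be published

teachers = []
for _ in range(2000):
    n_scores = random.randint(40, 80)
    published = n_scores >= CUTOFF
    # Invented outcome: published teachers' next-year class quality
    # shifts up a bit (mimicking parents sorting toward rated teachers).
    next_year_class_quality = random.gauss(0.3 if published else 0.0, 1.0)
    teachers.append((n_scores, next_year_class_quality))

# Compare teachers in a narrow window on either side of the cutoff.
window = 10
just_below = [q for n, q in teachers if CUTOFF - window <= n < CUTOFF]
just_above = [q for n, q in teachers if CUTOFF <= n < CUTOFF + window]

effect = sum(just_above) / len(just_above) - sum(just_below) / len(just_below)
print(f"estimated publication effect near the cutoff: {effect:.2f}")
```

The narrow window is the point: far from the cutoff, teachers with many scores (more experience, bigger classes) may differ for lots of reasons; right at it, publication is close to an accident.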
Some of the findings:
- Highly rated teachers subsequently found themselves teaching better students and low-rated teachers got worse students.
- Notably, teachers rated highly compared to other teachers in a school were the ones who subsequently picked up better students.
- No noticeable effect on teachers leaving.
So it looks like the main effect of publication was to let more savvy parents arrange for their kids to get the better teachers.
One caveat: Teachers with around 60 student scores are likely to be relatively inexperienced. The results might be different for more experienced teachers.