Patrick Walsh has taken data from Baccalaureate and Beyond to compare math and verbal skills for teachers versus nonteachers. Next time I’ll talk about his results on how the teacher/nonteacher salary gap differs for people with really strong math skills compared to people with really strong verbal skills. Today I just want to share Walsh’s picture documenting that the skill gap exists.
The top panel shows the distribution of verbal skills according to the SAT for teachers versus nonteachers. Teachers have lower SAT scores, but the difference isn’t enormous. The bottom panel illustrates that the math gap is larger. Note the big difference starting roughly around 600 on the math SAT.
Part of what’s happening is that teachers with really good math skills are more likely to be tempted away from teaching than are teachers with really good verbal skills. (Might the big math gap explain some of the difficulty we have preparing American students for science and technology fields?) Here’s a little table based on Walsh’s numbers.
Nonteacher-teacher SAT gap

| | 4 years post graduation | 10 years post graduation |
|---|---|---|
| Verbal | … | … |
| Math | … | … |
Between 4 years and 10 years out, the verbal SAT gap jumps 10 points. But the math gap jumps 19.
Next time, some evidence from Walsh that it’s the salary gap that causes the difference.
If I may depart from K-12 briefly, here’s a graph comparing the fraction of college degrees given in engineering in the U.S. versus other industrialized economies.
The U.S. produces a lower fraction of bachelor’s degrees in engineering than any other OECD country. Surely this cannot be a good thing.
Kalena Cortes and Joshua Goodman describe a Chicago Public Schools policy change in which low-performing (below median) high-school freshman math students were given a daily double dose of algebra. As a side effect of the policy, weak students were tracked together much more than before. We’d expect the double dose to improve math learning, but might peer effects from being grouped with other weak students counteract the benefits?
Here’s the authors’ picture of how tracking changed. The vertical axis gives the average eighth-grade performance of a student’s ninth-grade classmates. The horizontal axis is the student’s own eighth-grade performance. If tracking were perfect, we’d see a 45-degree line. If there were no tracking, the line would be horizontal at the 50th percentile. The red dots are from before the policy change: kids were already pretty tightly tracked. After the policy change (blue dots), low-performing kids were more likely to be assigned with other low-performing kids, and vice versa for high-performing kids. So tracking really did increase, although my reading of the graph is that the change wasn’t huge, because tracking was already pretty strong.
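The two benchmark lines can be checked with a toy simulation (my sketch, not the authors’ code; the class size of 30 and the percentile ranks are made-up assumptions):

```python
import random
from statistics import mean

# Students are identified by their eighth-grade percentile, 1..100,
# with 30 students at each percentile.
students = list(range(1, 101)) * 30
random.seed(0)

# Perfect tracking: form classes by sorting on eighth-grade rank, so a
# student's classmates have (almost exactly) the student's own rank --
# the class mean traces the 45-degree line.
tracked = sorted(students)
tracked_classes = [tracked[i:i + 30] for i in range(0, len(tracked), 30)]

# No tracking: form classes at random, so every class mean sits near
# the 50th percentile regardless of the student's own rank.
shuffled = students[:]
random.shuffle(shuffled)
random_classes = [shuffled[i:i + 30] for i in range(0, len(shuffled), 30)]

print(mean(tracked_classes[0]))   # bottom tracked class: mean of 1
print(mean(tracked_classes[-1]))  # top tracked class: mean of 100
print(mean(random_classes[0]))    # a random class: mean near 50
```

The real picture sits between these two extremes, and closer to the 45-degree line after the policy change.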
Did the double-dosed students improve in math, despite being grouped with low-performing peers? Indeed they did. The double-dosed students’ algebra grades went up by 0.22 grade points (about a quarter of the way between a D and a C). What’s more, the effect persisted into the next two grades as well. In fact, for the students who had started not too far below average, there was even an increase in high school graduation rates.
Well, actually, the answer according to Timothy Diette and Ruth Uwaifo Oyelere is: yes, but only for some groups of students, and only a little bit. I interpret their estimate of the effect on native students of having large numbers of limited English speakers as being so small that we might as well call it zero.
Classes with large numbers of limited English (LE) speakers probably don’t do as well as other classes, but that’s because LE students disproportionately locate in underperforming schools. (Their parents aren’t exactly wealthy.) The question is whether increased numbers of LE students cause a change in outcomes for non-LE students. What Diette and Uwaifo Oyelere do is look at the LE proportion within a grade within a school in a particular year. This largely eliminates the effect of LE students disproportionately ending up in weak schools.
The authors’ specific finding is that there is statistically significant evidence of more LE students causing lower test scores for male students and black students, but not for white students and female students. However, the effect sizes are trivial. A 1 percentage point increase in the fraction of LE students leads to test score drops on the order of 0.0007 standard deviations…which is too small for anyone to care about.
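To see just how small that is, scale the reported coefficient up (my back-of-the-envelope arithmetic, using only the 0.0007 figure quoted above):

```python
# Reported effect: 0.0007 SD of test score per 1-percentage-point
# increase in the limited-English share of the grade.
effect_per_pp = 0.0007

# Even an implausibly large 20-percentage-point jump in the LE share
# moves test scores by about one-hundredth of a standard deviation.
big_jump = 20 * effect_per_pp
print(big_jump)   # about 0.014 SD
```

For comparison, effects that researchers usually consider meaningful are an order of magnitude larger.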
So the answer to the question posed in the title is “no.”
A huge amount of noise is made everywhere in the country about the notion that high schools should prepare students for college. The plain fact is that states vary enormously in the fraction of high school students who proceed to a four-year college in the year after graduation.
I’ve put together some (imperfect) numbers from the Schools and Staffing survey. The number for each state is the average across high schools of each school’s fraction of graduates who go on to a four-year college. The low points are California and Washington, where fewer than a quarter of students go right on to a four-year school. The high performer is Colorado, where nearly two-thirds of students go on.
Part of what is going on here is that California and Washington have decided to put money into community colleges instead of four-year schools. (Both do have strong community college systems.) But, of course, going to community college isn’t the same as going directly to a four-year school. For some students it’s better…for many, it’s not.
Moreover, much of the difference across states has little to do with 2- versus 4-year schools. The truth is “college-readiness” varies enormously across the states.
Does gifted education work? That is, do students in gifted and talented programs do better than they would in regular classes? Of course gifted and talented students do better…presumably they’re gifted and talented. But do GT programs add anything extra?
According to Is Gifted Education a Bright Idea?, the answer is mostly no, at least not in the large program the authors were able to evaluate. Students are accepted into the district’s GT program by accumulating a certain number of points on several criteria. We can reasonably assume that students just below the cutoff are just about as talented as students just above it. The authors measured students’ ability in a way that doesn’t perfectly predict GT program acceptance, but you can see in this first figure that it comes pretty close.
Students to the right of the cutoff mostly get in; those to the left mostly don’t.
If the GT program makes a difference, we should see a jump in performance for those just to the right of the cutoff. Here’s the authors’ picture.
No visible break at all. Apparently, participation in this GT program just doesn’t much matter. Maybe it doesn’t matter at all.
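The logic of the comparison can be sketched with simulated data (my toy example, not the authors’ data; the ability slope, noise level, and window width are made-up): if the program adds nothing, outcomes just above the cutoff continue the smooth trend from just below it, with no jump.

```python
import random
from statistics import mean

random.seed(1)
CUTOFF = 0.0

# Simulate students near the admissions cutoff. Outcomes depend on
# underlying ability (the admissions points), but the GT program
# itself adds nothing.
students = []
for _ in range(10_000):
    points = random.uniform(-1, 1)                  # centered at cutoff
    outcome = 0.5 * points + random.gauss(0, 0.2)   # ability matters, GT does not
    students.append((points, outcome))

# Compare average outcomes in narrow windows on either side.
just_below = [y for x, y in students if -0.1 <= x < CUTOFF]
just_above = [y for x, y in students if CUTOFF <= x <= 0.1]
gap = mean(just_above) - mean(just_below)
print(round(gap, 3))  # small gap from the ability slope alone, not a program jump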
I admit to being surprised.
There’s lots of evidence that teachers make a huge difference to student outcomes. Beth Dhuey and Justin Smith provide evidence that principals make an even huger difference. The statistical summary of their findings is:
We estimate that a one standard deviation improvement in principal quality can boost student performance by 0.289 to 0.408 standard deviations in reading and math…
When researchers make the same kind of comparison for teachers, the effect isn’t nearly this large. In other words, the difference between a really good principal and one not so good is larger than the difference between a really good teacher and one not so good.
Dhuey and Smith use data from British Columbia, Canada for their study. This data has a special advantage. Principals in B.C. move around quite a bit, so the authors can distinguish between a good principal and a principal who happens to be assigned to a good school.
None of this should be thought of as a contest between looking for good principals versus looking for good teachers. The mechanism that makes a good principal successful isn’t identified in the research. It’s entirely possible that what makes for a good principal is the ability to hire/inspire good teachers.
Should experienced teachers earn a lot more than beginners? Just a little more?
Whatever the amount of money we have available to spend on teacher salaries, we ought to divvy it up in whatever way gets us the most, best teachers. So far as I know, there’s no careful research on whether money is most effectively spent to attract new teachers or to retain more experienced teachers.
Here’s a picture I made from Education at a Glance Data on how upper secondary school teacher salaries change with experience.
The first lesson is that the U.S. pattern is not too different from the pattern in most other industrial countries. The second lesson is that, relative to starting salaries, U.S. salaries rise slowly for the first 15 years of a career. (But they do catch up for the most senior teachers.)
Interestingly, the U.S. pattern is not too different from Finland’s, which has a world-renowned education system. On the other hand, salaries rise much, much more in Korea, where students also perform very well.
This has got to be a good dissertation topic for someone.
You’d be really surprised if the answer to the question posed in the title were “no,” wouldn’t you? Be assured, the answer is “yes.” Rick Hanushek and co-authors have measured the returns to skills, doing two things differently from previous work. First, they look at returns in 23 countries around the world. Second, they look at earnings of prime-age workers. (Most other studies have looked only at young workers.)
It won’t surprise you that the returns to skills are large. Interestingly, the returns are larger in the United States than in other countries, something I suspect is due to our unequal income distribution.
I’ve run a quick calculation using the authors’ numbers to give an easy-to-understand interpretation. Line people up in order of numeracy skill. For every buck the guy in the middle (rank 50) makes, the guy up at position 84 makes $1.28.
Of course, people who are more numerate are, on average, also much more literate. The authors’ numbers suggest that the extra 28 percent earned in the example above is due about two-thirds to higher numeracy and one-third to higher literacy. So while you’re practicing with your slide rule, be sure to brush up your Shakespeare too.
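Where does rank 84 come from? In a normal distribution, the 84th percentile sits about one standard deviation above the median, so the comparison is really “median person versus person one SD up.” A sketch of the arithmetic (my reconstruction; the 28% per-SD return and the two-thirds/one-third split are taken from the text above):

```python
from statistics import NormalDist

# The 84th percentile of a standard normal is about +1 SD.
z = NormalDist().inv_cdf(0.84)
print(round(z, 2))            # 0.99, i.e. roughly one SD

# Assume a linear return of 28% per SD of numeracy (from the text).
return_per_sd = 0.28
ratio = 1 + return_per_sd     # earnings at rank 84 vs. rank 50
print(round(ratio, 2))        # 1.28

# The text's split: about two-thirds numeracy, one-third literacy.
numeracy_share = return_per_sd * 2 / 3   # ~0.19 of the 0.28
literacy_share = return_per_sd * 1 / 3   # ~0.09 of the 0.28
```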
New York City, the nation’s largest school district, has made big changes in how teacher tenure is awarded. Susanna Loeb and coauthors find that the change led to a very small increase in tenure denial, but a huge increase in the proportion of cases in which the probationary period was extended and a concomitant drop in the number of cases in which tenure was awarded. Or, on the theory that one Loeb/Miller/Wycoff picture is worth 1,000 of my words…
The authors tell us that “extended teachers” are more likely to leave the NYC school system than those who get tenure, as well as being more likely to transfer to another school in New York. Nonetheless, three-quarters of extended teachers are back in the same school the following year.
So what happens next? Do they up their game and earn tenure, or is their departure merely delayed a year?