Welcome to the Profit of Education website. Continuing the conversation begun in the book Profit of Education, we discuss the latest economic evidence on education reform.

Careers of ed students: Yesterday and today

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution.

Roughly 100,000 students a year earn a bachelor’s degree in education, a number that has been steady for decades. How do ed students spend their careers? Do they teach or do they follow an alternative career? At what point do they quit working entirely? And are times changing for the newest graduates?

No one tracks career paths of individual college graduates, of course. But we can get a good ballpark answer to these sorts of questions by looking at what sort of work individuals do today and then retrospectively asking when they earned a college degree. I’ve taken the 2014 American Community Survey,* which reports both work status and undergraduate major, and computed an “imputed degree year” using the approximation that the typical student receives her BA at age 22. I then look at folks who reported that their BA was in education. So we begin by asking, how many people who earned an education BA in the past were still working in 2014?

People with education BAs by labor force status


* The sample is limited to people between 22 and 70 years old with four or more years of college with an undergraduate degree in “Education Administration and Teaching”. Data source: “Steven Ruggles, Katie Genadek, Ronald Goeken, Josiah Grover, and Matthew Sobek. Integrated Public Use Microdata Series: Version 6.0 [Machine-readable database]. Minneapolis: University of Minnesota, 2015.”
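For readers who want to reproduce this kind of tabulation, here is a minimal sketch of the imputed-degree-year calculation, assuming an IPUMS-style extract of the 2014 ACS. The file name and the DEGFIELD/EMPSTAT codes are placeholders to be checked against the IPUMS codebook, and a fuller version would apply the ACS person weights (PERWT).

```python
# Sketch: labor force status of education BAs by imputed degree year,
# assuming a 2014 ACS extract from IPUMS. Variable names follow IPUMS
# conventions; the specific codes are placeholders -- confirm them
# against the codebook for your own extract.
import pandas as pd

acs = pd.read_csv("acs_2014_extract.csv")    # hypothetical file name

ED_DEGFIELD = 23    # "Education Administration and Teaching" (check codebook)

ed = acs[(acs["DEGFIELD"] == ED_DEGFIELD) &
         (acs["AGE"].between(22, 70))].copy()

# Approximate the year the BA was earned by assuming graduation at age 22.
ed["degree_year"] = 2014 - (ed["AGE"] - 22)

# EMPSTAT: 1 = employed, 2 = unemployed, 3 = not in labor force (check codebook).
status_labels = {1: "employed", 2: "unemployed", 3: "not in labor force"}
ed["status"] = ed["EMPSTAT"].map(status_labels)

# Share of each imputed-degree-year cohort in each labor force status
# (an unweighted approximation; weight by PERWT for published numbers).
shares = (ed.groupby("degree_year")["status"]
            .value_counts(normalize=True)
            .unstack(fill_value=0))
print(shares.tail())
```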

Unsurprisingly many of those who graduated college 40 years ago are no longer in the labor force, presumably having retired. Looking back more than 40 years, retirement rates rise rapidly. But for more recent graduates, only 10 to 15 percent are out of the labor force (and remember that “out of the labor force” includes taking a break from teaching to raise children). In other words, most folks with education BAs keep working until around retirement age.

What about unemployment, that is, how many people are trying unsuccessfully to find work? The answer is “very few.” Unemployment rates for people with education BAs are generally between one and three percent.


For exceptions to these general findings, take a look at recent graduates, those at the far right end of the graph. You can see a bit of an uptick there in “out of the labor force,” although this is no doubt due in part to recent graduates who are now in graduate school. The unemployment rate is also noticeably higher for recent graduates. The last couple of years of graduates in particular have high unemployment rates, although even the worst rate is noticeably lower than that for other young people. (For 2014 graduates, the unemployment rate with an ed BA was over 7 percent. In comparison, 2014 graduates with other BAs had an unemployment rate of 12 percent at the time of the survey. It takes a while to find a job after getting a degree.) Further, one might speculate that some part of these higher unemployment rates may be a hangover from low school hiring rates following the Great Recession. (Remember that the most recent data in the figure is from 2014.)

Basically, folks with undergraduate education degrees are working. Are they working in education? To help answer this, I’ve used the American Community Survey responses to plot out the fraction of education BAs who keep working in education. (The data defines working in education as “Education Administration and Teaching.” That means that some graduates working at the college level will be picked up as working in education, and some school librarians, phys ed teachers, and others may be missed.)
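Continuing the sketch above, the occupation series is just a conditional share among the employed. Mapping “working in education” onto ACS occupation (or industry) codes is a judgment call, so the code set below is purely a placeholder.

```python
# Sketch, continuing from the extract above: among employed education BAs,
# what fraction work in education? The occupation codes are placeholders --
# the mapping from "Education Administration and Teaching" to ACS OCC (or
# IND) codes has to be decided by the analyst.
TEACHING_OCC_CODES = {2300, 2310, 2320, 2330, 2340}    # hypothetical code set

employed = ed[ed["EMPSTAT"] == 1].copy()
employed["in_education"] = employed["OCC"].isin(TEACHING_OCC_CODES)

frac_in_ed = employed.groupby("degree_year")["in_education"].mean()
print(frac_in_ed.tail())
```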

Fraction of people with education BAs working in education

The first answer is that people who graduated decades ago are mostly not working in education. Presumably many are in second careers. Perhaps some are working to supplement teacher pensions.

The second answer is that for most cohorts, except the most recent, 55 to 65 percent of those employed are working in education. Is 55 to 65 percent higher or lower than would seem desirable? The answer isn’t obvious. On the one hand, most college graduates don’t work exactly in the area of their major. Economics majors, for example, get good jobs, but few of those jobs are “economist.” On the other hand, an education degree is very much intended to be specific professional training. Some part of the effort that goes into training teachers is arguably wasted when graduates take non-education jobs.

The picture for the most recent graduates is more puzzling. Remembering that we are looking only at those who are employed, it’s surprising that the majority in the final year of the data are employed somewhere other than in education. Maybe some of this has to do with how the data was collected. (A respondent might have been interviewed after graduating while waiting for a teaching job that started in the fall, and may be temporarily employed in another industry.) However, even in the penultimate year of the data, only 54 percent of those employed were working in education. It appears that there may be something of an increasing disconnect between training and employment in education.

It’s hard to know whether something odd is going on for those with recent education degrees, but I have a little bit of evidence that if something has changed, whatever it is isn’t unique to ed BAs. I’ve redrawn below the first figure with two changes. The new version adds in labor force status for BAs from non-education fields. And I’ve restricted the data to the last ten years to magnify the most recent evidence.

People with education or other BAs by labor force status last ten years

The figure shows that both out-of-the-labor-force and unemployment rates for people with education BAs are noticeably higher for recent cohorts. However, the same increase shows up even more strongly for other BAs. So to the extent that recent BAs are not being quickly matched with jobs, the underlying cause does not seem to be special to the field of education.

Is employment for recent ed BAs different from the past? The unemployment rate for recent ed BAs is higher than the rate for those out five years or more. On the other hand, the same thing is true for recent grads in other fields. My guess—but it is just a guess—is that we are still seeing something of a roiled labor market in the aftermath of the Great Recession and that in the longer run careers of recent grads will look a lot like those of earlier cohorts.

Everyone knows that there is a great deal of turnover among teachers. Still, it appears that most education majors do end up in education, and what’s more, the majority of education majors do spend most of their career working in schools. The data here gives something of a different picture from the commonly held view of enormous attrition rates in teaching. (For related evidence, see this Washington Post story.) My suspicion is that the difference comes from how the data is collected. The charts here track education BAs, while many other studies track beginning teachers. Teachers move between schools and even between states. What’s more, teachers do drop out of teaching for childcare and other reasons and then return later. It’s likely that teachers who make such changes sometimes get lost in teacher-tracking data.

To help think about teacher attrition, look at what the charts say for education BAs who received their degrees in 2004, so that they would be 10 years into a career when we observe them. About 13 percent are out of the labor force or unemployed. Of those who are working, 65 percent are working in education. So a majority—about 56 percent—are working in education. Whether somewhat over half of education BAs working in education a decade after graduation is seen as a high attrition rate or a low one probably depends on the eye of the beholder.
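The arithmetic behind that rough 56 percent is just the product of the two shares quoted above; a quick check, treating those percentages as exact:

```python
# Back-of-the-envelope check for the 2004 cohort, using the shares in the text.
share_working = 1 - 0.13         # neither unemployed nor out of the labor force
share_in_education = 0.65        # of those working, the fraction in education
print(share_working * share_in_education)    # about 0.565, the "about 56 percent" above
```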


Teacher perceptions and race

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution.

When it comes to student behavior, what’s polite or rude—what counts as acting out versus what’s seen as healthy youthful exuberance—depends not only on actual behavior but on how teachers read behavior. Black and white American cultures are still sufficiently different that how teachers read behavior depends in part on the teacher’s race. New research shows that black and white teachers give very different evaluations of the behavior of black students. When a black student has a black teacher, that teacher is much, much less likely to see behavioral problems than when the same black student has a white teacher.

New research by Adam Wright, “Teachers’ Perceptions of Students’ Disruptive Behavior: The Effect of Racial Congruence and Consequences for School Suspension,” documents that black teachers have much less negative views of black student behavior than do white teachers. (Conflict of interest notice…hmmm no, braggin’ notice: Wright is one of my PhD students.) Wright looks first at teacher evaluations of behavior, and then at data on school suspensions. Let’s begin with the teacher evaluations.

Wright uses data from the Early Childhood Longitudinal Study to follow the experience of more than 20,000 students in kindergarten, first, third, and fifth grade. During the elementary school years, teachers were asked to assess a number of noncognitive skills. The measure of interest here is “externalizing problem behaviors,” which asks how often the student “argues, fights, gets angry, acts impulsively, and disrupts ongoing activities.” Notice that we see a measure of teacher perception, rather than counts of disciplinary events. Wright focuses on externalizing behavior because this measure is highly correlated with school suspensions.

On a scale in which the average measure of externalizing behavior is normalized to zero, white and Hispanic students average -0.07, while black students average +0.37. (Asian students average -0.38.) So on average, black students are viewed as having much worse behavior—which presumably reflects some combination of objectively worse behavior and perceived worse behavior.

Wright does something very clever, taking advantage of the fact that students are observed several times and that we know which students are in which classes with which teachers. Wright asks how black students are rated by black teachers, controlling both for the average rating an individual student receives from all of his teachers and for the average rating a particular teacher gives to all of her students in a given class. What this means is that Wright can identify how a black student’s behavior is perceived by a black teacher as compared to how the same student is perceived by white teachers. The procedure also adjusts for the possibility that black teachers are just more “easy going,” because the average rating given in a class is effectively subtracted off. So Wright is arguably identifying a causal effect of black students being matched with black teachers.
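In regression terms, Wright’s strategy amounts to regressing the behavior rating on a student-teacher race-match indicator while absorbing student fixed effects and teacher-by-class fixed effects. The sketch below is only an illustration of that design with hypothetical column names, not the paper’s code; since the headline result concerns black students, in practice one would estimate it on the subsample of black students or interact the match indicator with student race.

```python
# Illustrative two-way fixed effects regression (not Wright's actual code):
# the coefficient on race_match compares how the same student is rated by
# race-matched vs. non-matched teachers, net of each teacher-class's average
# leniency. Column names are hypothetical stand-ins for an ECLS-style panel.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ecls_behavior_panel.csv")   # one row per student per grade

# 1 if the student's race matches the teacher's race in that classroom
df["race_match"] = (df["student_race"] == df["teacher_race"]).astype(int)

# Student and teacher-class fixed effects entered as dummy variables.
# (A real application this size would use a dedicated fixed-effects
# estimator and cluster standard errors at the classroom level.)
model = smf.ols(
    "externalizing_score ~ race_match + C(student_id) + C(teacher_class_id)",
    data=df,
).fit()

# A negative coefficient means fewer reported behavior problems when matched.
print(model.params["race_match"])
```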

Being race matched matters a lot for black students but not for others

Bottom line: black teachers are much less likely to find problems with black students than white teachers are with the same students. The difference is enormous, accounting for about half the black/white externalizing behavior gap. (Remember that the data does not tell us whether black teachers have different perceptions of black students or whether student/teacher race matching leads to objectively different behavior.) For black students, being matched with a black teacher matters.


How about white or Hispanic students being matched with white or Hispanic teachers, respectively? Nope, no discernible differences in externalizing behavior. (To be clear, black teachers rate white students about the same as do white teachers.) In other words, being race matched matters a lot for black students but not for others.

Wright drills down further. First—and this is probably unsurprising—the effect of race matching is entirely due to the evaluations given to black boys. There isn’t a noticeable difference for black girls. Second, the effect of matching is limited to the year of the match. When Wright checked reports on black students who were assigned to white teachers following a year with a black teacher, he found no lingering effects of that year of being race-matched. This suggests that the findings reflect teacher perceptions rather than real behavioral differences, since we might expect improvements in behavior to persist the following year—and that’s not what happens.

How suspension rates between black and white students play into race matching

Wright then turns to the question of suspension. As is well known, black students are much more likely to be suspended than are white students. Wright shows that the more times a black student is matched with a black teacher, the less likely that student is to be suspended. Unfortunately, the data does not note the grade in which a suspension happened. It is reasonable to speculate that most suspensions come in later grades and that the finding is due in part to the effect of student-teacher race matching in earlier grades. We can’t be sure of this however, and some part of the finding may also be due to fewer suspensions of black students during years they have black teachers.

The difference in suspension rates is large. Taking these findings at face value, Wright estimates that if we doubled exposure of black students to black teachers, the black-white suspension gap would fall by half. Because of data limitations, it’s not possible to test whether black students’ likelihood of suspension changes when they move from a black teacher to a white teacher. Instead, Wright looks at black students who enter the same school at kindergarten but are exposed to different percentages of black teachers through eighth grade. So the causal interpretations about suspensions are less certain than are the interpretations about behavior reports.

In summary, black teachers’ perceptions of the behavior of black boys are very different from the perceptions of white teachers. This doesn’t happen for other racial groups. None of this necessarily suggests malice or prejudice or favoritism on anyone’s part. It does suggest one more way that race still matters in our schools.


Schools, black children, and corporal punishment

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution. A story on the post appears in the Washington Post.

As we approach the annual celebration of Dr. King’s life, it is worth examining the difference in how our schools discipline black and white children. In public schools in the United States, black children are twice as likely as white children to be subject to corporal punishment. Figure 1 shows the comparison, derived from nationwide data reported by schools to the Office of Civil Rights, Department of Education.[1] (All data is for the 2011-2012 school year, the latest year available.) The continuing disproportionate corporal punishment of black children is a reminder that some aspects of the “bad old days” are not fully behind us.

Incidence of corporal punishment per 100 students, annual rate

Figure 1

The 42,000 reported incidents of black boys, and 15,000 incidents of black girls, being beaten by educators in their schools reflect two facts. First, black students are more likely to be located in states that use corporal punishment extensively. Second, in many states black students are disproportionately likely to be singled out for corporal punishment. Figure 2 shows the annual incidence of corporal punishment by state, with states where the incidence is less than once per ten thousand students greyed out.

Annual incidents of corporal punishment per 100 students

Figure 2

While corporal punishment is used in almost every state, seven states account for 80 percent of school corporal punishment in the United States: Mississippi, Texas, Alabama, Arkansas, Georgia, Tennessee, and Oklahoma. [2] For black students, six of these states (Mississippi, Alabama, Georgia, Arkansas, Texas, and Tennessee) plus Louisiana account for 90 percent of corporal punishment. One reason that black students are subject to more corporal punishment is that they live in those states responsible for most of the corporal punishment of all children.

Where is corporal punishment racially disproportionate? Essentially, and sadly unsurprisingly, the first answer is that black students are disproportionately beaten in parts of the Deep South. Black students are twice as likely to be struck as white students in North Carolina and Georgia, 70 percent more likely in Mississippi, 40 percent more likely in Louisiana, and 40 percent more likely in Arkansas. Figure 3 shows the ratio of the frequency of corporal punishment for black students compared to the frequency for white students, with states where the incidence is less than once per ten thousand students or where the rate is equal or lower for black students greyed out.

States with disproportionate corporal punishment: relative probability of punishment for black vs. white students

Figure 3

Some high corporal punishment states are not particularly racially disproportionate. Texas, notably, uses corporal punishment on black and white students with equal likelihood. Texas shows up on the lists of where black students are hit because it is a large state that administers corporal punishment at a moderately high rate. Alabama—where the rate of corporal punishment is 10 times the national average—also shows equal rates of black and white children experiencing physical violence from educators. In North Carolina, black students are twice as likely to be struck as white students, but North Carolina uses corporal punishment relatively infrequently and so accounts for a small proportion of punishment of black students. Notably, in South Carolina the rate of corporal punishment is below the national average and is not racially disproportionate.

While heavy use of corporal punishment is more common in states of the former Confederacy, racially disproportionate application happens in northern states as well. Schools in Pennsylvania and Michigan are nearly twice as likely to beat black children as white, although both have low overall rates of corporal punishment.

Perhaps most surprisingly, corporal punishment in Maine is wildly disproportionate—with black children being eight times as likely to be hurt as white children. Colorado, Ohio, and California also have rates of corporal punishment for black children that are 70 percent or more higher than for white children.

In Figure 4, I show rates of corporal punishment for white students on the horizontal axis and for black students on the vertical axis. States above the 45° line in Figure 4 have racially disproportionate corporal punishment. The states clustered at the lower left of the graph have relatively lower rates of corporal punishment, sometimes disproportionate and sometimes not.

Mississippi stands alone.

Corporal punishment per 100 students

Figure 4

While the symbolism of continued physical violence against black children is inescapable, the disproportionate application of other forms of discipline may be of even greater concern. Except in Mississippi and Arkansas, the typical black student will probably not be subjected to corporal punishment during his school career. In contrast, school suspensions are much more common. Figure 5 shows rates of suspension by race.

Suspensions per 100 students

Figure 5

Note that an astounding 15 percent of black students receive an out-of-school suspension in a given year, a rate nearly 4 times that of white students; in-school suspensions are more than twice as likely among black students. Figure 6 shows out-of-school suspension rates for black and white students by state.

Out-of-school suspensions per 100 students

Figure 6

Out-of-school suspensions are applied disproportionately in every state—all points are above the red line. And these discipline patterns do not line up with old geographic patterns. The highest suspension rates for black students are in Wisconsin. And the greatest disparities (measured as the ratio of black-to-white suspension rates) are in the District of Columbia.

Every time a child is beaten in school and every time one is suspended and thus loses learning time, something or someone has failed that child along the way, regardless of the “reason” for the punishment. So long as these failures fall disproportionately on black children, we are not yet living up to the dream that “children will one day live in a nation where they will not be judged by the color of their skin but by the content of their character.”

Appendix

Discipline per 100 students, by state
Columns: corporal punishment (Black, White), then out-of-school suspension (Black, White)
Alabama 3.583033 3.869057 17.90929 5.468595
Alaska NA 0.103160 8.592955 3.645956
Arizona 0.029171 0.050941 12.32562 4.555046
Arkansas 5.897032 4.205989 18.15124 4.832772
California 0.051100 0.029146 14.44344 4.803916
Colorado 0.057981 0.022656 11.12507 3.413624
Connecticut 0.030811 0.025587 12.19697 2.117891
Delaware 0.000000 0.000000 18.50083 5.836653
District of Columbia 0.059520 0.000000 16.31737 0.872663
Florida 0.197324 0.239999 19.85793 8.790211
Georgia 1.125453 0.577535 16.07318 4.434872
Hawaii 0.000000 0.000000 1.538462 1.145156
Idaho NA 0.036444 5.891822 2.945636
Illinois 0.031373 0.038082 15.35005 3.406990
Indiana 0.063847 0.062103 20.57063 5.230889
Iowa 0.022850 0.061744 15.22203 2.656221
Kansas 0.011166 0.013882 12.80741 2.792517
Kentucky 0.048563 0.190368 13.02577 4.424482
Louisiana 0.759494 0.537434 13.41064 5.476110
Maine 0.432043 0.057917 8.010801 3.839565
Maryland 0.000000 NA 8.901257 3.729914
Massachusetts 0.004890 0.015254 10.63718 3.376658
Michigan 0.179947 0.090549 20.77535 5.397792
Minnesota 0.030967 0.028806 13.56956 2.204543
Mississippi 8.059325 4.747161 15.36107 5.419384
Missouri 0.683648 0.553408 20.52077 4.393734
Montana 0.000000 0.043081 5.335968 3.311940
Nebraska 0.019723 0.028463 18.29791 3.194068
Nevada 0.000000 NA 12.38813 4.214409
New Hampshire 0.000000 0.051235 13.11094 5.099147
New Jersey 0.018340 0.016058 11.39809 2.773110
New Mexico 0.088313 0.007161 10.81837 4.902616
New York 0.022768 0.031960 6.698778 3.031192
North Carolina 0.045919 0.022889 16.71766 5.230456
North Dakota NA 0.043300 2.846739 1.343501
Ohio 0.054495 0.028990 18.37799 4.323960
Oklahoma 0.937784 1.650464 15.15327 4.706784
Oregon 0.000000 0.006020 11.93396 4.788382
Pennsylvania 0.199160 0.099676 17.13759 3.469501
Rhode Island NA NA 16.29604 6.434000
South Carolina 0.021752 0.020802 17.06215 6.371353
South Dakota 0.116550 0.121998 8.071096 2.184158
Tennessee 1.020022 1.148149 19.35562 4.376236
Texas 0.825640 0.843602 12.84153 2.717089
Utah NA 0.004343 6.567593 1.978244
Vermont 0.000000 0.030065 6.702997 4.289775
Virginia 0.020342 0.017897 14.10793 4.615755
Washington NA 0.001729 11.65880 4.614335
West Virginia 0.143052 0.133395 17.50681 7.994511
Wisconsin 0.010465 0.027360 25.60784 2.697800
Wyoming 0.000000 0.016005 8.499234 3.295765

Discipline per 100 students is 100 times the ratio of discipline of a particular type reported for all students of a given race to the corresponding overall enrollment figures given by the Office of Civil Rights, for 2011-2012.
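The rates in the table are simple quotients of counts and enrollment. A sketch of the calculation, with hypothetical column names standing in for the OCR state aggregates (the actual OCR files use different field names):

```python
# Sketch: discipline per 100 students by state and race, from OCR-style counts.
# Column names are hypothetical stand-ins for the OCR state aggregates.
import pandas as pd

ocr = pd.read_csv("ocr_state_2011_12.csv")
# expected columns (hypothetical): state, race, corporal_punishment_count,
# oos_suspension_count, enrollment

for kind in ["corporal_punishment", "oos_suspension"]:
    ocr[kind + "_per_100"] = 100 * ocr[kind + "_count"] / ocr["enrollment"]

# Black/white ratio of corporal punishment rates, as plotted in Figure 3.
rates = ocr.pivot(index="state", columns="race", values="corporal_punishment_per_100")
ratio = rates["Black"] / rates["White"]
print(ratio.sort_values(ascending=False).head())
# e.g. Mississippi: 8.06 / 4.75, or about 1.7 -- the "70 percent more likely" above
```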

 


[1] The Office of Civil Rights collected data from all U.S. public schools. The data used here is aggregated to the state and national level by the Office of Civil Rights and can be found at http://ocrdata.ed.gov/StateNationalEstimations/Estimations_2011_12. Data is reported either directly by school districts or by states’ education agencies on behalf of the districts. Data on corporal punishment may count multiple incidents for a single student as multiple occurrences. According to the Office of Civil Rights, “Corporal punishment is paddling, spanking, or other forms of physical punishment imposed on a student.” Data for suspensions reports counts of students with multiple out-of-school suspensions as a single incident. Because all data is self-reported, it is not known whether all districts use the same standards in reporting.

[2] According to the Gundersen National Child Protection Training Center, corporal punishment in schools is banned in most states. Even so, states that do not permit corporal punishment in schools nonetheless report that schools beat children. Only Delaware and Hawaii report no corporal punishment. Corporal punishment is also rare in Maryland and Nevada, with fewer than three instances reported.


Brown Center Chalkboard

Dear reader: As you may have surmised from recent posts, I’ve moved my writing to the Brown Center Chalkboard of the Brookings Institution. Hope to see you there!


Professional non-development: Do teacher development programs work?

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution.

Do professional development programs for teachers actually develop better teachers? Should the large amount of money spent on teacher development be re-directed to better uses? “The Mirage: Confronting the Hard Truth About Our Quest for Teacher Development,” released this summer by TNTP (aka “The New Teacher Project”), raises serious questions about whether the entire teacher development enterprise should be abandoned.

“The Mirage” raises three issues based on an in-depth exploration of teacher development programs in three large, public school districts and one charter management organization. The report begins by looking for evidence on whether professional development works; i.e., Do teachers become better as a consequence? The second question examined is whether teachers think development works. Then finally, and perhaps most surprisingly, “The Mirage” looks at the dollars spent on professional development and finds that the costs are shockingly high.

Does professional development for teachers work?

Since the districts studied engage in large-scale teacher development programs, one might expect the result to be that teachers improve over time. Using overall evaluation scores according to each district’s own metric, the study finds no evidence that teachers improve. It’s worth pointing out that this is not the first study to come to this conclusion.[1] To state this a little more carefully, teachers typically improve substantially for their first few years in the classroom, with a flat trajectory thereafter—despite spending time in development activities. You can see what happens in this figure taken from “The Mirage.”

Average teacher performance by experience

Source: Figure 5, “The Mirage”

In order to drill down into the data, the authors of “The Mirage” labelled teachers as “improvers” or “non-improvers.” They ran the classifications a number of different ways, but always relied on each district’s teacher evaluation metric. Notably, the authors did not rely on value-added scores alone.

While overall performance is flat, might we be seeing a mix between improving teachers who received a lot of development support and non-improvers who didn’t? Apparently not. Improvers and non-improvers look pretty much the same when it comes to teacher development. “The Mirage” compared the frequency of a variety of development activities for improving teachers and others. Teachers who didn’t improve spent pretty much the same time being developed as those who did improve.

Frequency of Development Activities (Improvers / Non-improvers)
Number of times observed over two years: 8 / 7
Hours of coaching over two years: 12 / 13
Hours of formal collaboration over two years: 69 / 64
Hours spent per month in professional development: 17 / 18

Source: Figure 7, “The Mirage”

As the saying goes, absence of correlation does not prove absence of causation. Maybe teachers need ongoing training just to keep from getting worse, although I’m not aware that anyone has made such an argument.

Is it even true that teachers think that professional development activities work? Basically, not so much, according to TNTP. Only half of teachers agreed with the statement that professional development “drives lasting improvements to my instructional practice.” What’s more, nearly half of teachers whose performance did not improve also thought that development “drives [apparently nonexistent] lasting improvements.” In fact, fewer than 45 percent of teachers thought professional development “is a good use of my time.”

How much money is spent on teacher development?

You may—or may not—be surprised at evidence that teacher development basically doesn’t work. What I think you will be shocked by, as I was, is how much is actually spent on teacher development. The headline number from The Mirage is that the three districts they studied spend on average $18,000 a year per teacher on professional development. To give context to $18,000 a year, average teacher salaries are around $56,000. In other words, the cost of development is just a bit under a third of the cost of salaries. That is a very large amount of money. It is an especially large amount of money to spend absent clear evidence of results. It is a disturbingly large amount of money to spend in light of evidence that there are no results, according to “The Mirage.”

Where is all this money going? “The Mirage” does not break down the totals, but does give considerable background on what went into the calculations. The authors provide “low,” “middle,” and “high” cost estimates. The $18,000 headline number is the middle figure; the low end is $13,000 and the high end is $20,000.[2] As an example of the difference, the salary bump that comes with a master’s degree is included in the middle and high figures, but not in the low estimate. Paying for a higher degree is certainly a real cost, although perhaps not the first thing that comes to mind when one thinks of a district’s support of “professional development.” But getting a master’s degree does share something with the rest of the findings in “The Mirage”—we know that it doesn’t improve teaching.

Teachers spend time equivalent to almost 10 percent of the school year in professional development. Of this time, five to 10 school days each year are devoted to mandated development activities; the remainder is spent on self-initiated activities. So a non-trivial part of the cost of professional development is simply salaries and benefits for teacher time. But the larger fraction is paying for the time and resources of everyone else engaged in development. In fact, “The Mirage” compares the cost of staff training in schools (excluding the part that goes to teacher salaries and salary bumps) to the costs in other large organizations and reports that schools spend four to 15 times more than non-school organizations on staff training. One might speculate as to whether the high levels of spending by schools reflect a set of vested interests in the teacher development biz, but so far as I know there isn’t any evidence on why schools spend so much on training.

Professional development seems like an obviously good idea. Indeed, after writing “we found no set of specific development strategies that would result in widespread teacher improvement,” the authors of “The Mirage” argue “that doesn’t mean we should give up.” Well maybe we should give up. Or at least, maybe we should cut way, way back on what we spend on professional development and devote the freed-up funds to higher teacher salaries and more hours in which classroom teachers are in class with their students. In my book Profit of Education, I argued that the single most important education “reform” would be to pay teachers more…a lot more; my ballpark figure is 40 percent more. The considerable funds now spent, apparently ineffectively, on teacher development would be a good down payment towards improved teacher salaries.


Are we facing a nationwide teacher shortage?

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution.

Here’s a question to consider: Are teacher shortages…

A. Real?
B. Imaginary?
C. Both?
D. Neither?

Are we facing a nationwide teacher shortage? What do we mean by “shortage”—or, better, what should we mean? Let’s put some numbers on the Chalkboard, and then back up a bit and ask whether we’re asking the right questions.

Education degrees and hires of new and recent graduates

The figure shows the annual production of bachelor’s and master’s degrees in education. The most recent numbers are about 270,000 degrees per year. Compare this to the number of new teacher hires of current or recent graduates, which ran just under 150,000 in 2007. (New hires data courtesy of Dan Goldhaber, from the new CALDER Explainer “Missing Elements in the Discussion of Teacher Shortages.”) While the new hire data is only available every few years (it’s derived from the Schools and Staffing Survey (SASS)), we can take an educated guess that the low 2011 square is an anomaly and that the number of new hires is now back up at about pre-recession levels.

Conclusion from the graph? The green line is way above the black squares. We learn that we’re producing two or three times the number of teachers that we currently need. If that’s right, resources are being wasted.

Of course, not all teachers have degrees from education schools. So I took a look at numbers from the most recent SASS. Eighty percent of teachers have a BA from an education school. Of those who don’t, 76 percent have a master’s from an ed school. So the fraction of teachers without any kind of ed degree is

0.20 × 0.24 = 4.8%

This number is a little rough as it comes from one survey taken at a single point in time, but it’s safe to say that there are only a few people becoming teachers without some kind of education degree. So while new supply is actually higher than the green line, it’s not all that much higher.

However, most master’s degrees in education go to teachers who want further training or who want the pay bump that comes with a graduate degree. While we don’t know the educational background of graduate students in education, we can use the most recent SASS to find out how many teachers who got MAs in education already had a BA in education. The answer is 76.4 percent.

Suppose we take, as a ballpark number, that all but a quarter of MAs in education are “redundant.” This does not mean that they aren’t valuable, just that they aren’t adding to the number of new teachers produced. I’ve made this adjustment in this next graph, lowering the number of “first ed” masters as well as the total. With this adjustment, the production of first-time education degrees has been pretty flat at about 145,000. That’s somewhere around the number of newly hired teachers. In other words, the flow of new teachers supplied each year is roughly the same as the demand for new teachers.
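Using the round numbers from these posts (roughly 100,000 education BAs a year, roughly 270,000 education degrees in total, and the ballpark that all but a quarter of education MAs go to people who already hold an education BA), the adjustment works out as in the sketch below; the inputs are approximations taken from the text, not new data.

```python
# Ballpark adjustment for "redundant" education master's degrees,
# using the approximate figures quoted in the text.
ed_bas = 100_000              # education BAs per year (roughly)
ed_degrees_total = 270_000    # education BAs plus MAs per year (roughly)
ed_mas = ed_degrees_total - ed_bas

first_ed_ma_share = 0.25      # "all but a quarter" of MAs are redundant (1 - 0.764, rounded)

first_ed_degrees = ed_bas + first_ed_ma_share * ed_mas
print(first_ed_degrees)       # 142,500 -- close to the ~145,000 quoted above
```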

Estimates of First Education Degrees and hires of new and recent graduates

In the short run, newly trained teachers are not the only source of supply. Slots can also be filled by hiring back experienced teachers who have left the profession temporarily. In fact, something like half to two-thirds of openings are filled that way. But in the long run, the annual supply has to meet or exceed the annual demand. We may be moving closer to a national teacher shortage than it first appears.

Now let’s turn to the question of whether talking about a “national teacher shortage” even makes any sense.

You can have a shortage in specific kinds of instructional needs, STEM teachers for example, or shortages in particular areas of the country, without there being a national shortage. On the first point, the “Missing Elements” CALDER Explainer shows that teacher production for both STEM and special education has been relatively flat for a very long time. With regard to geographic variation, Chalkboard reader Jason Dyer comments, “Oklahoma has given out 842 [emergency certificates] this year alone, more than the last four years combined.” Dyer goes on to point to current teacher shortages in Nevada as well as Oklahoma.

In other words if you’re a principal trying to hire for a specific slot and not getting any applicants, words about overall national availability provide little comfort.

The second point is that discussing quantity without discussing quality is a mistake. Oklahoma has only about 41,000 teachers. If the state issued emergency certificates in a single year to more than two percent of the teacher corps, one can’t help but worry that teacher quality is at risk.

The U.S. can always get “enough” teachers. Just lower standards enough. On the flip side, even when we produce more than enough bodies to fill classrooms, that doesn’t demonstrate that all or most of the new teachers have the skills and drive for the tough job of running a classroom.

In the private sector, salaries get adjusted to get enough, good enough employees. That doesn’t happen in public schools. Rather than talking about shortages in terms of body count, we should be asking whether we are producing (and retaining) enough really, really good people in the classroom. Quantity matters when it comes to teacher supply, but it’s a mistake to talk about quantity without talking about quality at the same time.


Student teaching: Can we leverage the recent teacher “shortage” to students’ advantage?

Today’s post appeared on the BROWN CENTER CHALKBOARD at the Brookings Institution.

A possible teacher shortage has been much in the news—in California, for instance, the number of new credentials has fallen by about half since 2004, while K-12 enrollment stayed roughly flat. (Take a look at Paul Bruno’s July Chalkboard post.) Whether this is a transient post-Great Recession blip or a major trend is not yet clear. It’s hard to say just yet what fewer teacher candidates means for teacher quality, because we don’t know much about how the characteristics of people getting certified have changed, or whether schools are having more trouble finding good candidates. However, I’d argue that the drop-off in credentials actually creates an opportunity to raise teacher quality through another channel: improvements in student teaching.

A chance to trade off lower quantity for higher quality

As I discuss below, evidence suggests that a high-quality student teaching experience is valuable in getting a new teacher off to the right start. But people who’ve looked at the question tell me that practice teaching is currently over-subscribed and under-managed—there aren’t enough high-quality slots available, and as I describe below, standards are low.  The current drop in the number of new teaching credentials, at least some of which have historically gone to people who never end up teaching, makes it more likely that new teachers will get the chance to learn from a top-notch mentor.

In short: Fewer teacher candidates vying for limited student teaching slots could mean better prepared newbie teachers. If education programs take the opportunity to shore up their requirements, the teacher “shortage” could in part translate into a tradeoff of smaller quantity for higher quality.

Student teaching: Exploring the data (and the lack thereof)

A first question might be “Is a high-quality student teaching experience important?” Very little data exists on student teaching experiences and outcomes. But the available scientific evidence on the topic suggests that the answer is yes.

Donald Boyd and colleagues looked at teachers in New York City schools. They found that oversight of student teaching assignments by a teacher candidate’s program led to notably better outcomes (measured by value-added scores) in the first year of teaching. “Oversight” here meant that the cooperating teacher had a minimum amount of teaching experience and was selected by the program, and that a program supervisor made at least five observations of the student teaching. The gain from oversight held for both math outcomes and English language arts. Unsurprisingly, the effect did not seem to persist into the candidate’s second year on the job.

Boyd and colleagues also looked at whether simply having any student teaching experience at all mattered. Student teaching did matter for the first year of teaching math, with results being mostly inconclusive for English language arts.

State regulations of student teaching

Now, given that good practice teaching is valuable, do all students in education programs get such an experience? The answer appears to be no.

Here too, the data is very limited. I’ve put together a few pieces that are available; the news is not reassuring.

I had assumed practice teaching was a more-or-less universal requirement. (I was wrong.) Here are three questions about state regulations. See what your guesses are before looking at the answers.

How many states require:

  1. Student-teacher placements of at least 10 weeks?
  2. Placements that represent a full-time commitment for the student, equal to at least 12 credit hours?
  3. A supervising teacher with at least three years of experience?

The answers, from the National Center on Teacher Quality’s (NCTQ) “Student Teaching in the United States”:

  1. 10 week placements: 27 states.
  2. At least 12 credit hours: 18 states.
  3. Experienced cooperating teacher: 11 states.

State regulations don’t necessarily describe actual practice, of course. Regulations are a minimum that we expect many programs to exceed. For a more direct look at what education schools do, I looked at a different metric on student teaching. Using NCTQ data, I calculated what fraction of education programs in each state are rated by the NCTQ as requiring a “capable mentor” for student teachers. (The NCTQ defines a “capable mentor” as one possessing demonstrated mentorship skill or having taken “a substantial mentorship course.”) The following map ranks each state from yellow (most programs do require a capable mentor) to blue (they don’t).

Fraction of programs requiring capable mentors for student teachers

Source: NCTQ, personal communication

One doesn’t see a whole lot of yellow or orange.

Making the tradeoff

It’s tough to find qualified teachers to mentor candidates through the vital step of practice teaching. If we find ourselves training fewer teacher candidates, education schools should presumably have a better shot at arranging high-quality practice teaching placements. This is especially true for schools that train a fair number of undergraduates who do not end up taking teaching positions. The available good practice teaching positions won’t have to be spread quite so thin.

When it comes to teaching, most people will argue that quality is more important than quantity. If having fewer students vying for teaching posts increases the fraction of new teachers who start their career coming off a well-mentored student teaching experience, the result of a teacher “shortage” might be less worrisome than people think.

Editor’s note: This blog post was updated on October 15, 2015 with a new version of the included map. 

 


Summer!

ProfitOfEducation is going on break for the summer. Hope to see you back in the new school year!


College graduation

While this blog is mostly about K-12 education, some of the rhetoric you hear nowadays tends toward the idea that the goal of K-12 is “college readiness.” Regardless of whether you’re in agreement with “college readiness” as an overriding goal, I’d like to point out that being ready to start college is an entirely different kettle of fish from being ready to complete college.

Here’s a graph showing the fraction of students who complete a 4-year degree within 5 years of having started a 4-year college, with the horizontal axis giving the year in which the students entered college.

5 year graduation rates

You’ll see that the numbers are up. Even so, only about half of entering men, and a moderately higher fraction of women, actually complete their degree. More than a third don’t.

What do you think of the following: High schools should not only report how many of their students they send off to college; they should also report how many complete their degree.


One Response to College graduation

  1. Nordy says:

    What do you think of the following: High schools should not only report how many of their students they send off to college; they should also report how many complete their degree.

    Would that really tell you anything about the education provided by the high school? My sense (would like to have hard data to back up), is that most college dropouts are due to external, non-academic factors. I don’t know that a high school should be held responsible if a former student is unable to pay bills, or develops a drinking problem.


Large districts

Here’s a factoid that I hadn’t known. Public school students are quite a bit more likely to attend school in a large school district than was true in the past. I’ve made a little chart showing the percentage of students in districts with over 10,000 pupils.

students in large districts

In 1979, 46 percent of students were in large districts. Today (well 2011, which is the latest data), the number has risen to 55 percent. At the same time, the number of very small school districts–those with fewer than 300 kids–has fallen by a third.

Much of this may just be a consequence of population growth. To some extent we’d expect all districts to have more students than in the past. It turns out that there’s something more than that going on, as the number of school districts has fallen 15 percent over the same period.

I’m pretty sure that closing very small districts is mostly a good thing. They’re probably too small to be efficient. But is it a good idea to have more students in mega-districts? Or is there a sweet spot somewhere in-between tiny and mega?
