Can Traditional Public Schools Replicate Successful Charter Models? A Different Take


RCEd Commentary

Although the politics concerning charter schools remain contentious, most education observers agree that some charters have had real success in helping children from impoverished homes learn more. (Charter schools are publicly funded, but operate under fewer regulations than other public schools, in exchange for some type of accountability, set forth in the school’s charter.)

If you believe that some charter schools are successful, a natural next step is to ask what those charters are doing and whether it could be replicated in other schools. A recent study tried to do that, and the results looked disappointing. But I think the authors passed over a telling result in the data.

The researcher is Roland Fryer, and the first study, published in 2011 with Will Dobbie, analyzed successful charter schools on a number of dimensions. It concluded that some factors one might expect to be associated with student success were not significant: class size, per-pupil expenditures, and teacher qualifications, for example. It identified five factors that did seem to matter: frequent feedback to teachers, the use of data to drive instruction, high-dosage tutoring for students, increased instructional time, and high expectations. Fryer (2014) then sought to inject those five factors into public schools in high-needs districts, starting with 20 schools in Texas. The researchers increased the number of occasions for teacher feedback from 3 times each year to 30. Staff learned instructional techniques developed by Doug Lemov and Robert Marzano. Schools had parents sign contracts and students wear uniforms, along with other marks of a high-expectations school culture. The outcome measures of interest were school averages on state-wide tests.

So what happened? In math, it helped a little. The effect size was around 0.15—that is, a student who would have performed at exactly the average (50th percentile) would perform at the 56th percentile because of the intervention. In English Language Arts, there was no effect at all.
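As a quick check on that percentile claim (my arithmetic, not the paper's): an effect size expressed in standard-deviation units maps onto a percentile through the normal CDF, assuming test scores are roughly normally distributed. A minimal sketch:

```python
# Back-of-the-envelope check, assuming roughly normally distributed test scores:
# convert an effect size in standard-deviation units to a percentile.
from scipy.stats import norm

effect_size = 0.15                  # the math effect reported in the study, in SD units
percentile = norm.cdf(effect_size)  # fraction of the original distribution below the shifted mean
print(round(percentile * 100))      # ~56, i.e., an average student moves to about the 56th percentile
```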

Fryer tried the same thing in Denver (seven schools) and got identical results. In Chicago (29 schools) there was no effect in either math or reading.

Two questions arise. Why is the effect so small? And why the difference between math and reading?

Fryer does not really take on the first question, I guess because there is an effect on math achievement. In the conclusion he claims, “These results provide evidence suggesting that charter school best practices can be used systematically in previously low-performing traditional public schools to significantly increase student achievement in ways similar to the most achievement-increasing charter schools.” Whether or not the cost (about $1,800 per student) was worth the benefit is a judgment call, of course, but the benefit strikes me as modest.

Fryer does address the different impact of the intervention on reading and math. He speculates that reading scores might be harder to move because many low-income kids hear and speak non-standard English at home. There is some grounded speculation that hearing different dialects of English at home and at school may affect learning to read (see Seidenberg, 2013). But I doubt non-standard English is decisive in fourth grade and up, and those were the students tested in this study.

My guess is that another factor is relevant both to the size of the math effect and to the lack of effect in reading. Much of Fryer’s intervention was directed toward a seriousness about academic content. But actually getting students to spend serious, sustained time on academic work was the factor that Fryer was least able to address. The paper says, “In an ideal world, we would have lengthened the school day by two hours and used the additional time to provide tutoring in math and reading in every grade level.” But due to budget constraints they could tutor in only one grade and one subject per school. They chose 4th, 6th, and 9th grades, and they chose math. Non-tutored grades got a double dose of whatever subject students were most behind in. The double dose was whole-class instruction, and so presumably less effective than tutoring, which had a smaller student-teacher ratio. Teachers tried to schedule the double dose so that it did not cut into other academic time. Thus, it may be that the researchers saw puny effects because they had to skimp on the most important factor: sustained engagement with challenging academic content.

This explanation is also relevant to the math/reading difference. In math, if you put a little extra time in, it’s at least obvious where that time should go. If kids are behind in mathematics, it’s not difficult to know what they need to work on.

Once kids reach upper elementary school, reading comprehension is driven primarily by background knowledge; knowing a bit about the topic of the text you’re reading confers a big advantage to comprehension. Kids from impoverished homes suffer primarily from a knowledge deficit (Hirsch, 2007).

So a bit of extra time, while better than nothing, is just a start at an attempt to build the knowledge needed for these students to make significant strides in reading comprehension. And in this particular intervention, no attempt was made to assess what knowledge was needed and to build it systematically.

This problem is not unique to Fryer’s intervention. As he notes, it’s always tougher to move the needle on reading than on math. That’s because experiences outside of the classroom make such an enormous contribution to reading ability.

Thus, I find Fryer’s study perhaps more interesting than Fryer does. On the face of it, his intervention was a modest success: no improvement in reading, but at least a small bump in math. To me, this study was another in a long series showing the primacy of curriculum in student achievement.


References

Dobbie, W., & Fryer, R. G., Jr. (2011). Getting beneath the veil of effective schools: Evidence from New York City (NBER Working Paper No. 17632). National Bureau of Economic Research.

Fryer, R. G. (2014). Injecting charter school best practices into traditional public schools: Evidence from field experiments. The Quarterly Journal of Economics. doi:10.1093/qje/qju011

Hirsch, E. D. (2007). The knowledge deficit: Closing the shocking education gap for American children. Houghton Mifflin Harcourt.

Seidenberg, M. S. (2013). The science of reading and its educational implications. Language Learning and Development, 9(4), 331–360.

 
