Education Policy Research Cartels: Any Research Not Done by Us Is Trash


In an online discussion about educational testing, an exasperated education professional interjected that the whole discussion was moot. Had there ever been even a single study showing any benefit accruing from testing students?

I sent him a copy of my 2012 meta-analysis of several hundred studies estimating the effects of testing on student achievement, and never heard back. I have since discovered hundreds more such studies. All one had to do was look for them.

From the early 20th century on, psychology professors produced one of the most common study types: randomly assigning their dozens of introductory psychology students to two or more groups, with one group tested more, or differently, than the others.

Presumably, the skeptical educator had completed a degree in a graduate school of education and kept up with the literature in his field. How could he believe that such a large research literature did not exist? He could because that is what he was told, in education school and by the professional literature which steadily drummed a beat of “problems with testing.”

It is no secret that many education professors and professionals dislike testing. One way to express that dislike would be to lay out all the evidence for and against and explain why one leans against. Another, much more effective, method is to pretend that the evidence for it does not exist. If something does not exist, there is no point in looking for it. One wins the argument by default.

The American taxpayer funded the most substantial effort to bury the bulk of the research literature on educational testing. For several decades, the US Education Department bestowed millions of dollars on an exclusive consortium of testing-expert groups from several graduate schools of education and the RAND Corporation. From the start of their tenure around 1980, they declared ad infinitum that little to no relevant research existed on the effects of testing. Over time, their taxpayer-funded reports cited less and less previous research conducted by others and more of their own anti-testing research. Eventually, they cited their own research almost exclusively.

Over a couple of decades, the group successfully expunged a century’s worth of policy-relevant research on educational testing from the collective working memory. Then, in 2001, the George W. Bush administration sought advice for crafting its federal intervention into school testing, the No Child Left Behind (NCLB) Act. Its education policy advisors, mostly academic economists and political scientists with little expertise in educational standards and testing, chose to believe the experts at the RAND-education schools consortium. Just like that, most relevant scholarly research was ignored—even declared nonexistent—and the Bush Administration casually adapted an idiosyncratic Texas testing program for use nationally.

Though focused on K-12 policy at first, the wave of knowledge destruction inevitably washed over higher education: “there is a serious dearth of research investigating the characteristics and effects of testing in the postsecondary sector,” wrote a researcher who worked at RAND and Harvard’s Education School. In fact, higher education had contributed one hundred ninety-three (a majority) of the quantitative studies included in my testing effects meta-analysis. Most of those studies employed experimental designs. Higher education also contributed several dozen surveys and qualitative studies to the testing effects database.

Education professors captured control of the National Research Council’s Board on Testing and Assessment (BOTA) in the 1980s. From that point on, study committees have been staffed with a plurality of anti-testing members. One of its studies declared nonexistent a research literature on employment testing a thousand studies strong.

Another BOTA report complained that a testing firm specializing in teacher licensure testing would not hand over the technical reports it had written for its dozens of state higher education clients. It did not because those reports were the property of those state authorities. Rather than make the effort to contact each state, the BOTA committee wrote, “Little information about the technical soundness of teacher licensure tests appears in the published literature,” and “The paucity of data ... made the committee’s examination of teacher licensure testing difficult....” In fact, the technical reports existed in abundance; I had written some of them myself during my tenure at that test development firm.

Three economics professors wrote in 2016, “Instructors are a chief input into the higher education production process, yet we know very little about their role in promoting student success.” Yet a search for studies on “instructor effectiveness in higher education” in the years up to 2015 returned over 130,000 references in ERIC. The same search in Google Scholar returned almost 18,000 references.

Several groups of economics professors agreed that “there is almost no research on” remedial education effectiveness in higher education. But a Google Scholar search on “remedial education effectiveness in higher education” returned over 18,000 references.

Some of us in the early 2000s had expected the Republican Party's education policy advisors to tear off the education establishment’s seal of censorship to reveal the rich, robust research literature on standards and testing and the optimal structure of testing programs. Alas, no.

Instead, through the 2000s to the present day, some education reformers have mirrored the behavior of the RAND-education schools consortium. They have repeatedly declared nonexistent previous educational research, most of it found in the psychology and program evaluation literatures. In turn, the economists and political scientists advising the national Republicans declared themselves the first in the history of the world to conduct studies on a wide range of related topics. I’m skeptical of most claims of a “uniparty,” but the term fairly describes some recent US education policymaking.

When not claiming to be research pioneers, the advisors might cite recent, related research of others in their small, exclusive group. But the work of thousands of scholars outside their group may as well have never been done, as they ignore it completely.

Experts in information networking call this type of mutual admiration society a “citation cartel.” Scholars operating strategically accelerate their career advancement by maximizing their perceived scholarly production relative to that of others. In concrete terms, this means directing as much attention as possible (in citations, references, mentions, etc.) toward oneself and one’s cooperating colleagues, while shading the work of others and their divergent evidence and points of view. Professional rivals may also be character-assassinated through whisper campaigns.

As unhelpful as strategic scholarship may be to society at large—it drastically reduces the amount, breadth, and quality of information available to the public and policymakers—it works well to supercharge the careers of those who adopt it. The professional behavior of half the current members of the prestigious National Academy of Education (NAEd) can be described this way: their literature reviews are unambiguously selective, and they are rewarded for helping to shape the known research literature to fit established group preferences.

With copious funding from the wealthiest foundations and the federal government, aggressive attention seekers eclipse any counsel from the genuinely expert. Instead, the public and policymakers hear repeatedly from partly informed members of the dominant citation cartels across a range of topics.

Nationally focused education journalists could help fix this problem by diversifying their expertise sourcing. Instead, they have long served as publicists for those academics and think tankers they perceive to be at the top of the status order. And, generally, they regard the word of inexperienced academics as superior to that of deeply experienced professionals, so long as the academic resides at one of the country’s more prestigious universities or think tanks.

With millions in extra funding from the Gates and allied foundations over the past two decades, journalists simply increased their sourcing from the same usual suspects: those with the resources to groom ongoing media relationships and to feed reporters self-serving stories already sketched out. Gates-funded journalists interview Gates-funded researchers and pundits.

Journalists and policymakers appreciate the convenience offered by these information gatekeepers. They may even believe that all policy-relevant information bubbles up through a universal filter—worldwide and historical—such that the “top scholars” represent the best of the entirety of information available. These self-proclaimed top scholars aggressively feed that impression.

In fact, these top scholars are limited human beings who struggle to keep up with all the research in their own topical subfields. What sets them apart is their willingness to claim intimate and thorough familiarity with all the relevant research when, in fact, they know only a small part.


