Essential Missing Ingredients in Obama Plan to Address Over-Testing
President Barack Obama, with Education Secretary Arne Duncan, arrives in the State Dining Room of the White House in Washington, Friday, Oct. 2, 2015. Wading into one of the most polarizing issues in education, Obama called last month for capping standardized testing at 2 percent of classroom time, while conceding the government shares responsibility for having turned tests into the be-all and end-all of American schools. (AP Photo/Sue Ogrocki, File)
RCEd Commentary
The Obama Administration’s recently released Testing Action Plan won’t work because it falls short in the trenches, where teaching and learning happen.
The plan and the Council of the Great City Schools testing survey report generated widespread commentary from experts, ed reform organizations, editorial boards, and pundits. Most reactions addressed such familiar policy areas as the importance of annual accountability and the high-stakes use of test results, which many experts think caused local testing to mushroom, while a few touched on the newest concept – a testing time cap.
I applaud the Administration’s mea culpa and many of the Plan’s concepts. However, the Plan and most reactions ignore or at best sidestep four areas without which we will fail to achieve more efficient assessment and enhanced student learning: accessibility, assessment literacy, formative assessment practices, and performance assessment.
The Action Plan cites accessibility and accommodations as leveling the playing field for all students. But federal policy too narrowly defines the students who can benefit from these supports, leaving millions un(der)served.
The U.S. Department of Education estimates that more than 12 percent of the student population has disabilities and classifies more than 9 percent as English Language Learners. Yet, a recent article about the Oak Foundation cited estimates that as many as 20 percent of adolescents have undiagnosed learning impairments. And data from the Smarter Balanced Assessment Consortium’s 4.2 million student field test in 2014 reveal that the accessibility supports and accommodations were used in one-third of the test sessions. Most tests don’t match the accessibility of the consortia assessments, yet ED’s Action Plan promises just a $25 million investment for multiple innovations, only one of which addresses accessibility.
Why does this matter? Inaccessible assessments frustrate students’ attempts to accurately demonstrate what they know and can do. These tests provide at best a muddy picture of student learning, leaving students – and their parents and teachers – at a loss as to what to do next. The resulting failed interventions lead to more (inaccessible) testing, continuing the cycle. By making all assessments fully accessible to all students, we will help them succeed, while reducing testing.
Much in the Action Plan and the CGCS report reflects sound measurement principles and what is encompassed in assessment literacy – the skills, knowledge, and beliefs educators need to select or create effective assessments and to use the results to advance student learning. Professionally developed accountability assessments aren’t the problem. But as both documents note, these represent a small fraction of the tests students take every year.
Auditing state and local tests and ensuring future assessments are meaningful and high quality require educators to be assessment literate. But they aren’t! A lack of assessment literacy leads to ineffective tests, inaccurate or ambiguous results, misdirected interventions, hampered student learning, and ultimately, more tests. Resources are squandered, time wasted, opportunities to learn irretrievably lost.
Producing webinars and posting materials, as offered in the Plan, aren’t enough. Assessment literacy is barely covered, if at all, in most educator training, licensure, evaluation, and professional development programs. Meeting the challenge requires professional development, coaching, and educator collaboration focused on gathering evidence of student learning and effectively using it to promote learning – a skill set researchers have found to be a glaring and widespread weakness. To really make a difference, ED should require educator preparation programs to effectively cover assessment literacy and should insist that licensure exams and educator evaluations appropriately address it. The Action Plan mentions neither.
We can’t address the over-testing issue while ignoring the most researched type of assessment. The relevance of formative assessment (which I addressed in a previous commentary) to the over-testing issue is profound. These practices help students and teachers adjust learning during instruction through qualitative feedback based upon wide-ranging evidence, not just tests. They promote students’ understanding of assessment, empowering them to self-assess, peer-assess, and take ownership of their learning, reducing the need for typical tests. And they work – as effectively as one-on-one tutoring and more effectively than reducing class size – helping all students grow, and lower-performing ones the most. Yet this invaluable process wasn’t mentioned in the Action Plan, the CGCS report, or most reactions I read.
The Action Plan and the CGCS report are replete with such well-worn terms as critical thinking, college and career readiness, real world, complex demonstrations, application of knowledge, and accurate measure. Performance-based assessment and learning/instruction – barely mentioned – are perhaps the best ways to make these terms come alive because they ask students to demonstrate their learning through meaningful work products or performances. The results provide unambiguous pictures of student learning, which can lead to effective interventions and student growth.
Multiple-choice tests – which most people associate with standardized testing – have typically focused on factual recall and lower-level skills. What’s more, they measure student learning indirectly, with the reasons for incorrect – and correct – answers hidden. This uncertainty can lead to inappropriate interventions, a lack of student progress, and the need for more tests. So, while the up-front investment in performance assessments is greater, in the long run they can be more efficient and effective.
Transitioning to a performance-based (or project-based) curriculum isn’t easy. But it works, and schools across the country – even schools that avoid standardized tests – have done so. Expanding the ranks would get a huge boost if accountability assessments were more performance-based (yes, high-stakes assessments can influence curriculum and instruction for the better). Curriculum-embedded performance assessments provide meaningful insights into student learning and can be efficiently incorporated in accountability systems, permitting a reduction in the on-demand testing component.
Performance assessment challenges the wisdom of the proposed testing time cap by adding another dimension – assessment as learning, a concept illustrated by computer games (aren’t they really a series of tests, and aren’t players continuously learning?). It can also be applied to performance- or project-based learning, in which case the assessments and projects are part of curriculum and instruction. Do we really want to limit this type of meaningful, higher-level learning?
Addressing over-testing requires changes in policy and practice, especially in the trenches. The four areas I’ve cited are essential to this effort and will dramatically improve the learning success of current and future students.