Assessing Deeper Learning
The American Institutes for Research, in May 2014, released a study tracking the progress of students who attended high schools in the Deeper Learning Network and comparing their outcomes to those of similar students from non-Network schools. AIR described this as an early “proof of concept” study, though the findings were still exciting. Relative to the comparison group, students who attended Network schools were more likely to finish high school on time, went on to college in greater numbers, got higher scores on state achievement tests, did better on assessments of problem solving, and rated themselves higher on measures of engagement, motivation, and self-efficacy.
This is great news, suggesting that well-designed high schools can succeed at teaching to the ambitious goals collected under the banner of deeper learning, and that the so-called “hard-to-measure” aspects of deeper learning—the development of inter- and intrapersonal competencies—can in fact be measured. The research suggests that these are meaningful outcomes that can be taught, learned, and assessed. Which brings me to David Conley’s paper, A New Era for Educational Assessment—the first in Jobs for the Future’s series of deeper learning research reports—which charts a new course for assessment and accountability in the nation’s high schools.
Conley’s analysis suggests that it won’t be easy for school systems and policymakers to get over their long-standing addiction to lower-cost, one-dimensional achievement tests. Nor will it be a quick, simple matter to create and scale up better assessment systems that provide more useful, multilayered information about students’ progress. However, given recent research into how people learn and what it means for them to be “ready” for college and careers, it has become impossible to keep pretending that our current testing approaches are adequate. Our only choice is to commit to doing the sort of hard, slow, methodologically sophisticated work that AIR has started, and which will lead, over time, to the building of large-scale assessment systems that measure the things that really matter.
Conley offers a thoughtful account of the rise of standardized achievement testing, and describes how and why we must begin to tip the balance toward high-quality, low-stakes assessments that provide much more useful insights into student progress. He is interested not in railing against standardized tests but rather in showing just how out of step they are with contemporary knowledge about learning and human development. As he describes them, multiple-choice reading and math tests aren't bad or broken so much as anachronistic. Conley notes that two strands of research have been game-changing for the world of assessment:
First, advances in cognitive science have yielded important new insights into how people organize knowledge. For decades, achievement tests have reflected the assumption that learning is primarily an additive process, involving the steady accretion of discrete bits and pieces of information. However, recent evidence suggests that the brain makes sense of new input mainly by determining its overall importance and its place in the "big picture." Thus, while multiple choice tests can provide some useful information about students' grasp of particulars, they aren't nearly as informative as assessments (and, by extension, course assignments) that ask students to relate those particulars to bigger ideas, apply their knowledge to new and more complex tasks, and show that they grasp the overall significance of what they have learned.
Second is the body of research that has been central to Conley's own work, identifying the various capacities that enable students to succeed in college, the workforce, and other settings. As readers of this blog know well, recent evidence strongly suggests that academic content knowledge and skill are necessary but hardly sufficient to prepare young people for the future. Researchers have only begun to understand what educators can do to teach the inter- and intra-personal dimensions of deeper learning, as well as to help students plan for the transition to life after high school, but it is clear that these things matter, and it is equally clear that in order to teach them effectively, we will need assessments that help us gauge students' progress on multiple levels.
In the end, Conley offers no simple strategy for building the robust-but-affordable, valid-but-reliable system of assessments that we require. He does offer a number of specific recommendations for state and federal policymakers to consider, including a hopeful call for states to renew and expand upon earlier efforts—largely abandoned after the enactment of NCLB—to develop large-scale performance assessments.
It's not yet clear which way we're headed, Conley argues, but we do seem to be at a crossroads. Policymakers on both sides of the aisle, along with educators, parents, and especially students, appear to be exhausted by a dozen years of over-testing and are ready to go in a new direction. The question for advocates of deeper learning is, which direction seems most promising? In a world of limited resources, dodgy politics, and widespread educational PTSD (Post-Testing Stress Disorder), what assessments are most worth fighting for?
- AIR Deeper Learning Network study: http://www.air.org/project/study-deeper-learning-opportunities-and-outcomes
- Conley's assessment paper: http://www.jff.org/publications/new-era-educational-assessment
- This post was adapted from two articles originally published on Education Week's Learning Deeply blog on October 1 and 3, 2014.