Re-framing the Conversation on MCA Results

The 2014 Minnesota Comprehensive Assessments (MCA) results are out, and the responses in the news are enough to confuse even the most experienced and knowledgeable education professional. It’s not the tone of the responses (positive, negative, or anywhere in between) that’s bewildering; it’s the lack of framing. Ask 10 people on the street about the intended use of the MCAs, and there will likely be 10 different answers.

The answer should be simple: the MCA and similar assessments were designed to provide information about overall school and program performance for a group of students at a particular point in time. These tests should be used to improve our collective understanding of how our students are performing, not to point fingers.

The MCAs are summative assessments, created to evaluate student learning against standards or benchmarks, typically at the end of the school year. That may be a simple statement, but if you read Beth Hawkins’s recent article and Sarah Lahm’s rebuttal, you will quickly see how complex assessing student knowledge is and how polarized the debate has become.

Critics broadly argue that the MCAs (and standardized tests in general) are a waste of time and resources because they are not an accurate measure of student knowledge and are not helpful for teachers.

The MCA results can actually tell us a great deal about students’ skills in specific subject areas. Take the reading MCAs, for example. While the 3rd grade reading MCA cannot tell us exactly how well an individual student reads, it can tell us whether a group of students is reading at grade level.

Further, given that the MCAs are based on the Minnesota State Standards and are given to students in the spring, a few months before the end of the school year, how could they possibly be useful to teachers in working with that year’s class of students? They can’t be, but they are not supposed to be. Critics are mischaracterizing standardized assessments like the MCAs by comparing them to formative assessments. Formative assessments are specifically designed to give teachers more real-time (and more actionable) feedback on student learning and instructional techniques, but their results are not designed to be aggregated. Therefore, the success of a district or a school cannot be judged on formative assessments.

An easy way to think about the difference between the two assessment types comes from Grant P. Wiggins: “When the cook tastes the soup, that’s formative assessment. When the customer tastes the soup, that’s summative assessment.” If we play that analogy out, when a cook tastes the soup, it is to refine the flavor, seasoning, consistency, and so on; adjustments can be made on the fly. When the customer tastes the soup, it is to enjoy the flavor and satisfy a craving. Adjustments are untimely at that point, but the customer is nonetheless assessing the chef’s ability to create a flavor profile. To judge the totality of the chef’s skill, experience, and training on one bowl of soup is unreasonable.

Proponents, on the other hand, laud the MCAs as a great measure for accountability purposes, and since both the Minneapolis and St. Paul districts have shown little to no growth compared to last year, they conclude that both districts are doing a terrible job. Yet the proponents are also missing an important point: just as a hammer by itself does not make a complete toolkit, the MCAs cannot be the only measure for identifying effective schools or districts.

The underlying point in the debate is how the results are being used. We should use the proficiency rates for what they were designed to do: summarize the performance of a group of students in a given year, help target resources, and give parents a data point for understanding school options.

Let’s also help schools, teachers, and community organizations do what they do best by increasing their capacity to understand data that can improve student outcomes. Teachers can glean important information from the MCAs, especially about how well previous students learned material related to the standards. That kind of analysis can help teachers plan how much time next year’s class might need on each standard. Principals, other district officials, and the general public can use the data to identify trends and areas in need of improvement.

And finally, let’s all promise to use this and all data as a flashlight and not as a hammer.

Jonathan May (jonathan@gennextmsp.org)
Director of Data & Research, Generation Next

SIF 2015