Testing

Attribution Errors in America’s Classrooms

Cause and effect aren’t always clearly and correctly paired in America’s classrooms.

Teachers don’t always have the time, energy, or awareness to properly attribute underperformance.

  • Is a disengaged student sleeping at his or her desk being lazy or suffering from lack of sleep?
  • Is disruptive behavior the reflection of student boredom or a cover for not understanding the material?
  • Does a sudden drop in academic performance really reflect a student’s intelligence, or simply a need for reading glasses?
  • Are poor test scores more a reflection on the teacher’s failings or a lack of support and encouragement for students at home?

Fundamental Attribution Error: School Edition

The gap in understanding in the classroom can be compared to the way road rage incidents escalate.

Suppose you fail to notice a light turning green at an intersection. You might explain the lapse in any number of ways: you were caught up in a song playing on the radio, you were monitoring the progress of a pedestrian nearby, you were checking the car behind you in the rearview mirror. All of these explanations emphasize external factors rather than individual failings.

Now suppose you were behind a car that wasn’t moving after the light turned green. Same situation, same outcome, but many people would instinctively blame that driver in more intrinsic ways — the driver was being stupid, careless, selfish, etc.

This is the essence of the fundamental attribution error: we look outside ourselves to explain our own behavior, but focus on internal factors to explain the behavior of others.

We see a similarly challenged dynamic in dating and romantic relationships. In the absence of good communication, each member of a couple is prone to developing his or her own narrative to explain behaviors, perceived emotions, and even the couple’s successes and failures. When he comes home at night, is he being dismissive and distant because something is wrong with the relationship, or simply because he hasn’t stopped worrying about a bad day at work? Is she struggling to come up with a dinner destination because she doesn’t know what she wants, or is she being purposefully passive-aggressive?

When applied to academic settings, the same fallacy is apparent. Both students and teachers are accused of not caring enough to try harder or perform better. A national preoccupation with educational outcomes — the effect we look for from our schools — has exacerbated a lack of understanding about the inputs, or causes.

Looking Upstream in Education

It is human nature to look for patterns; in the absence of clear, verifiable patterns, it is also human nature to invent them, even in spite of contrary evidence. In sports, for example, this can manifest as superstition:

  • don’t shave during Stanley Cup Playoffs
  • don’t wash your team jersey until the season is over
  • don’t curse a pitcher by saying he is on track to throw a perfect game

The outcome — surviving playoffs, having a good season, pitching a perfect game — clearly has no measurable or meaningful connection to the behaviors extolled, but the belief in their significance continues undeterred. In social contexts, a similar fallacy prevents us from correctly attributing effects to their causes. In education, it is possible we have focused on desired outcomes that fail to account for the power of confounding variables.

The variables of student life today are too many to count: home life, social life, and social media; the quality of instruction; the presence of role models; even the medium of instruction and assessment. All of them get in the way of cleanly assigning cause and effect.

To move beyond fundamental attribution error and the old habit of superstition, we must spend more time and energy looking upstream for the real causes that need our attention. Going upstream is a principle of public health in which caregivers go beyond treating symptoms and instead look for opportunities to prevent sickness and injury. Businesses engaged in corporate social responsibility and other forms of social entrepreneurship take a similar approach: throwing money at a problem or social ill no longer impresses consumers or shareholders. Looking upstream for opportunities to meaningfully impact communities and benefit the world not only makes for a better story; it makes for more lasting forms of giving.

Both of these examples apply in education as well. By going upstream to understand what drives student performance, classroom behavior, and any other outcomes we care to monitor, we can better connect cause and effect and control for other variables. Going upstream in education isn’t just a matter of more spending or more resources, but of aiding teachers, administrators, and the general public to focus on what really drives outcomes.

When we stop focusing on outcomes to the exclusion of understanding inputs, we create a machine for using money and resources without generating improved results. When we go upstream to identify the real cause and effect relationship surrounding school, we can put our resources where they will have the greatest benefit.

Learning vs. Testing: Can Tech Bridge the Gap?

**The Edvocate is pleased to publish guest posts as way to fuel important conversations surrounding P-20 education in America. The opinions contained within guest posts are those of the authors and do not necessarily reflect the official opinion of The Edvocate or Dr. Matthew Lynch.**

A guest column by Edgar Wilson

Somewhere over the last few decades, teaching and testing developed an adversarial relationship in America’s classrooms.

Public policy debates swarm like bees over sticky questions on the issue of assessment:

  • How much is too much?
  • How can we evaluate teachers without standardized measures of outcomes?
  • Are students under too much pressure from high-stakes testing?
  • Are teachers “teaching to the test” at the expense of more comprehensive instruction?

The conflict and controversy are damaging to everyone involved in education: students, teachers, administrators, and employers. Everyone has an interest in seeing America’s schools be the best they can possibly be, but there is a lot of disagreement over what “best” actually looks like. In part, this is because we can’t even agree on how to measure quality.

Taking Advantage of the Digital Future

This may be another area where the field of education can learn something from the healthcare world.

In a presentation on the future of healthcare, Northeastern University professor Carl Nelson drew parallels between the field of management and the evolution occurring in healthcare.

“We have the Big Data movement and data analytics not only providing appropriate diagnoses, but also guiding us all,” explained Nelson. “It can be used appropriately to guide decision-making, to make judgements.”

In essence, this means best practices growing naturally from more robust data.

Data acquisition in business, as well as in medicine, is occurring right at the point of implementation: wearable sensors, product-trackers, and communication systems connected through the growing Internet of Things (IoT) enable data scientists to watch, live, as people and organizations operate. By aggregating social media statistics, watching how customers (and potential customers) interact with a brand, and noting which life events, interests, and behaviors correlate with consumer activity, businesses learn about both their customers and themselves.

Examining all the data—passively generated and actively gathered—allows analysts to then identify shortcomings, inefficiencies, bottlenecks, and missed opportunities.

“We have moved greatly in the field of management from management based on intuition certainly, over a long period of time, to a management based on evidence (evidence-based management),” Nelson said. “The same thing is happening and has been happening in the field of medicine—so-called ‘evidence-based medicine’.”

In education, the challenge has been, and in many respects continues to be, finding reliable sources of “evidence” on which to base changes to best practices. As in management and medicine, intuition and expertise have an important role in education: teachers often remark on the reward of witnessing “Ah-ha” moments in the classroom. It isn’t something measured or captured in a discrete assessment; it is an organic thing, the look on a student’s face when the elements of a lesson all click into place, the whirring of mental machinery once the fuel of understanding is suddenly injected.

Where’s the Evidence?

The trappings of business and healthcare have been upgraded to support Big Data’s newly prominent role. Analysts review measures of existing behaviors, collected and recorded on the spot, and use them to develop new, evidence-based best practices.

Tablets, laptops, mobile devices and other digital tools and toys are destined for a prominent role in America’s classrooms; that much seems safe to assume given current trends.

And just as the Internet of Things, applied in clinical settings and connected to individual patients, doctors, and institutions, is providing new troves of real-time data feeds and outcome patterns, so too can applications be deployed in classrooms and attached to individual students to give instructors a new, closer, quantifiable look into what drives learning outcomes.

Mark Oronzio, CEO of Ideaphora, is one of the innovators working to bring formative assessment—measuring the learning process as it happens—to a place of greater prominence in the classroom.

“What we’re looking to do is provide a more automatic assessment of a knowledge map,” says Oronzio.

His company’s product builds on an existing concept, the knowledge map, a way to graphically represent and record the learning process.
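As a rough illustration, a knowledge map can be modeled as a graph of concepts and labeled connections. The sketch below is a generic, hypothetical design (the class name, methods, and photosynthesis example are assumptions, not Ideaphora’s actual software); it shows how the connections a learner draws could be compared against an expected set to yield a crude formative score:

```python
from collections import defaultdict

# Hypothetical sketch of a knowledge map: concepts as nodes,
# labeled links as directed edges. Not a real product's API.
class KnowledgeMap:
    def __init__(self):
        self.links = defaultdict(list)  # concept -> [(related concept, label)]

    def connect(self, a, b, label):
        """Record that the learner linked concept a to concept b."""
        self.links[a].append((b, label))

    def coverage(self, expected):
        """Fraction of expected (a, b) connections the learner has made."""
        made = {(a, b) for a, outs in self.links.items() for b, _ in outs}
        return len(made & expected) / len(expected)

# A student mapping a photosynthesis lesson
m = KnowledgeMap()
m.connect("sunlight", "photosynthesis", "drives")
m.connect("photosynthesis", "glucose", "produces")

expected = {("sunlight", "photosynthesis"),
            ("photosynthesis", "glucose"),
            ("photosynthesis", "oxygen")}
print(f"coverage: {m.coverage(expected):.0%}")  # 2 of 3 expected links made
```

Because the map updates as the student works, a score like this could in principle be recomputed continuously rather than administered as a discrete test.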

When learning is captured as it happens, rather than by a summative assessment aimed at determining outcomes after instruction, real-time data can be provided to students, teachers, and even data scientists looking to correlate instructional methods with on-the-ground results.

“Aside from it being what we call an Authentic Assessment—it’s not a multiple choice quiz, where I got tricked, or because I’m not a good writer I didn’t do well on the essay—this is making connections and defining connections,” Oronzio explains. “We think it would be a more accurate assessment; it is actually a picture of what’s going on, and the connections going on in the learner’s mind. The other cool thing about it is, it is not a discrete test, it wouldn’t have to be administered necessarily.”

This is the essence of competency-based education (CBE), an ongoing method of assessment that, at best, might help displace some of the emphasis on standardized testing and high-stakes tests.

Snapshots of Learning

Abundant data makes a compelling argument.

Adding the capability to watch, measure, and analyze instruction leaves less room for politics and opinion to dictate changes to curriculum and assessment standards. By combining the principles of CBE with a method of visualizing and recording the associated data, education has an opportunity to launch into the Big Data playground.

Just as in business and healthcare, the educational revolution comes not just from the devices themselves gaining widespread adoption, but from the programs and applications whose use they make possible. It is far from a nail in the coffin of standardized testing, but it does demonstrate how technology can combine with traditional instruction to provide new windows into the academic environment.

__________

Edgar Wilson is an Oregon native with a passion for cooking, trivia, and politics. He studied conflict resolution and international relations and has worked in industries ranging from international marketing to broadcast journalism. He is currently working as an independent analytical consultant. He can be reached on Twitter @EdgarTwilson.


Should we grade teachers on student performance?

Should teachers be judged on student performance? Is it a fair assessment of their skills as educators?

A recent study published in Educational Evaluation and Policy Analysis is the latest in a body of research that casts doubt on whether it is feasible for states to evaluate teachers based partially on student test scores. Research shows little to no correlation between high-quality teaching and the appraisals these teachers are given.

We have seen a sharp rise in the number of states that have turned to teacher-evaluation systems based on student test scores. The rapid implementation has been fueled by the Obama administration making such systems mandatory for states that want to receive Race to the Top grant money or a waiver from the 2002 federal education law, No Child Left Behind. Already, the District of Columbia and 35 states have made student achievement a significant component of teacher evaluations; only 10 states don’t require student test scores to be factored in.

Many states also use value-added models (VAMs): statistical models that estimate how much individual teachers contribute to student learning while holding constant factors such as demographics and prior achievement.
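In spirit, a value-added estimate works by predicting each student’s score from background factors and attributing the leftover residual to the teacher. The toy sketch below illustrates the idea on simulated data with a single control variable (prior-year score); all numbers and names are invented for illustration, and real VAM implementations are far more elaborate:

```python
import random
import statistics

# Toy value-added model on simulated data: regress current scores on
# prior scores, then average each teacher's residuals. Hypothetical
# numbers only; real VAMs control for many more factors.
random.seed(0)

true_effect = {0: 2.0, 1: 0.0, 2: -1.0, 3: 3.0}  # hypothetical teacher effects
data = []  # (teacher id, prior score, current score)
for _ in range(200):
    t = random.choice(list(true_effect))
    prior = random.gauss(70, 10)
    current = 5 + 0.9 * prior + true_effect[t] + random.gauss(0, 5)
    data.append((t, prior, current))

# Ordinary least-squares fit of current ~ prior (the lone control here)
priors = [p for _, p, _ in data]
currents = [c for _, _, c in data]
mp, mc = statistics.fmean(priors), statistics.fmean(currents)
slope = (sum((p - mp) * (c - mc) for _, p, c in data)
         / sum((p - mp) ** 2 for p in priors))
intercept = mc - slope * mp

def value_added(t):
    """A teacher's estimated effect: mean residual of his or her students."""
    resid = [c - (intercept + slope * p) for tt, p, c in data if tt == t]
    return statistics.fmean(resid)

for t in sorted(true_effect):
    print(f"teacher {t}: estimated value added = {value_added(t):+.2f}")
```

Even in this clean simulation the estimates carry noise; with real classrooms, where students are not randomly assigned, the residual also absorbs everything the model fails to control for, which is the core of the statistical objections to VAMs.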

These teacher-evaluation systems have drummed up controversy and even legal challenges in states like Texas, Tennessee and Florida when educators were assessed using test scores of students they never taught.

Just last month, the American Statistical Association urged states and school districts not to use VAM results to make personnel decisions. Recent studies have found that teachers account for, at most, about 14 percent of the variability in student test scores, with other factors accounting for the rest.

In my opinion, we need to make sure students are exposed to high-quality teachers. But is it fair to subject teachers to tough standards based on how students test? I do not believe so, especially in underprivileged areas. If we continue to scrutinize teachers with these stressful evaluations, we will only discourage them from taking jobs in urban and minority schools, perhaps where they are needed most.

The Future of K-12 Assessment

Many educators view standardized testing as a necessary evil of the improvement process. More cynical educators view it as a completely useless process that is never a true indicator of what students actually know. Proponents of K-12 assessments say that without them, there is no adequate way to enforce educator accountability.

Love it or hate it, K-12 standardized testing is not going away. It is just changing.

The No Child Left Behind Act uses standardized testing results to determine progress and outline areas for improvement in K-12 schools. This standards-based approach to education reform has often been attacked for being disconnected from what kids should really know, as opposed to what they are simply required to regurgitate for the sake of a test.

The Gordon Commission on the Future of Assessment in Education released a report in March that outlined steps needed to make K-12 assessments vehicles “providing timely and valuable information” to both students and educators. Among the recommendations made by the 30-member commission was the creation of a permanent council to evaluate standardized testing. The report also calls for a 10-year research study intended to strengthen “the capacity of the U.S. assessment enterprise.” The commission admits that the assessments of the future do not yet exist, but argues that their creation needs to begin now.

Commission chairman Dr. Edmund W. Gordon said:

“Technologies have empowered individuals in multiple ways — enabling them to express themselves, gather information easily, make informed choices, and organize themselves into networks for a variety of purposes. New assessments — both external and internal to classroom use — must fit into this landscape of the future.”

Based on the report, and what we know as educators, what do future standardized tests need to include to be successful in an increasingly digital classroom?

  • More assessment of HOW to obtain knowledge. Dr. Gordon touched on this point when he mentioned access to information and networking. There is more information available than can ever possibly be processed, so the way that this and future generations of students make informed decisions matters more than ever. Assessments of the future will need to ask more questions about the how of knowledge and not just focus on the what.
  • Higher levels of digital access. All facets of education are being impacted by the rapid evolution of technology and assessments are not immune. Not only should educators be able to tap into digital resources for assessment preparation, but students should be able to take assessments using the technology that makes them most comfortable. Filling in bubbles with number two pencils needs to become an assessment relic, replaced by convenient, streamlined technology options.
  • More critical thinking options. This goes hand-in-hand with how to obtain knowledge, but takes it a step further. Everyone can agree that applied knowledge is crucial to the learning process so standardized tests need to do better when measuring it. Every child needs to be able to articulate what he or she knows, not just repeat it.

Assessments in K-12 learning are sure to change in the next five years and beyond in order to adapt to changing classrooms. There will never be a perfect formula for assessment, but educators should never tire of trying to make standardized testing as applicable and helpful as possible.

What changes would you like to see in K-12 assessments?


Education officials to re-examine standardized testing

Education officials will re-examine standardized testing in the U.S. due to growing complaints from the public. The general consensus is that students from pre-kindergarten through 12th grade are taking too many exams.

Michael Casserly, executive director of the Council of the Great City Schools, which represents 67 urban school systems, recently said, “Testing is an important part of education, and of life. But it’s time that we step back and see if the tail is wagging the dog.”

The Council of Chief State School Officers, which represents education commissioners in every state, has also joined in on the effort.

Teachers have always administered tests, but exams became a federal mandate in 2002 under the No Child Left Behind Act, which requires states to test students annually in math and reading in grades 3 through 8 and once in high school.

In the past two years, four states have delayed or repealed graduation testing requirements. Four other states, including Texas, where the idea of using these tests began, have reduced the number of exams required or decreased their consequences.

In addition to federally required tests, states have added assessments of their own, many of which mandate exams such as an exit test to graduate from high school.

On average, students in large urban school districts take 113 standardized tests between pre-K and 12th grade.

The number of standardized tests that U.S. students take is too high. While I feel the idea of using tests to hold schools accountable is a good one, the frequency and redundancy of standardized testing has gone too far. It is essential to measure student achievement, but I hope that further analysis of standardized testing will lead to ways to relieve some of the burden these tests place on our students.