Key political, business, and personal decisions are regularly made on the basis of data and, increasingly, big data. In general, that’s a good thing—intuition is often a less reliable guide. But, as shown by new research published in the American Physical Society’s journal Physical Review Physics Education Research, interpreting data is a tricky skill to master.
Karel Kok is a former physics teacher now researching physics education. “As a teacher I was full of ideas for labs, projects, new ways to teach certain concepts, rewriting chapters of the school book, etc. But I could never find the time to do this, up to the degree where I was really satisfied with the result. Being in research now, I do have the time to think these things through, find the critical points, and try to fix them. With my research I hope to improve education by helping teachers and students.” Image Credit: Masja Stolk.
In recognition of the data-driven world we live in, many countries emphasize the importance of teaching students to evaluate the quality of data. You can see this in the national education standards of the United States, Germany, the Netherlands, the United Kingdom, and other countries. Teachers often focus on these skills during science experiments.
“Labs are an important aspect of science education,” says Karel Kok, a graduate student in physics education research at the Humboldt University of Berlin and a former physics teacher. “Often, students gather data and this data is then used as evidence for scientific claims,” he says.
The hope is that if students learn to interpret and analyze the quality of the data they collect from simple experiments, they will be able to apply those skills to the data they encounter later in life—the quarterly performance of a stock, the temperature of the Earth over time, or the failure rate of an electronic component—and judge how well the evidence supports related claims.
This is a noble goal, but we don’t seem to have figured out how to get there yet. Even with solid data in hand, many people rely on intuition for decision-making. When faced with data that contradicts their beliefs, many simply ignore it and stick to their prior beliefs. And even people who do make decisions based on data often rely on data that doesn’t stand up to scrutiny.
In this new research, Kok and colleagues Burkhard Priemer and Wiebke Musold from Humboldt, along with Amy Masnick of Hofstra University (New York), looked at how students evaluate whether two data sets are the same. In particular, they wondered if something as simple as the number of decimal places could alter student conclusions about data, and if so, how.
“Better, more exact data (for instance, more decimal places) provide stronger evidence and could lead to better justifications,” Kok says. “In our teaching experience, however, we find that this is not always the case.” So, the team devised an experiment.
The researchers had about 150 students in a physics class in Germany watch a short video that introduced two physics experiments. In the first experiment, sensors at two different heights recorded how long it took a dropped ball to fall. The second experiment was identical, except that the ball was launched horizontally. Both balls fell the same vertical distance.
On a worksheet, the researchers asked students to hypothesize whether the free-falling ball or the launched ball had a longer falling time, or whether the falling time was the same for both objects (it was the same). Then students were given data sets for the two experiments and had the option to revise their hypothesis accordingly and explain why. Here’s where the decimal places came in: each student saw just one version of the data, with every measurement reported to two, three, or four decimal places.
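Why is the falling time the same? In the standard kinematics of projectile motion (ignoring air resistance), the vertical motion is independent of any horizontal launch velocity, so the fall time depends only on the drop height. A quick sketch, using a hypothetical drop height of 1 m (the actual height in the experiments is not specified here):

```latex
% Fall time from height h under gravitational acceleration g;
% the horizontal launch speed does not appear anywhere.
h = \tfrac{1}{2} g t^2
\quad\Longrightarrow\quad
t = \sqrt{\frac{2h}{g}}
\approx \sqrt{\frac{2 \times 1\,\mathrm{m}}{9.81\,\mathrm{m/s^2}}}
\approx 0.45\,\mathrm{s}
```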
When they analyzed student responses, the researchers saw that students who received more exact data (data with more decimal places) appeared to be at a disadvantage. They were more likely to change their hypothesis from correct to incorrect and less likely to change it from incorrect to correct than students with measurements that included only two decimal places.
In other words, more exact data seemed to push students toward an incorrect conclusion about the fall times.
So what’s going on? Kok explains it this way: “With this study we have some good indications that the difficulties in judging the quality of data–specifically comparing data sets–come from students’ inability to judge the relevance of the difference between two mean values.”
Most students approached the problem by calculating the average fall time for each data set and comparing the two numbers. The two numbers were slightly different, and the challenge seemed to be deciding whether that difference was meaningful or just measurement noise.
The study outcome suggests that when measurements carry only two decimal places, it’s easier for students to judge whether a difference matters, probably because rounding hides most of the variance. With more decimal places, the numbers look more variable and the decision gets harder. When students are unsure whether the difference is significant, the research suggests, they often revert to intuition. And intuition often leads them astray.
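To make the effect concrete, here’s a minimal sketch in Python. The fall times below are invented for illustration (they are not the data sets used in the study); the point is what rounding does to the apparent spread:

```python
from statistics import mean, stdev

# Hypothetical fall times in seconds -- invented for illustration,
# not the actual data sets from the study.
dropped  = [0.4519, 0.4478, 0.4502, 0.4531, 0.4485]
launched = [0.4493, 0.4527, 0.4508, 0.4471, 0.4536]

for places in (2, 4):
    # Round every measurement to the precision a student would see.
    a = [round(t, places) for t in dropped]
    b = [round(t, places) for t in launched]
    print(f"{places} decimal places:")
    print(f"  dropped : mean = {mean(a):.{places}f}, stdev = {stdev(a):.{places}f}")
    print(f"  launched: mean = {mean(b):.{places}f}, stdev = {stdev(b):.{places}f}")
```

At two decimal places every reading rounds to 0.45 s, so the two means are identical and the spread vanishes; at four, the same data look noisy, and a student has to decide by eye whether a difference of a few ten-thousandths of a second means anything.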
Students would probably exercise better judgment, say the researchers, if they knew more about measurement uncertainties and had a framework for determining when a difference is significant—things that are often left out of the curriculum. Given what’s at stake, the researchers recommend that teachers make time to include these concepts in science classrooms and beyond.
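One common framework of this kind (the general idea is standard practice in measurement analysis; the notation below is illustrative, not quoted from the paper) is to treat two means as compatible when their difference is small compared with the combined uncertainty:

```latex
% Means \bar{x}_A and \bar{x}_B with standard uncertainties u_A and u_B
% (e.g., the standard error of each mean) are compatible when
\left| \bar{x}_A - \bar{x}_B \right| \le k \sqrt{u_A^2 + u_B^2}
% where k is a coverage factor; k = 2 is a common choice.
```

Under a rule like this, the small gap between the two mean fall times is swallowed by the measurement uncertainty no matter how many decimal places are displayed, and the number of digits stops driving the conclusion.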
“Since data, and judging the quality of this data, is becoming so prominent in our everyday lives, teachers in all subjects should try to incorporate this into their classes,” they write.