Sunday, December 9, 2012

Cliff Konold Demonstrates Data Games, Discusses Research at UMN

"I don't think that we've really demonstrated that we can teach the fundamental ideas at any level. It'd sure be nice to do that!"
Cliff Konold, Director of the Scientific Reasoning Research Institute (SRRI) and Research Associate Professor at the University of Massachusetts Amherst, graced us with his presence on Friday, November 30. He is a major voice in the statistics education research community, and our CATALST course makes extensive use of the software he helped design, TinkerPlots™, for learning introductory statistics using simulation, randomization, and the bootstrap.

Cliff the Researcher

Cliff started out the day, however, wearing his researcher hat. Students in Joan Garfield's statistics education research seminar asked him about his evolution as a researcher, accompanied by a dotplot ("headplot"?) of all his first-author papers, as reported by his publications page at UMass:

Over the years, Cliff has used his head extensively for research purposes

Cliff's research has ranged from close studies of how people perceive randomness to informal inference and data analysis. He said the accomplishment he is proudest of as a researcher is his 2002 Journal for Research in Mathematics Education article with fellow UMass researcher Alexander Pollatsek, "Data analysis as the search for signals in noisy processes", which explores why students fail to grasp the usefulness of measures of center. They argue that statistics instruction should place more emphasis on ideas of stochastic processes than on samples and populations.

Cliff's papers getting all mixed up in TinkerPlots™.
Students Elizabeth Fry and Ethan Brown then decided to turn Cliff's creation against him. They put a selection of his papers into a TinkerPlots™ mixer—a simulated device for random sampling—and shuffled them to pick two papers to discuss with him. The first was a 1997 paper on which Ruma Falk was the lead author, "Making Sense of Randomness: Implicit Encoding as a Basis for Judgment" in Psychological Review. Cliff called it some of the tightest research to come out of SRRI and described what a pleasure it was to have Ruma Falk there visiting and collaborating. The next paper to pop out of the mixer, "Understanding probability and statistical inference through resampling" from 1994, sparked reminiscences of getting deeply lost in Perugia, Italy at the first meeting of the International Conference on Teaching Statistics, where he presented the paper.
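At its core, the mixer is simply drawing without replacement. A minimal sketch of the same idea in Python (the list of titles here is an illustrative stand-in, not the actual contents of their mixer):

```python
import random

# Hypothetical stand-in for the TinkerPlots mixer: paper titles go in,
# and two are drawn at random without replacement.
papers = [
    "Making Sense of Randomness: Implicit Encoding as a Basis for Judgment",
    "Understanding probability and statistical inference through resampling",
    "Data analysis as the search for signals in noisy processes",
]

random.seed(0)  # fixed seed so the "shuffle" is reproducible
chosen = random.sample(papers, 2)  # sampling without replacement
print(chosen)
```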

Cliff the Software Designer

For the afternoon, Cliff put on his software designer hat to show us his latest project with KCP Technologies, Inc., Data Games, a series of activities where students have to successfully analyze game data to improve their strategy. He's a co-PI on this project with Bill Finzer (see the full team here).

But his researcher hat was still on, poking out from below the software designer hat! Cliff discussed how we typically introduce students to univariate data by discussing natural objects or events such as people's heights. He argues, however, that this is actually a very difficult context to think about measures of center and variability, because there is no concrete thing that the "average person's height" represents.

Instead, he proposes that we can more easily teach these concepts using repeated measures settings. He designed an activity where different students measure the teacher's head; in this case, the center of the distribution is the actual person's head size, and the variability reflects the students' errors in measurement (rounding issues, where exactly they hold the measuring tape, and so on).
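The appeal of this setting is that both summary statistics point at something concrete. A minimal simulation sketch, assuming a hypothetical true head circumference and Gaussian measurement error (both numbers invented for illustration):

```python
import random
import statistics

random.seed(1)

TRUE_CIRCUMFERENCE = 57.0  # hypothetical "true" head circumference in cm
ERROR_SD = 1.5             # hypothetical spread of student measurement error

# Each student's measurement = the true value + that student's error
# (rounding, tape placement, and so on), modeled here as Gaussian noise.
measurements = [random.gauss(TRUE_CIRCUMFERENCE, ERROR_SD) for _ in range(30)]

center = statistics.mean(measurements)   # estimates the real head size
spread = statistics.stdev(measurements)  # reflects only measurement error

print(f"estimated center: {center:.1f} cm, spread: {spread:.1f} cm")
```

Here the mean is not an abstraction like "the average person's height": it is an estimate of one real, nameable quantity, and the spread has an equally nameable cause.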

Cliff uses not only his own head, but Professor Joan Garfield's as well.
Another promising setting is a production process: in this setting, the center is the target of the process, and the variability is a combination of measurement error and production error, both of which can again be named and pointed to.

But how can these settings be brought into the classroom efficiently? Having students run a production process or take repeated measurements eats up a lot of class time just collecting the data. This is where the Data Games project comes in.

Ship Odyssey teaches students how to use rats for statistical inference.
Ship Odyssey gives students a chance to engage with repeated measures in a fanciful simulated environment. As intrepid treasure hunters on the high seas of yesteryear, gamers can send down highly skilled rats who find a treasure and then swim up. But they don't swim precisely straight up—they may get tossed to and fro, introducing variability. Students learn how to send down enough rats to get a good estimate of the center of the rat-measures distribution. When they send down their hook, they either see that they've collected a pile of sludge or they hear the satisfying cha-ching of successfully getting the treasure.
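The statistical engine under the game is the same repeated-measures idea: each rat's surfacing point is a noisy observation of the treasure's true location, and averaging more rats tightens the estimate. A minimal sketch, with the true position, drift, and squad sizes all invented for illustration:

```python
import random
import statistics

random.seed(42)

TREASURE_X = 100.0  # hypothetical true horizontal position of the treasure
DRIFT_SD = 8.0      # hypothetical spread from rats being tossed to and fro

def surface_points(n_rats):
    """Where each rat surfaces: the treasure's position plus random drift."""
    return [random.gauss(TREASURE_X, DRIFT_SD) for _ in range(n_rats)]

# The mean of the surfacing points estimates the treasure's position,
# and the estimate tightens roughly as 1/sqrt(n) as more rats go down.
for n in (3, 30, 300):
    estimate = statistics.mean(surface_points(n))
    print(f"{n:3d} rats -> estimated position {estimate:.1f}")
```

Dropping the hook at the mean of a small rat squad often yields sludge; with a larger squad, the cha-ching gets much more likely.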

We had a great time seeing Cliff and hearing about his research, and we can't wait to see how Data Games develops. In the meantime, our challenge is to not let playing Ship Odyssey distract us from our end of semester work!
