HCII @ CHI: Computer Guides Humans in Crowdsourced Research
Getting a bunch of people to collectively research and write a coherent report without any one person seeing the big picture may seem akin to a group of toddlers producing Hamlet by randomly pecking at typewriters. But Carnegie Mellon University researchers have shown it actually works pretty well — if a computer guides the process.
Their system, called the Knowledge Accelerator, uses a machine-learning program to sort and organize information uncovered by individuals focused on just a small segment of the larger project. It makes new assignments according to those findings, and creates a structure for the final report based on its emerging understanding of the topic.
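The loop described above — small worker contributions, machine-side clustering, follow-up task generation, and an emerging report outline — can be sketched roughly as follows. This is a toy illustration, not the authors' actual system: the function names are invented here, and the "clustering" is a naive keyword grouping standing in for the real machine-learning component.

```python
# Toy sketch of the Knowledge Accelerator-style loop (hypothetical code):
# workers return findings on small tasks; the system groups them,
# derives a report structure, and spawns narrower follow-up tasks.
from collections import defaultdict

def cluster_findings(findings):
    """Group short worker findings by a naive topic key (first word).
    A stand-in for the real system's machine-learning clustering."""
    clusters = defaultdict(list)
    for f in findings:
        clusters[f.split()[0].lower()].append(f)
    return dict(clusters)

def next_tasks(clusters):
    """Turn each emerging topic cluster into a follow-up micro-task."""
    return [f"Find more detail on '{topic}'" for topic in clusters]

def outline(clusters):
    """Derive a report structure from the clusters found so far."""
    return sorted(clusters)

# Example findings from three workers, each seeing only a small segment:
findings = [
    "Battery drains quickly when GPS is on",
    "Battery replacement requires a T5 screwdriver",
    "Screen flickers after firmware update",
]
clusters = cluster_findings(findings)
print(outline(clusters))     # report sections inferred so far
print(next_tasks(clusters))  # new assignments based on those findings
```

In the actual system, each step would itself be a mix of crowd work and machine learning; the point of the sketch is only that no single worker ever needs to hold the whole structure in mind.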
Bosch, which supported and participated in the study, is already adapting the Knowledge Accelerator approach to gather diagnostic and repair information for complex products.
Relying on an individual to maintain the big picture on such projects creates a bottleneck that has confined crowdsourcing largely to simple tasks, said HCII Associate Professor Niki Kittur.
"In many cases, it's too much to expect any one person to maintain the big picture in his head," Kittur said. "And computers have trouble evaluating or making sense of unstructured data on the Internet that people readily understand. But the crowd and the machine can work together and learn something."
The researchers will present their findings on Tuesday, May 10, at CHI 2016, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, in San Jose, Calif. In addition to Kittur, the team includes Ji Eun Kim of the Bosch Research and Technology Center in Pittsburgh, HCII Ph.D. student Nathan Hahn and Language Technologies Institute Ph.D. student Joseph Chang.
Story by Byron Spice