Crowdsourced consumer data: how do we make sure it's good? Friday, 6 July 2018
Crowdsourcing data through online marketplaces such as Amazon Mechanical Turk poses new challenges about how consumer research should be designed, conducted and analysed. It also raises questions about who the participants really are and how valid the information they provide is. As protocols for crowdsourcing data are still being worked out, we have developed a few guidelines that will benefit those using such platforms for research purposes.
When Amazon launched Mechanical Turk (MTurk) in 2005, executives touted it as a way to augment artificial intelligence with the old-fashioned human variety. Organizations would post details about a small task that needed to be completed, such as writing product descriptions or identifying performers on music CDs, and then people searching the MTurk site would browse the jobs available and start working on those they were qualified for and thought sounded interesting or lucrative.
But as is often the case with an innovation, one of MTurk’s most popular applications seems to have caught Amazon by surprise: social science research. In consumer behaviour research alone, over 15,000 studies have been published based on evidence collected from MTurk workers, making it perhaps the most represented pool of participants in the history of my discipline. In the Journal of Consumer Research, one of the major journals in my area of specialism, 43 per cent of behavioural studies in the June 2015-April 2016 volume were conducted using MTurk.
MTurk and analogous platforms such as Prolific have enabled my colleagues and me to collect samples much more quickly and more cheaply than through traditional alternatives (eg, university participant pools), and those of us who looked into the quality of the resulting data found it to be comparable to such alternatives.
“Crowdsourcing websites like MTurk make survey and experimental investigations more efficient.”
But every powerful new tool creates a new set of risks, and crowdsourced data is no exception. Some fear that the people filling out the materials might not be giving them their full attention. Others worry that people who take many surveys may no longer be naïve respondents, possibly compromising the results. Finally, how do you know the person filling out the survey is who they say they are? After all, as the old New Yorker cartoon put it, on the internet, nobody knows you’re a dog.
Getting it right
To address these worries, my colleague Joe Goodman of the Ohio State University and I undertook a review of the evidence underlying them, and came up with some guidelines for survey and experimental researchers to harness the benefits of online pools and avoid their drawbacks.
Some of these problems are difficult to prevent, but you can steer away from most by adopting a few strategies. You’ll find more in our paper, but here are the essentials:
- Avoid announcing that you are looking for a specific characteristic unless the platform can screen for it in advance, as it can for geography (see the sketch after this list). If respondents don’t know what you want, they won’t be tempted to lie about themselves simply to get the job.
- Require participants to formally enrol before you show them the study. Enrolment prevents previewing the study (which can compromise its validity) and raises the upfront time cost, which makes quitting halfway through less attractive.
- If you are worried about “professional participants”, consider using a third-party crowdsourcing support service such as TurkPrime, which can help you recruit people with somewhat less experience of taking psychological surveys.
- Pay a reasonable rate. Though data quality seems to be relatively independent of pay rate, underpaying can damage your reputation as a requester and make participating in your studies, and in MTurk in general, less attractive in future. (Besides, there are obvious ethical reasons not to exploit those who work for you, right?)
- For similar reasons, resist blocking MTurk workers. Accumulated blocks can get a worker suspended from the site entirely, and ultimately hurt your own reputation as a requester. A qualification-based exclusion, as in the sketch below, can achieve the same screening without the harm.
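To make the geography, enrolment and no-blocking points concrete, here is a minimal sketch using boto3, Amazon’s official Python SDK for the MTurk Requester API. The survey URL, qualification name, reward and worker ID are illustrative placeholders of my own, not details from our paper; “00000000000000000071” is MTurk’s built-in locale qualification.

```python
# A minimal sketch of the guidelines above using boto3 and the MTurk
# Requester API. All names, URLs and amounts are placeholders.
import boto3

# Use the sandbox endpoint while testing; drop endpoint_url for production.
mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# A custom qualification assigned to anyone who has taken one of our
# studies. Requiring that it NOT exist excludes repeat takers: a
# "soft block" that never endangers a worker's account.
past_participant = mturk.create_qualification_type(
    Name="Completed one of our consumer studies",  # placeholder name
    Description="Assigned after participation; used to exclude repeat takers.",
    QualificationTypeStatus="Active",
)["QualificationType"]["QualificationTypeId"]

# An ExternalQuestion points the HIT at a survey hosted elsewhere. During
# preview, MTurk passes assignmentId=ASSIGNMENT_ID_NOT_AVAILABLE to this
# URL, so the survey page can withhold its content until the worker
# formally accepts (the 'enrol before you show them the study' point).
question_xml = """<ExternalQuestion
  xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/our-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

hit = mturk.create_hit(
    Title="Short consumer survey (about 5 minutes)",
    Description="Answer a few questions about shopping decisions.",
    Reward="1.00",  # a string, in US dollars
    MaxAssignments=100,
    LifetimeInSeconds=7 * 24 * 3600,
    AssignmentDurationInSeconds=30 * 60,
    Question=question_xml,
    QualificationRequirements=[
        {
            # Geography is pre-sortable: MTurk's built-in Locale
            # qualification screens workers before they can even see the
            # HIT, so nobody is tempted to misreport where they live.
            "QualificationTypeId": "00000000000000000071",
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
        {
            # Exclude workers already tagged as past participants.
            "QualificationTypeId": past_participant,
            "Comparator": "DoesNotExist",
            "ActionsGuarded": "DiscoverPreviewAndAccept",
        },
    ],
)

# After a worker completes the study, tag them instead of blocking them.
mturk.associate_qualification_with_worker(
    QualificationTypeId=past_participant,
    WorkerId="A1EXAMPLEWORKERID",  # placeholder worker ID
    IntegerValue=1,
    SendNotification=False,
)
```

The key design choice is the custom qualification: by tagging workers after participation and requiring that the tag not exist on future studies, you screen out repeat takers without ever touching MTurk’s blocking mechanism.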
Other issues may take more time to work out. Crowdsourced data has been around for roughly a decade now, but protocols for its use are still maturing. Our work proposes some guidelines for handling it better, but open questions remain.
For instance, it takes time to learn the distinctive qualities of the crowd behind the data. One survey found that American MTurk workers tend to score higher on need for cognition (the tendency to enjoy effortful thinking) and to do better on civics questions. They tend to be younger and better educated than the general population. They are also slightly more introverted, show greater levels of social anxiety, and have slightly lower self-esteem than average. This should remind us that, absent more sophisticated recruitment tools, we should always treat crowdsourced samples as non-representative.
Also, the technology itself still has plenty of room for improvement. For example, though third parties can help, these sites still lack a good way to handle interaction between participants. Similarly, easier tools for sharing projects and data across researchers would be helpful. By enabling researchers to collect larger samples, crowdsourcing is already making consumer research better, but more can be done to facilitate open collaboration between scientists.
Crowdsourcing websites like MTurk make survey and experimental investigations more efficient. When used conscientiously, crowdsourcing can also help improve consumer science by enabling more numerous and informative studies and increasing participant and researcher diversity. However, online research and crowdsourcing in particular have their own set of risks, and researchers need to design studies in ways that mitigate them.
Read the paper: Crowdsourcing consumer research, by Joseph K. Goodman and Gabriele Paolacci, published in the Journal of Consumer Research, Volume 44, Issue 1, 1 June 2017, pp. 196–210.
This article was published in RSM Discovery Magazine 34.
For more information about RSM or this release, please contact Ivo Martijn, Media Officer for RSM, on +31 10 408 2028 or by email at firstname.lastname@example.org.
Rotterdam School of Management, Erasmus University (RSM) is one of Europe’s top 10 business schools. RSM provides ground-breaking research and education furthering excellence in all aspects of management and is based in the international port city of Rotterdam – a vital nexus of business, logistics and trade. RSM’s primary focus is on developing business leaders with international careers who can become a force for positive change by carrying their innovative mindset into a sustainable future. Our first-class range of bachelor, master, MBA, PhD and executive programmes encourage them to become critical, creative, caring and collaborative thinkers and doers. Study information and activities for future students, executives and alumni are also organised from the RSM office in Chengdu, China. www.rsm.nl
Photo by Bruce Mars from Pexels.com