Caryn Conley and Jennifer Tosti-Kharas
Abstract
Purpose
The purpose of this paper is to evaluate the effectiveness of a novel method for performing content analysis in managerial research – crowdsourcing, a system where geographically distributed workers complete small, discrete tasks via the internet for a small amount of money.
Design/methodology/approach
The authors examined whether workers from one popular crowdsourcing marketplace, Amazon's Mechanical Turk, could perform subjective content analytic tasks involving the application of inductively generated codes to unstructured, personally written textual passages.
Findings
The findings suggest that anonymous, self-selected, non-expert crowdsourced workers applied content codes efficiently and at low cost, and that their reliability and accuracy were comparable to those of trained researchers.
Research limitations/implications
The authors provide recommendations for management researchers interested in using crowdsourcing most effectively for content analysis, including a discussion of the limitations and ethical issues involved in using this method. Future research could extend the findings by considering alternative data sources and coding schemes of interest to management researchers.
Originality/value
Scholars have begun to explore whether crowdsourcing can assist in academic research; however, this is the first study to examine how crowdsourcing might facilitate content analysis. Crowdsourcing offers several advantages over existing content analytic approaches by combining the efficiency of computer-aided text analysis with the interpretive ability of traditional human coding.