The Dynamics of Micro-Task Crowdsourcing — The Case of Amazon Mechanical Turk

Micro-task crowdsourcing is rapidly gaining popularity among research communities and businesses as a means to leverage Human Computation in their daily operations. Unlike traditional computational services, a crowdsourcing platform is in fact a marketplace subject to human factors that affect its performance, in terms of both speed and quality. Such factors shape the dynamics of the crowdsourcing market. For example, a known behavior of such markets is that increasing the reward of a set of tasks leads to faster results. However, it is still unclear how the different dimensions of the market interact with one another: reward, task type, competition among requesters, requester reputation, etc.


In this paper, we adopt a data-driven approach to (A) perform a long-term analysis of a popular micro-task crowdsourcing platform and understand the evolution of its main actors (workers, requesters, tasks, and the platform itself). (B) We leverage the main findings of our five-year log analysis to propose features for a predictive model that estimates the expected performance of any batch at a specific point in time. We show that the number of tasks left in a batch and the time at which the batch was posted are two key features for this prediction. (C) Finally, we analyze the demand (new tasks posted by requesters) and the supply (number of tasks completed by the workforce) and show how they affect task prices on the marketplace.
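To make the prediction setup concrete, the minimal sketch below fits a toy regression over the two features highlighted above (tasks remaining in a batch and the hour the batch was posted). It is an illustrative assumption, not the paper's actual model or data: the variable names, the synthetic dataset, and the choice of ordinary least squares are all hypothetical.

```python
# Illustrative sketch only: a toy regression over the two features named in the
# abstract (tasks remaining in a batch, hour the batch was posted).
# The data is synthetic and the linear model is an assumption, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n = 500

tasks_left = rng.integers(1, 1000, size=n)    # tasks still available in the batch
posting_hour = rng.integers(0, 24, size=n)    # hour of day the batch was posted

# Synthetic target: batch throughput (tasks completed per hour), made up for illustration.
throughput = 0.05 * tasks_left - 0.3 * posting_hour + rng.normal(0.0, 1.0, size=n)

# Design matrix with an intercept term; fit ordinary least squares.
X = np.column_stack([np.ones(n), tasks_left, posting_hour])
coef, *_ = np.linalg.lstsq(X, throughput, rcond=None)

# Predict expected throughput for a hypothetical batch: 300 tasks left, posted at 9:00.
pred = np.array([1.0, 300.0, 9.0]) @ coef
print(f"predicted throughput: {pred:.2f} tasks/hour")
```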