Image Attribution: Yutong Liu & Digit / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
Algorithmic management is a major, recurring theme in discussions of the future of work and the platform economy. Different aspects of the labour-manager-firm relationship are being datafied, platformised, and automated. A substantial body of advocacy and research has emerged examining surveillance, coordination, and discipline in settings like the platform economy and in conventional workplaces. These discussions have surfaced a range of themes, such as data collection and automated decision-making that control labour through practices like wage manipulation, “robo-firing,” and rankings and scores. Whatever the future of work might hold, algorithmic management will be a crucial issue.
Data work and content moderation represent a side of algorithmic management that labour organising, research, and journalism are still exploring. While these sectors are often grouped with digital labour platforms, we must also understand them as part of the outsourcing and AI supply chain landscapes. Beyond online platforms, data work and content moderation are also handled by business process outsourcing (BPO) units, company-facing platforms, and in-house labour teams.
Data workers may also take on a wider range of roles, engaging in operations like labelling data for different uses, rating the outputs of AI products, or providing language data. These differences can lead to distinct kinds of performance measurement and expectations. Compared to ride-hailing and food and parcel delivery, the source of feedback can also differ in data work and content moderation: depending on the business model, workers may be reviewed by the person requesting the service or by a company-side supervisory figure. The likes of data work and content moderation therefore deserve to be studied and mapped beyond the digital labour platform and its associated issues.
Algorithmic management in data work and content moderation can feature a range of practices and processes, as well as a variety of labour issues and risks. In 2025, Aapti Institute and GIZ GmbH’s Gig Economy Initiative collaborated on the Exploring AI Labour in the Global South project, combining expert interviews and stakeholder consultations with secondary research to explore the data work and content moderation sectors’ practices, problems, and possibilities for change.
The second installment in the series, this report discusses the components of algorithmic management and their labour implications in the context of data work and content moderation. Features like metrics, targets, and queues, as well as the digital nature of many setups, create several risks, from the sudden loss of work to harsh, unforgiving discipline that can push workers into overwork and stress. While businesses may gain access to labour markets across the world, many people find themselves in fragile and heavily surveilled work arrangements.
Part of the problem arises from the kinds of processes and workflows companies use and the data they collect. Beyond problematic business practices, difficulties and precarity also persist as a product of the lack of labour relations and regulations that would create rights for workers, as well as restrictions and scrutiny for labour-facing digital systems. Find the full report on algorithmic management in data work and content moderation below. For further explorations of precarity and transnational considerations in data work and content moderation, consider reading the series’ first report, Invisible Workers, Visible Harms, alongside the concluding installment, Fragmented Responsibility.
For feedback and questions, you can reach us via email at [email protected] or [email protected].