Invisible Workers, Visible Harm: Perils and Precarities of AI Labour

By Sreya Nair
March 2nd, 2026

Publication: Report
Themes: Content moderation, Data work, Global South, Labour, Working conditions


Image Attribution: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/

Artificial Intelligence (AI) is often described through the language of automation, efficiency, and innovation. Yet behind these narratives lies a vast human workforce performing the essential but invisible labour that allows AI systems to function. Though frequently portrayed as autonomous, these systems depend on continuous human intervention across the AI lifecycle. Workers collect and clean training data; label images, text, and audio so models can recognise patterns; and review outputs to ensure accuracy and safety. They also moderate harmful or disturbing material and continually correct and recalibrate systems as these technologies evolve. These tasks are often broken down into small units and distributed through digital workflows designed for speed and scalability.

Despite the centrality of this work, it is often organised through complex global supply chains. Data work and content moderation may be routed through digital labour platforms, managed within Business Process Outsourcing (BPO) firms, or coordinated through vendor networks that sit at several removes from the technology companies they ultimately serve.

This structure enables firms to access labour across geographies while also fragmenting responsibility for working conditions. Although these roles are essential to building, maintaining, and safeguarding AI-enabled services, the labour behind them is frequently undervalued, tightly controlled, and insufficiently protected. Working conditions across this ecosystem reveal a consistent pattern of precarity. Workers frequently operate on short-term or task-based contracts, face low and inconsistent pay, and experience intense algorithmic monitoring through digital metrics, performance dashboards, and productivity targets. For content moderators in particular, routine exposure to distressing material introduces serious psychological risks, while available support systems often remain limited or difficult to access. These realities point to structural gaps in labour protections, accountability, and recognition across AI supply chains.

As AI systems scale rapidly across industries, limited attention has been paid to the human labour underpinning these technologies and the conditions under which these workers operate. In 2025, Aapti Institute and GIZ GmbH’s Gig Economy Initiative collaborated on the Exploring AI Labour in the Global South project, combining expert interviews and stakeholder consultations with secondary research to explore the data work and content moderation sectors’ practices, problems, and possibilities for change.

This report focuses specifically on the working conditions of data workers and content moderators, mapping the everyday realities of this hidden workforce and situating them within global AI value chains. It highlights how business models, regulatory environments, and technological systems interact to shape labour outcomes, often concentrating value at one end of the supply chain while dispersing risk at the other. Addressing these issues requires moving beyond conversations about AI deployment alone and toward a deeper engagement with the human labour that underpins it.

Find the full report on the working conditions of data workers and content moderators below. For further explorations on the use of algorithmic management and transnational value chains, consider reading the series’ second and third reports, titled Engineered Precarities and Fragmented Responsibilities respectively.


Download here

For feedback and questions, you can reach us via email at [email protected] or [email protected].