Image Attribution: Clarote & AI4Media / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/.
Artificial intelligence (AI) and online platforms rely on large human workforces to develop and support the tech sector’s products and services. Data workers perform a variety of tasks essential to AI systems, from setting up guardrails for models to helping train robotic movements. Content moderators sift through vast volumes of user-generated content, removing material that is harmful, illegal, or in violation of platform policies. None of these vital services can be credited to a single leading country or a specific group of online platforms.
A burgeoning workforce of business process outsourcing (BPO) and platform-based workers, primarily based in the Global South, is instrumental in training AI models and acting on user reports of harmful online behavior. Despite their significant contributions, data workers and content moderators alike contend with personal risks and difficult working conditions. Common issues across both sectors include low wages, risks to psychological well-being, and the uncertainty and demands of intrusive algorithmic management. As efforts to establish accountability emerge from labor movements, research, and journalism, it is worth untangling the complex web of outsourcing that underpins the global AI landscape.
Moreover, the frequent outsourcing and cross-border organization of data work and content moderation services warrant transnational forms of scrutiny and regulation. In 2025, Aapti Institute and GIZ GmbH’s Gig Economy Initiative collaborated on the Exploring AI Labour in the Global South project, combining expert interviews, stakeholder consultations, and secondary research to examine the data work and content moderation sectors’ practices, problems, and possibilities for change.
The third report in this series examines the accountability challenges faced by data workers and content moderators within transnational outsourcing arrangements. By subcontracting across jurisdictions, the “lead firms” behind AI model development can source workers while evading responsibility for labor-related issues. As the primary drivers of demand for data work and content moderation, the tech companies that outsource this work must also bear responsibility for the labor conditions under which it is performed. Yet the multi-jurisdictional structures created by outsourcing complicate their participation in discussions and interventions relevant to both sectors.
Several approaches have been used to pursue transnational scrutiny, accountability, and improvements in business practices. Supply chain and due diligence regulations stand out in particular, with examples including the European Union’s Corporate Sustainability Due Diligence Directive (CS3D) and Germany’s Supply Chain Due Diligence Act (LkSG). However, this approach often falls short in industries that rely heavily on remote workers across different jurisdictions. Among other recommendations, the report therefore calls on policymakers and regulators to adopt value chains as a framework for understanding outsourcing and to develop policies accordingly.
Find the full report on transnational AI supply chains below. For further exploration of the working conditions of data workers and content moderators, and of the use of algorithmic management in these sectors, see the series’ first and second reports, Invisible Workers, Visible Harms and Engineered Precarity, respectively.
GIZ_2026_FragmentedResponsibility

We would like to thank Somya Singh for her contributions to the secondary research for this report.
For feedback and questions, you can reach us via email at [email protected] or [email protected].