The AI Surge: The AI Market and Its Use Cases are Spreading
Artificial Intelligence (AI), and machine learning in particular, is catching on across markets and across modes of working, learning, and living. The diffusion and adoption of AI technology and its applications have been extensive, expanding the size and scope of its market and its participation in the global economy. Bloomberg Intelligence has predicted that the generative AI market will “explode” from 40 billion USD in 2022 to 1.3 trillion USD over the coming decade. According to Goldman Sachs, global investment in AI could reach around 200 billion USD by 2025. One of the faces of contemporary AI services, ChatGPT, reached 100 million users in two months, a pace of growth that dwarfed that of internet sensations like TikTok and Instagram. The AI space, embodied by the spread of products like ChatGPT, Bard, and Dall-E, is here to stay and set to grow.
Much of the AI surge is spearheaded by an influential group of “foundation” models, so called because they enable and set the tone for more pointed use cases of AI. For example, OpenAI’s GPT-3 and GPT-4 are foundation models that underpin AI services such as the ChatGPT chatbot. These services belong to a paradigm and sub-type called Generative AI (GenAI): models trained on colossal volumes of data that “learn” to produce new, often human-like content in response to prompts, in formats that can go beyond plain text.
Future Unknown: AI’s Potential Implications
Despite its promise and potential as a versatile and enabling technology, the GenAI landscape has not been without its risks, pain points, misuses, and mishaps. Gender is one fundamental blindspot that GenAI has repeatedly run afoul of, creating AI-based risks and obstacles for people who are not cisgender men. For one, the AI landscape has contributed to transphobic chatbots and reproduced stereotypes through skewed representation, as in cases of text-to-image services producing images of professionals. Some work has even attempted to generate pictures of human faces from audio of a person’s voice, and thereby draw inferences about attributes like sex, ethnicity, and age.
Journalistic investigations also show how AI can produce highly sexualized content and respond in troublingly different, biased ways along lines of gender and race. Deepfakes and GenAI-created child sexual abuse material are further AI-assisted means of causing harm. These risks exist alongside the widespread applications of GenAI.
Our Undertaking: Mapping Gender Risks and Harms in AI Value Chains
Under the ambit of the United Nations Development Programme’s (UNDP) work on gender, Aapti Institute has set out to map the gender-related human rights risks and violations that GenAI creates. Our gender lens focuses on gender identities beyond cisgender men, including but not limited to non-binary identities. This project is part of Aapti’s larger examination of artificial intelligence and its constituents and stakeholders, taking forward our work on AI’s potential impacts on human rights in India and our conversations on data labelling for AI.
This undertaking will adopt a value chain approach that covers the life cycles of GenAI (elaborated on in Figure 1), examining the practices and conditions that allow gender risks and harms to pervade the various phases of developing and deploying GenAI. The study will also draw on the United Nations Guiding Principles on Business and Human Rights and their three-pillar framework (Protect, Respect, and Remedy), infusing the value chain mapping of GenAI’s gender pain points with human-rights-oriented solutions distributed between businesses and governments.
Figure 1. Aapti’s conceptualization of the AI value chain, prepared through desk research
Our efforts will be driven by desk-based secondary research, doctrinal analysis surrounding legislation and regulation, as well as the qualitative analysis of consultations and problem-centred expert interviews. Our investigations will materialise as publicly available outputs in the form of reports, value-chain-based risk assessment toolkits, issue briefs, slide decks, and social media cards.
If you work on or have experience in subjects like AI value chains, gender’s intersections with technology, intersectionality and technology, AI policy, or other relevant fields, and would like to contribute to this initiative, please contact us. You can reach us by email at [email protected].