Understanding the AI trifecta: Examining the intersection of AI, digital integrity and security

April 27th, 2024

Publication: Blog
Themes: Artificial Intelligence, Emerging Tech


The convergence of AI and the internet marks a pivotal moment in discussions on fostering secure and positive online environments. Such a development is particularly salient given increasing AI integration across sectors. While inquiry into the vulnerabilities created by AI deployment is abundant, complementary research on the potential AI affords for digital transformation remains limited. Balancing these twin implications of AI deployment requires considered deliberation to develop a range of strategies that not only combat vulnerabilities but also promote responsible use of AI systems.

To this end, Aapti received funding from Google.org to study questions at the intersection of AI, digital integrity, and cybersecurity. The research recognises the need to unpack the risks associated with large-scale AI deployment, while attempting to delineate strategies that leverage its transformative potential proactively.

Further, the scope of the current study is to spotlight opportunities afforded by AI to leapfrog innovation within businesses and governments, underscored by a transition from AI-scepticism to AI-adoption. The positive spillovers produced by AI adoption are certainly evident in sectors such as finance and climate science, where predictive analytics have long been deployed successfully. However, prevailing patterns of AI research and development show that much of this inquiry is firmly located in the private domain, with limited opportunities to access and disseminate results. Consequently, formidable siloes exist between stakeholders in the AI ecosystem, hindering the formulation of a common vocabulary, standards, and strategies to responsibly leverage AI for cybersecurity.

In an attempt to overcome this impasse, one with observable costs to enterprises and society at large, we propose the establishment of a Community of Practice (CoP) to convene stakeholders engaging with the finer nuances of the interface between AI and emerging questions around safety: both how AI can deliver safer practices for cybersecurity and digital integrity, and what considerations must govern responsible AI deployment. The CoP hopes to bring together academia, independent researchers, industry specialists, regulators, and CSOs, fostering a collaborative approach to unpacking the critical barriers to, and enablers of, AI adoption. Designed as a series of workshops, expert presentations, and online as well as in-person meetings, the CoP's work will be guided by a sectoral lens, departing from current horizontal studies of AI while remaining vigilant to its problems.

The CoP represents a critical initiative at a seminal moment in the evolution of AI and its role in reinforcing digital integrity and cybersecurity. This initiative is not just a research project; it is a collaborative expedition into AI's capabilities and challenges. By joining our Community of Practice, you will be at the forefront of a dynamic, interdisciplinary dialogue, contributing your unique expertise to a collective endeavour that transcends traditional boundaries. Our aim is to foster a rich range of perspectives, blending academic rigour with practical industry insights, to not only understand but also shape the future of responsible AI deployment. We are eager to welcome seasoned professionals, thought leaders, and innovators who are ready to engage, challenge, and collaborate in this vital discourse.

If you or anyone you know wishes to participate in the CoP and collaborate on this research, please reach out to us at [email protected].