The development world is buzzing with excitement over the idea that new and emerging applications of artificial intelligence (AI) can supercharge economic growth, accelerate climate change mitigation, improve healthcare in rural areas, reduce inequalities, and more. But what does this look like in real life?
Here’s an example of this technology in action: WorldCoin, a cryptocurrency and digital identity platform, wants to scan the irises of every person in the world to develop an AI-powered system to tell robots from humans. This sounds like the beginning of a dystopian tale, but it’s not science fiction.
WorldCoin has already used its signature orb to collect biometric data from millions of people. And not surprisingly, the U.S./Germany-based company has run into regulatory trouble in many countries. In Kenya, for example, WorldCoin paid people around 7,000 shillings (just over $50 USD) to turn over their personal data, a practice that Kenya’s communication and data protection authorities labeled as “border[ing] on inducement.” WorldCoin’s activities were suspended in Kenya but are set to resume now that a year-long government inquiry has been dropped.
Regulatory challenges to the company’s activities are the highest-profile example of what the Global Partnership for Sustainable Development Data’s network of partners is grappling with every day: namely, how to ensure that AI is deployed in development contexts in ways that help, not hurt, people. Complementing these regulatory challenges are practical ones that arise in Development Gateway: An IREX Venture’s (DG) implementation work; for instance, how to audit a particular AI tool’s terms of reference and training datasets to ensure it is appropriate for use in a development setting.
The Global Partnership and DG recently hosted a series of discussions with development practitioners on the real-life applications of AI and related technologies in international development. Eighty-four participants from local non-profits, international NGOs, and philanthropic and academic institutions attended the online sessions from around the world.
The purpose of these dialogues was to create a space for partners to share how they are thinking about AI and related technologies, including the opportunities and challenges they’re facing and practical examples of these tools on the ground. The three sessions were focused on distinct topics, namely: agriculture, climate, and health; conventional and new media; and Digital Public Infrastructure and digitization—three areas where we knew of existing examples of practical applications of AI among our partners. Across sectors, concerns emerged about the challenges of adapting digital technologies to development contexts and the power imbalances inherent to these activities, as illustrated in the WorldCoin example. But, overall, partners expressed a sense of optimism for the potential of AI-powered tools to provide solutions tailored to specific contexts, if we can overcome three key challenges.
What we mean when we talk about “AI in development”
When we talk about AI, we’re often referring to a broad range of computer-based systems that can perform complicated tasks that we would normally expect humans to do, like solving problems, diagnosing illnesses, reasoning, and generating novel images or text. (Here’s a brief explainer from the Simon Institute for Long-Term Governance.) During the online dialogues, participants shared examples of applying AI, machine learning, and related new tech tools in their sectors. Some examples included: AI-powered chatbots for healthcare workers using Large Language Models, forecasting technology trained on historical data to alert farms to upcoming shocks, using machine learning to analyze geospatial data for national agricultural policy, and more.
What partners want: AI that’s useful, built on strong foundations, and deployed appropriately
Three challenges to building and using AI to advance development goals emerged during the dialogues:
The push to develop and apply AI to development challenges largely ignores questions about when—or whether—AI is useful.
Participants described sensing pressure—whether from policymakers, development partners, or funders—to seek out applications of AI in their fields. But they cautioned against seeing AI as a one-size-fits-all solution. In fact, there are limited situations where AI tools can currently provide helpful solutions. For example, a chatbot that helps people without access to healthcare receive diagnoses and treatment recommendations does not guarantee that medication or treatment will be available, accessible, or affordable. Because AI tools are best applied in the context of larger development efforts, there’s a need to make sure AI development is demand-driven, designed with the needs of end users in mind, and deployed within wider, well-resourced programs.
Access to appropriate training data is a barrier to creating AI tools to serve the most vulnerable.
Developing AI tools requires large amounts of training data. Ensuring that this data is representative of end users is a key challenge in development, especially because many of the people who are targets of development efforts do not have access to digital tools that generate data to train AI models. AI-powered tools produce analytics, text, or images based on the data they are trained on. So, if you’re not represented in the data, the solution proposed by an AI tool won’t necessarily apply to your situation. One way to approach this, participants proposed, is to use intermediaries to collect data in the field: for example, having farmers use basic mobile (not smart) phones to text information to an intermediary, who then collects it and transforms it into training data for AI models. But this is time-consuming and costly. Data sharing could be another answer to this problem. But much of the existing data that could be used to develop AI tools is held by organizations, companies, or governments that are not incentivized or equipped to share it.
Decision-makers lack the knowledge needed to regulate and deploy AI tools.
Participants described a gap in skills and knowledge between the people who understand and develop this technology and the policymakers who have to make the decisions about whether and how to deploy and govern it. Bridging this gap requires democratizing information about AI tools, participants said, but most organizations don’t seem interested in funding or engaging in this work.
Moreover, policymaking silos structured around traditional sectors such as health, agriculture, or climate inhibit the potential deployment of AI tools to help identify trends across sectors. A case in point relates to climate change trend analysis and forecasting, which requires inputs from multiple sectors. If the policy and institutional structures are not designed to facilitate data sharing, then the opportunity to apply automated analysis and tools to solve cross-cutting problems such as climate change may well be missed.
How can we ensure AI tools are developed and used in fair and human-centered ways?
As governments and organizations grapple with how best to regulate AI, it’s clear that many people feel left out of these conversations and are eager to put safeguards in place to protect people and prevent harm before it’s too late. As one participant put it, “AI cannot be a positive force without addressing issues of ownership and inclusion.”
This is an especially complex question within development contexts, as the WorldCoin example above highlights. Can individuals anywhere in the world, who have very little understanding of complex AI systems, truly consent to their personal data being used to train algorithms? How should people be incentivized to contribute much-needed data to AI training? When do financial incentives become ‘inducement’ to participate? How do we measure or balance the benefits of AI tools against potential harms?
These are complex questions that require inclusive and participatory exploration on a context-by-context basis. Involving practitioners, like those who joined the AI Dialogues Series, is one step in the right direction to ensure AI can become a useful tool. We invite you to weigh in on these questions or to suggest some of your own. Please reach out to jmclaren@data4sdgs.org and torrell@developmentgateway.org to stay connected to this work.
These conversations will continue in 2024 through a series of online peer exchanges focused on AI for inclusive development that will be open for anyone to join. To ensure you receive an invitation to register, make sure you’re signed up for the Global Partnership’s listserv. You can join by emailing info@data4sdgs.org.