A year ago I was writing my university dissertation on the role of artificial intelligence (AI) and data as essential building blocks for future smart cities. If you had told me then that a year later I would find myself mingling with some of the world’s leading experts on AI at the AI for Good Summit in Geneva, I would have said you were crazy. Yet there I was, listening to insightful panel discussions, witnessing groundbreaking demonstrations, and meeting eerily lifelike robots. The summit showcased the incredible potential of AI and the role it can play in achieving the Sustainable Development Goals (SDGs). It was truly a surreal experience, but while it underscored the profound good that AI could bring, I couldn’t help but wonder whether we are really prepared for what comes next.

AI’s tipping point 

Led by the United Nations (UN) International Telecommunication Union (ITU), the annual AI for Good Summit at the end of May brought together leading experts from across UN agencies, private companies, and the public sector to explore how AI can advance development priorities. I attended the summit to hear from these experts and get a sense of whether the optimism and hype surrounding AI are truly justified.

To help us understand what to expect from AI as we approach the 2030 SDG deadline, British journalist and entrepreneur Azeem Azhar, widely regarded as one of the world’s most influential thinkers on technology’s impact on humanity, took the stage. Azhar discussed some of the early misconceptions and exaggerations surrounding AI but maintained that a tipping point is inevitable and likely to arrive by 2030.

Azhar described a tipping point as the moment exponential growth takes off: once a technology is cheap enough and good enough for a critical mass of people to afford and use it. While large language models (LLMs) and generative AI aren’t currently 100 percent reliable or accurate, they’re adequate and accessible enough to drive demand for innovation and fuel a potential tipping point. This isn’t a far-fetched assumption when you consider the growth of comparable technological revolutions such as smartphones. Smartphones are now a fundamental part of our lives, and it’s hard to imagine life without them. Yet it was only about a decade ago that smartphone sales hit their tipping point, almost quintupling in the five years from 2010.

I left this talk feeling a mixture of optimism and caution. AI right now is like a plane taking off while it’s still being built. The wheels have left the tarmac, but not all the safety mechanisms are installed: ethical, privacy, and inclusivity issues remain unresolved and potentially widespread.

Sticking to the facts

In his book Factfulness, Hans Rosling explains that people tend to evaluate facts through the lens of their existing worldview instead of taking them at face value. For example, Rosling’s research showed that chimpanzees answering at random outperform most academics, Nobel laureates, and investment bankers on questions about the current state of international development, because those experts’ existing worldviews color their perception of the state of wellbeing in the world.

Taking Rosling’s approach, factfulness allows us to recognize and avoid common ways in which information gets misinterpreted, enabling us to make the most informed decision in any given scenario. Factfulness means recognizing that, when a decision feels urgent, it’s important to take a breath, insist on timely and accurate data, and be wary of drastic action. This approach can, and should, be applied to AI.

In the face of mounting pressure to make quick and effective decisions before a tipping point arrives, AI decision-makers and policymakers need to embrace these principles of factfulness. By internalizing the lessons of events like the AI for Good Summit, among them the need for robust data and AI governance, a human-centered and ethical approach to AI, and genuine inclusivity, leaders can ensure that their decisions are both responsible and forward-thinking.

Safety first 

AI conferences and events, like the AI for Good Summit, can provide useful moments to apply factfulness to emerging technologies. While it’s exciting to focus on the next big innovation, it’s essential to consider how a thoughtful approach to governing the development and deployment of AI can help protect against harms and unintended effects while unlocking its power to supercharge development progress. As domestic policymakers grapple with this issue, important processes are unfolding at the regional and international levels, including the negotiations of the Global Digital Compact, part of the Pact for the Future to be agreed upon by world leaders in September, and the work of the UN Secretary-General’s High-Level Advisory Body on AI.

Because data are both the building blocks and the outputs of AI, setting guardrails for AI must start with addressing the role that data governance plays at each stage of the AI lifecycle. Data governance, meaning decisions about how data are collected, processed, analyzed, used, and shared, must be transparent and accountable to people so that AI tools can be developed and deployed ethically and effectively. AI is too often viewed through a model-centric lens that treats new technology as a stand-alone tool. But AI is developed and trained on diverse and enormous datasets that become invisible once the tool is deployed. Ensuring we govern data as an input to AI is the first step toward developing tools that benefit everyone fairly.

Stay calm and advocate for governance

How can you or I prepare for the future of AI? Rosling describes the fear instinct as an aspect of factfulness, arguing that the world always seems scarier than it is. Frightening things grab our attention most effectively, such as reports and rumors that AI will steal our jobs or cause social unrest, or that robots will take over entirely. As Rosling would say: this is the time to focus on the facts.

As the plane reaches cruising altitude, the course it takes will be determined by institutional and global responses to the prospects and potential of AI. Supporting this journey and installing the necessary safety features will put us in the best possible position for AI to be the superpower we need to accelerate progress on the SDGs.