The world we live in right now is moving at a truly mind-bending speed, with new breakthroughs and technological advancements coming out practically every day. Of these, artificial intelligence (AI) stands out as an absolute game-changer with broad and potentially disruptive implications for the whole of society. AI is poised to bring about a profound transformation, extending beyond the realms of hard sciences such as healthcare, space exploration, computing, engineering, and quantitative finance. It will also reshape the very nature of diplomacy, fundamentally altering political landscapes and redefining society in its entirety. As the world comes to terms with this fast-changing reality, it becomes imperative to closely examine the impact of AI, exploring both its pitfalls and the myriad opportunities it presents. To be clear, there are at least three broad fronts where AI and diplomacy interact: as a driver of geopolitics, as a topic on the political and diplomatic agenda, and as a tool for diplomacy. This article will focus on the role of AI in shaping the way diplomacy is conducted.

Diplomacy, traditionally very heavily dependent on human inputs, is undergoing a paradigm shift through the integration of AI. At this point, it is important to recognize that while many view AI as a tool, its rapid development is giving rise to the emergence of AI agents. Like human agents, AI agents will have agency — that is, the ability and authorization — to perform ever more complex tasks and to make decisions on behalf of their principals (humans). The degree of reliance and autonomy that humans will grant AI will vary from case to case and will depend on the sensitivity of the outcomes. One thing is known for sure — from informing strategic decision-making to streamlining negotiation preparations, AI is revolutionizing the diplomatic landscape.

One key area where AI is making an impact is in analyzing and processing vast amounts of data in a vastly reduced time frame, a task that would require hundreds if not thousands of human analysts to accomplish. This ability to process data has been demonstrated in the realm of conflict prediction. In Africa, the Conflict Early Warning and Response Mechanism (CEWARN), powered by AI algorithms, has proved effective in forecasting potential conflicts, enabling proactive diplomatic interventions and preventing bloodshed. Another notable application is the Cognitive Trade Advisor developed by IBM, which has already augmented negotiations by providing concise information on trade treaties that would typically demand days or weeks and several legal experts to sort through.

However, concerns arise regarding AI’s potential to reinforce biases in diplomatic decision-making. Scholar and AI expert Kate Crawford warns, “AI systems are only as good as the data they are trained on. If that data is biased or incomplete, it can perpetuate discriminatory outcomes.” It is essential to ensure that AI algorithms are developed with transparency and ethical considerations in mind, and this is where government regulations and industry standards will play a key role. Even today, policymakers should provide sufficient guardrails for the development of this technology, such that it does not diverge to a path that does more harm than good to humanity.

While AI can help politicians through real-time analyses of public opinion, there is a growing number of concerns over its use in misinformation and public manipulation. The Cambridge Analytica scandal in 2018, for instance, involved data harvested from up to 87 million Facebook users, which was used to feed an algorithm for targeted election campaigning, benefitting certain US politicians’ campaigns. A scandal such as this serves as a stark reminder of the potential dangers of large-scale algorithmic manipulation that can undermine democratic processes. Moreover, AI’s ability to generate hyper-realistic fake images, videos, and audio – so-called “deep fakes” – poses a serious threat to the credibility of political discourse. These sophisticated manipulations have the potential to destabilize trust in public institutions and incite unrest. With the declining cost of producing fake and malicious content, any malicious actor with a computer and an internet connection can now disrupt the real world in a matter of seconds. For example, on May 22nd, 2023, a fake image showing an attack on the Pentagon circulated via Twitter and immediately went viral. Major stock market indices briefly dipped before recovering. Investigations revealed that the image was AI-generated and had initially been posted by a Twitter account bearing a verified mark. This is but one of many recent incidents in which AI was used to mislead and spread disinformation.

Where is the tipping point? How many deep fakes must we witness before we start doubting everything we see, hear, or read? The proliferation of fakes that are difficult to identify as such can be immensely damaging to public trust. Here, AI itself can be leveraged to detect and flag fake and misleading content, while also scouring massive amounts of data to help politicians make decisions. For instance, AI can provide valuable insights into public sentiment by sifting through publicly available data from citizens. However, the ethical use of AI in political processes – in policymaking, for example – should be centered on transparency and accountability. Transparent and accountable AI systems can help maintain trust in democratic processes, though this is certainly a challenge for more complex AI models. Diane Coyle, a renowned professor of public policy at the University of Cambridge, explained in an article for the Brookings Institution: “With machine learning in general and neural networks or deep learning in particular, there is often a trade-off between performance and explainability. The larger and more complex a model, the harder it will be to understand, even though its performance is generally better.” Policymakers should be aware of this trade-off and remain vigilant against deflecting accountability by blaming AI for what are, in fact, human errors of judgment embedded in algorithms. It is therefore essential to develop AI algorithms that are transparent, unbiased, and consistent with ethical norms and standards.

As we navigate the AI revolution, it is crucial to strike a balance between embracing the opportunities it presents and addressing its potential pitfalls. In diplomacy, AI can enhance decision-making processes, predict conflicts, and facilitate more effective negotiations. It is not hard to imagine a future where diplomats spend more time on genuinely diplomatic work – representation, negotiation, relationship-building, and crisis management – and less on lower-value administrative tasks such as routine reports and other paperwork. Finding the right balance between human capabilities and AI augmentation will be key to ensuring a harmonious transition.

Pessimists have been warning that diplomats, like many other professionals, are at risk of becoming obsolete, replaced by faster and ever-more intelligent AI agents. Diplomats can, however, escape this foretold irrelevance by embracing new skills and adapting to new technologies. When ATMs were introduced decades ago, some warned that bank tellers and staff would become obsolete. We now know that this did not come to pass: banks successfully redirected the skills of tellers toward more value-adding functions such as Know-Your-Customer (KYC) processes and client relationship management. In the same vein, diplomats can harness their unique set of skills and decades of experience, augmented by the computing power of AI, to perform more complex political, diplomatic, and intercultural functions.

Diplomats need not worry: their profession will not die anytime soon. If anything, the cross-border disruptions heralded by AI across all sectors of society, the economy, and politics – and the corresponding policies, guardrails, and international regulatory frameworks needed to deal with them – will make the profession as pertinent as ever. AI ethicist Renée Cummings from the University of Virginia offers a glimpse into the future: “AI will not replace policymakers. What we will see is collective intelligence, the best of human intelligence working with the most sophisticated artificial intelligence.”


Written by Shaira Rabi; Edited by Peter Janiš

Art credit: Julia Drössler