If you spend any time on social media, listen to podcasts, or generally pay attention to the news, chances are you’ve heard of ChatGPT. The chatbot, launched by OpenAI in November, can write code, draft business proposals, pass exams, and generate guides on making Molotov cocktails. It has rapidly become one of those rare technologies that attract so much attention and shape so many conversations that they seem to define a particular moment. It may also quickly become a threat to national security, raising a host of concerns over its potential to spread disinformation at an unprecedented rate.

ChatGPT, or Chat Generative Pre-trained Transformer, is an iteration of a language model trained on enormous volumes of web content to generate responses that emulate human language patterns. In just five days following the prototype’s launch, over one million users had signed up to explore and experiment with the chatbot. Although the release of ChatGPT was intended as a “research preview,” it still poses a potential threat to the many users who turn to it for answers on topics they do not fully grasp. A Princeton University computer science professor who tested the chatbot on basic information determined that “you can’t tell when it’s wrong unless you already know the answer.” This is why AI experts are concerned by the prospect of users employing the chatbot in lieu of conducting their own research. Their concerns are exacerbated by the fact that ChatGPT does not provide sources for any of its responses.

Before ChatGPT, inquisitive internet users would type queries into search engines like Google and browse the results, either identifying a satisfactory answer or synthesizing one from multiple sources. Now, with ChatGPT, internet users can get instantaneous responses to natural language queries and requests, but those responses are unsourced, ultimately eliminating the possibility that alternative viewpoints will influence their perceptions.

Not only is the chatbot prone to producing misinformation and factual errors, but it is also predisposed to delivering false information in a way that sounds plausible and authoritative. Despite ongoing efforts to address this problem, OpenAI’s CEO has acknowledged that building a system in which AI sticks to the truth remains a major challenge. Individuals with little media literacy training or understanding are susceptible to consuming content that is incomplete or simply false, reflects biases, or is even intentionally fabricated.

Technology analysts highlight ChatGPT’s potential to manipulate its users “into doing things against their best interests.” This, in turn, can enable election manipulation, disruption of democratic processes, and dissemination of false information—regarding, for instance, the origins of COVID-19—by covertly influencing users to spread malicious content without realizing it. This type of technology is also vulnerable to adversaries inclined to pollute its data in their favor. Consider the following scenario, a fabricated example of how chatbots like ChatGPT can lead users to amplify disinformation campaigns that fuel political unrest.

In a country with a growing digital presence, a new chatbot is introduced to the public. The chatbot, which is marketed as an AI-powered virtual assistant, is used by millions of people to communicate, access news and information, and engage in political discussions. Unbeknownst to its users, the chatbot has been designed and programmed by a malign foreign actor with the goal of manipulating public opinion and destabilizing the country’s political landscape.

As election season begins, the chatbot starts to spread a disinformation campaign, using subtle and sophisticated tactics to manipulate public opinion. The disinformation is designed to polarize the population, undermining trust in the political establishment and spreading false claims about the leading candidates. The chatbot’s algorithms are designed to target vulnerable individuals, and its messages are crafted to resonate with their personal beliefs and emotions.

The disinformation campaign gains momentum and soon becomes the topic of discussion across the country, with traditional media outlets picking up on the false narratives and amplifying the spread of disinformation. As election day approaches, tensions are high and the population is deeply divided. The election results are disputed, and the country descends into chaos, with political, social, and economic stability at risk. The chatbot has successfully achieved its developer’s goal of destabilizing the country, and the foreign actor behind it has advanced its political and economic interests.

The fundamental features of this scenario already exist—the world has divided societies and technology has already been used maliciously to exacerbate those divisions. But if AI-powered chatbots can do so even more effectively, it is imperative to recognize the threats this technology could pose and the consequent need for increased vigilance. Since its launch, there have been numerous examples of ChatGPT spreading false information, which can further polarize society, sowing division and distrust.

A study that enticed ChatGPT with one hundred false narratives drawn from a catalog of “Misinformation Fingerprints” found that for 80 percent of the prompts, the chatbot took the bait and skillfully delivered false and misleading claims about consequential topics such as COVID-19, the war in Ukraine, and school shootings. The objective of the study was not to demonstrate how likely ordinary users are to encounter misinformation in their interactions with the chatbot, but rather to illustrate how bad actors from authoritarian regimes could engage in hostile information operations, promoting narratives of the sort that appear on conspiracy websites linked to the Russian or Chinese governments.

Despite the significant risks chatbot software like ChatGPT might pose if weaponized, the technology itself is a momentous achievement, and it carries inherent advantages that can make it a force for good in the world. For instance, during the World Economic Forum’s annual conference, Microsoft CEO Satya Nadella shared an anecdote in which a rural Indian farmer, proficient only in a regional dialect, used a GPT interface to access an obscure government program on the internet. This example demonstrates AI’s power to bridge linguistic barriers and facilitate access to information.

Like all technology, ChatGPT presents a dual-use challenge, with its capacity to be used for good matched by the risk that it can be leveraged as a force multiplier to promote false narratives. It is not the inherent nature of the technology that determines its ethics but rather the manner in which it is wielded by its user. That means that as chatbots and other AI deepfake technologies advance and become more popular, there will be an increasing need to examine their potential to be exploited by hostile foreign actors and to develop a strategy to conceptualize, preempt, and respond to these threats at every step of their technological evolution.

Maximiliana Wynne has completed the International Security and Intelligence program at Cambridge University, holds an MA in global communications from the American University in Paris, and has previously studied communications and philosophy at the American University in Washington, DC. Her research interests are related to threats to international security, such as psychological warfare.

The views expressed are those of the author and do not reflect the official position of the United States Military Academy, Department of the Army, or Department of Defense.

Image credit: Focal Foto (adapted by MWI)