In his 2023 article “AI: the real threat may be the way that governments choose to use it,” Lancaster University Professor of International Security Joe Burton challenges the notion that the primary danger is an artificial superintelligence that targets humanity, claiming instead that a potentially worse threat lies in government misuse of AI. I believe Professor Burton is correct in his assessment, but that he overlooks a key factor: the economic implications of AI.

Using the AI Safety Summit as a backdrop, the author points to evidence from the last two decades of AI development, including the ongoing government use of AI for extensive surveillance and disinformation. He states that while the focus appears to be on AI products from private companies, governments at the summit are not as impartial as they seem and take an active role in AI development. Raising the issue of the militarization of AI, he notes that development is progressing on the use of these new technologies for nuclear defense, interrogation, and autonomous drones.
AI Safety Summit 2023

The first AI Safety Summit was held in November 2023 in England’s Bletchley Park. Attendees included a multinational group of AI industry leaders as well as government representatives from dozens of countries, twenty-eight of whom signed the Bletchley Declaration—an agreement calling for international cooperation “to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.”
Professor Burton observes that the most common AI news pertains to a super-intelligent system threatening humanity, implying that not enough time is spent reporting on how AI could amplify the worst traits found in humans. His response to this trend is to highlight his own research, which indicates that governments presenting AI as a threat might simultaneously use it as justification to obtain power, concentrate its benefits in fewer hands, or otherwise misuse it. He acknowledges, however, that rules governing AI are beginning to emerge, pointing to meaningful new regulations introduced in the U.S. and European Union.
Regulatory Efforts in the U.S. and EU
Just prior to the November summit, President Joe Biden signed an executive order with broad new initiatives governing AI, and the author references this order via a 2023 article by Toby Walsh, Professor of AI at the University of New South Wales. Entitled “The US just issued the world’s strongest action yet on regulating AI. Here’s what to expect,” the article describes the order as “a grab bag of initiatives for regulating AI—some of which are good, and others which seem rather half-baked. It aims to address harms ranging from the immediate, such as AI-generated deepfakes, through to intermediate harms such as job losses, to longer-term harms such as the much-disputed existential threat AI may pose to humans.”
Professor Walsh goes on to criticize the fact that some important issues, such as the use of AI in warfare, receive scant attention in the order, but he encourages international replication of many of its initiatives.
The author also links to a 2023 article by Nello Cristianini, Professor of Artificial Intelligence at the University of Bath in England. In “EU approves draft law to regulate AI – here’s how it will work,” Professor Cristianini summarizes the European Union’s AI Act. Some aspects of the law will go into effect in August of 2024, with the remainder being applied through August of 2026. Rather than regulating AI technology itself, he explains, the Act regulates the use of AI, based on four categories of risk—unacceptable, high, limited, and minimal. The higher the risk, the more stringent the obligations.
Even with these and future AI regulations in place, Professor Burton expresses concern that some countries may simply ignore them, allowing authoritarian states to surpass those that adhere to the law.
Economic Implications of AGI
The author makes some excellent points, and it’s an important topic to bring to light. I believe he is likely correct that government misuse is a more substantial threat than a rogue AGI (Artificial General Intelligence), but he seems to have lost sight of the profound impact a functional AGI will have on world economies. Without adequate preparation, even the beneficial aspects of AI have the potential to cause harm, and the result could be every bit as negative as government misuse or a malevolent superintelligence. For example, once AGI is available to businesses as a paid service, it might be only a matter of months before it triggers an avalanche of job losses.
A great deal hinges on how long it takes for AGI to be achieved and released into the wild, how rapidly the necessary hardware infrastructure can be built, how much governments regulate, hinder, or encourage its adoption, and how quickly and to what extent any entities are able to use it for nefarious purposes.
In the June 2024 paper Situational Awareness: The Decade Ahead, former OpenAI researcher Leopold Aschenbrenner claims that “it is strikingly plausible that by 2027, [AI] models will be able to do the work of an AI researcher/engineer.” He then paints a picture of the repercussions for AI research: hundreds of millions of AGIs performing that work, compressing a decade of human progress into less than a year and in the process creating something “vastly superhuman.”
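The arithmetic behind that claim is simple enough to sketch. The following back-of-envelope calculation is my own illustration of the logic, not Aschenbrenner’s; the workforce sizes and the diminishing-returns exponent are assumptions I’ve chosen for the example. The point is that even after heavily discounting the raw headcount ratio, the “decade in under a year” conclusion still falls out.

```python
# Back-of-envelope sketch of the "compressed decade" argument.
# Every figure here is a hypothetical illustration of the logic,
# not a number taken from Aschenbrenner's paper.

human_researchers = 30_000        # assumed size of today's AI research workforce
agi_researchers = 100_000_000     # "hundreds of millions" of automated workers
scaling_exponent = 0.5            # assume steep diminishing returns to parallelism

# Naive linear scaling with headcount would be wildly optimistic,
# so discount the raw ratio heavily before computing the speedup.
raw_ratio = agi_researchers / human_researchers
speedup = raw_ratio ** scaling_exponent

decade_in_years = 10
compressed_years = decade_in_years / speedup

print(f"Workforce ratio:    {raw_ratio:,.0f}x")
print(f"Discounted speedup: {speedup:,.0f}x")
print(f"A decade of progress in ~{compressed_years:.2f} years")
```

Under these assumptions, a 3,000-fold workforce increase discounted to a roughly 58x speedup still compresses a decade into about two months; the conclusion survives even pessimistic scaling.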
While I’m usually an optimist, it doesn’t take long for my thoughts of an AI utopia to slip into visions of an apocalyptic world. This transition will undoubtedly have a significant impact on my own life, so it engenders both fascination and worry. Using the software industry as an example, I imagine a single AGI worker performing the roles of project manager, designer, developer, and tester: listening and speaking at meetings, writing code and reports, communicating status updates to management, and so on. If this AGI is offered as a software-as-a-service product, it may cost a company as little as a few hundred dollars per month to replace $400,000 in annual human salaries with something that doesn’t require sleep, vacations, health insurance, or retirement benefits.
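To make the scale of that incentive concrete, here is a minimal sketch of the math. The $400,000 salary figure comes from the scenario above; the benefits overhead and the $300/month subscription price are hypothetical assumptions of mine, not real vendor pricing.

```python
# Rough cost comparison for the scenario above. The salary figure comes
# from the text; the overhead rate and subscription price are my own
# assumptions, not real vendor pricing.

team_salaries_per_year = 400_000    # combined annual salaries being replaced
benefits_overhead = 0.30            # assumed employer overhead on top of salary
agi_price_per_month = 300           # assumed "few hundred dollars" SaaS fee

human_cost = team_salaries_per_year * (1 + benefits_overhead)
agi_cost = agi_price_per_month * 12

print(f"Human team:  ${human_cost:,.0f}/year")
print(f"AGI service: ${agi_cost:,.0f}/year")
print(f"Ratio:       ~{human_cost / agi_cost:.0f}x cheaper")
```

Even if the real subscription cost ten times my assumed price, the AGI would still be well over an order of magnitude cheaper than the human team.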
In other words, it’s an opportunity no employer could pass up, and many may in fact consider it a requirement to remain competitive. A similar shadow hangs over labor and service industry workers in the form of autonomous robots. With the addition of AI and numerous new companies building niche hardware, this industry is advancing rapidly, with robots becoming smarter, more agile, and more useful seemingly by the week. AGI researchers working in robotics and materials science will only accelerate this trend.
Unfortunately, these aren’t the types of transitions that existing unemployment benefit systems are designed to handle. The provision of sufficient benefits to allow for the time and cost of retraining would be a start, but considering the variety of skills AGI and robotics will eventually encompass—from brain surgery to dog walking—I’m left with the question, “retrain for what, exactly?” With no universal basic income (or corporate taxes to fund it) in place to mitigate the impact of these tools, the result could be catastrophic for any economy, so it’s important to be as prepared as possible.
I am encouraged by the fact that the president’s executive order includes actions intended to “develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection.” However, the sheer scale of the labor and tax law changes required and the slow turning of the wheels of bureaucracy remain worrisome.
If government action moves more slowly than private industry (as is usually the case), mass surveillance, disinformation, and even killer robots will seem like minor concerns to the millions of families left with no means of paying for housing or other necessities. When attempting to view the far side of this potential disaster, I wonder whether we’ll also see AGIs granted human-equivalent rights and freedoms, or perhaps witness the birth of a new social and business mindset in which human workers are prized simply for being human.