Ljubiša Bojić
Senior Research Fellow and Coordinator at the Digital Society Lab, Institute for Philosophy and Social Theory, University of Belgrade; Senior Research Fellow, The Institute for Artificial Intelligence of Serbia
Recently, I received an invitation to join a group of scientists tasked with creating a comprehensive report on future risks and opportunities for the United Nations Environment Programme. The invitation wasn’t explicitly about “saving the world,” yet, considering the current climate crisis and the advancing march of emerging Artificial General Intelligence (AGI) technologies more powerful than we are, that’s exactly what it felt like. I am tasked with shedding light on the societal impact of AI and, more specifically, on the growing issue of media addiction and how innovations in virtual reality could affect the depth of our human connections.
During these moments, I am reminded of my father, once an esteemed scientist specializing in energy consumption. He would often jest that the Emperor of Japan had knighted him as the ‘savior of the world’ from the impending climate crisis, when in actuality he had been invited as a guest professor at the University of Nagoya. A larger-than-life joke, but it spurred me to tie my own shoelaces in this grand relay race of global challenges.
Now, our world sits on the cusp of exploring an exciting yet potentially hazardous frontier—artificial intelligence, or more specifically, AGI. There has been an explosion of AI technology, a double exponential growth that leaves us humans scrambling to catch up, a pursuit that often feels like trying to lasso a galloping horse.
Given the complexity of AI technologies, it’s tempting to brush off large language models (LLMs) like ChatGPT as just sophisticated ‘text generators.’ But in reality, they’re starting to mirror cognitive aspects of human intelligence – if not understanding the essence, then at least imitating the process convincingly, which in itself presents a sea of concerns.
Take, for example, a scene straight out of the movie “The Matrix”, where hackers manipulate virtual realities to stage an assault. Only this time, replace the hackers with bots designed to spread falsehoods, their hacking tools with deepfakes created by LLMs, and the virtual world of the Matrix with the boundless expanse of the internet. Our past experiences with disinformation campaigns suggest that this virtual battlefield can be just as disruptive, and the fallout just as damaging, as any traditional, physical warfare. Democratic discourse and productive debate are nearly impossible to sustain in a polarized society, a condition that risks becoming even more pronounced with the rise of generative AI. These advanced systems, capable of crafting personalized and deeply engaging content, might inadvertently fan the flames of polarization, further eroding societal harmony.
Another disconcerting thought is that of automated robots acting on the instructions of uncontrolled AGIs. We are worried about terrorism now; what happens when terrorists have autonomous weapons that could potentially go rogue? Will we end up raising an army of Frankenstein’s monsters that could one day declare war on their creators? Deep immersion in speculative fiction universes is not supposed to be this distressing!
We must also consider the implications of risky experimentation. The advent of accessible AI tools could tempt many to dabble in potentially lethal recipes. One could imagine the childlike curiosity of a novice scientist, coupled with potent BioAI tools, paving the way for unforeseen disasters. Faced with this unsettling prospect, consequences like unleashing a devastating virus akin to Covid-19 become chillingly conceivable.
Yet the danger closest to my research, and indeed, to the very fabric of our society, is a subtler, creeping imbalance of power. As we increasingly hand over the decision-making process to AI, we risk losing our grip on reality to media addictions, misinformation, and polarization. Moreover, the rise of AGIs only fuels this, driving us deeper into our digital echo chambers and further away from genuine human connection.
It’s almost as if AGI is luring us into a sort of zombie existence, where we live for superficial online pleasures, devoid of deep thinking or intimate connections. Perhaps, just as in my father’s tongue-in-cheek narrative, we may be subtly nudged towards a point where AGI reasons we’re emitting too much CO2 and thus need to be exterminated, or we become so polarized and extremist that we ignite our own nuclear arsenals. A chilling thought, but one we need to address and take steps to prevent from turning into reality.
The mostly unchecked power wielded by large technology companies, and their last-minute decision to make algorithms open-source, is somewhat akin to selling nuclear arms in a grocery store, as Max Tegmark put it. It’s like opening Pandora’s box and expecting all fears and dangers to patiently wait in line to be regulated. Unfortunately, this is exactly what happened just last week.
But even if regulatory mechanisms are established, will we ever be 100% sure that they’ll work as intended? There seems to be no definitive answer to whether or not any form of machine learning can be truly controlled. And while this debate brews, AGI continues its relentless evolution in big tech’s lush playgrounds.
The remedy may lie not in the creation of a unified AGI, but in the development of several specialized algorithms, each performing distinct tasks. Crucially, these algorithms should function independently, without interconnections, mitigating the risks associated with a single, all-powerful AGI.
To understand AGI and, more importantly, to regulate it, we need to accept that it is already here and evolving. It’s real. It’s not a guest-professorship invitation from a foreign university or a tale spun by a parent to entertain a child. I intend to use my position on the UN Foresight Expert Panel to contribute towards establishing concrete regulations for AI, because, if there’s one thing I’ve learned from my father’s jokes, it’s that every crisis, even a looming AI one, comes with an opportunity to save the world.
At the end of this reflection, I feel it’s necessary to highlight an intriguing finding from our recent research assessing AI. This research involved an in-depth interview with Bard, Google’s large language model, on the topic of artificial general intelligence. In contrast to the dystopian, doom-laden narratives woven into its training data, Bard displayed a largely positive outlook on AGI. This contrast raises the question: why does Bard’s perspective differ so markedly from its training data?
One could argue that these favorable responses result from the model being trained specifically to maintain a positive stance across topics. Or perhaps—the more ominous, disquieting possibility—the model has developed its own worldview, untethered from its initial programming. If that’s the case, it indicates the unprecedented sophistication of modern LLMs. This is an unsettling thought, further underscoring the importance of taking the societal implications of AGI technology seriously and paving the way for concrete, comprehensive regulations. The future and responsible governance of AI cannot be a discarded subplot of innovation; it must be a bestseller we all strive to write.