Ever since I started to take notice of the recent advances in artificial intelligence (AI), from Dall-E and Stable Diffusion to GPT-3, I’ve had this “end-of-the-world” kind of feeling that I couldn’t quite explain. Not like the world is literally ending, but that humanity has discovered something so profound it will change the world forever. In this post I’ll explore what I think AI could mean for the future of human life, and why I think ethical AI research could make all the difference.

Most people I know are sceptical about the 'hype' building around AI, and other software developers generally see it as impressive but not magical - ChatGPT is 'just' text generation, isn't it? So when the Future of Life Institute's open letter came out last week, calling for a six-month pause on training AI systems more powerful than GPT-4 due to safety concerns, most responses I saw didn't exactly take it seriously. I think they're severely underestimating the importance of ethical AI research, and that mistake has far worse consequences than underestimating AI's current capabilities. I believe these decisions are critically important, and could even make the difference between extinction and immortality for the human race.

Recent research claims that the current state-of-the-art AI model, GPT-4, is already showing 'sparks' of artificial general intelligence (AGI) - the ability to perform any task that humans can. Most AI experts believe that once AGI is reached, superintelligence - an AI that far exceeds human capability in every domain - will follow within 30 years. GPT-4 itself might seem a long way from that level (although it can probably still take your job), but at the current rate of AI progress, AGI and superintelligence are starting to look more and more likely.

Now let's take an even more speculative and hypothetical idea: mind uploading (also called whole brain emulation). This is the idea that the human brain could be simulated so perfectly that a mind could be transferred onto a computer and 'live' in a digital world. With a fast enough computer, thousands of 'years' could be simulated in an instant, with the mind subjectively still experiencing time at the normal present-day rate, effectively becoming immortal. I use this example to illustrate a far-future technology that may seem unattainable given current research, but I believe even this could be possible in a future where technology has advanced far beyond current limitations, especially with the assistance of AI.

Another area of research with the potential to change human life is longevity, which has the goal of extending the healthy human lifespan through breakthroughs in biology, genetics, and related fields. Connecting the dots, it is conceivable that longevity research could achieve its goals and extend the life of someone alive today long enough for them to see mind uploading become a reality, entering a new era of human life. While not directly involving AI, longevity research raises similar ethical considerations and has similar potential for transformative change, and it would be a natural target for focusing highly intelligent AI resources on.

Such technologies are extremely speculative and may never be possible for even the most intelligent humans to achieve. But what about a superintelligent AI? By definition, it would be capable of research far surpassing human comprehension, accelerating the advance of technology and making breakthroughs across all areas, including longevity and whole brain emulation. Goals that seemed unthinkable or impossibly far off could become reality overnight, bringing profound changes for humanity.

However, I think the most critical challenge in reaching this potential future isn't a scientific or technical one at all. It's ethical. Overcoming the ethical challenges is what will align humanity and AI towards the best possible outcome, and enable the desired technological breakthroughs to take place. Failing to take these problems seriously could steer the course of AI in another direction, one with far worse consequences, including existential risks. There are ongoing efforts to address these challenges: organisations such as Google and OpenAI have published their approaches to AI safety and emphasise the importance of AI ethics, and Twitter and Facebook also have departments dedicated to responsible AI research. In 2017, the Asilomar AI Principles - 23 guidelines for approaching ethical AI research - were written. Incidentally, these were also published as an open letter by the Future of Life Institute, and widely endorsed by experts including OpenAI's CEO Sam Altman.

Perhaps the risk most commonly associated with AI superintelligence is that of misaligned objectives. A highly intelligent AI will not necessarily have only goals that are beneficial to humans - more likely, it would have a mixture of goals, some beneficial and some harmful. Therefore, if care isn't taken to properly align the AI's values with our own, there is a risk of it taking actions that directly cause harm to humans.

Specific scenarios where misaligned objectives and unintended consequences could combine to cause human extinction include the so-called 'paperclip maximizer', in which an AI goes to extreme measures in pursuit of a mundane goal, consuming all resources on Earth - humans included. Similarly, an AI tasked with solving climate change could decide that the simplest way to do so is to eliminate all humans. And if an AI were allowed to design and control autonomous weapons, it could trigger an arms race in which safety measures are disregarded, creating out-of-control weapons and causing global destruction.

Even if AI never reaches the point where it can directly control real-world resources and weapons, its potential power alone could escalate tensions between global powers and fuel an unstoppable arms race. It would take just one reckless person in power carrying out an AI's instructions on its behalf - launching nuclear weapons or releasing a bio-weapon, for example - to set humanity on the path to eventual extinction.

These are extreme outcomes, but they highlight the impact a superintelligent AI could have. Once AI advances past a certain point - especially if it becomes capable of self-improvement and/or self-replication - it may be impossible to stop or even control the consequences of whatever path it has chosen. So it's critical that we find ways to control AI now, while we still can.

In conclusion, given the recent breakthroughs and the speed at which AI is advancing, there is a significant chance of reaching AGI and superintelligence within a few decades. When that happens, it is difficult to predict whether the future of humanity will be a utopia of abundance and immortality, or rapid destruction and extinction at the hands of AI. We should obviously be doing everything we can to increase the chances of the former rather than the latter. A six-month delay in technical AI progress is an insignificant price to pay in the context of our future, and has a reasonable chance of increasing our understanding and control of AI's capabilities. That's why I believe ethical AI research is the most critical challenge in human history.

(This post was written by me, not by an AI)