Artificial Intelligence might be our last ever invention
At current rates of progress, some experts predict that we will achieve Artificial Superintelligence between 2045 and 2060. Superintelligence can be defined (following philosopher Nick Bostrom) as an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills.
A system more intelligent than every human brain combined, in every single way.
Where are we at the moment?
AI can now defeat the world's best chess and Go players with ease.
The AI chatbot ChatGPT recently passed high-level law, medicine, and business management exams. A qualified lawyer on your phone, kind of.
Autonomous vehicles of all kinds are currently being rolled out in the USA, Japan, and Germany, with the UK not far behind.
PathAI uses machine learning algorithms to analyse human tissue samples, helping pathologists diagnose disease and guide treatment decisions.
Current AI can generate original essays and stories and create original art and music - Google it. Wild.
Not impressed?
Remember, this is 1st generation technology. AI is in its Nokia/Atari/Walkman stage. Who knows what ChatGPT Version 100 will accomplish?
Perhaps it will single-handedly create an entire movie, just for you, from scratch.
We all thought that AI would replace low-skill and blue-collar jobs, eventually working its way up to high-skill white-collar jobs, and that the artists and visionaries would always be safe. Suddenly, AI is coming in from all angles.
And it's happening incredibly fast.
By some estimates, computer technology now progresses more in one hour than it did in its first 90 years.
If we extrapolate this exponential growth, we're not just going to make 80 years' worth of progress in the remainder of the century. We're going to make 20,000 years' worth.
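The "20,000 years" figure follows from an accelerating, rather than steady, rate of progress. As a rough back-of-envelope sketch (not the author's calculation): if we hypothetically assume the rate of progress doubles every ~9.5 years, a century of accelerating progress integrates out to roughly 20,000 years of progress at today's rate.

```python
import math

def equivalent_years(horizon=100, doubling_time=9.5):
    """Years of 'today's-rate' progress accumulated over `horizon` years,
    assuming the rate of progress doubles every `doubling_time` years.

    Integral of 2^(t/T) dt from 0 to horizon = T/ln(2) * (2^(horizon/T) - 1).
    """
    T = doubling_time
    return T / math.log(2) * (2 ** (horizon / T) - 1)

# With these assumed numbers, 100 calendar years of accelerating progress
# are equivalent to roughly 20,000 years at the starting rate.
print(round(equivalent_years()))
```

The doubling time here is a hypothetical chosen to reproduce the 20,000-year figure; the point is how violently the integral blows up once the rate itself keeps doubling.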
Why are people so scared of AI?
In his fantastic book, The Precipice, Toby Ord explores several areas of existential risk - things that could lead to human extinction in this century. Unaligned AI is top of the list (1 in 10 chance), ahead of engineered pandemics (1 in 30), climate change (1 in 1,000), nuclear war (1 in 1,000), natural pandemics (1 in 10,000), super-volcanic eruption (1 in 10,000), asteroid impact (1 in 1,000,000), and stellar explosion (1 in 1,000,000,000). Taking these together - along with risks we haven't yet foreseen - Ord puts the overall chance of human extinction before the end of the century at 1 in 6.
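You can sanity-check how the listed figures stack up with a quick calculation. Naively treating the risks as independent (which Ord himself does not - his 1-in-6 headline also covers "other" and unforeseen risks) gives a combined probability of roughly 1 in 8:

```python
# Toby Ord's per-risk estimates from The Precipice (chance this century).
risks = {
    "unaligned AI": 1 / 10,
    "engineered pandemics": 1 / 30,
    "climate change": 1 / 1_000,
    "nuclear war": 1 / 1_000,
    "natural pandemics": 1 / 10_000,
    "super-volcanic eruption": 1 / 10_000,
    "asteroid impact": 1 / 1_000_000,
    "stellar explosion": 1 / 1_000_000_000,
}

# Probability of surviving every listed risk, naively assuming independence.
survival = 1.0
for p in risks.values():
    survival *= 1 - p

print(f"combined extinction risk: {1 - survival:.3f}")  # ~0.132, roughly 1 in 8
```

Notice how completely the total is dominated by the first two entries; the natural risks barely move the number at all.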
Most people will associate extinction via AI with dystopian science fiction movies, in which a group of tech geniuses develop robots to either serve humans or become our successors, only to be met with dire consequences (I, Robot, Ex Machina, etc.). This is rather far-fetched, but it makes for entertaining cinema.
Though these fictional depictions may be implausible, we shouldn't dismiss the genuine risk that AI poses to humanity.
So how is this risk likely to manifest?
I don't know. But consider this...
Imagine if aliens had visited Earth in 100,000 BCE. They'd be pretty unimpressed by our stone tools and campfires. Now imagine they came back in 2023 - where we have cities, aeroplanes, mobile phones, and satellites. The aliens would assume that primitive life had been replaced with intelligent life - but they'd be wrong. If you could pick up a baby in 2023 and use time travel to swap it with a baby in 50,000 BCE, they'd probably both grow up and fit in as normal people - maybe as far back as 100,000 BCE, and certainly back in 10,000 BCE.
Humans haven't really changed much over the past 20,000 years. Humanity has changed significantly as a result of our collective intelligence - built on our ability to store and pass on information in the form of language. But humans - we take a long time to change naturally.
AI, on the other hand, has gone from beating humans at basic games like checkers in 1994 and chess in 1997 to passing law and medical exams, driving cars, and creating original music - all in less than 30 years. Talk about fast-track evolution. And it's getting faster.
Intelligence has given us God-like powers over all other species on Earth, which hasn't worked out too well for them. Anything that we can sell for profit, wear for fashion, kill for sport, or that just gets in our way has had a pretty bad time of it. Because, like it or not, all species act out of self-interest.
Now consider creating something far more intelligent than we are. Oh and by the way, no one's ever done it before - there are no experts; it's our first time. Sounds like an obvious Darwinian miscalculation.
If (when) AI does surpass humanity to become the intellectually dominant "species" on Earth, what makes you think that we could control it or shut it off? Don't you think other species would shut us off if they could? Cue the Planet of the Apes theme music.
One alarming and plausible idea is a slow transition into an AI-controlled future. As AI systems acquire an increasing share of power, an increasing amount of our collective future - our decisions and actions - is generated and optimised according to non-human values. Our future becomes unprecedented and unpredictable.
Even if we do manage to develop AI systems which are perfectly aligned with our values and entirely obedient to our instructions, we're still at risk. As humans have wielded more power over time, we have come pretty close to making some catastrophic errors, even when our intentions were "good". A superintelligent AI wields far more power at proportionately greater risk.
We should also consider the risks arising from the deliberate misuse of powerful AI systems. An AI system that faithfully acts on the instructions or values of its operators can be aligned with and controlled by a malicious individual or group, who may choose to wield it as a powerful weapon.
Our brains evolved at a time when things progressed very slowly - i.e. not at all within a single lifetime - so it's hard for us to imagine the world being significantly different within our own. As a result, I think we underestimate how quickly things could change in the coming decades. People don't seem that impressed by current AI, yet if you showed it to someone in the 80s, I'm pretty sure they'd have a heart attack.
AI is fundamentally different to anything we've ever made before, and so it should be taken extremely seriously.
Ideally, we'll create something that greatly enriches our lives and fixes all of our problems. A bleaker but equally valid outlook would portray us as a baby playing with a loaded gun. The baby can't take precautions against something it doesn't understand.
After hundreds of thousands of years at the top, we might just be living in the century where the torch gets passed on - or yanked from us. Whilst terrifying, I can't help but find it quite exciting.