When AI Goes Quantum: Rethinking Learning in an Era of Exponential Progress
We think we're seeing progress now. Just wait until quantum computers are hooked up to AI. We've seen nothing yet.
We’re all hung up on how AI will totally change our education system. And we’re right to be. But that’s AI with standard GPUs like the Nvidia A100 at the backend, which, according to Nvidia, offers “the world’s fastest memory bandwidth at over 2 terabytes per second (TB/s) to run the largest models and datasets.”
It’s estimated that OpenAI will eventually require something like 30,000 Nvidia graphics cards to power its models. The thousands it currently uses cost in the region of $3M a month to run, so it’s debatable how sustainable the model is. Microsoft is clearly pinning a lot on Copilot being enough of a success to warrant this massive initial investment.
Microsoft and OpenAI are not alone. Elon Musk has reportedly bought 10,000 A100s to power xAI. Again, a huge cost at a time when Twitter is still burning cash.
But are they all looking in the wrong direction long term? Are Nvidia GPUs the way forward for true superintelligence, or AGI?
The A100 against Frontier
Let’s start by comparing the Nvidia A100 with Oak Ridge National Laboratory’s Frontier supercomputer:
Frontier delivers over 1 exaflop of processing power (over 1 quintillion calculations per second) at peak performance, making it the first supercomputer to break the exascale barrier.
Each Nvidia A100 GPU provides up to 19.5 teraflops of FP32 compute performance.
To match Frontier's 1 exaflop performance, it would take over 50,000 Nvidia A100 GPUs.1
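The arithmetic behind that estimate is simple division (a back-of-the-envelope check, treating peak FP32 figures as directly comparable, which glosses over memory bandwidth and interconnect limits):

```python
# Back-of-the-envelope check: A100s needed to match 1 exaflop of FP32 compute
frontier_flops = 1e18          # 1 exaflop = 10^18 calculations per second
a100_fp32_flops = 19.5e12      # 19.5 teraflops per A100

gpus_needed = frontier_flops / a100_fp32_flops
print(f"{gpus_needed:,.0f} A100s needed")  # 51,282 A100s needed
```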
So, even once OpenAI has leveraged the full power of its 30,000 A100s, it likely won’t come close to Frontier.
Enter Quantum Computing
Now let’s compare Frontier to Google’s latest iteration of its Sycamore quantum computer.
Sycamore now has 70 qubits (quantum bits), making it effectively 241 million times more powerful than its 2019 predecessor. The 70-qubit Sycamore can process in six seconds a calculation that would take Frontier 47 years to complete. (Its 2019 54-qubit predecessor performed a calculation in 200 seconds that would have taken a classical supercomputer 10,000 years.)
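It’s worth pausing on what “six seconds versus 47 years” actually means as a speedup factor (a rough sanity check on the reported figures):

```python
# Rough speedup implied by "six seconds versus 47 years" (figures as reported)
seconds_per_year = 365.25 * 24 * 3600
frontier_seconds = 47 * seconds_per_year   # about 1.5 billion seconds
sycamore_seconds = 6

speedup = frontier_seconds / sycamore_seconds
print(f"~{speedup:.1e}x faster")  # ~2.5e+08x faster
```

That is, for this one benchmark task, a roughly 250-million-fold speedup over the fastest classical machine on Earth.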
However, it’s a little like comparing apples with oranges, as conventional and quantum computers work on entirely different architectural principles:
Frontier is designed for classical computing, optimised for workloads like scientific simulations, data analytics, and machine learning.
Sycamore is specialised for workloads like quantum simulation and optimisation. It’s focused on demonstrating "quantum supremacy" - performing certain calculations exponentially faster than classical supercomputers. But its practical applications are still limited.
Frontier's 1 exaflop classical computing power dwarfs Sycamore's, but they excel at very different use cases. Frontier cannot perform the quantum simulations Sycamore is designed for.
Overall, there is no like-for-like comparison to be made between these disparate technologies. They represent complementary approaches at the cutting edges of classical and quantum computing respectively.2
Quantum Challenges
Before we jump into a quantum world there are some challenges to overcome. Quantum computers remain noisy and fragile, prone to errors, decoherence and infidelity over long runs. It would seem they are great at the sprint (47 years better, in fact) but fare less well over extended timeframes. They’re also currently limited to fewer than 100 qubits, which means they’re not yet able to host neural networks in the way standard GPUs like the A100 can.3
It’s therefore likely, based on current progress, that it’ll be 5-10 years before we have a commercially viable quantum computer that can be effectively used as the backend of an AI system. Hooking one up too early could be a disaster: it would be akin to creating a quantum scalpel that could potentially do the job of brain surgery infinitely better than a conventional scalpel, but you wouldn’t know until you began to operate. Probably not worth taking the risk just yet.
Preparing for a Quantum Leap
But by the time we do make the jump, AI will have leapt ahead, and we may be approaching AGI even without it being quantum powered. And then what? When we have quantum AI, what will our world look like? It’s impossible to know, because we’re only able to think in a linear, temporal way. That’s why most of us find it hard to comprehend how quantum mechanics works.
I struggle conceptually with the idea that you can have a qubit that can hold two states at the same time. Conventional bits are either in the on or off state (1 or 0 in binary). Qubits can be on and off at the same time. I quite like the marble analogy: a standard processor is akin to a marble being either at the top or the bottom of a hill (the hill being the calculation process). In a quantum processor, the marble is in a superposition at the top and bottom of the hill at the same time. The marble can therefore explore multiple ways down the hill.
This means quantum processors can make multiple calculations at the same time, rather than in a linear fashion as in conventional processors. Even at their fastest, we can understand linear. It is much harder for us to wrap our heads around the quantum world.
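The marble analogy can be made concrete with a toy state-vector simulation of a single qubit. This is, of course, a classical simulation, so it captures only the bookkeeping of superposition, not any quantum speedup:

```python
import numpy as np

# Toy single-qubit simulation: the "marble" analogy as a state vector.
ket0 = np.array([1.0, 0.0])                    # |0>: marble at the top of the hill
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # gate that creates superposition

state = hadamard @ ket0                        # equal superposition of |0> and |1>
probs = np.abs(state) ** 2                     # measurement probabilities (Born rule)
print(probs)                                   # [0.5 0.5]: "top and bottom at once"
```

Until measured, the qubit genuinely occupies both states at once; measurement collapses it to 0 or 1 with the probabilities above.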
But wrap we must, because once quantum processors are an integral part of our lives, and super-intelligences are powered by processors many thousands or even millions of qubits in size, we can wave goodbye to our notion of time.
A one-million-qubit quantum computer capable of hosting a neural network might find a cure for all the world’s diseases, end climate change, harness quantum entanglement (the foundation of quantum teleportation), and design effective ways to travel to the furthest reaches of our galaxy, all in a matter of milliseconds. It could also, of course, decide within the same timeframe that it no longer needs us to achieve any of these goals.
It is certainly a force we need to treat with the utmost care.
Perhaps hybrid is the way forward
Perhaps we need to look at how to combine conventional and quantum systems to get the best of both worlds. Here are a few ideas as to how that might work.
Classical computing could provide the "brawn" - the raw computational power needed for tasks like training deep neural networks on massive datasets. Quantum computing could provide the "brains" - tackling problems intractable for classical systems.
For example, classical machines could train deep learning models, while quantum systems focus on optimising model hyperparameters or finding optimal network topologies.
Quantum computing could accelerate certain AI workflows like data clustering, dimensionality reduction, and generative modelling that rely on linear algebra and statistics.
For large neural networks, classical computers handle computation-intensive matrix multiplications and backpropagation, while quantum computers tackle activation functions.
Quantum machine learning algorithms like quantum Boltzmann machines, quantum generative adversarial networks, and quantum classifiers could be combined with deep learning.
Quantum simulators can create training data for classical ML, reducing the dataset needed. Quantum enhanced reinforcement learning is also promising.
For autonomous systems, classical computers handle sensor processing and control, while quantum computers inform path planning and decision making.
Quantum neural networks that emulate quantum effects like superposition and entanglement may unlock new AI capabilities.4
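One of those divisions of labour, classical training with quantum hyperparameter search, can be sketched as a simple loop. This is a toy sketch under stated assumptions: `propose_hyperparams` is a hypothetical stand-in for a quantum optimiser, implemented here with ordinary classical random search, and `train_and_score` stands in for an expensive training run:

```python
import random

# Toy hybrid loop: a classical trainer scores candidates while an optimiser
# proposes hyperparameters. `propose_hyperparams` is a hypothetical stand-in
# for a quantum optimiser; here it just does classical random search.

def propose_hyperparams():
    return {"lr": random.choice([1e-1, 1e-2, 1e-3]),
            "layers": random.randint(1, 4)}

def train_and_score(params):
    # Placeholder for expensive classical training: a toy score that
    # peaks at lr=1e-2 with 2 layers.
    return -abs(params["lr"] - 1e-2) - 0.01 * abs(params["layers"] - 2)

random.seed(0)  # deterministic for illustration
trials = [(train_and_score(p), p) for p in (propose_hyperparams() for _ in range(20))]
best_score, best_params = max(trials, key=lambda t: t[0])
print(best_score, best_params)
```

The promise of the hybrid approach is that the proposal step, trivial here, is exactly where a quantum optimiser could explore a vast configuration space far faster than random search.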
Quantum AI and human learning
Whatever happens, the impact on human learning will be immense. We are already considerably slower than AI across multiple intelligences, and the gulf is only set to increase. We speak a lot currently about how we need to privilege human qualities, and that would seem sensible. But is this enough? Is being a decent human being enough for us? Doesn’t our incredible human mind need more?
Perhaps it will soon reach the point where we learn just for the love of it. Where we don’t actually need to know much stuff at all, as the AI will know it all better, deeper and faster than us. But that we learn nonetheless, because the brain, like a muscle, atrophies if not exercised.
This is one for another post as I need to wrap my head around how our education system will be totally upended when we have truly hyper-intelligent machines. There’s certainly plenty to explore.
1. Claude 2, July 2023 Model
2. Claude 2, July 2023 Model
3. Claude 2, July 2023 Model
4. Claude 2, July 2023 Model