Back in 2019, Google proudly announced that it had achieved what quantum computing researchers had sought for years: proof that a quantum computer could outperform a classical one at some task. But this demonstration of “quantum supremacy” is being challenged by researchers claiming to have pulled ahead of Google on a relatively normal supercomputer.
To be clear, no one is saying Google lied or misrepresented its work — the painstaking and groundbreaking research that led to the quantum supremacy announcement in 2019 is still hugely important. But if this new paper is correct, the classical vs. quantum computing competition is still anybody’s game.
You can read the full story of how Google took quantum from theory to reality in the original article, but here’s the very short version. Quantum computers like Google’s Sycamore are not better than classical computers at anything yet, with the possible exception of one task: simulating a quantum computer.
It sounds like a cop-out, but the point of quantum supremacy is to show the method’s viability by finding even one highly specific and weird task that it can do better than even the fastest supercomputer. Because that gets the quantum foot in the door to expand that library of tasks. Perhaps in the end all tasks will be faster in quantum, but for Google’s purposes in 2019, only one was, and they showed how and why in great detail.
Now, a team at the Chinese Academy of Sciences led by Pan Zhang has published a paper describing a new technique for simulating a quantum computer (specifically, certain noise patterns it puts out) that appears to take a tiny fraction of the time estimated for classical computation to do so in 2019.
Not being a quantum computing expert nor a statistical physics professor myself, I can only give a general idea of the technique Zhang et al. used. They cast the problem as a large 3D network of tensors, with the 53 qubits in Sycamore represented by a grid of nodes, extruded out 20 times to represent the 20 cycles the Sycamore gates went through in the simulated process. The mathematical relationships between these tensors (each its own set of interrelated vectors) were then calculated using a cluster of 512 GPUs.
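To give a flavor of what “contracting a tensor network” means (this is a toy illustration, not Zhang et al.’s actual method or code), here is a minimal sketch: a quantum circuit’s gates can be written as tensors, and the circuit’s output amplitudes fall out of summing over the tensors’ shared indices. Below, a two-qubit circuit (a Hadamard gate followed by a CNOT) is contracted with NumPy’s `einsum`; Zhang’s team did the analogous thing for a vastly larger network representing Sycamore’s 53 qubits over 20 cycles.

```python
import numpy as np

# Gate tensors. The Hadamard is a rank-2 tensor (one input, one output
# index); the CNOT acts on two qubits, so it is a rank-4 tensor.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]]).reshape(2, 2, 2, 2)

# Each qubit starts in the |0> state, itself a small tensor.
zero = np.array([1.0, 0.0])

# Contract the whole network in one einsum call:
#   state[i, j] = sum over a, b, c of
#                 CNOT[i, j, a, b] * H[a, c] * zero[c] * zero[b]
# Shared indices (a, b, c) are the "wires" connecting the tensors.
state = np.einsum('ijab,ac,c,b->ij', CNOT, H, zero, zero)

print(state.reshape(-1))  # the Bell state: [0.707, 0, 0, 0.707]
```

The key insight exploited by tensor-network simulators is that the *order* in which you contract these indices dramatically changes the cost; finding a good contraction order (and settling for approximate, noisy output samples rather than exact ones) is what makes an otherwise intractable simulation fit on a GPU cluster.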
In Google’s original paper, it was estimated that performing this scale of simulation on the most powerful supercomputer available at the time (Summit at Oak Ridge National Laboratory) would take about 10,000 years — though to be clear, that was their estimate for 54 qubits doing 25 cycles. Simulating 53 qubits doing 20 cycles is considerably less complex, but would still take on the order of a few years by their estimate.
Zhang’s group claims to have done it in 15 hours. And if they had access to a proper supercomputer like Summit, it might be accomplished in a handful of seconds — faster than Sycamore. Their paper will be published in the journal Physical Review Letters; you can read it here (PDF).
These results have yet to be fully vetted and replicated by those knowledgeable about such things, but there’s no reason to think it’s some kind of error or hoax. Google itself admitted that the baton may be passed back and forth a few times before supremacy is firmly established, since it’s incredibly difficult to build and program quantum computers while classical ones and their software are being improved constantly. (Others in the quantum world were skeptical of Google’s claims to begin with, though some of those critics are direct competitors.)
As University of Maryland quantum scientist Dominik Hangleiter told Science, this isn’t a black eye for Google or a knockout punch for quantum in general by any means: “The Google experiment did what it was meant to do, start this race.”
Google may well strike back with new claims of its own — it hasn’t been standing still either — and I’ve contacted the company for comment. But the fact that it’s even competitive is good news for everyone involved; this is an exciting area of computing and work like Google’s and Zhang’s continues to raise the bar for everyone.