We built machines that calculate. Now we're building machines that ask nature to do it for us. The difference isn't gradual — it's fundamental.
What a Quantum Computer Actually Does
A classical computer is essentially a very fast accountant. It takes numbers, pushes them through logic gates — AND, OR, NOT — and remembers the results. Everything is exact, everything is predictable, everything is based on two states: current on, current off. One and zero.
A quantum computer works in a fundamentally different way. It manipulates the properties of individual particles — atoms, ions, photons — and then measures the result. At first, that sounds like a minor technical difference. It isn't.
Because between manipulation and measurement, something remarkable happens: nothing. At least nothing the quantum computer actively does. It delegates the actual computation to quantum physics itself. It sets up the initial conditions, essentially presses start — and the laws of nature do the rest.
Think of it like an architect shaping a riverbed. He doesn't dig a channel from A to B. He shapes the landscape so that water naturally flows the desired path. A quantum computer shapes the "landscape" of quantum states so that physics naturally flows toward the solution.
This has a far-reaching consequence: almost everything we know about classical algorithms is useless here. A classical algorithm gains nothing from running on a quantum computer; to exploit the machine, algorithms must be designed from scratch around superposition, entanglement, and interference. Large parts of computer science must be rethought for this machine.
Probabilities Instead of Certainties
To understand how a quantum computer computes, you need to let go of a deeply rooted intuition: the idea that information always has an exact value.
In the quantum world, a piece of information does not have to carry a definite value. It exists as a set of probabilities: each possible value has a probability of being found when you look.
Imagine a coin toss. The coin spins in the air — as long as it's airborne, it's neither heads nor tails. Both sides exist simultaneously, each with a 50% probability. Only when the coin lands on your hand and you look does the state resolve: 100% heads, 0% tails. Or the other way around.
In quantum mechanics, that's not a metaphor. It's literally how it works. A qubit — the quantum equivalent of a classical bit — exists in a superposition of 0 and 1. Not because we don't know what value it has. But because it is simultaneously both, until we measure.
Exact values only exist before the computation (when we set the input) and after the measurement (when we read the result). In between, everything moves through a space of probabilities.
The trick lies in manipulating these probabilities. With each interaction — each "quantum gate" — the probability amplitude of the various states shifts. A quantum algorithm is nothing more than a clever sequence of manipulations that drives the amplitude of the desired result ever closer to 100%.
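This amplitude picture can be made concrete with a toy statevector simulation in plain Python (an illustrative sketch, not how real hardware is programmed): a qubit is a pair of complex amplitudes, a gate is a 2x2 matrix, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import math

# A qubit as a 2-entry vector of amplitudes: [amp_0, amp_1].
# Probabilities are the squared magnitudes of the amplitudes.
def probabilities(state):
    return [abs(a) ** 2 for a in state]

def apply_gate(gate, state):
    # 2x2 matrix times 2-vector: one quantum gate acting on one qubit.
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

# The Hadamard gate rotates a definite 0 into an equal superposition.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

qubit = [1, 0]                # starts as a definite 0
qubit = apply_gate(H, qubit)  # now in superposition
print(probabilities(qubit))   # ~[0.5, 0.5]: a fair quantum coin

# A second Hadamard interferes the amplitudes back to a definite 0:
qubit = apply_gate(H, qubit)
print(probabilities(qubit))   # ~[1.0, 0.0]
```

The second Hadamard is the key point: the probabilities did not just get shuffled, the underlying amplitudes interfered, and that is what a quantum algorithm steers.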
Back to the coin: a classical computer has to look at each side in turn to decide whether the two sides match. A quantum computer can, in effect, query both sides at once and read the answer out of the interference between them. David Deutsch formalized exactly this question in 1985 with the first quantum algorithm, laying the foundation for all of quantum computing.
The Fundamental Laws — Nothing Works Without Them
Three quantum-physical phenomena form the foundation on which everything rests.
Superposition — Everything at Once
A qubit in superposition is simultaneously in states 0 and 1. Two qubits can simultaneously be in four states (00, 01, 10, 11). Three qubits in eight. The pattern is exponential: n qubits can represent 2ⁿ states simultaneously.
This means: 300 qubits in superposition can represent more states simultaneously than there are atoms in the observable universe. Not sequentially. Simultaneously.
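The comparison is easy to verify with integer arithmetic; the figure of roughly 10^80 atoms in the observable universe is the standard order-of-magnitude estimate.

```python
# Each added qubit doubles the number of basis states in superposition.
for n in (1, 2, 3, 10):
    print(n, "qubits ->", 2 ** n, "states")

# Standard estimate: about 10^80 atoms in the observable universe.
atoms_in_universe = 10 ** 80
print(2 ** 300 > atoms_in_universe)   # True: 2^300 is roughly 2 * 10^90
```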
Entanglement — Spooky Connection
Entanglement is what Einstein called "spooky action at a distance" — and what bothered him so much that he doubted it until the day he died.
Two entangled particles behave as a single system, no matter how far apart they are. Measure one, and the outcome of the other is fixed instantaneously. Not because information is transmitted, but because both particles form a shared quantum system. The information doesn't need to travel, because it was never separated.
Entanglement means: A single manipulation can change many pieces of information simultaneously. Instead of examining each side of a coin individually, you look once — and know both.
For quantum computers, this is tremendously useful. Through a single operation — a single quantum gate — information across many entangled qubits can be processed simultaneously.
And before the misconception arises: Entanglement does not allow information to be transmitted faster than light. To use the information from an entangled system, you always need a classical communication channel. So Einstein was right after all — just not in the way he thought.
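What "a single system" means operationally can be seen in a toy simulation (plain Python, illustrative only): sampling measurements from a Bell state never yields disagreeing outcomes, yet each individual result is still a fair coin flip.

```python
import random

# Two-qubit state as amplitudes over the basis states 00, 01, 10, 11.
# The Bell state (|00> + |11>) / sqrt(2): perfectly correlated outcomes.
amp = 1 / 2 ** 0.5
bell = {"00": amp, "01": 0.0, "10": 0.0, "11": amp}

def measure(state, rng):
    # Sample a basis state with probability |amplitude|^2.
    outcomes = list(state)
    weights = [abs(state[o]) ** 2 for o in outcomes]
    return rng.choices(outcomes, weights=weights)[0]

rng = random.Random(0)
samples = [measure(bell, rng) for _ in range(1000)]

# Every sample is 00 or 11: the two qubits always agree,
# even though neither outcome was decided in advance.
print(all(s in ("00", "11") for s in samples))  # True
```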
Interference — The Art of Superposition
Quantum states behave like waves. And waves can superpose: Constructively, when they reinforce each other. Destructively, when they cancel each other out.
The double-slit experiment demonstrates this behavior in a fascinating way. When you fire individual photons through two slits at a wall, what appears is not a simple two-stripe pattern — but an interference pattern of many stripes. The photon behaves like a wave that passes through both slits simultaneously and interferes with itself.
But here's where it gets truly strange: If you set up a detector that observes which slit the photon passes through, the interference pattern disappears. The photon suddenly behaves like a classical particle. The mere act of observation changes the outcome.
For quantum computers, interference is the central tool. The algorithm is designed so that wrong answers superpose destructively and cancel each other out, while the correct answer is constructively reinforced. At the end of the computation, ideally only the correct solution remains — with a probability near 100%.
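The wave arithmetic behind this is nothing more than adding signed amplitudes before squaring, which a two-path toy makes explicit:

```python
# Amplitudes add like waves; probabilities are taken only at the end.
# Two computational paths arriving at the same (wrong) answer
# with opposite signs:
path_a = 1 / 2
path_b = -1 / 2
destructive = abs(path_a + path_b) ** 2
print(destructive)      # 0.0: the wrong answer is erased

# Two paths arriving at the correct answer with the same sign:
constructive = abs(1 / 2 + 1 / 2) ** 2
print(constructive)     # 1.0: reinforced to certainty
```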
Schrödinger's Cat — And What It Really Means
No text about quantum computers is complete without Schrödinger's cat. But most explanations get the thought experiment wrong — treating it as a quirky curiosity. In truth, it's one of the most profound statements about the nature of reality.
In 1935, Erwin Schrödinger described the following scenario: A cat is sealed in a closed box. Inside the box is a lethal mechanism — say, a vial of prussic acid triggered by radioactive decay. Since the decay is a quantum-mechanical process, the mechanism is triggered with exactly 50% probability. As long as the box remains closed — as long as nobody looks — the cat, according to quantum mechanics, is both simultaneously: alive and dead.
Not "either-or, we just don't know." Literally both. In quantum mechanics, a state exists as a superposition of all possible states until a measurement occurs. Only opening the box — the measurement — forces the system to decide.
Translated to the quantum computer:
- The cat is the qubit — a particle carrying information
- The box is the isolation — vacuum, cryogenic cooling, shielding from the environment
- Opening is the measurement — the moment the quantum state becomes a classical value
The isolation is the crucial part. As long as the qubit is shielded from its environment, it can remain in superposition and perform quantum computations. The slightest uncontrolled interaction with the surroundings — a thermal photon, an electromagnetic fluctuation — and the state collapses. The computation is destroyed.
For a deeper dive into what Schrödinger's cat means for our understanding of time, see my article "The Weight of Time", which explores the connection to the question of whether the future itself might be a quantum state.
The Biggest Problem: Errors
Before we look at how a quantum computer is built, we need to talk about its most fundamental problem. Because it explains why building one is so incredibly difficult — and why every hardware decision is ultimately an answer to this one problem.
Quantum states are fragile. Not a little fragile, like a delicate clockwork. Absurdly fragile. A qubit in a superconducting circuit typically remains coherent for only a few hundred microseconds — fractions of a millisecond. All computations must be completed within that time. Every uncontrolled interaction with the environment — a thermal photon, an electromagnetic fluctuation, a minimal vibration — and the quantum state collapses. Physicists call this decoherence. It's as if someone opened Schrödinger's box before the computation was finished.
Why Classical Error Correction Doesn't Work Here
In classical computing, error correction is a solved problem. The principle is simple: store the same information multiple times and compare. If you send a bit over a noisy channel, you send it three times — 0 becomes 000, 1 becomes 111. If 010 arrives, the middle bit was obviously faulty. Majority vote, problem solved.
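A minimal sketch of that majority-vote scheme:

```python
def encode(bit):
    # Classical repetition code: copy the bit three times.
    return [bit, bit, bit]

def decode(bits):
    # Majority vote recovers the original bit
    # as long as at most one copy flipped.
    return 1 if sum(bits) >= 2 else 0

word = encode(0)      # [0, 0, 0]
word[1] ^= 1          # a noisy channel flips the middle bit -> [0, 1, 0]
print(decode(word))   # 0: the error is voted away
```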
A quantum computer cannot use this trick. For a fundamental reason: you cannot copy a quantum state. This isn't a technical limitation — it's a law of nature, the No-Cloning Theorem. Any attempt to duplicate a qubit would irreversibly alter its state. And simply reading it for verification is also out — because any measurement destroys the superposition you're trying to protect.
Quantum error correction must therefore take an entirely different path. The idea: distribute the information of a single logical qubit across many physical qubits, without ever directly measuring the logical state. Instead, you measure only the relationships between the physical qubits — so-called syndrome measurements. These reveal whether an error has occurred and what kind, without exposing the actual information.
The problem: this requires many physical qubits per logical qubit — currently hundreds to thousands. A useful quantum computer with, say, 1,000 logical qubits would need millions of physical qubits.
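The syndrome idea can be illustrated with a classical toy version of the three-qubit bit-flip code. Real quantum syndrome extraction uses ancilla qubits and entangling gates rather than direct reads, but the logic of the parity checks is the same: they locate the error without revealing the encoded value.

```python
def syndrome(bits):
    # Parity checks between neighbouring bits. They reveal WHERE an
    # error sits, but not the encoded value itself: (0,0,0) and (1,1,1)
    # give the same syndrome, as do (1,0,0) and (0,1,1), and so on.
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Each nonzero syndrome points at exactly one bit to flip back.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    s = syndrome(bits)
    if s in flip:
        bits[flip[s]] ^= 1
    return bits

codeword = [1, 1, 1]       # logical 1 spread over three physical bits
codeword[0] ^= 1           # an error hits the first bit -> [0, 1, 1]
print(syndrome(codeword))  # (1, 0): error located without reading the data
print(correct(codeword))   # [1, 1, 1]: restored
```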
Google's Willow experiment demonstrated at the end of 2024 for the first time that this approach works in practice: the more physical qubits used for error correction, the lower the error rate — exactly as the theory predicts. A milestone the community had worked toward for over a decade.
And this is precisely why a number like "1,000 qubits" alone means little. IBM introduced the term Quantum Volume — a metric that accounts not just for the number of qubits, but also for their quality, connectivity, and error rate. A system with 50 high-precision qubits can be more powerful than one with 1,000 noisy ones.
Hardware — What Does One of These Things Look Like?
With this understanding, it becomes clear why building a quantum computer is such a challenge: every hardware platform is, at its core, an attempt to defeat decoherence — while retaining enough control over the qubits to compute meaningfully.
A quantum computer will never be a device you put on your desk. Inside most systems, temperatures hover near absolute zero — colder than outer space. This extreme cooling is necessary so that the qubits aren't disturbed by thermal vibrations.
But how do you realize a qubit? There isn't one answer — there's an entire zoo of approaches. And the competition between them is one of the most exciting in modern technological history.
Superconducting Qubits — The Current Front-Runner
IBM and Google use superconducting circuits operated at temperatures of about 15 millikelvin — roughly 0.015 degrees above absolute zero. The qubits consist of tiny current loops in which the current flows in both directions simultaneously — a macroscopic superposition.
The Willow processor, with which Google demonstrated working error correction in December 2024, runs on this platform. Published in Nature and already cited over 460 times, the experiment marks the transition from the "noisy" to the fault-tolerant era.
Trapped Ions — The Precision Masters
Companies like IonQ and Quantinuum use individual charged atoms — ions — suspended in electromagnetic fields. The ions are manipulated with lasers and can execute extremely precise quantum gates.
The advantage: trapped-ion qubits produce the fewest errors per computation — they operate more precisely than any other platform. The disadvantage: they are slower than superconducting systems and harder to scale. But the all-to-all connectivity — every qubit can interact directly with every other — is an enormous advantage for certain algorithms.
Topological Qubits — Microsoft's Long-Term Bet
Microsoft pursues a radically different approach: topological qubits based on Majorana fermions. These exotic quasiparticles are intrinsically protected against disturbances — like a knot in a rope that doesn't come undone with a gentle tug.
In February 2025, Microsoft published in Nature the first experimental demonstration of a key component: interferometric single-shot parity measurement in indium arsenide-aluminum hybrid structures. The road to a functioning topological qubit is still long — but if it succeeds, it could solve the error correction problem at the hardware level.
Photonic Quantum Computers — Light as Qubit
Xanadu and PsiQuantum use photons — particles of light — as qubits. The major advantage: photons barely interact with their environment, and large parts of a photonic system can run at room temperature. The processor itself needs no cryogenic cooling, even if today's single-photon detectors still do.
In January 2025, Xanadu presented in Nature a modular photonic quantum computer that can be connected via fiber-optic networks. This opens a path to scaling that is barely conceivable with other platforms: quantum computers networked like classical servers.
Silicon Spin Qubits — The Semiconductor Revolution
Intel pursues a particularly pragmatic approach: qubits based on individual electron spins in silicon. The advantage is obvious — the entire semiconductor industry is built on silicon. In May 2024, Intel demonstrated in Nature that individual electrons can be controlled on standard 300mm wafers — the same wafers used today for billions of transistors.
If this approach works, existing chip infrastructure could be used for mass production of quantum processors. No other approach offers this scaling potential.
Neutral Atom Systems — The Newcomers
QuEra and Pasqal use neutral atoms held in precise patterns by optical tweezers — focused laser beams. These systems can control hundreds of qubits simultaneously and are particularly well suited for optimization and simulation tasks.
What Quantum Computers Are Actually Good For
A widespread myth needs to die here: quantum computers are not faster than classical computers at everything. They will never replace your laptop, load emails faster, or speed up word processing.
But for certain classes of problems, they aren't just faster — they're in an entirely different league.
Simulating quantum systems: This was Richard Feynman's original vision from 1981. Nature is quantum-mechanical — why not simulate it with a machine that also works quantum-mechanically? A classical computer needs exponentially more compute time as more particles are simulated. A quantum computer doesn't.
Searching unsorted data: Lov Grover's algorithm from 1996 searches an unsorted database of N entries in √N steps instead of N. That sounds modest — but for a billion entries, it's the difference between one billion and 31,623 steps.
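The arithmetic behind that comparison, as a quick sanity check:

```python
import math

N = 10 ** 9                           # entries in the unsorted database
classical_worst_case = N              # check every entry, one by one
grover_steps = math.ceil(math.sqrt(N))

print(classical_worst_case)           # 1000000000
print(grover_steps)                   # 31623
print(N // grover_steps)              # speedup ratio: roughly 31622x
```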
Prime factorization: Peter Shor's algorithm from 1994 can factor large numbers into their prime factors — exponentially faster than any known classical algorithm. RSA encryption, which protects a large portion of today's digital communication, is based on this.
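The number-theoretic skeleton of Shor's algorithm fits in a few lines of classical Python. The one step done here by brute force, finding the period of a^x mod N, is exactly the part the quantum routine speeds up exponentially; everything else was always classical.

```python
from math import gcd

def find_period(a, N):
    # Find the smallest r > 0 with a^r mod N == 1.
    # This brute-force loop is the step Shor's algorithm
    # replaces with a quantum period-finding routine.
    r, value = 1, a % N
    while value != 1:
        r += 1
        value = (value * a) % N
    return r

N, a = 15, 7                   # toy example: factor 15 using base 7
r = find_period(a, N)          # r = 4, since 7^4 = 2401 = 160*15 + 1
print(r)

# An even period yields the factors via greatest common divisors:
p = gcd(a ** (r // 2) - 1, N)  # gcd(48, 15) = 3
q = gcd(a ** (r // 2) + 1, N)  # gcd(50, 15) = 5
print(p, q)                    # 3 5
```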
Optimization problems: Finding the best route for 1,000 delivery trucks, assembling the optimal portfolio from thousands of stocks, computing the most efficient configuration of a supply chain — problems where the number of possible solutions grows explosively.
A quantum computer will never replace the classical computer. It will complement it — the way a microscope complements the human eye. You don't read a newspaper through a microscope. But when you want to know what's happening at the molecular level, eyes are no longer enough.
The Simulation Revolution
If there is one area where quantum computers will change the world, it's simulation. Not the simulation of game worlds or weather forecasts — but the simulation of nature at its most fundamental level: the level of individual atoms and molecules.
New Drugs — Without Decades in the Lab
Developing a new drug takes an average of 10-15 years today and costs over a billion euros. The reason: molecules are quantum-mechanical objects. How a drug binds to a protein, what side effects it has, whether it remains stable in the body — all of this depends on quantum-mechanical interactions that classical computers can only roughly approximate.
A quantum computer can represent these interactions natively. Not through coarse classical approximations and tricks, but directly, because it obeys the same quantum mechanics as the molecule it simulates.
In 2024, researchers demonstrated for the first time a hybrid quantum computing pipeline for real-world drug design problems — no longer just proof-of-concept studies, but the simulation of covalent bonds in actual drug candidates. By the end of 2025, the prediction of hydration sites in protein pockets was performed on a quantum computer with 123 qubits — with a precision matching classical methods, but paving the way for advantages in more complex systems.
New Materials — From Superconductors to Batteries
The simulation of materials may be even more significant than that of drugs. Because here we're dealing with the physical foundations of our technology.
Next-generation batteries: Researchers are already simulating the X-ray absorption spectroscopy of lithium-excess cathodes on quantum computers — the materials that give electric vehicles their range. Classical simulations of these systems hit their limits because the electron correlations become too complex. Quantum computers can natively represent these correlations.
The holy grail — room-temperature superconductors: A material that conducts electricity with zero resistance. At room temperature. It would revolutionize energy transmission, magnetic resonance imaging, particle accelerators, and quantum computers themselves. Classical computers cannot perform the necessary simulations — too many correlated electrons. Quantum computers could.
Physical Systems — Thousands of Variants in Minutes
Imagine you want to test a new aircraft design. Today, you build a prototype, put it in a wind tunnel, analyze the results, modify the design, build the next prototype. Or you simulate it classically — which can take weeks on supercomputers for turbulent flows.
With quantum simulation, you could run through thousands of variants before building a single physical model. Not in weeks. In hours.
This applies to everything that is quantum-mechanical at its fundamental level — meaning all of chemistry, all of materials science, and large parts of biology.
AI × Quantum Computing — The Near Future
The convergence of artificial intelligence and quantum computing isn't a distant future scenario — it's already being actively researched. And the potential is enormous.
Quantum Machine Learning
Classical AI models are trained on vast amounts of data, optimizing millions to billions of parameters. This optimization process is fundamentally a search problem in a high-dimensional space — and this is exactly where quantum computers can play to their strength.
Quantum neural networks use parameterized quantum circuits to process data in an exponentially large Hilbert space. Initial results show consistent performance improvements over classical models — for instance in predicting molecular properties for drug development, where quantum LSTM networks achieve ROC-AUC improvements of 3-6% over classical LSTM models.
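The idea of a parameterized quantum circuit can be shown in miniature (a deliberately tiny sketch: one qubit, one trainable angle, no real QML framework): a rotation gate whose parameter is optimized with the parameter-shift rule, the gradient trick quantum machine learning actually relies on.

```python
import math

# A one-parameter "circuit": the rotation Ry(theta) applied to |0>.
def circuit(theta):
    # Returns the amplitudes of measuring 0 and 1.
    return [math.cos(theta / 2), math.sin(theta / 2)]

def loss(theta):
    # Training goal: make the circuit output 1,
    # so we penalize the probability of measuring 0.
    return circuit(theta)[0] ** 2

theta = 0.1
for _ in range(100):
    # Parameter-shift rule: the exact gradient of a quantum circuit,
    # obtained by evaluating it at two shifted parameter values.
    grad = (loss(theta + math.pi / 2) - loss(theta - math.pi / 2)) / 2
    theta -= 0.5 * grad

print(round(theta, 3))        # converges toward pi (~3.142)
print(round(loss(theta), 6))  # ~0: the circuit now outputs 1
```

The parameter-shift rule matters because real quantum hardware cannot backpropagate through itself; gradients must come from extra circuit runs, exactly as sketched here.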
Quantum-Enhanced Reinforcement Learning
Particularly promising is the combination of quantum computing with reinforcement learning — the learning paradigm where an AI learns through trial and error in an environment.
In 2025, researchers demonstrated that quantum algorithms — specifically Grover's search algorithm — can accelerate the kinematic optimization of robotic arms by a factor of 93. The quantum algorithm doesn't search the high-dimensional configuration space of a robotic arm sequentially — it superposes all possible configurations and finds the optimal position with quadratically fewer computation steps.
What Has Changed Since 2018
When I wrote the original version of this article in 2018, I was optimistic. I predicted that quantum computers would "arrive at a usable stage within the next 2 to 3 years."
It's time for an honest assessment.
What came true:
- Cloud-based quantum computers are a reality. IBM Quantum, AWS Braket, Azure Quantum — anyone with internet access can compute on real quantum processors today
- Algorithm development has accelerated massively
- Quantum Supremacy: Google claimed in 2019 with the Sycamore processor to have performed a computation in 200 seconds that would take a classical supercomputer 10,000 years. IBM disagreed — and the debate continues to this day
- Post-quantum cryptography is standardized: NIST finalized the first quantum-resistant encryption standards in 2024
What turned out differently than expected:
- Error correction was significantly harder than anticipated. It wasn't until the end of 2024 that it was even shown to work in practice
- "Usable" is relative. For specific research problems: yes. For everyday applications: not yet
- Hardware diversity has increased rather than consolidated — six different qubit technologies compete, and it's unclear which will win
- The number of qubits has grown faster than their quality — a problem the community has increasingly recognized
What would have surprised me in 2018: It wasn't the physics that was the biggest hurdle, but the engineering. Quantum mechanics works exactly as the theory predicts. But building a machine that keeps thousands of qubits coherent at 15 millikelvin, controls them precisely, and operates them fault-tolerantly — that is one of the greatest technical challenges humanity has ever undertaken.
Where the Journey Leads
I deliberately refrain from naming specific years. Anyone who learned in 2018 that predictions in this field are risky no longer makes specific forecasts.
But the direction is clear:
In the short term, hybrid systems will be the norm — quantum computers as specialized accelerators embedded in classical supercomputers. The quantum parts handle what they do best: simulating quantum systems, optimization in high-dimensional spaces. The rest stays classical.
In the medium term, error correction will mark the transition from the noisy to the fault-tolerant era. If this transition succeeds — and after Google's Willow result, there's strong evidence it will — practically useful quantum simulations will become reality. New drugs, new materials, new catalysts.
In the long term, a quantum internet will emerge — a network in which quantum computers are connected via entangled photons and compute together. The fiber-optic cables for it already exist. The encryption would be physically guaranteed to be unbreakable.
And cryptography? The post-quantum standards from NIST have already been finalized. RSA encryption will be replaced long before a quantum computer could crack it. For once, humanity has reacted in time.
A quantum computer doesn't compute faster. It computes differently — it lets nature take a shortcut that doesn't exist by classical means. This isn't a better calculator. It's a new tool for thinking.