March 27, 2026
Why quantum computers could be great for machine learning after all
What is the current direction of quantum machine learning as a field? In this blog post, Xanadu's Quantum Machine Learning team discusses using quantum computers and the quantum Fourier transform to unlock fundamentally different approaches to machine learning—and better machine learning models.
Since its early days in 2017, Xanadu’s research effort has had a small subdivision obsessed with a single question: Why should someone use a quantum computer to solve a machine learning task?
This question became quite popular between 2018 and 2024, when papers constructing contrived quantum speedups for learning, or boasting that quantum outperforms classical ML in benchmarks, appeared daily on the arXiv. Naturally careful about big claims, we were often baffled by how unhelpful these results turned out to be in answering our question. Fashion moved on, and it became popular to criticise quantum machine learning instead.
Again, we were not happy: the criticism, we believed, was made for all the wrong reasons—it showed that most random quantum models are bad, but did not help us find the "measure-zero" zone of good ones. Given how hard we are to convince, it is rather remarkable that at Xanadu we have unanimously become excited about a research direction, which we sketch in our latest paper. This blog explains why, and it has a lot to do with Fourier transforms.
Let’s first get something out of the way.
Quantum computers, which are being developed across the globe, can be seen as samplers from high-dimensional probability distributions that are prepared by manipulating and measuring qubits. Machine learning, in turn, is mostly concerned with finding tricks to solve the computationally challenging problem of modeling high-dimensional probability distributions. Given this overlap, it seems unlikely that quantum hardware will be of no use for machine learning.
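As a rough sketch of this "sampler" view: the Born rule turns a normalised vector of amplitudes into a probability distribution over measurement outcomes. The NumPy simulation below is our own toy illustration, not an algorithm from the paper.

```python
import numpy as np

# Toy illustration: a 3-qubit state is a vector of 2**3 = 8 complex
# amplitudes; measuring all qubits samples a bitstring with
# probability |amplitude|**2 (the Born rule).
rng = np.random.default_rng(0)

amplitudes = rng.normal(size=8) + 1j * rng.normal(size=8)
amplitudes /= np.linalg.norm(amplitudes)  # normalise the state

probs = np.abs(amplitudes) ** 2           # an 8-outcome distribution
samples = rng.choice(8, size=1000, p=probs)

print(np.isclose(probs.sum(), 1.0))  # True: probabilities sum to one
```

Real hardware, of course, gives only the samples, not the amplitudes—which is exactly why modeling distributions this way is interesting.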
But how can we identify where it can have an impact if machine learning is notoriously empirically driven, while quantum computing is presently limited to pure theory, noisy experiments and small-scale simulations? The answer is simple: we have to use the tools available today to design the best quantum machine learning algorithms we possibly can. Then, we have to test and refine them once the first fault-tolerant hardware is available.
How, then, can we design good quantum machine learning algorithms?
Such an algorithm would have to be classically difficult to implement, solve a useful problem in a different way than currently possible, and be feasible to implement on realistic quantum hardware (a practical constraint that is often overlooked by academic contributions). The standard strategy is to start with popular machine learning algorithms and try to speed them up by “making them quantum”. But there are problems with this approach. It is evident, even to non-scientists, that popular machine learning algorithms already work extremely well. Furthermore, they have been carefully selected to suit classical hardware. Following this logic, the worst starting point is actually one of the preferred strategies in quantum machine learning research: attempting to speed up neural networks with quantum computers.
We decided, instead, to turn things around and ask a single question: What is the most natural operation for quantum computers?
After some deliberation, our choice fell on the Quantum Fourier Transform (QFT). The QFT was, after all, a driving engine behind Shor’s algorithm, which remains one of the most prominent examples of a (somewhat) useful quantum algorithm to this day. It is intimately related to group structure, which seems to be something quantum computers excel at exploiting. And it actually appears in most modern quantum algorithms as well, if we consider that a layer of Hadamard gates executes the “boolean” version of the Fourier transform. But much more importantly, (group) Fourier analysis seems to be deeply related to the mathematical formalism of quantum mechanics itself, and can therefore be considered “natural” for quantum computers, always a good thing if one bets on an approach. The only issue: there is (hardly) any mention of Fourier transforms in machine learning textbooks!
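To make the Hadamard remark concrete, a small NumPy check (our own illustration, not taken from the paper) verifies that a layer of Hadamard gates equals the Fourier transform over the group (Z_2)^n, whose matrix entries are (-1)^(x·y)/2^(n/2) for bitstrings x and y:

```python
import numpy as np

# A layer of Hadamard gates on n qubits implements the Fourier
# transform over (Z_2)^n (the Walsh-Hadamard transform).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

n = 3
layer = H
for _ in range(n - 1):
    layer = np.kron(layer, H)  # H applied to every qubit

# Entry (x, y) of the (Z_2)^n Fourier matrix is (-1)^(x·y) / 2^(n/2),
# where x·y is the bitwise dot product of the bitstrings x and y.
dim = 2 ** n
walsh = np.array([[(-1) ** bin(x & y).count("1") for y in range(dim)]
                  for x in range(dim)]) / np.sqrt(dim)

print(np.allclose(layer, walsh))  # True
```

The familiar QFT on n qubits plays the same role for the cyclic group of order 2^n instead of (Z_2)^n.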
Not deterred by such a detail—after all, we were trying to find something that machine learning is not yet solving—we started digging. And we found the fascinating work of Persi Diaconis, a mathematician at Stanford (who happens to be a magician as well). His extensive work from the 1980s argues that the Fourier spectrum of a function—for example, one that records the results of a survey, but it could likewise be a machine learning model—is full of statistically relevant information. And he was not (only) referring to the Fourier spectrum that everyone knows from signal processing, but also to its group-theoretic generalisations. These generalisations are highly relevant, since machine learning has recently recognised the value of thinking of data domains as groups. In fact, Diaconis’ favourite example is nothing other than the symmetric group, a deeply fundamental but non-trivial specimen that many master mathematically, but only a very special mind can link to practical tasks such as shuffling cards. His work even inspired a range of machine learning algorithms around 2010, an effort that we believe stalled due to computational challenges.
Things started to get interesting: Is the Fourier spectrum of a model both relevant for learning and computationally hard to compute, and hence hard to manipulate classically? And does this generalise to all sorts of groups for which quantum computers can implement the QFT with an exponential speedup over classical computers?
We dug deeper, and what emerged was unexpected. It turns out that some of the most fundamental methods in machine learning, including deep learning, can be understood better by looking at the Fourier spectrum of a model. For example, many “kernel methods”, such as Support Vector Machines with a Gaussian or Laplacian kernel, enforce a decaying Fourier spectrum to regularise—to encourage training to find simple models that do not overfit the data. The Maximum Mean Discrepancy, a kernel-based cost function used to train implicit generative models, often compares two distributions mainly through the lower-order part of their Fourier spectra. And—this was the biggest surprise to us—deep learning seems to have an inherent spectral bias, fitting low-order Fourier coefficients before higher-order ones. In short, and in hindsight not surprisingly, machine learning often implicitly aims at designing a certain behaviour in the Fourier space of a model.
Or, in other words: machine learning textbooks would likely be full of mentions of Fourier transforms of models, if only those transforms weren’t so hard to compute directly.
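The regularising effect of a Gaussian kernel can be seen numerically: its Fourier transform is again a Gaussian, so high-frequency components are strongly suppressed. The following is a minimal sketch of this standard fact; the grid size and bandwidth are arbitrary choices of ours.

```python
import numpy as np

# The Fourier transform of a Gaussian kernel is itself a Gaussian,
# so the kernel's spectrum decays rapidly with frequency. A kernel
# method built on it therefore penalises high-frequency (wiggly)
# components of the model.
x = np.linspace(-10, 10, 512)
sigma = 1.0
kernel = np.exp(-x**2 / (2 * sigma**2))

spectrum = np.abs(np.fft.rfft(kernel))  # magnitudes at increasing frequencies

# Low frequencies dominate; the magnitudes fall off fast.
print(spectrum[0] > spectrum[10] > spectrum[50])  # True
```

A Laplacian kernel behaves similarly, with a polynomially rather than super-exponentially decaying spectrum.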
This was clearly enough to excite us. We had, after all, a candidate answer to our most burning question of why quantum computers could be useful for machine learning—to unlock spectral methods. But, of course, exploiting these insights is not so easy. First of all, if a lot of methods in machine learning implicitly design models in Fourier space, it means that we have powerful classical competition. We need to find situations in which these indirect methods fail, things that people tried and gave up on, strategies that might only work with access to quantum hardware. And since there is a lack of large-scale benchmarks to compare methods, we need to rely on our intuition—what we read between the lines in papers and conclude from limited simulations.
Secondly, quantum access is very particular. For example, we quickly realised that a QFT does not manipulate the Fourier spectrum of a generative machine learning model implemented as a quantum state, but rather its amplitudes. This turned out to be a subtle difference that we have found both useful and annoying at times.
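A toy NumPy calculation illustrates the difference, with the unitary discrete Fourier transform standing in for the QFT (an assumption of this sketch): the distribution obtained by sampling after a QFT is not the Fourier spectrum of the original distribution.

```python
import numpy as np

# A QFT transforms a state's *amplitudes*; but sampling probabilities
# are |amplitude|**2, so the distribution after the QFT is not the
# Fourier spectrum of the distribution one started with.
rng = np.random.default_rng(1)

amplitudes = rng.normal(size=8)
amplitudes /= np.linalg.norm(amplitudes)

# Distribution after a QFT (modelled by a unitary discrete FT):
qft_amps = np.fft.fft(amplitudes) / np.sqrt(8)
probs_after_qft = np.abs(qft_amps) ** 2

# Fourier spectrum of the original distribution itself:
probs = amplitudes ** 2
spectrum_of_probs = np.abs(np.fft.fft(probs)) ** 2
spectrum_of_probs /= spectrum_of_probs.sum()

print(np.allclose(probs_after_qft, spectrum_of_probs))  # False
```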
And thirdly, quantum algorithms are not a silver bullet. For example, taking a quantum state and bandlimiting its Fourier spectrum, or implementing stochastic processes in Fourier space, sometimes requires highly non-unitary operations, with the usual price to pay. We therefore have to make sure that our ideas remain feasible to implement on fault-tolerant quantum hardware, which adds another point to the long list of constraints.
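Why bandlimiting is non-unitary can be seen in a few lines: zeroing Fourier coefficients shrinks the norm of a normalised state, and unitaries preserve norms. Again a NumPy sketch of ours, with the unitary discrete Fourier transform standing in for the QFT:

```python
import numpy as np

# Bandlimiting a state's spectrum (zeroing high-frequency components)
# shrinks its norm, so it cannot be unitary; it is a projection, which
# on a quantum computer must be realised probabilistically or
# approximately, at a cost.
rng = np.random.default_rng(2)

state = rng.normal(size=16)
state /= np.linalg.norm(state)

coeffs = np.fft.fft(state) / np.sqrt(16)  # unitary FT: norm preserved
coeffs[4:-3] = 0.0                        # keep only low frequencies
bandlimited = np.fft.ifft(coeffs) * np.sqrt(16)

print(np.linalg.norm(bandlimited) < 1.0)  # True: norm has shrunk
```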
Our article gives some more detail on the technical background, as well as the lessons on potential and pitfalls that we have gathered while approaching quantum machine learning from the perspective of spectral methods (watch this space for upcoming work!). And this has already changed the way we conduct research: Having a tentative answer for why quantum computers should help to build good machine learning models—to design simplicity biases with the help of Fourier space—provides a design principle, or working hypothesis, that can guide us. If a model we cooked up does not work, we can go back to the reason why we believed it should work in the first place and find out what went wrong. Designing “good” quantum machine learning models has become much more than showing our colleagues that in some contrived case it reduces to a known classical hardness proof, or that it cannot be dequantised. Good models are, at least for us, those that realise the principles of learning in a manner that is natural for quantum computers. Essentially, this means that they find the delicate sweet spot between complexity and simplicity: the place where a model is as complex as necessary, but as simple as possible.
Fourier space might be a good place to find this sweet spot. And, it is likely to be found somewhere in a gray zone where both classical hardness proofs and dequantisation methods fail—a place that quantum machine learning hasn’t spent much thought on yet.
About the authors
Vasilis Belis
QML research at Xanadu
Joseph Bowles
Quantum Machine Learning researcher at Xanadu
Rishabh Gupta
Quantum Scientist @ Xanadu | Quantum Machine Learning | Quantum Finance
Evan Peters
QML research at Xanadu
Maria Schuld
Dedicated to making quantum machine learning a reality one day.