COHERENT COMPUTING: Your Gateway to the Quantum Age
https://coherentcomputing.com

How many qubits does a machine learning problem require?
https://coherentcomputing.com/how-many-qubits-does-a-machine-learning-problem-require/
Sun, 31 Aug 2025 10:10:08 +0000

What if you could examine a dataset and quickly calculate how many qubits you’d need to model it on a quantum computer? For quantum machine learning, that kind of clarity has been missing. As a result, researchers and companies have been left guessing about when quantum models will truly beat classical ones.

We know of only a few cases where quantum advantage is proven, Shor’s algorithm for factoring integers being the classic example. In most other cases, especially in machine learning, the situation is fuzzier: speedups are anticipated, but it’s not clear where the boundary lies between classical and quantum relevance. This matters most in areas like biology, where the data is huge, complex, and deeply interconnected, exactly the kind of setting where quantum resources should shine.

That’s the gap we set out to close. In our latest paper, “How many qubits does a machine learning problem require?”, we introduce the first framework that can tell you how many qubits are needed for a quantum model to reach a desired accuracy on a dataset of interest.

We find that medium-sized classical machine learning problems from OpenML require about 20 logical qubits to reach 100% training and testing accuracy – something within the reach of today’s classical simulators – suggesting no quantum advantage. However, when applied to subsets of the Tahoe-100M dataset, a transcriptomic dataset with 100 million samples, the required qubits quickly exceed the practical limit for classical simulation. But they are still well within the reach of the quantum computers that are expected to be viable in the next 5 years! These results suggest that some of the largest, most information-rich problems in biology and beyond may be prime candidates for quantum machine learning.

As a further technical result, we find that the number of qubits required to reach 100% train and test accuracy did not necessarily increase with the number of features of the Tahoe-100M subsets. The key lies in the bit-bit encoding technique we introduced earlier this year, which efficiently compresses classical data into a fixed qubit budget while preserving the information most relevant for learning.

In conclusion, by offering a concrete way to connect datasets to quantum hardware requirements, we mark a turning point in the field of quantum machine learning. Instead of speculating about where quantum advantage may appear, researchers can now calculate it. For companies exploring AI and quantum technology, that means clearer roadmaps, smarter allocation of resources, and a sharper sense of when quantum computing will make a meaningful impact.

To learn more about this research, check out the paper: https://arxiv.org/pdf/2508.20992.

Stay tuned for a convenient online tool to calculate whether or not your own datasets are likely to benefit from quantum advantage! If you would like alpha access to our tool or want to learn more about our other offerings, please feel free to reach out to us at [email protected].

A Hierarchy of Contexts
https://coherentcomputing.com/a-hierarchy-of-contexts/
Wed, 07 May 2025 08:42:09 +0000

In this post, we describe an important aspect of constructing and compiling complex quantum operators—specifically, how large operators built from many smaller pieces can be represented and resolved.

Let’s say we want to create a unitary operator acting on a large Hilbert space. This operator isn’t monolithic—it’s built from smaller sub-operators, each of which may act on its own local Hilbert space. Some might act on just a few qubits, others on the full space. And each of those sub-operators may, in turn, be composed of even smaller building blocks. As a result, the overall operator can be naturally organized as a tree, with the highest-level operator at the root and more granular components forming the branches and leaves.

In the diagram, time flows from left to right. That is, an operator placed to the left of another in this structure will be applied earlier. Up to this point, we haven’t mentioned quantum circuits explicitly—although circuits could be one way to implement such a structure, the construction we’re describing is more general.

Now consider adding an operator at the lowest level of the tree, say to the branch all the way on the left.

In quantum computing, a unitary operator can be defined concretely using a sequence of low-level gates, similar in spirit to a classical list of instructions. But here’s the key difference: in quantum mechanics, the same operator can be executed in a variety of contexts, which change how it relates to the overall operator. For example:

  • The operator might be executed in its inverse form.
  • It might be controlled on the state of another qubit.
  • It could appear inside a compute–uncompute block.
  • Or it might occur within a Hadamard test, where it plays a role in an interference-based measurement.

Crucially, we don’t want to define a new operator for each such context. Instead, we define the operator once and annotate the tree with the context in which it should be interpreted. Since the context is not intrinsic to the operator itself—but rather to how it is being used—it makes sense to store that context in the parent node of the operator, not within the operator’s own definition.
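As a rough illustration of this design, here is a minimal Python sketch of an operator tree in which each parent node records the contexts under which its children are to be interpreted. All class and context names here are hypothetical, not our actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class OperatorNode:
    """A node in the operator tree. Children are applied left to right
    (earlier children act first). The `contexts` dict stores, per child,
    the contexts under which that child is interpreted -- the context
    lives in the parent, not in the child's own definition."""
    name: str
    children: list = field(default_factory=list)
    contexts: dict = field(default_factory=dict)  # child name -> context tags

    def add_child(self, child, contexts=()):
        self.children.append(child)
        self.contexts[child.name] = list(contexts)

# A small tree: the root applies a state preparation, then a
# controlled inverse of some sub-operator B. B itself is defined
# only once; its contexts are annotations on the parent.
root = OperatorNode("root")
prep = OperatorNode("prepare")
b = OperatorNode("B")
root.add_child(prep)
root.add_child(b, contexts=["inverse", "controlled"])
```

Because `B` is defined once and annotated at the point of use, the same definition can appear elsewhere in the tree under a different set of contexts without duplication.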

And here’s where things get especially interesting: these contexts can interact with each other, and these interactions can be exploited for compilation. For example, when an operator is both controlled and wrapped in a compute–uncompute structure, the control only needs to act on the inner operation: controlling V U V† is equivalent to applying V, the controlled version of U, and then V†, which can substantially reduce the cost of the controlled operation.
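One interaction of this kind can be verified numerically: controlling a compute–uncompute block V U V† is equivalent to controlling only the inner U, leaving the compute and uncompute steps uncontrolled. A small NumPy check (helper names are illustrative; the control is taken as the leftmost qubit):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    # The Q factor of a QR decomposition of a random complex matrix is unitary.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

def controlled(u):
    # |0><0| (x) I + |1><1| (x) U, with the control as the leftmost qubit.
    n = u.shape[0]
    return np.kron(np.diag([1.0, 0.0]), np.eye(n)) + np.kron(np.diag([0.0, 1.0]), u)

U = random_unitary(4)  # inner operator on two qubits
V = random_unitary(4)  # the "compute" step

lhs = controlled(V @ U @ V.conj().T)  # control the whole compute-uncompute block
rhs = np.kron(np.eye(2), V) @ controlled(U) @ np.kron(np.eye(2), V.conj().T)
assert np.allclose(lhs, rhs)  # the control need only wrap the inner U
```

Resolving this identity at compile time replaces one large controlled block with a single controlled sub-operation, which is typically far cheaper in gate count.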

Thus, once we’ve constructed the overall operator tree, the first step in compilation is to resolve the hierarchy of contexts. This means systematically combining and simplifying the contexts so that the final operator realizes the correct unitary evolution in the most efficient manner.

This approach opens up a rich set of questions. Is there a finite set of physically meaningful contexts? Can their interactions be classified or composed in some principled way? Understanding these possibilities could guide future research and help define the semantics of high-level quantum programming.

Coherent Computing is pioneering high-performance and user-friendly software platforms that unlock the full potential of quantum hardware. Get in touch with us to learn more.

Quantum Software for Accelerating Application Design
https://coherentcomputing.com/quantum-software-for-accelerating-application-design/
Tue, 06 May 2025 10:00:43 +0000

In this post, we’ll walk through our architectural approach — a structure designed to make quantum computing more accessible, modular, and scalable.

Building for the Quantum User

At the top of the stack, we find the hardware-independent layers. These are the tools and abstractions that allow users — scientists, engineers, and domain experts — to design quantum solutions without worrying about the quirks of any particular device.

  • APIs for Domain-Specific Integration: These APIs allow quantum capabilities to be embedded directly into workflows like finance, pharmaceuticals, logistics, and materials science, without requiring end users to have deep quantum expertise.
  • Quantum Application Libraries: Beneath the APIs are rich libraries containing pre-built algorithms and application templates, streamlining tasks like optimization, simulation, and machine learning.
  • Quantum Structures: This layer provides quantum abstractions that enable developers to reason about quantum systems and construct sophisticated algorithms.
    • Q operations, specified as Hamiltonians, oracles, tensor networks, rotations etc.
    • Q data, which represent quantum states with support on abstract Hilbert spaces, defined explicitly, or through descriptions that capture entanglement structure (such as a collection of Bell pairs or a matrix product state), or implicitly through the application of a unitary to a computational basis state
    • Q logic, such as compute/uncompute patterns, controlled operations, execution conditions based on symmetries
  • High-Level Quantum Circuits: At this layer, quantum programs are described as circuits involving relatively small numbers of qubits (~ 10). These circuits serve as a concrete description of a quantum algorithm before it is adapted to specific hardware constraints.

Bridging to Hardware

At a certain point, abstraction must give way to physical reality. This happens at the hardware-specific layers.

  • Hardware-Aware Quantum Circuits: These circuits have been compiled to ‘native’ operations that are directly executable on a target quantum device. The compilation accounts for the quirks and constraints of specific quantum devices — such as connectivity limits, error rates, or available gate sets. They are optimized to ensure that programs remain efficient and robust when mapped onto actual hardware. This stage also incorporates error detection, mitigation and correction methods.
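To make one of these compilation steps concrete, here is a hedged sketch of SWAP routing on a device with linear connectivity, one of the connectivity-limit adaptations mentioned above. The function name and gate tuple format are illustrative only, not our API.

```python
def route_cnot(control, target, coupling):
    """Insert SWAPs along the line until the control sits next to the target,
    apply the CNOT, then undo the SWAPs to restore the qubit layout."""
    swaps = []
    c = control
    while (c, target) not in coupling and (target, c) not in coupling:
        step = c + 1 if c < target else c - 1
        swaps.append(("swap", c, step))  # move the control one site closer
        c = step
    return swaps + [("cnot", c, target)] + swaps[::-1]

line = {(i, i + 1) for i in range(4)}  # 5 qubits on a line: 0-1-2-3-4
# route_cnot(0, 3, line) ->
# [('swap', 0, 1), ('swap', 1, 2), ('cnot', 2, 3), ('swap', 1, 2), ('swap', 0, 1)]
```

A real compiler would also weigh error rates and gate-set decompositions when choosing such routes; this sketch shows only the connectivity constraint.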
  • Quantum Hardware APIs: Finally, at the bottom of the stack, we interact directly with the quantum processors themselves. Coherent Computing’s APIs provide a standardized way to send instructions to a diverse range of quantum devices, ensuring portability and flexibility.

This layered structure serves a critical purpose: separation of concerns. By carefully dividing the software into hardware-independent and hardware-specific components, Coherent Computing aims to enable rapid development, broader accessibility, and easier hardware innovation. Users should be able to build complex quantum workflows without ever needing to manage the deep technical challenges of working with quantum devices. At the same time, hardware specialists can focus on improving qubit technologies without constantly rewriting higher-level software.

In short, it’s a design that acknowledges the unique challenges of quantum computing while opening the door to real-world adoption.

Coherent Computing is pioneering software platforms that unlock the full potential of quantum hardware.

Get in touch with us to learn more.

Foundations for Scalable Quantum Machine Learning
https://coherentcomputing.com/foundations-for-scalable-quantum-machine-learning/
Mon, 05 May 2025 10:00:18 +0000

In this post, we will walk through this recent paper that outlines a strategy for training large quantum models for the problem of classification. In this problem, we are given a labeled dataset — for example, images of handwritten digits where each image is paired with a label indicating which digit it represents. The task is to learn a model that, given a new image as input, can accurately predict the correct label as output.

There are three major criteria any scalable quantum model must satisfy:

  1. Universal Approximation:
The architecture of the model should be such that, as the number of qubits and quantum gates increases, the function realized by the model systematically approaches the ideal function describing the relationship between the inputs and outputs.
  2. Trainability:
We need a reliable method to start training and converge the model — ideally one that does not rely on fragile, hyperparameter-sensitive optimizers.
  3. Hardware and Simulator Utility:
    Ideally, current quantum simulators and early quantum hardware should already be useful stepping stones for this path.

The paper develops techniques that directly address all of the above criteria for classification, and demonstrates them on the MNIST dataset.

1. Bit-Bit Encoding: Maximizing Expressivity with Minimal Resources

Traditionally, quantum machine learning has struggled with the “data loading problem” — how to efficiently represent classical data with quantum states. Loading real-world datasets (like MNIST images) into quantum circuits without losing key information has seemed impractical.

This work introduces bit-bit encoding, a simple yet powerful idea:

  • Compress classical data down to the most predictive bits via efficient classical preprocessing.
  • Encode both inputs and outputs directly as binary strings.

Critically, this method ensures universal approximation: any function between the input bits and output bits can be represented by a unitary operation. Unlike amplitude or angle encoding — where expressivity is limited — bit-bit encoding allows the model to learn any relationship as quantum gate depth and qubit count increase.
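To see why, note that any function from input bits to output bits can be realized as a permutation unitary acting as |x>|y> -> |x>|y XOR f(x)>, since XOR-ing the output register is reversible. A small NumPy sketch of this universality argument (illustrative; not the paper's specific construction):

```python
import numpy as np

def bitbit_unitary(f, n_in, n_out):
    """Permutation unitary |x>|y> -> |x>|y XOR f(x)> realizing an arbitrary
    boolean function f as a reversible, hence unitary, operation."""
    dim = 2 ** (n_in + n_out)
    u = np.zeros((dim, dim))
    for x in range(2 ** n_in):
        for y in range(2 ** n_out):
            src = (x << n_out) | y
            dst = (x << n_out) | (y ^ f(x))
            u[dst, src] = 1.0
    return u

# Example target function: the parity of a 2-bit input.
f = lambda x: bin(x).count("1") % 2
U = bitbit_unitary(f, n_in=2, n_out=1)
assert np.allclose(U @ U.T, np.eye(U.shape[0]))  # permutation matrix is unitary
# Preparing |x>|0> and applying U puts f(x) on the output qubit:
for x in range(4):
    state = np.zeros(8); state[x << 1] = 1.0
    out = U @ state
    assert out[(x << 1) | f(x)] == 1.0
```

The model's task is then to approximate this permutation unitary with a trainable circuit of growing depth, which is exactly the universal approximation property claimed above.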

It flips what one might intuitively assume: sparse data loading, rather than dense loading, is what enables better expressivity in quantum models.

2. Optimizer-Free Training: Guaranteed Convergence Without Gradients or Hyperparameters

Another major obstacle has been trainability. Classical machine learning relies heavily on gradient-based optimizers like Adam or SGD, which require careful tuning of learning rates and other hyperparameters. Worse, these optimizers can get stuck at saddle points, which occur frequently in high-dimensional optimization.

Here, we propose a radical shift: exact coordinate updates. Instead of guessing step sizes or tuning learning rates:

  • We analytically compute the exact minimum for each parameter update.
  • Only two or three measurements per update are needed, matching the sample complexity of gradient methods.
  • No hyperparameters like learning rate need to be set.
  • Guaranteed convergence to a local minimum, something classical methods can’t promise!

Moreover, this method naturally avoids getting stuck at saddle points — an advantage that becomes more critical as models grow, since large models are far more likely to stall at saddle points than at bad local minima.
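As an illustration of exact coordinate updates: the loss of a circuit in a single Pauli-rotation angle has the sinusoidal form L(t) = A sin(t + phi) + C, so three evaluations pin down the sinusoid and the minimizing angle in closed form. The sketch below uses this well-known fact, in the style of Rotosolve-type updates; the paper's own rule may differ in detail.

```python
import math

def exact_coordinate_update(loss, theta):
    """Jump straight to the minimizing angle of a sinusoidal loss
    L(t) = A*sin(t + phi) + C using exactly three evaluations."""
    l0 = loss(theta)
    lp = loss(theta + math.pi / 2)
    lm = loss(theta - math.pi / 2)
    # 2*l0 - lp - lm = 2A*sin(theta + phi); lp - lm = 2A*cos(theta + phi)
    return theta - math.pi / 2 - math.atan2(2 * l0 - lp - lm, lp - lm)

# Toy loss with a known minimum value of C - A (for A > 0):
A, phi, C = 1.7, 0.3, 0.5
loss = lambda t: A * math.sin(t + phi) + C
theta_star = exact_coordinate_update(loss, theta=2.0)
assert abs(loss(theta_star) - (C - A)) < 1e-9  # exact minimum in one step
```

No learning rate appears anywhere: the update is exact for each coordinate, which is the source of the convergence guarantee discussed above.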

3. Sub-Net Initialization: Overcoming Barren Plateaus

Large variational quantum circuits often suffer from barren plateaus on initialization — regions where gradients vanish and training grinds to a halt.

The solution here is sub-net initialization:

  • Train smaller quantum models first (with fewer qubits and parameters).
  • Use the trained parameters to initialize larger models incrementally.
  • Initialize newly added nodes connected to the trained sub-network as identity gates.

This avoids random initialization that causes barren plateaus and creates a “ladder” where each rung builds smoothly on the last. It’s like growing a deep neural network layer-by-layer — but with exact parameter transfer, something classical ML can’t easily replicate.

It also enables a powerful fail-fast strategy: If a small model gets stuck in a poor local minimum, retrain it (cheaply) before scaling up, rather than wasting time on huge models stuck in bad basins.
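The identity-initialization trick can be seen in a toy one-qubit example: appending new gates at angle zero leaves the model's function unchanged, so the enlarged model starts exactly where the trained sub-net left off. This is illustrative only; the paper's entanglement-net architecture is multi-qubit.

```python
import numpy as np

def rx(theta):
    """Single-qubit X rotation; rx(0) is the identity."""
    c, s = np.cos(theta / 2), -1j * np.sin(theta / 2)
    return np.array([[c, s], [s, c]])

def model(angles):
    """Toy one-qubit 'network': a chain of RX gates applied in order."""
    u = np.eye(2)
    for a in angles:
        u = rx(a) @ u
    return u

small = [0.7, -1.2]         # parameters of a trained small model
large = small + [0.0, 0.0]  # sub-net init: new gates start as the identity
assert np.allclose(model(small), model(large))  # same function at step zero
```

Training the large model then starts from the small model's loss value rather than from a random point, which is what sidesteps the barren-plateau initialization problem.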

Putting It All Together: Training Large Quantum Models

Using these three techniques, we demonstrated scalable quantum learning on the MNIST dataset (digits 0–3 and 0–9), scaling from 4 to 16 qubits on quantum simulators. We designed an ‘entanglement net’ architecture, which consists of entangling nodes between all possible pairs of qubits together with a funnel-like structure that systematically concentrates the relevant information until it is measured. This architecture can encode arbitrary correlations and can approximate any function between the inputs and outputs as the depth increases, thus satisfying the universal approximation property.

The paper finds that

  • Loss consistently decreased with the number of qubits.
  • Optimizer-free training successfully converged across different seeds.
  • Sub-net initialization outperformed random initialization in building larger models.

Most importantly, this framework gives near-term quantum computers — even ones that cannot yet beat classical systems at machine learning but are already beyond classical simulation capabilities — a crucial role: training smaller sub-models today prepares better initializations for bigger quantum models tomorrow. Thus, every generation of quantum hardware becomes a stepping stone toward quantum advantage.

Final Thoughts

This paper is a major step toward building practical, scalable quantum machine learning models.

It offers:

  • An information-theoretically motivated encoding scheme (bit-bit).
  • A training method (exact coordinate updates) that beats classical methods in convergence guarantees.
  • A strategy (sub-net initialization) to systematically scale without falling prey to barren plateaus.

Unlike many approaches that mimic classical machine learning, these methods embrace quantum-native thinking: sparsity in data encoding, strongly entangling unitaries, a training strategy that eschews classical optimizers and instead exploits the functional form of the quantum loss, and sub-net initialization enabled by adding entangling nodes to the entanglement net as identity operators. In contrast to earlier studies, it demonstrates that quantum models can be extremely expressive and trainable at the same time. And importantly, it shows that neither QRAM (quantum random access memory) nor multiple data re-uploadings are needed to obtain a model with the universal approximation property.

Coherent Computing is pioneering high-performance and user-friendly software platforms that unlock the full potential of quantum hardware.

Get in touch with us to learn more.
