
Towards quantum advantage with efficient graph implementations

Published by Henrique Silvério, April 11, 2022

From molecules to quantitative finance, PASQAL’s quantum computers put a fundamental machine learning concept to work, giving a taste of what quantum advantage will look like.

Some of the world’s most complex structures and data can be encoded as graphs: nodes linked in a network. From molecules to quantitative finance problems, engineers and scientists have used the mathematics of graphs for decades, but they have struggled to find computing solutions that handle graph problems efficiently.

This article explains how PASQAL’s quantum processors are naturally suited to deal with such problems. Two experiments have been run on PASQAL’s first-generation 100-qubit quantum processors. The results clearly demonstrate the relevance of applying graph methods on PASQAL’s quantum processors, with computational benefits for chemistry and the financial sector.

Some of the world’s most interesting data is relational and can be encoded in graphs: nodes and links in a network, financial indicators, atoms in a molecular diagram, and so on. Graph structures can be rich sources of information, allowing us to uncover hot spots in a network, find clusters in a dataset, or infer function from structure in chemical compounds.

Many industry-relevant questions can be stated in terms of graph problems. These problems are often easy to state, but extremely hard to solve. Core questions in the study of graphs, like “what is the best way of cutting this graph in two?” or “how similar are these two graphs?”, are tremendously challenging to answer in a scalable fashion.

PASQAL’s quantum processors are naturally suited to deal with such problems. Our quantum computers use neutral atoms as qubits. Each atom is trapped using a focused laser beam known as an optical tweezer. Using these tweezers, individual atoms can be positioned into 2D and 3D arrays in a split second. Then, by means of carefully tuned lasers, we can activate interactions between the atoms. In this way, we can encode an abstract graph into a physical graph, where the atoms (the blue circles in fluorescence pictures of real atom arrays) stand for nodes and the interactions stand for edges (Fig. 1).
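To make the encoding concrete, here is a minimal classical sketch of the idea: placing the atoms turns an abstract graph into a “unit-disk” interaction graph, where two atoms share an edge whenever they sit within the blockade radius of each other. The coordinates and radius below are illustrative values, not hardware parameters.

```python
import itertools
import math

import networkx as nx

def physical_graph(coords, blockade_radius):
    """Build the interaction graph induced by atom positions."""
    g = nx.Graph()
    g.add_nodes_from(range(len(coords)))
    for i, j in itertools.combinations(range(len(coords)), 2):
        # Two atoms interact (share an edge) when closer than the radius.
        if math.dist(coords[i], coords[j]) < blockade_radius:
            g.add_edge(i, j)
    return g

# Four atoms on a square: only nearest neighbours fall within the radius.
coords = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
print(sorted(physical_graph(coords, blockade_radius=7.0).edges()))
# -> [(0, 1), (0, 3), (1, 2), (2, 3)]
```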

In this post, we tell you how we are making fast progress towards turning this feature into quantum advantage for real-world applications.

Anticipating risk deterioration

The financial industry routinely faces challenging computational problems, such as pricing financial instruments, optimizing portfolios, or estimating risk. Financial institutions must be able to perform these computations quickly and with great accuracy to remain competitive. This is why banks and other financial companies keep an eye on quantum computing as a critical tool for their future success.

“We are playing an active role in this revolution by working with one of the world’s leading financial institutions—Crédit Agricole CIB. We are collaborating on industry-relevant problems where quantum computers can be fast and accurate. Importantly, we emphasize the use of real quantum hardware, not just simulations. We know that the field will only advance if we learn how to grapple with the full intricacy of real quantum processors,” says Ali El Hamidi, Deputy Head of Capital Markets Funding at Crédit Agricole CIB. “Our partnership with PASQAL is great. The complementarity of the goals and the skills of the two companies makes this journey towards pushing the boundaries further than what we have experienced in the past a real pleasure. We are thrilled to contribute to the progress of our field and to seize the outcome for the benefit of our clients, our shareholders and our employees.”

One of the exciting problems we are exploring with our partners at Crédit Agricole CIB is that of anticipating the risk deterioration of a financial exposure. This isn’t a theoretical discussion but a pressing question in finance: the stakes are central to the bank’s risk management, solvency, and profitability. Being able to anticipate adverse portfolio events helps the bank reduce its losses and control its cost of capital, making it stronger and more competitive.

The question is: how can we spot forthcoming risk deterioration?

The problem here is that the data does not come prelabelled. The machine only sees a cloud of data points (corresponding to counterparties potentially at risk) in a high-dimensional space and must learn the categories by itself. Interestingly, this question can be turned into a weighted graph problem, where the weights correspond to the distances between data points. Splitting the dataset translates into finding the cut across the graph that maximizes the sum of the weights of the edges crossed (Fig. 2). This is the notoriously hard, NP-complete Maximum Cut problem.
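For intuition, here is a brute-force sketch of Maximum Cut on a toy weighted graph (the weights are made up for illustration). Exhaustive search scores every bipartition and scales as 2^n, which is exactly why industrial-size instances are out of reach for this approach.

```python
from itertools import product

# Toy weighted graph as (node_i, node_j, weight) triples; the values are
# illustrative, not real financial data.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 1), (3, 0, 1), (0, 2, 2)]
n = 4

def cut_value(assignment):
    """Total weight of edges whose endpoints land on opposite sides."""
    return sum(w for i, j, w in edges if assignment[i] != assignment[j])

# Exhaustive search over all 2^n bipartitions: fine for n = 4,
# hopeless at industrial sizes.
best = max(product([0, 1], repeat=n), key=cut_value)
print(best, cut_value(best))  # -> (0, 1, 0, 1) 7
```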

It seems that we got ourselves into a conundrum. However, this is a graph problem and our processors display a natural affinity for such questions. Indeed, we discovered an efficient way to encode the problem in our processors, such that maximizing the cut value corresponds to minimizing the energy stored in the physical graph.  
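The link between the two formulations can be seen classically. With spins s_i = ±1 marking the two sides of a cut, each edge contributes w_ij · s_i · s_j to an Ising energy, so the cut value equals (W_total - E)/2 and maximizing the cut is the same as minimizing the energy. The sketch below checks this identity on the toy graph from above; the actual pulse-level encoding on our hardware is, of course, more involved.

```python
# Spins s_i = +1/-1 mark the two sides of the cut. Each edge contributes
# w_ij * s_i * s_j: -w_ij when the edge is cut, +w_ij when it is not.
# Hence cut(s) = (W_total - E(s)) / 2, so maximizing the cut minimizes E.
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 1), (3, 0, 1), (0, 2, 2)]
w_total = sum(w for _, _, w in edges)

def ising_energy(spins):
    return sum(w * spins[i] * spins[j] for i, j, w in edges)

spins = (1, -1, 1, -1)  # the optimal bipartition found above
cut = sum(w for i, j, w in edges if spins[i] != spins[j])
assert cut == (w_total - ising_energy(spins)) // 2
print(cut, ising_energy(spins))  # -> 7 -5
```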

Excellent results were achieved over multiple runs and more than 44,000 measurements on real neutral-atom arrays. We achieved a shorter time to solution than random search, and our scaling is very promising (Fig. 3): as the qubit count increases, the gap between random search and our technique should grow exponentially. Now it is time to test this scaling claim experimentally. To achieve this, we are automating our pipeline and pushing towards larger graphs with better atom and noise control.

“Our collaboration with PASQAL has been very fruitful. We are very excited to see what real quantum hardware can do for our business. We look forward to continuing experimenting with their processors as they scale up,” says Didier M’Tamon, Head of Portfolio Models at Crédit Agricole CIB.

Assessing toxicity from a molecular graph

Chemistry is another field where our unique ability to process graph data can give an edge to our customers. A molecular graph is an abstract representation of a molecule where atoms correspond to vertices and edges to chemical bonds. Molecular graphs are more than simple abstractions. Sometimes, we can uncover critical information about a molecule’s properties by carefully studying its graph structure. The idea of inferring chemistry from graph structure dates back to the work of the great mathematician Arthur Cayley, who used graph theory to enumerate alkane structural isomers in the 1870s.
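As a concrete illustration, here is how one might represent a small molecule as a graph in code (a sketch using the networkx library; the labelling is simplified and hydrogens are omitted):

```python
import networkx as nx

# Ethanol (CH3-CH2-OH) with hydrogens omitted: vertices carry the element,
# edges stand for chemical bonds.
ethanol = nx.Graph()
ethanol.add_nodes_from([(0, {"element": "C"}),
                        (1, {"element": "C"}),
                        (2, {"element": "O"})])
ethanol.add_edges_from([(0, 1), (1, 2)])  # C-C and C-O single bonds
print(ethanol.number_of_nodes(), ethanol.number_of_edges())  # -> 3 2
```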

Today’s computational techniques open the possibility of asking more nuanced questions about the relationship between graph structure and function. One might ask, for instance, whether it is possible to infer the toxicity of a given compound directly from its graph. The Predictive Toxicity Challenge (PTC) poses this question. The challenge consists in training a model to distinguish between the graphs of toxic and non-toxic molecules. The model must learn by parsing a database of molecular graphs labelled according to their carcinogenicity in mice and rats.

In order to classify graphs into different categories, we must first consider another fundamental problem: how can we measure the similarity between two graphs? One could count the number of nodes and edges, or the number of loops, or check whether one graph can be deformed into the other. This is feasible for a few simple molecules, but we need to scale the process to make real discoveries.

A popular approach consists of introducing a similarity function on the space of graphs. Such functions are known as graph kernels. There is no canonical way to decide which kernel to use for a given problem, but the problem structure imposes certain constraints. For PTC, for instance, one would like the graphs of two toxic molecules to be ‘closer’ to each other than to that of a non-toxic one. The art lies in finding a measure that is sensitive enough without being overkill: a kernel that focuses too much on minutiae will end up being too computationally demanding to scale.

Once a notion of similarity has been introduced, we can deploy machine learning techniques to classify graphs into different categories; for the case at hand, separating the graphs of toxic molecules from those of non-toxic ones. A popular method for achieving this is known as a support vector machine (SVM). An SVM’s job is to cut the space of graphs into two ‘halves’, one half for toxic molecules, the other for non-toxic ones. The learning process consists of the SVM refining the boundary separating these two halves as it parses the data. This is an interesting strategy, but much work remains to be done: commonly used graph kernels, like random walk kernels or graphlet subsampling, scale poorly.
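In code, this pattern is straightforward: scikit-learn’s SVC accepts a precomputed Gram matrix, so any graph kernel can drive the classifier. The kernel values and labels below are toy stand-ins, not real PTC data.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical kernel values k(G_i, G_j) for four molecular graphs:
# graphs 0 and 1 resemble each other, as do graphs 2 and 3.
gram = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.1, 0.2],
    [0.2, 0.1, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
labels = np.array([1, 1, 0, 0])  # 1 = toxic, 0 = non-toxic (illustrative)

clf = SVC(kernel="precomputed").fit(gram, labels)

# To classify a new graph, evaluate the kernel against every training graph.
k_new = np.array([[0.8, 0.7, 0.2, 0.1]])
print(clf.predict(k_new))  # -> [1]
```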

PASQAL’s quantum processors are naturally suited to deal with graph-encoded data, so we decided to explore the possibility of creating a new type of graph kernel.

Quantum evolution kernel

We can encode molecular graphs directly into the quantum register using optical tweezers and laser pulses. But we also need a graph kernel native to the platform. We build one as follows: each graph corresponds to a particular quantum state, which stores a different amount of energy, corresponding to the cost of generating the graph. Then we let the system evolve freely, except for carefully tuned laser pulses. At the end of the process, we measure the final energy of the state. We repeat this procedure several times and build an energy histogram for each graph (Fig. 4). Since each graph evolves somewhat differently, these histograms provide a sort of energy fingerprint for each graph.
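The classical post-processing behind the fingerprint is simple to sketch: repeated energy measurements are binned into a normalized histogram, one per graph. The “measurement” samples below are synthetic placeholders; on hardware they would come from the quantum processor.

```python
import numpy as np

def energy_histogram(samples, bins=30, value_range=(-5.0, 2.0)):
    """Bin repeated energy measurements into a probability distribution."""
    counts, _ = np.histogram(samples, bins=bins, range=value_range)
    return counts / counts.sum()

# Synthetic stand-ins for the measurement records of two different graphs.
rng = np.random.default_rng(seed=0)
hist_g1 = energy_histogram(rng.normal(loc=-2.0, scale=0.5, size=1000))
hist_g2 = energy_histogram(rng.normal(loc=-1.2, scale=0.8, size=1000))
```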

How does this help us? We have encoded molecular graphs in our quantum computer and associated a unique energy histogram with each. Thus, we have translated the question of measuring graph similarity into comparing two energy histograms. This is very convenient because there is well-developed mathematical machinery for doing so. We rely on the Jensen-Shannon divergence, commonly used to measure the distance between distributions in fields ranging from bioinformatics to quantitative history.
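Here is a sketch of that last step, under the assumption that the kernel takes the common form k(G, G′) = exp(-μ · JS(P, P′)), with JS the Jensen-Shannon divergence between the two energy histograms (one plausible construction; the exact formula used in our experiments may differ):

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Two toy energy histograms (already normalized), standing in for the
# fingerprints of two graphs built as above.
p = np.array([0.1, 0.4, 0.4, 0.1])
q = np.array([0.3, 0.3, 0.2, 0.2])

def qek_similarity(p, q, mu=1.0):
    """Kernel value between two normalized energy histograms."""
    js_div = jensenshannon(p, q) ** 2  # scipy returns the JS *distance*
    return float(np.exp(-mu * js_div))

print(qek_similarity(p, q))  # values near 1 mean very similar fingerprints
```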

And now we have a new graph kernel based on quantum evolution!

Quantum maps in the space of graphs

Once we have a graph kernel, we can train a support vector machine to learn from data how to classify graphs. We tested our new Quantum Evolution Kernel (QEK) against classical kernels on various graph datasets, using real quantum hardware! And our QEK showed results comparable to the best-in-class classical methods.

Furthermore, we found something much more intriguing. A graph kernel is a way of measuring how similar two graphs are to one another. In doing so, the kernel induces a notion of distance, which can be used to build maps of the space of graphs. We discovered that the QEK can be used to create maps that appear impossible to draw with classical methods.

We might have discovered a new way of charting the space of graphs. Now it is time to take these maps and explore. We’ll keep you posted on what we find, and we will publish a paper based on this experiment in the near future. Stay tuned!