# Decompositional QGNN

Published by Karg W Lee, November 23, 2022

Quantum Machine Learning is a field that aims to solve machine learning problems with quantum algorithms and quantum computing. A large variety of quantum algorithms is used to solve such problems, the most common being Quantum Neural Networks (QNNs). Varieties of QNNs include the Quantum Convolutional Neural Network (QCNN), the Quantum Generative Adversarial Network (QGAN), and the Quantum Graph Neural Network (QGNN). As the names suggest, these algorithms are inspired by their classical counterparts. The algorithm proposed in this paper, the Decompositional Quantum Graph Neural Network (DQGNN), is a hybrid quantum-classical algorithm for graph-structured data. It is based on the classical GNN, one of the most popular learning tools, especially for dealing with non-Euclidean data. The main differentiator between DQGNN and previous QGNN implementations is the decompositional aspect of the algorithm, which allows larger graph-structured data to be processed with a fixed-size quantum circuit with a limited number of qubits, regardless of the size of the data. Algorithms based on GNNs have found applications in social network analysis, drug design, combinatorial optimization, and multi-agent learning.

GNNs function on the basis of the neighborhood aggregation strategy, which updates the representation of a graph node by recursively aggregating the representations of its neighbors. A variety of GNNs have been realized so far, such as Relational Graph Convolutional Networks (R-GCN), Graph Isomorphism Networks (GIN), edGNN, Random Walk Graph Neural Networks (RW-GNN), and Factorizable Graph Convolutional Networks (Factor GCN). However, these approaches embed nodes as points in a Euclidean space, leading to significant structural distortion and information loss during computation. DQGNN, by contrast, is a hybrid quantum-classical algorithm for graph-structured data that focuses on effective and scalable learning.
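The neighborhood aggregation strategy can be illustrated with a minimal sketch. The graph, features, and choice of a mean aggregator below are illustrative assumptions, not the exact update rule of any of the GNNs named above:

```python
# One round of neighborhood aggregation with a mean aggregator (toy sketch).
def aggregate_step(adjacency, features):
    """Update each node's representation by averaging it with its neighbors'."""
    updated = {}
    for node, neighbors in adjacency.items():
        pooled = [features[node]] + [features[n] for n in neighbors]
        dim = len(features[node])
        updated[node] = [sum(vec[i] for vec in pooled) / len(pooled)
                         for i in range(dim)]
    return updated

# Tiny example graph: a path 0 - 1 - 2, with 2-dimensional node features.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
features = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}

features = aggregate_step(adjacency, features)
```

Stacking several such rounds lets information propagate beyond immediate neighbors, which is what a real GNN does with learned (rather than fixed mean) aggregation.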

DQGNN utilizes tensor products and unitary matrices to implement a quantum GNN, designing a quantum circuit that replaces the Euclidean weight matrices of the GNN with unitary matrices, i.e., quantum gates. Utilizing tensor products offers two advantages: i) it enlarges the node representation space exponentially, so that nodes with different features can be mapped to different representations, and ii) it requires significantly fewer model parameters than related deep learning models. DQGNN consists of the following processing steps:

1. A classical computer decomposes graphs into subgraphs.
2. The circuit of the mapping method is trained with node features. After training, the node features of the subgraphs are mapped into quantum states by applying the circuit.
3. The quantum circuit is implemented on a quantum device to compute the representations of the different subgraphs sequentially.
4. Finally, a classical computer computes the entropies of the individual representations and combines them.

All these steps are iterated to obtain graph representations for classification.
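The classical portions of this pipeline can be sketched as follows. All function bodies are hypothetical stand-ins: the subgraph enumeration, the amplitude encoding substituted for the paper's trained mapping circuit, and the entropy combination are assumptions for illustration only; step 3 (the quantum circuit itself) is omitted:

```python
import math
from itertools import combinations

def decompose(nodes, edges, size):
    """Step 1 (naive sketch): enumerate every `size`-node induced subgraph
    that contains at least one edge."""
    subs = []
    for group in combinations(nodes, size):
        sub_edges = [(u, v) for (u, v) in edges if u in group and v in group]
        if sub_edges:
            subs.append((group, sub_edges))
    return subs

def map_to_state(feature_vec):
    """Step 2 (stand-in): normalize features into amplitudes of a quantum
    state. The paper's trainable mapping is replaced by amplitude encoding."""
    norm = math.sqrt(sum(x * x for x in feature_vec))
    return [x / norm for x in feature_vec]

def entropy(probabilities):
    """Step 4: Shannon entropy of a measurement outcome distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Tiny example: a 4-node path graph decomposed into 3-node subgraphs.
subgraphs = decompose([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], 3)
state = map_to_state([3.0, 4.0])          # amplitudes [0.6, 0.8]
probs = [a * a for a in state]            # measurement probabilities
h = entropy(probs)
```

Because every subgraph has the same fixed size, a single quantum circuit with a fixed qubit count can process them one after another in step 3, which is the core of the decompositional strategy.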

Besides the proposed decompositional processing strategy, a trainable method is also proposed for mapping data from a Euclidean space to a Hilbert space, which can maintain distance relations and reduce information loss during the mapping.
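The idea of a distance-preserving map into a Hilbert space can be illustrated with a toy encoding. The per-feature angle encoding with trainable scaling parameters below is an assumption for illustration; the paper's trainable mapping circuit differs:

```python
import math

def encode(features, thetas):
    """Map each feature x_i to a qubit state [cos(t_i*x_i), sin(t_i*x_i)];
    the full state is the tensor product of the per-feature qubit states,
    so n features yield a 2**n-dimensional representation."""
    state = [1.0]
    for x, t in zip(features, thetas):
        qubit = [math.cos(t * x), math.sin(t * x)]
        state = [a * b for a in state for b in qubit]  # tensor product
    return state

def fidelity_distance(s1, s2):
    """Distance between states in Hilbert space: sqrt(1 - |<s1|s2>|^2)."""
    overlap = sum(a * b for a, b in zip(s1, s2))
    return math.sqrt(max(0.0, 1.0 - overlap * overlap))
```

Training would then adjust the scaling parameters `thetas` so that Euclidean distances between input feature vectors correlate with fidelity distances between the encoded states, which is one way to reduce information loss during the mapping.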

DQGNN is evaluated experimentally on six graph classification benchmarks against methods spanning graph kernels, deep learning, and quantum machine learning. Specifically, DQGNN is compared with recent baselines for graph classification: 1) kernel-based methods, namely the WL subtree kernel and the Subgraph Matching Kernel (CSM); 2) representative GCNs, including Deep Graph CNN (DGCNN) and Relational Graph Convolutional Networks (R-GCN); 3) representative GNNs, i.e., the Graph Isomorphism Network (GIN), edGNN, and RW-GNN. A derivative-free optimization method, UOBYQA, is utilized for training DQGNN. The quantum machine learning models are run on a classical computer using quantum simulators, with code based on Qiskit and TensorFlow Quantum.

The results demonstrate that DQGNN achieves the best results on 5 out of 6 benchmarks, with accuracy close to that of GNNs while requiring fewer parameters. Compared to DGCNN, the GNN baseline with the fewest parameters, DQGNN achieves similar performance with only 1.68% of the parameters of its classical counterpart. The performance is also compared with alternative quantum machine learning methods, namely the Quantum-enhanced Support Vector Machine (QSVM) and the Quantum Convolutional Neural Network (QCNN), using the graphlet count vector as input (Gra+QSVM, Gra+QCNN). Results show that Gra+QSVM, Gra+QCNN, and Gra+QCNN with the proposed trainable mapping method (Gra+QCNN w/M) demonstrate no improvement over DQGNN. However, compared to Gra+QCNN, Gra+QCNN w/M achieves higher accuracy, highlighting the effectiveness of the proposed mapping method. For example, on the PROTEINS dataset, the accuracy achieved by DQGNN is 9% higher than that of DQGNN without the mapping. This improvement can be attributed to less information loss in the former case.

DQGNN can be applied to any application involving graph-structured data and offers several advantages over contemporary algorithms. The main advantage lies in the ability to divide a graph into subgraphs and process each subgraph one at a time with the same quantum circuit. This strategy can alleviate the scalability issues that other algorithms may exhibit. Furthermore, it is speculated that the proposed mapping method can enhance the accuracy of alternative quantum machine learning methods as well, owing to reduced information loss.

DQGNN provides good insight into handling larger graph-structured data during the NISQ era of limited qubit counts, requiring fewer parameters while delivering performance similar to other contemporary quantum algorithms.