Shallow Circuits in Variational QML

Published 14th December 2021

A Case For Noisy Shallow Gate-Based Circuits In Quantum Machine Learning

Variational Quantum Machine Learning (VQML) models are built on variational principles: a parameterized quantum circuit runs on a quantum computer while its model parameters are optimized classically. These algorithms have performed well even in the presence of noise in hardware implementations, making them compatible with current Noisy Intermediate-Scale Quantum (NISQ) devices.
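As context, here is a minimal VQML sketch in Qiskit (assuming qiskit and qiskit-aer are installed): a circuit encodes classical features, applies a trainable rotation layer, and returns a measurement statistic as a class score. The ansatz, the encoding, and the model_output helper are illustrative assumptions, not the paper's exact model.

```python
# Minimal VQML sketch (illustrative; not the paper's exact ansatz).
# Classical features x are encoded as rotations, theta is a trainable
# layer, and the probability of measuring qubit 0 in |1> is the score.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector
from qiskit_aer import AerSimulator

n_qubits = 2
x = ParameterVector("x", n_qubits)          # input features
theta = ParameterVector("theta", n_qubits)  # trainable parameters

qc = QuantumCircuit(n_qubits)
for i in range(n_qubits):
    qc.ry(x[i], i)            # feature encoding
qc.cx(0, 1)                   # entangling gate
for i in range(n_qubits):
    qc.ry(theta[i], i)        # trainable layer
qc.measure_all()

def model_output(features, params, shots=1024, backend=None):
    """Class score in [0, 1]: probability that qubit 0 reads |1>."""
    backend = backend or AerSimulator()
    bound = qc.assign_parameters(
        dict(zip(list(x) + list(theta), list(features) + list(params)))
    )
    counts = backend.run(bound, shots=shots).result().get_counts()
    return sum(v for k, v in counts.items() if k[-1] == "1") / shots
```

The parameters theta are then tuned by a classical optimizer against a loss on labelled data, closing the hybrid quantum-classical loop.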

At the moment there are many useful approaches for implementing VQML algorithms, especially for solving machine learning tasks, but no clear ‘optimal’ approach to designing the circuit architectures has been identified. Some approaches use reinforcement learning or simulated annealing to design variational quantum circuits, but these techniques are sample-inefficient; they are also impractical when access to real devices is scarce, and they become expensive as the number of qubits grows. In addition, deep quantum circuits in principle have a higher capacity than shallower circuits for a fixed number of qubits, yet it is unclear how to exploit this advantage. The authors of this work attempt to establish a set of core design principles for VQML, focusing specifically on classification tasks. They also investigate how key architectural properties, noise in the system (which induces a range of errors in the model), and the number of qubits affect machine learning performance.

The work explores an ansatz built from fixed blocks, from which circuits are sampled and evaluated. Given a dataset and a (classical) optimizer, the following workflow was established to explore a large range of candidate circuits (a sketch of steps 3 and 4 is shown below):

1) Create a number of candidate circuits by randomly sampling the available design space, varying design parameters such as the number of qubits, the entanglement pattern, the number of blocks, and the configuration of blocks.
2) Compile these circuits into executable Qiskit circuits.
3) Derive a number of noise models to be applied to the circuits, based on the two decoherence times and the gate duration.
4) Compile the noise models into Qiskit-executable noise models.
5) Train all permutations of circuit designs and noise models and evaluate their performance.
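A sketch of steps 3 and 4, assuming Qiskit Aer and illustrative hardware numbers (the paper's actual T1/T2 and gate-duration values are not reproduced here): a thermal relaxation error is derived from the two decoherence times and a gate duration, then compiled into an executable NoiseModel.

```python
# Build a Qiskit-executable noise model from decoherence times (T1, T2)
# and gate durations via thermal relaxation (values are illustrative).
from qiskit_aer import AerSimulator
from qiskit_aer.noise import NoiseModel, ReadoutError, thermal_relaxation_error

t1, t2 = 50e3, 70e3        # decoherence times in ns (illustrative)
d_1q, d_2q = 50, 300       # 1- and 2-qubit gate durations in ns

err_1q = thermal_relaxation_error(t1, t2, d_1q)
err_2q = thermal_relaxation_error(t1, t2, d_2q).tensor(
    thermal_relaxation_error(t1, t2, d_2q))   # acts on both qubits of a CX

p_meas = 0.02              # illustrative measurement (readout) error rate
ro_err = ReadoutError([[1 - p_meas, p_meas], [p_meas, 1 - p_meas]])

noise_model = NoiseModel()
noise_model.add_all_qubit_quantum_error(err_1q, ["ry", "sx", "x"])
noise_model.add_all_qubit_quantum_error(err_2q, ["cx"])
noise_model.add_all_qubit_readout_error(ro_err)

noisy_backend = AerSimulator(noise_model=noise_model)  # used in step 5
```

Each circuit/noise-model permutation from step 5 can then be trained by running the circuit on the corresponding noisy backend.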

Design guidelines for VQML classifiers were established from experiments with a large number (n = 6500) of circuits applied to five common datasets. All VQML models (circuits) were trained using the COBYLA optimizer for a maximum of 200 epochs. The overall results suggest four design guidelines:

i) Circuits should be as shallow as possible, since shallow circuits are more robust to two-qubit-gate errors and generally have higher accuracy potential in both noisy and noise-free simulations.
ii) Circuits should be as wide as possible.
iii) The number of features should be kept small, which can be achieved with dimensionality reduction techniques such as PCA.
iv) The noise level should be small.

It is also observed that measurement error is only a significant factor when it is large, i.e. above 10%. A sketch combining guideline (iii) with this training setup follows below.
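This is a minimal sketch, reusing the hypothetical model_output helper from the first code block; the toy dataset, the squared loss, and treating maxiter=200 as a stand-in for the 200-epoch cap are all assumptions.

```python
# PCA keeps the feature count small (one feature per qubit), and the
# gradient-free COBYLA optimizer tunes the circuit parameters.
import numpy as np
from scipy.optimize import minimize
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 6))            # toy 6-feature dataset
y_train = (X_train[:, 0] > 0).astype(float)   # toy binary labels

X_red = PCA(n_components=2).fit_transform(X_train)  # 6 -> 2 features

def loss(params):
    preds = np.array([model_output(xi, params) for xi in X_red])
    return float(np.mean((preds - y_train) ** 2))  # squared loss on scores

result = minimize(loss, x0=rng.uniform(0, 2 * np.pi, size=2),
                  method="COBYLA", options={"maxiter": 200})
```

Passing the noisy backend sketched above into model_output instead of the default AerSimulator() gives a noisy-training variant of the same sketch.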

Since the results show that low-depth circuits exhibit larger variance overall, it is important to explore whether results improve as the number of qubits is increased further. The design space and the constraints of the proposed model also limit how many qubits can be leveraged, and an important direction for future work is to explore the trade-offs of relaxing these constraints. For instance, encoding each feature multiple times allows wider circuits to be implemented, but this could increase circuit depth and potentially variance. Further study of circuit designs could also examine the influence of the barren plateau problem. Moreover, a larger variety of noise models and noise sources, especially ones inspired by real hardware, should be studied to develop more noise-resilient circuits. Finally, as these results cover only five datasets, a natural next step would be to analyze classification problems with more classes or features by exploring a larger number of datasets and models.