- Installation
- Tutorials
- How-Tos
- Background
- How to Give Feedback
- Contribution Guidelines
- Acknowledgements
- References
- License
The quantum kernel training (QKT) toolkit enables users to leverage quantum kernels for machine learning tasks. It is aimed both at researchers interested in investigating quantum kernel training algorithms in their own research and at practitioners looking to explore and apply these algorithms to their machine learning applications.
We've designed this Python-based toolkit around new components in Qiskit Machine Learning that provide extensible kernel training capabilities, along with documentation to guide users and datasets and utilities for exploring and testing concepts.
This project evolved out of research at IBM Quantum on "Covariant Quantum Kernels for Data with Group Structure" [1].
Problem Statement
Given a labeled dataset, optimize a parametrized quantum kernel, according to a given loss function, for a machine learning task. For example, use quantum kernel alignment (QKA) as a loss function to iteratively adapt a quantum kernel to a classification dataset while converging to the maximum SVM margin.
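As a concrete (and purely classical) illustration of such a loss, kernel-target alignment measures how well a kernel matrix matches the ideal label kernel yyᵀ; the SVM-based QKA loss is a refinement of this idea. The RBF kernel, toy dataset, and function names below are illustrative stand-ins, not the toolkit's API:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel matrix
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def target_alignment(K, y):
    # Kernel-target alignment: <K, yy^T>_F / (||K||_F * ||yy^T||_F).
    # Higher alignment means K better matches the ideal label kernel.
    T = np.outer(y, y)
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

# Tiny two-cluster dataset with labels +1 / -1
X = np.array([[0.0], [0.1], [2.0], [2.1]])
y = np.array([1, 1, -1, -1])
K = rbf_kernel(X)
print(round(target_alignment(K, y), 3))
```

A well-matched kernel scores close to 1; permuting the labels so they no longer track the clusters drives the alignment toward 0.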
Why Does It Matter?
Kernel methods are widespread in machine learning applications. A kernel is a similarity measure between data encoded in a high-dimensional feature space and can be utilized, for instance, in classification tasks with support vector machines. It is known that quantum computers can be used to replace classical feature spaces by encoding data in a quantum-enhanced feature space. Using an algorithm called the quantum kernel estimator (QKE), one can compute quantum kernels with data provided classically [2]. A key observation of that work was that a computational advantage requires quantum circuits for the kernel that are hard to estimate classically. More recently, researchers proved that a quantum kernel can offer superpolynomial speedups over any classical learner on a particular learning problem based on the hardness of the discrete logarithm problem [3]. Furthermore, this particular kernel is contained in a kernel family, called covariant quantum kernels, that can be used for data with a group structure [1]. These results indicate that quantum kernels are an increasingly promising approach to machine learning problems.
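Each entry of a quantum kernel is the overlap (fidelity) |⟨ψ(x)|ψ(x′)⟩|² between the states that encode two data points. The sketch below mimics this with a toy single-qubit RY feature map simulated in NumPy; the feature map and function names are illustrative assumptions, nothing like the hard-to-estimate circuits discussed in [1,2]:

```python
import numpy as np

def feature_state(x):
    # Toy single-qubit feature map: |psi(x)> = RY(x)|0>
    # (an illustrative, easily simulable stand-in for a real feature map)
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1, x2):
    # A quantum kernel entry is the state overlap: |<psi(x1)|psi(x2)>|^2
    return abs(np.vdot(feature_state(x1), feature_state(x2))) ** 2

print(quantum_kernel(0.3, 0.3))    # identical inputs give fidelity 1
print(quantum_kernel(0.0, np.pi))  # orthogonal states give ~0 (up to float error)
```

On hardware, QKE estimates each such overlap from measurement statistics rather than from explicit statevectors.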
However, finding a good quantum kernel for a given dataset can be a challenging problem in practice. Sometimes structure in the data can inform this selection; other times, a kernel is chosen in an ad hoc manner. Quantum kernel alignment (QKA) is one approach to learning a quantum kernel on a dataset. This technique iteratively adapts a parametrized quantum kernel to have high similarity to a target kernel informed by the underlying data distribution, while converging to the maximum SVM margin [1,4-6]. Such an approach has connections to the performance of the machine learning model: QKA finds a quantum kernel, from a family of kernels, that yields the smallest upper bound on the generalization error. For data with an underlying group structure, covariant quantum kernels can be designed to exploit that structure. In this case, QKA provides a way to optimize the fiducial state of the quantum feature map on such a dataset. This toolkit provides examples of datasets with group structure and corresponding covariant quantum kernels. More information can be found in the background material and in Ref. [1].
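To make "iteratively adapt a parametrized quantum kernel" concrete, the sketch below tunes a single kernel parameter theta to maximize kernel-target alignment on a toy dataset. The cos² kernel, grid scan, and names are illustrative assumptions; the toolkit instead minimizes an SVM-based loss with a classical optimizer rather than a grid search:

```python
import numpy as np

def kernel_matrix(X, theta):
    # Toy trainable kernel k(x, x') = cos^2(theta * (x - x') / 2),
    # the overlap of single-qubit states RY(theta * x)|0> (illustrative)
    D = X[:, None] - X[None, :]
    return np.cos(theta * D / 2) ** 2

def alignment(K, y):
    # Kernel-target alignment against the ideal label kernel yy^T
    T = np.outer(y, y)
    return np.sum(K * T) / (np.linalg.norm(K) * np.linalg.norm(T))

X = np.array([0.0, 0.2, 1.5, 1.7])
y = np.array([1, 1, -1, -1])

# Crude "training" loop: scan theta and keep the value with the best
# alignment over the candidate kernel family.
thetas = np.linspace(0.1, 4.0, 200)
best = max(thetas, key=lambda t: alignment(kernel_matrix(X, t), y))
print(round(best, 2), round(alignment(kernel_matrix(X, best), y), 3))
```

The optimized theta separates the two clusters far better than an arbitrary initial value, which is exactly the effect QKA aims for with the fiducial-state parameters of a covariant quantum kernel.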
To enable future research on quantum kernel training algorithms, this toolkit is extensible to methods beyond QKA. More information about the design is provided in the next section.
Overall Architecture
The structure of the QKT Toolkit is illustrated in the diagram below. New components and features were integrated into Qiskit Machine Learning to enable training of quantum kernels. The QKT Toolkit is built on top of these integrations and includes local components such as datasets, feature maps, and documentation—all maintained with style, unit, and notebook tests.
New Integrations into Qiskit Machine Learning
- QuantumKernelTrainer: (New) Class to manage quantum kernel training for a given loss function and optimizer.
- QuantumKernel: Option to handle quantum kernels with trainable parameters.
- KernelLoss: (New) Base class to calculate the loss of quantum kernel functions over trainable parameters and input data.
- SVCLoss(KernelLoss): (New) Class to compute the loss corresponding to QKA for classification tasks.

This framework is extensible to other loss functions and optimizers and is compatible with Qiskit's existing kernel-based model interfaces (e.g., classification with QSVC and regression with QSVR).
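Once trained, a kernel reduces to a matrix of similarities that any kernel-based model can consume. As a minimal, library-free stand-in for such a consumer (not Qiskit's QSVC), here is a simple kernel "Parzen window" classifier; the RBF kernel and function names are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel between the rows of A and the rows of B
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T)
    return np.exp(-gamma * d2)

def parzen_predict(X_train, y_train, X_test, gamma=1.0):
    # Kernel "Parzen window" rule: sign of the label-weighted similarity
    # sum -- a minimal stand-in for an SVM consuming a precomputed kernel.
    K = rbf_kernel(X_test, X_train, gamma)
    return np.sign(K @ y_train)

X = np.array([[0.0], [0.2], [2.0], [2.2]])
y = np.array([1, 1, -1, -1])
print(parzen_predict(X, y, np.array([[0.1], [2.1]])))  # -> [ 1. -1.]
```

An SVM additionally learns per-sample weights and a margin, but it consumes the kernel matrix in exactly the same precomputed form.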
Datasets and Feature Maps
The QKT toolkit includes datasets useful for illustrating how to train quantum kernels. Two datasets with a particular underlying group structure are provided for 7 and 10 qubits. These datasets can be used with a covariant quantum kernel to test and explore kernel training algorithms. More information about the datasets and kernel can be found in the background material.
Documentation
The QKT Toolkit includes documentation split into:
- Tutorials: longer examples of end-to-end usage
- How-to guides: targeted answers to common questions
- Background material: in-depth information about quantum kernels and algorithms
We encourage your feedback! You can share your thoughts with us by:
- Opening an issue in the repository
- Starting a conversation on GitHub Discussions
- Filling out our survey
For information on how to contribute to this project, please take a look at our contribution guidelines.
This toolkit is based on the research described in [1].
The initial codebase was written by Jennifer R. Glick and Tanvi P. Gujarati.
[1] Jennifer R. Glick, Tanvi P. Gujarati, Antonio D. Córcoles, Youngseok Kim, Abhinav Kandala, Jay M. Gambetta, and Kristan Temme. Covariant quantum kernels for data with group structure.
[2] Havlíček et al. Supervised learning with quantum-enhanced feature spaces. Nature 567, 209 (2019).
[3] Liu et al. A rigorous and robust quantum speed-up in supervised machine learning. Nature Physics 17, 1013 (2021).
[4] B. E. Boser et al. Proceedings of the Fifth Annual Workshop on Computational Learning Theory. COLT ’92, 144-152.
[5] V. Vapnik. The Nature of Statistical Learning Theory. Information Science and Statistics (Springer New York, 2013).
[6] N. Cristianini et al. Advances in Neural Information Processing Systems 14 (2001).
Qiskit Global Summer School on Quantum Machine Learning (in particular, lectures 6.1, 6.2, and 6.3)