By Bartek Fecko, Spring 2025.
Perhaps the defining feature of the modern technological age is the development of artificial intelligence. AI technologies have dramatically enhanced human capabilities, whether by predicting outcomes of complex financial markets or by optimizing materials for engineering applications. Beyond these specialized uses, AI systems such as ChatGPT and other large language models (LLMs) have become a ubiquitous feature of modern life. However, the development of artificial intelligence is ultimately constrained by the underlying computational architectures that handle the tremendous amounts of data AI systems process. In recent years, quantum computers have emerged as a potential solution. These computers store information in quantum systems called qubits, enabling exponential speedups for certain key applications. Researchers have considered, for example, the prospect of quantum machine learning, in which existing machine learning algorithms are implemented entirely on quantum computers [1].
Unfortunately, fault-tolerant, scalable quantum architectures are at least several years away from full development because quantum systems are extremely sensitive to environmental noise [2]. Nonetheless, quantum architectures have been realized on smaller scales, opening the door to an interesting possibility: hybridizing quantum computing and artificial intelligence in a mutually complementary manner. This approach not only makes today’s limited, intermediate-scale quantum computers useful, but also anticipates the most likely long-term outcome: quantum processors working alongside classical architectures. Even in theory, quantum computation offers a significant advantage only for a particular set of computational tasks.
One area in which this hybrid quantum-classical approach is particularly useful is computational quantum chemistry, a field which aims to determine molecular properties and chemical dynamics via insights into electrons’ energies and probable locations. It has applications in nanomedicine, energy storage technologies, and even in developing the qubits of quantum computers themselves. Since the many-electron Schrödinger equation cannot be solved analytically for these systems, researchers must resort to approximations to simplify the problem. For example, density functional theory (DFT) is a classical technique which postulates that the total energy of the system is completely determined by the electron density, encapsulating many-body interactions within a single charge density function [3]. Like other approximation schemes, DFT starts from a trial solution and iterates self-consistently until the electron density, and with it the total energy, converges to a minimum.
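To make this iterative procedure concrete, the short Python sketch below runs a Kohn-Sham DFT calculation on molecular hydrogen using the open-source PySCF package. The molecule, basis set, and exchange-correlation functional are chosen purely for illustration and are not drawn from any of the studies discussed here.

```python
# Minimal DFT sketch using PySCF (assumes the pyscf package is installed).
from pyscf import gto, dft

# Build a hydrogen molecule with a small Gaussian basis set (illustrative only).
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g")

# Restricted Kohn-Sham DFT with the B3LYP exchange-correlation functional.
mf = dft.RKS(mol)
mf.xc = "b3lyp"

# The self-consistent field loop iterates the electron density until the
# total energy converges to a minimum.
energy = mf.kernel()
print(f"Converged ground-state energy: {energy:.6f} Hartree")
```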
Figure 1. Overview of methods in computational quantum chemistry based on theoretical rigor and system complexity [6]. More complex systems are more amenable to molecular mechanics, which is based on classical force-field approximations, whereas simpler systems are better modeled by more accurate methods which incorporate many-body quantum interactions.
Of course, this simplification limits the accuracy of calculations done by classical architectures. The introduction of quantum computers may, in principle, capture many-body complexities that classical computers cannot. Indeed, quantum computation has been shown to be more efficient at simulating quantum systems, and hybrid methods incorporating quantum computers have already been implemented despite present hardware challenges. For instance, the variational quantum eigensolver (VQE) uses a quantum computer to prepare and measure the trial wavefunction, then defers to a classical computer to update the parameters of the circuit. VQE algorithms must select a particular quantum circuit structure, or ansatz, to encode the trial wavefunction [4]. Of particular relevance is the Unitary Coupled-Cluster (UCC) ansatz, which represents the trial wavefunction as an exponential of a unitary operator acting on a given initial reference state. A variant of this ansatz, the paired Unitary Coupled-Cluster with Double Excitations (pUCCD) ansatz, restricts the UCC wavefunction to paired double excitations, in which electrons are moved between orbitals two at a time; this keeps the wavefunction in a smaller, more physically relevant subspace and better respects conservation symmetries. Once the ansatz is set, a classical optimizer minimizes the system’s energy by searching for the parameters of the unitary operator that map the reference state onto the lowest-energy many-body wavefunction.
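The variational loop at the heart of VQE can be illustrated with a toy, purely classical stand-in: below, a small matrix plays the role of the molecular Hamiltonian, a single rotation angle plays the role of the ansatz parameters, and SciPy’s optimizer plays the role of the classical optimizer. The numbers are invented for illustration and do not correspond to any molecule in the studies cited here.

```python
# Toy classical analogue of the VQE loop: minimize <psi(theta)|H|psi(theta)>.
import numpy as np
from scipy.optimize import minimize

# A small Hermitian "Hamiltonian" whose lowest eigenvalue we want to find.
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

def ansatz(theta):
    """One-parameter trial state, analogous to a parameterized quantum circuit."""
    return np.array([np.cos(theta), np.sin(theta)])

def energy(params):
    """Energy expectation value, the quantity the optimizer drives down."""
    psi = ansatz(params[0])
    return psi @ H @ psi

# The classical optimizer updates the parameter until the energy converges.
result = minimize(energy, x0=[0.0])
print("Variational ground-state energy:", result.fun)
print("Exact lowest eigenvalue:        ", np.linalg.eigvalsh(H)[0])
```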
However, these traditional optimizers have limited performance in higher-dimensional optimization spaces and are vulnerable to noise, particularly from faulty quantum measurements. Moreover, they are “memoryless”: the optimization of each iterative wavefunction is performed from scratch, independent of the parameters found at previous iterations. A recent breakthrough developed an alternative hybrid quantum-classical approach termed pUCCD-DNN, which combines pUCCD with optimization via deep neural networks (DNNs) [5]. Researchers train the DNNs on system data derived from the existing wavefunction and global parameters. Importantly, DNNs are not memoryless, so they can learn from past optimizations of other molecules. This learning improves the efficiency of the optimization and reduces the number of calls to quantum hardware, effectively compensating for the limitations of present devices. Moreover, DNNs can adapt to varying levels of complexity (for example, higher-order electronic excitations) by using the output data of quantum computers, even if this data was not captured by the original ansatz.
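The following heavily simplified PyTorch sketch conveys the general idea of replacing a memoryless optimizer with a neural network: a small network maps a descriptor of the system to a set of ansatz parameters, so that what it learns on one molecule can carry over to the next. The layer sizes, inputs, and placeholder loss are invented for illustration and are not the architecture used in [5].

```python
# Hypothetical sketch of a neural network standing in for the classical optimizer.
import torch
import torch.nn as nn

class AnsatzParameterNet(nn.Module):
    """Maps a system descriptor to a set of pUCCD-style ansatz parameters."""
    def __init__(self, descriptor_dim: int, n_ansatz_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(descriptor_dim, 64),
            nn.Tanh(),
            nn.Linear(64, 64),
            nn.Tanh(),
            nn.Linear(64, n_ansatz_params),
        )

    def forward(self, descriptor: torch.Tensor) -> torch.Tensor:
        return self.net(descriptor)

# One training step: in a real hybrid workflow the loss would be the energy
# measured on the quantum device; here a placeholder quadratic stands in for it.
model = AnsatzParameterNet(descriptor_dim=3, n_ansatz_params=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

descriptor = torch.randn(16, 3)       # batch of invented system descriptors
params = model(descriptor)            # predicted ansatz parameters
loss = (params ** 2).mean()           # placeholder for measured energies
optimizer.zero_grad()
loss.backward()
optimizer.step()
```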
Following the theoretical development of pUCCD-DNN, researchers tested the method with benchmarking simulations on small test molecules. The mean absolute error of calculated energies was reduced by two orders of magnitude compared to non-DNN pUCCD methods, demonstrating that the pUCCD-DNN approach has greater predictive reliability than traditional pUCCD techniques. pUCCD-DNN has also shown remarkable accuracy on more complex calculations, such as the isomerization of cyclobutadiene, a chemical reaction that is notoriously difficult to model. The reaction barrier predicted by pUCCD-DNN was a significant improvement over classical Hartree-Fock and second-order perturbation theory calculations, and the hybrid model closely matched the predictions of full configuration interaction calculations, the most accurate but most computationally expensive classical method at present.
Evidently, hybrid quantum-classical architectures can be incredibly powerful when neural networks are allowed to compensate for the hardware limitations of quantum circuits. The neural networks optimize the data returned by quantum computers more efficiently, mitigating environmental noise by requiring fewer hardware calls, while the quantum computers themselves, introduced for their superior simulation capabilities, make the underlying calculations decisively more accurate. Regardless of the computational power of advanced neural networks, they are ultimately constrained by the quality of their training data, and this data is less accurate when classical computers are forced to make simplifications because they cannot efficiently model the complete system. As a result, the future of artificial intelligence will be guided by developments in quantum computing, and vice versa.
Ultimately, the question of how artificial intelligence and quantum computing will continue to co-evolve remains open. It is likely that classical machine learning algorithms like those discussed in this article will be integrated with fault-tolerant quantum computing long before quantum computers can process the tremendous quantities of data handled by existing artificial intelligence. Once those hardware limitations are eventually overcome, however, neural networks may themselves be implemented on quantum computers.
[1] Biamonte, Jacob, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd. “Quantum Machine Learning.” Nature 549, no. 7671 (September 2017): 195–202. https://doi.org/10.1038/nature23474.
[2] Preskill, John. “Quantum Computing in the NISQ Era and Beyond.” Quantum 2 (August 6, 2018): 79. https://doi.org/10.22331/q-2018-08-06-79.
[3] Kohn, W., A. D. Becke, and R. G. Parr. “Density Functional Theory of Electronic Structure.” The Journal of Physical Chemistry 100, no. 31 (January 1, 1996): 12974–80. https://doi.org/10.1021/jp960669l.
[4] Romero, Jonathan, Ryan Babbush, Jarrod R. McClean, Cornelius Hempel, Peter Love, and Alán Aspuru-Guzik. “Strategies for Quantum Computing Molecular Energies Using the Unitary Coupled Cluster Ansatz.” arXiv.org, February 10, 2018. https://arxiv.org/abs/1701.02691.
[5] Li, Weitang, Shi-Xin Zhang, Zirui Sheng, Cunxi Gong, Jianpeng Chen, and Zhigang Shuai. “Quantum Machine Learning of Molecular Energies with Hybrid Quantum-Neural Wavefunction.” arXiv.org, January 8, 2025. https://arxiv.org/abs/2501.04264.
[6] Tratnyek, Paul G., Eric J. Bylaska, and Eric J. Weber. “In Silico Environmental Chemical Science: Properties and Processes from Statistical and Computational Modelling.” Environmental Science: Processes & Impacts 19, no. 3 (2017): 188–202. https://doi.org/10.1039/c7em00053g.