Researchers have developed a new protocol for benchmarking quantum gates, a critical step toward realizing the full potential of quantum computing and potentially accelerating progress toward fault-tolerant quantum computers.
The new protocol, called deterministic benchmarking (DB), provides a more detailed and efficient method for identifying specific types of quantum noise and errors compared to widely used existing techniques.
The work is published in the journal Chemical Reviews.
“Quantum computing is ultimately limited by how accurately we can implement gates—the basic operations of a quantum processor,” said Daniel Lidar, co-corresponding author of the study and professor of electrical and computer engineering, chemistry, and physics and astronomy at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.
“Our new protocol can identify both coherent and incoherent error types using just a handful of simple experiments, making it much more efficient than current approaches.”
Quantum gates and errors
Quantum computing has the potential to solve complex problems that are beyond the reach of traditional, or classical, computers. However, the accuracy of quantum computations depends heavily on the performance of quantum gates, which are prone to errors caused by noise and miscalibration.
Quantum gates perform operations on qubits, the quantum equivalent of classical computer bits; they are essential for constructing quantum algorithms and are the fundamental building blocks of quantum circuits and computations. For certain tasks, they enable quantum computers to run algorithms exponentially faster than algorithms running on classical computers.
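As a minimal illustration (not drawn from the study itself), a single-qubit gate can be written as a unitary matrix acting on a qubit's state vector. Here the X gate, the quantum analogue of a classical NOT, flips the state |0⟩ to |1⟩:

```python
import numpy as np

# A qubit state is a 2-component complex vector; a gate is a 2x2 unitary matrix.
ket0 = np.array([1, 0])        # the state |0>
X = np.array([[0, 1],
              [1, 0]])         # X gate: the quantum NOT

ket1 = X @ ket0                # applying the gate flips |0> to |1>
print(ket1)                    # [0 1]
```

Because the matrix is unitary, applying the gate twice returns the qubit to its original state; errors in real hardware correspond to the implemented matrix deviating slightly from this ideal.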
This susceptibility to noise and errors is why benchmarking and error correction are critical areas of research in quantum computing.
Errors fall into two main categories: coherent and incoherent. Coherent errors are deterministic, repeatable errors that preserve the purity of the quantum state. Because they accumulate as amplitudes rather than probabilities, they can build up quadratically faster than incoherent errors.
Incoherent errors, by contrast, result from a quantum system's interaction with its environment; they rob quantum computers of their quantumness, leaving them performing no better than classical computers.
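The difference in how the two error types build up can be sketched in a toy numerical example (an illustrative assumption, not the paper's model): a fixed coherent over-rotation adds at the amplitude level, so its error probability grows roughly quadratically with the number of gates, while a depolarizing (incoherent) error matched to the same single-gate error grows only linearly:

```python
import numpy as np

eps = 0.01            # hypothetical coherent over-rotation per gate (radians)
p = (eps / 2) ** 2    # depolarizing probability matched to the one-gate error

for n in (1, 10, 100):
    # Coherent: rotation angles add, so after n gates the error probability
    # is sin^2(n*eps/2) ~ (n*eps/2)^2 -- quadratic in n for small errors.
    coherent_err = np.sin(n * eps / 2) ** 2
    # Incoherent: probabilities add, so the error is ~ n*p -- linear in n.
    incoherent_err = 1 - (1 - p) ** n
    print(f"n={n:3d}  coherent={coherent_err:.5f}  incoherent={incoherent_err:.5f}")
```

By 100 gates the coherent error dominates by roughly two orders of magnitude, which is why distinguishing the two types matters for choosing a mitigation strategy.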
Physicists have recently realized the important role that coherent errors play in limiting the performance of quantum computers. Eli Levenson-Falk, co-corresponding author of the study and assistant professor of physics and astronomy and electrical and computer engineering at USC Dornsife, emphasizes the importance of accurate benchmarking of gate errors.
“What’s unique about our approach is that it can clearly distinguish between different types of quantum errors,” Levenson-Falk said. “This is crucial because certain error types, particularly coherent errors, can be more destructive to quantum algorithms and require different mitigation strategies.”
Deterministic benchmarking improves efficiency
Quantum benchmarks are a set of protocols and methods used to evaluate the performance of a quantum computer as a whole, including its gates, circuits, and processors. They are crucial to the development and optimization of quantum computing technologies because they provide quantitative measures of how well quantum operations perform in the presence of noise and errors.
Lidar said deterministic benchmarking is a significant advancement in quantum computing because it is both deterministic and efficient. Unlike other benchmarking methods, DB uses a small, fixed set of simple pulse-pair sequences rather than averaging over random circuits.
The researchers said the key to understanding DB's advance is to compare it with randomized benchmarking (RB), a widely used method for estimating the average error rate of quantum gates. Whereas RB averages over many random gate sequences to produce a single error metric, DB uses deliberately designed sequences that detect specific error sources RB misses.
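The contrast can be sketched schematically (an illustrative toy, not the published DB protocol): repeatedly applying a designed pulse pair, here a miscalibrated π/2 pulse followed by its ideal inverse, amplifies a small coherent over-rotation into a clearly measurable decay, whereas a single averaged error number would fold it into a generic error rate:

```python
import numpy as np

eps = 0.02  # hypothetical coherent over-rotation of each X(pi/2) pulse

def x_rot(theta):
    """Single-qubit rotation about the X axis by angle theta."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s],
                     [-1j * s, c]])

gate = x_rot(np.pi / 2 + eps)   # miscalibrated pi/2 pulse
inverse = x_rot(-np.pi / 2)     # ideal inverse pulse
pair = inverse @ gate           # one designed pulse pair: residual rotation eps

psi0 = np.array([1.0, 0.0])     # start in |0>
for n in (1, 5, 25):
    psi = np.linalg.matrix_power(pair, n) @ psi0
    survival = abs(psi[0]) ** 2  # probability of returning to |0>
    # survival = cos^2(n*eps/2): the tiny miscalibration is amplified n-fold
    print(f"n={n:2d}  survival={survival:.5f}")
```

Fitting the decay of the survival probability against n yields the over-rotation angle directly; an incoherent error would instead reduce the purity of the state, producing a different signature, which is how designed sequences can separate the two error types.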
New method opens opportunities for advancements in quantum chemistry, materials science
The researchers demonstrated DB on a superconducting transmon qubit—a widely used type of superconducting qubit in quantum computing—to show its ability to detect small changes in qubit parameters that are invisible to standard benchmarking techniques.
“Through conducting several experiments, we demonstrated the variety of capabilities of DB,” Lidar said. He said the standout capability of DB is that it provides detailed information about both coherent and incoherent errors, enabling better calibration of quantum gates. DB also requires fewer experimental runs than RB, improving resource efficiency.
The research has significant implications for quantum chemistry and materials science applications, where precise gate operations are essential for achieving reliable simulations of molecular systems.
The researchers plan to explore ways to extend DB to two-qubit gates, which could lead to more complex quantum circuits. Additionally, they are investigating how DB can be adapted to other quantum computing platforms beyond superconducting qubits, such as trapped ions and photonic systems.
More information:
Vinay Tripathi et al, Benchmarking Quantum Gates and Circuits, Chemical Reviews (2025). DOI: 10.1021/acs.chemrev.4c00870
Citation:
Benchmarking quantum gates: New protocol paves the way for fault-tolerant computing (2025, May 6)