Kavli Affiliate: Irfan Siddiqi
| First 5 Authors: Akel Hashim, Stefan Seritan, Timothy Proctor, Kenneth Rudinger, Noah Goss
| Summary:
The promise of quantum computing depends upon the eventual achievement of
fault-tolerant quantum error correction, which requires that the total rate of
errors falls below some threshold. However, most contemporary methods for
benchmarking noisy quantum processors measure average error rates
(infidelities), which can differ from worst-case error rates — defined via the
diamond norm — by orders of magnitude. One method for resolving this
discrepancy is to randomize the physical implementation of quantum gates, using
techniques like randomized compiling (RC). In this work, we use gate set
tomography to perform a precision characterization of a set of two-qubit
logic gates in order to study RC on a superconducting quantum processor. We
find that, under
RC, gate errors are accurately described by a stochastic Pauli noise model
without coherent errors, and that spatially correlated coherent errors and
non-Markovian errors are strongly suppressed. We further show that the average
and worst-case error rates are equal for randomly compiled gates, and measure a
maximum worst-case error of 0.0197(3) for our gate set. Our results show that
randomized benchmarks are a viable route to both verifying that a quantum
processor’s error rates are below a fault-tolerance threshold, and to bounding
the failure rates of near-term algorithms, if — and only if — gates are
implemented via randomization methods which tailor noise.
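To make the gap between the two metrics concrete, here is a minimal numerical
sketch (not taken from the paper) that assumes the standard single-qubit
closed forms for the average gate infidelity and the diamond distance: a
coherent rotation and a stochastic Pauli channel are tuned to the same
average infidelity, and their worst-case (diamond-norm) error rates are
compared.

    # Hypothetical illustration (not from the paper): standard single-qubit
    # closed forms comparing average and worst-case (diamond-norm) error rates
    # for a coherent error versus a stochastic Pauli error of matched infidelity.
    import numpy as np

    def coherent_error_rates(theta):
        """Single-qubit Z rotation by angle theta (coherent error).

        Returns (average gate infidelity, diamond distance to identity),
        using r = (2/3) * sin^2(theta/2) and D = |sin(theta/2)|.
        """
        s = abs(np.sin(theta / 2))
        return (2.0 / 3.0) * s**2, s

    def pauli_error_rates(p):
        """Single-qubit stochastic Pauli channel with total error probability p.

        For Pauli channels the diamond distance equals the total probability of
        a nontrivial Pauli (the process infidelity), while the average gate
        infidelity is r = (2/3) * p.
        """
        return (2.0 / 3.0) * p, p

    # Match both error models to the same average gate infidelity r = 1e-4.
    r_target = 1e-4
    theta = 2 * np.arcsin(np.sqrt(1.5 * r_target))  # invert r = (2/3) sin^2(theta/2)
    p = 1.5 * r_target                              # invert r = (2/3) p

    r_coh, D_coh = coherent_error_rates(theta)
    r_pauli, D_pauli = pauli_error_rates(p)

    print(f"coherent:   r = {r_coh:.1e}, diamond distance = {D_coh:.1e} (~{D_coh / r_coh:.0f}x r)")
    print(f"stochastic: r = {r_pauli:.1e}, diamond distance = {D_pauli:.1e} (= 1.5x r)")

At an average infidelity of 1e-4, the coherent error's worst-case rate comes
out roughly 120 times larger than its infidelity, whereas for the stochastic
Pauli channel the diamond distance equals the total Pauli error probability
(the process infidelity) and exceeds the average gate infidelity only by the
constant factor (d+1)/d. This is the sense in which tailoring noise into
stochastic Pauli form, as RC does, brings the average and worst-case rates
into agreement.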
| Search Query: ArXiv Query: search_query=au:"Irfan Siddiqi"&id_list=&start=0&max_results=10