I. Introduction
It is widely recognized that Moore’s law, which states that the number of transistors in a dense integrated circuit doubles about every two years, will come to an end in the near future, if it has not already. On the other hand, although still in their infancy, quantum computers or, more precisely, quantum processing units (QPUs) have seen a steady increase in the number of usable qubits over the past several years. QPUs in the noisy intermediate-scale quantum (NISQ) era have rather limited qubit coherence times and support only a few kinds of one- or two-qubit elementary quantum gates, which usually have nonnegligible gate errors. Nevertheless, quantum supremacy was demonstrated on Sycamore, Google’s recent 53-qubit QPU [1], and more and more quantum or hybrid quantum-classical algorithms have been designed for these NISQ-era QPUs [2].

Naturally, the size (i.e., the number of gates) and depth of such a quantum algorithm (or quantum circuit) are limited by the errors caused by decoherence and the noise inherently present in these QPUs. Moreover, current QPUs impose strict connectivity constraints: any two-qubit operation can be applied only between connected qubits. This presents a challenge for quantum computing in the NISQ era. Assume that all quantum gates in a quantum circuit have already been decomposed into elementary gates supported by the QPU. Before executing the circuit, we still need to transform it into a functionally equivalent circuit that obeys the connectivity constraints imposed by the QPU. This process was first considered in [3] and has many different names. In this article, following [4], we term it quantum circuit transformation (QCT).
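To make the QCT task concrete, the following is a minimal, self-contained Python sketch of one common way to satisfy connectivity constraints: greedily inserting SWAP gates along a shortest path of the coupling graph so that every two-qubit gate acts on connected physical qubits. This naive routine is only an illustrative baseline under assumed inputs (the coupling graph, gate list, initial mapping, and the helper names shortest_path and transform are all hypothetical); it is not the method developed in this article.

```python
from collections import deque

def shortest_path(coupling, src, dst):
    """Return a shortest path of physical qubits from src to dst (BFS)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            break
        for v in coupling[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    path = [dst]
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return path[::-1]

def transform(gates, coupling, mapping):
    """Insert SWAPs so that every two-qubit gate acts on connected physical
    qubits; gates are (control, target) pairs of logical qubits, and mapping
    sends logical qubits to physical qubits."""
    inverse = {phys: log for log, phys in mapping.items()}
    out = []
    for p, q in gates:
        path = shortest_path(coupling, mapping[p], mapping[q])
        # Swap logical qubit p along the path until it sits on a physical
        # qubit adjacent to mapping[q], updating both mapping directions.
        for a, b in zip(path, path[1:-1]):
            out.append(("SWAP", a, b))
            inverse[a], inverse[b] = inverse[b], inverse[a]
            mapping[inverse[a]], mapping[inverse[b]] = a, b
        out.append(("CX", mapping[p], mapping[q]))
    return out

# Example: a 4-qubit linear coupling graph 0 - 1 - 2 - 3 and a single
# CX gate between logical qubits 0 and 3, which are not connected.
coupling = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
gates = [(0, 3)]
mapping = {0: 0, 1: 1, 2: 2, 3: 3}   # trivial initial logical-to-physical map
print(transform(gates, coupling, mapping))
# -> [('SWAP', 0, 1), ('SWAP', 1, 2), ('CX', 2, 3)]
```

Since each inserted SWAP is itself decomposed into three elementary CNOT gates on hardware, every such insertion increases the circuit size and depth, which is why keeping the number of added gates small is the central objective of QCT algorithms.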