Research Projects
My research ranges widely across theoretical aspects of quantum information and computation, generally with the aim of simplifying the demands placed on experimentalists trying to build a quantum computer. The descriptions of research interests below are by no means exhaustive, and a full listing of my research papers can be found here. You can also read my introductory articles about quantum computers and quantum cryptography; they are aimed at readers with no prior knowledge of quantum mechanics.
Quantum State Transfer

In quantum state transfer, we study the task of transferring an unknown quantum state between two distant locations in a quantum computer. For solid state devices, this is a tricky problem because all the interactions available to us act between nearest neighbours. While one can use these localised interactions to progressively swap a state from one qubit to the next, errors build up rather rapidly, limiting the practical transfer distance. Instead, we look at how to transfer the state without having to actively do anything, i.e. by pre-engineering a system Hamiltonian so that it perfectly transfers the state after a specific time interval. The analytic solutions for this problem are well understood in one-dimensional systems. Research continues into the robustness of these schemes and their experimental implementation, as well as generalisation to transfer across networks which are not one-dimensional. One of the key challenges is how to implement these protocols in an experimental system that we are given, rather than precisely specifying the design that such a system must have.
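
As a concrete illustration of this kind of pre-engineering, the sketch below simulates the textbook engineered-coupling chain (couplings J_n ∝ √(n(N−n))), for which an excitation injected at one end arrives perfectly at the other. It is a minimal numerical check of the standard construction, not a reproduction of any particular result of ours.

```python
# Minimal sketch: perfect state transfer on an N-site chain, working in the
# single-excitation subspace. With couplings J_n = sqrt(n(N-n)), the hopping
# Hamiltonian is proportional to a spin rotation, and an excitation starting
# at site 1 arrives at site N at time t = pi/2 (hbar = 1).
import numpy as np
from scipy.linalg import expm

N = 8
n = np.arange(1, N)                  # bond index n = 1, ..., N-1
J = np.sqrt(n * (N - n))             # engineered nearest-neighbour couplings
H = np.diag(J, 1) + np.diag(J, -1)   # single-excitation hopping Hamiltonian

psi0 = np.zeros(N, dtype=complex)
psi0[0] = 1.0                        # excitation starts at site 1

psi_t = expm(-1j * (np.pi / 2) * H) @ psi0
print(abs(psi_t[-1]) ** 2)           # arrival probability at site N: ~1.0
```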

Our group is responsible for results including:

  • Proof of the necessary and sufficient conditions for perfect state transfer in chains.
  • The "monogamy of state transfer" in spin networks.
  • Asymptotically optimal solutions for chains with a broad arrival width, tolerating imperfect timing and a wide range of Hamiltonian perturbation and noise errors.
  • Some of the first experimental demonstrations of state transfer (in collaboration with an experimental group at the University of Jena).
  • The incorporation of error correction within state transfer.
  • Applications of state transfer to other tasks such as optimal cloning.
  • Violation of the state transfer speed limit via the quantum anti-Zeno effect.
  • The ability to transfer through (almost) any network just by controlling the input and output nodes.
  • A universal quantum interface: how universal quantum computation can be implemented on a noiseless chain, just by controlling two spins at one end.

Quantum Memories

Error correction of quantum systems is one of the toughest tasks an experimentalist faces. The principles of classical error correction are easy - one can simply make many, many copies of the same data so that if some of it gets corrupted, you take a "majority vote" and, most likely, recover the correct answer. This works because, mathematically, the value of a classical bit is represented by a single observable. The essence of quantum computation is that quantum bits, or qubits, are described by two observables. Moreover, these two observables do not commute, so states cannot be simultaneous eigenstates of both. This means it is not possible to simply look at the state and replicate its value many times, as in the classical case. This is the "no-cloning theorem" of quantum mechanics, and is closely related to Heisenberg's Uncertainty Principle.
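
To make the non-commutativity concrete, the two observables can be taken to be the Pauli matrices:

$$X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad XZ = -ZX,$$

so the only vector that could be an eigenvector of both is the zero vector, which is not a valid state.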

Nevertheless, it turns out that quantum error correction is possible, although existing schemes are very complicated. Usually, to get good storage, one has to concatenate error correcting codes, i.e. encode within a code within a code etc., such that the error tolerance gets a little better at each level. This is extremely resource intensive. A more recent development has been surface codes, which exhibit a finite fault-tolerant noise threshold (i.e. provided the error rate is below a certain threshold, error correction is successful, even when the gates used to implement the correction are themselves faulty) without concatenation, requiring only measurements on qubits which sit next to each other.
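
The resource cost of concatenation can be seen from the standard scaling estimate (a textbook argument, not tied to any particular code): if one level of encoding maps a physical error rate p to roughly cp², then k levels give

$$p_k \approx \frac{(c\,p)^{2^k}}{c},$$

which falls doubly exponentially in k provided p < 1/c (the threshold), but at the price of a qubit overhead that grows exponentially in k.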

These surface codes already drastically reduce the experimental demands, but we want to go further. If you think about something like a USB stick or hard drive, it doesn't rely on active error correction. The data just sits there and is incredibly robust. Is there any way of inducing this degree of robustness in a quantum system? Once you get rid of any active intervention, the only option you have is to pre-define the system's interactions (the Hamiltonian) and try to do so in such a way that the structure prevents the accumulation of errors by, for example, creating energy penalties. It turns out that surface codes are central to this idea, but we are still in the early days of trying to prove necessary and sufficient conditions for realising a given storage time in the presence of relevant noise models.

Quantum Error Correction & Fault Tolerance

The main difference we envisage between the modest quantum computers we have today and those we expect in the future is error correction - having enough qubits available that we can afford to sacrifice some for the encoding needed to protect against faults during a computation. Ultimately, this should enable longer, more useful computations. Obviously, we'd like the error correcting codes, and the computational procedures on encoded qubits, to be as simple as possible, to maximise the useful work we can extract from a given piece of quantum hardware.

There are many different strategies for realising "fault tolerance", the regime in which we can correct for errors even when our error correction machinery is itself faulty.

  • Concatenated codes, where we encode a code within another code, are the original method, and are by far the most mathematically rigorous in terms of proving a noise threshold below which our procedures are guaranteed to work.
  • Surface codes, such as the toric code, define a family of codes of different sizes, with the code distance increasing with the size. Concatenation is not necessary, which makes the error detection procedures much simpler and local (important in many computational architectures).
  • Quantum low-density parity-check (LDPC) codes are similar in nature to surface codes, but are not limited to a local architecture. What they lose in not being realisable in all computational architectures, they gain in improved error correcting properties. A code comprising n physical qubits encodes k logical qubits and has a distance d against errors; good (i.e. asymptotically optimal) versions of these codes exist, meaning k~n and d~n (see the parameter scaling summarised below).

We are interested in all these methods, extracting the maximum benefit from each structure and comparing their relative merits.
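
For reference, these parameters are conventionally written [[n, k, d]]. The recently constructed "good" quantum LDPC codes achieve

$$k = \Theta(n), \qquad d = \Theta(n),$$

while every parity check acts on O(1) qubits and every qubit participates in O(1) checks, which is precisely what makes them low density.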

At the hardware level, you may have many quantum gates available, such as Hadamard and controlled-NOT, but also rotations about arbitrary axes and by arbitrary angles. This lets one exactly decompose an arbitrary unitary using (relatively) short sequences. For example, if you want to make an arbitrary two-qubit pure state, it is sufficient to use just one controlled-NOT gate and three single-qubit gates. When acting on an encoded qubit, however, we do not have the full range of gates available. Instead, we might only have a finite set with which we can approximate any gate to arbitrary accuracy (the higher the accuracy we want, the more gates we need to use). For a given gate set, target unitary and target accuracy, what's the shortest possible sequence of gates? If different gates have different implementation costs (some take longer or are less accurate), what's the smallest possible weighted sequence of gates?
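
The two-qubit claim is straightforward to verify numerically: the Schmidt decomposition of the target state directly supplies the circuit, namely one Ry rotation, one controlled-NOT, and then one local basis change on each qubit. A minimal sketch, assuming nothing beyond numpy:

```python
# Prepare an arbitrary two-qubit pure state from |00> using one CNOT and
# three single-qubit gates, via the Schmidt decomposition of the target.
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                        # random target state

# Schmidt decomposition: reshape the amplitudes into a 2x2 matrix and SVD it.
A, s, Bh = np.linalg.svd(psi.reshape(2, 2))
theta = np.arctan2(s[1], s[0])                    # Schmidt angle

Ry = np.array([[np.cos(theta), -np.sin(theta)],   # gate 1: rotation on qubit 0
               [np.sin(theta),  np.cos(theta)]])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],      # the single entangling gate
                 [0, 0, 0, 1], [0, 0, 1, 0]])
UA, UB = A, Bh.T                                  # gates 2 and 3: local basis changes

out = np.kron(UA, UB) @ CNOT @ np.kron(Ry, np.eye(2)) @ np.array([1, 0, 0, 0])
print(abs(np.vdot(psi, out)))                     # fidelity with target: 1.0
```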

Different error correcting codes have different gates that can be implemented with more or less ease. Many gates can be implemented "transversally", which is a particularly simple strategy. For example, a transversal Hadamard gate realises Hadamard on the logical qubit just by applying individual Hadamard gates to the constituent physical qubits of the code. However, the Eastin-Knill theorem says that it is impossible to implement a universal set of gates in this manner. All of the smallest codes struggle with a particular gate called T, and achieve its implementation via "magic state distillation". I am particularly interested in examining slightly larger codes for which the T gate is transversal. In exchange, another gate must be non-transversal - typically the Hadamard. But the magic state distillation equivalent for Hadamard is easier than it is for T. So, while we might be using more qubits, the gates themselves are easier. What sort of impact does this trade-off have?
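
To make "transversal" concrete, here is a minimal sketch using the simplest possible example, the 3-qubit repetition code (which protects against bit-flips only; for this code it happens to be X, not Hadamard, that is transversal):

```python
# Transversality on the 3-qubit repetition code: applying X independently to
# each physical qubit implements the logical X, with no interaction between
# qubits (which is why transversal gates do not spread errors).
import numpy as np

X = np.array([[0, 1], [1, 0]])
zero_L = np.zeros(8); zero_L[0] = 1     # |0_L> = |000>
one_L = np.zeros(8); one_L[7] = 1       # |1_L> = |111>

X_transversal = np.kron(np.kron(X, X), X)
print(np.allclose(X_transversal @ zero_L, one_L))   # True: logical X achieved
```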

Blind Quantum Computation

In the future, we expect to have the ability to create and run fully fault-tolerant quantum computers, meaning that although noise will always be present in everything we do, we will be able to drive the effective noise rate of a computation to zero. We will be able to run any of the marvellous algorithms that quantum computing promises, such as quantum search, factoring and matrix inversion. We expect that there will be many interested users, wanting to run exciting computations that might otherwise take years on even the best supercomputers. As such, those computational results will be extremely valuable. And yet, the complexity of managing a quantum computer will leave it beyond the reach of many, who will instead rely on central servers to run their computations for them. How, then, should they keep their valuable data safe? The solution is blind quantum computation.

  • Information theoretic security: if the user is "semi-quantum", meaning they need some small quantum capability, such as the ability to prepare single-qubit quantum states, it is possible to prove security; the angle-hiding trick underlying such protocols is sketched after this list. We have recently shown the minimum amount of communication required for any given gate.
  • Computational security: if the user is instead happy to rely on some assumptions about how difficult certain problems are to solve, even for a quantum computer, we can replace the quantum communication with classical communication.
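
The flavour of how blindness is achieved can be seen in the angle-hiding trick used by measurement-based protocols in the Broadbent-Fitzsimons-Kashefi tradition. The sketch below is illustrative only - it is not our specific scheme, and it omits the quantum states and measurements entirely - but it shows the key point: whatever the client's true measurement angle, the angle the server is asked to use is uniformly random.

```python
# Angle hiding, with all angles in units of pi. The client's true angle phi
# is masked by a random theta (baked into the qubit the client prepared) and
# a random bit r (which flips the reported outcome), so the server only ever
# sees delta = phi + theta + r (mod 2), which is uniform over the 8 angles.
import random
from collections import Counter

ANGLES = [k * 0.25 for k in range(8)]        # multiples of pi/4

def blinded_angle(phi):
    theta = random.choice(ANGLES)            # one-time pad hidden in the state
    r = random.randint(0, 1)                 # hidden outcome flip
    return (phi + theta + r) % 2             # what the server is told

for phi in [0.0, 0.25, 0.75]:                # different "true" angles
    counts = Counter(blinded_angle(phi) for _ in range(80000))
    print(phi, [round(counts[a] / 80000, 2) for a in sorted(counts)])
    # each line prints eight frequencies of ~0.125: uniform, whatever phi is
```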

Matrix Inversion

The HHL algorithm is currently viewed as one of the "killer apps" for a quantum computer. It is capable of solving linear inversion problems Mx = b, where b is given as a quantum state, providing the quantum state x as output. It is at the very heart of a range of applications, from solving differential equations to machine learning. We have recently introduced a new variant which massively reduces the circuitry involved (no more phase estimation!), rendering it vastly more suitable for running on near-term (NISQ) devices.
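
Stated precisely, the task (in its standard formulation, with M assumed invertible and Hermitian, or embedded into a Hermitian matrix if not) is to prepare

$$|x\rangle = \frac{M^{-1}|b\rangle}{\lVert M^{-1}|b\rangle \rVert},$$

a normalised quantum state proportional to the solution of Mx = b.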