Research Projects
My research ranges widely across theoretical aspects of quantum information and computation, generally with the aim of simplifying the demands placed on experimentalists trying to build a quantum computer. The descriptions of research interests below are by no means exhaustive, and a full listing of my research papers can be found here. You can also read my introductory articles about quantum computers and quantum cryptography, which are aimed at readers who know nothing about quantum mechanics at all.

Quantum State Transfer

In quantum state transfer, we study the task of transferring an unknown quantum state between two distant locations in a quantum computer. For solid-state devices, this is a tricky problem because all the interactions available to us involve nearest neighbours. While one can use these localised interactions to progressively swap a state from one qubit to the next, errors build up rather rapidly, limiting the practical distance of transfer. Instead, we look at how to transfer the state without having to actively do anything, i.e. by pre-engineering the system Hamiltonian so that it perfectly transfers the state after a specific time interval. The analytic solutions to this problem are well understood in one-dimensional systems. Research continues into the robustness of these schemes and their experimental implementation, as well as generalisations to transfer across networks that are not one-dimensional. One of the key challenges is how to implement these protocols in an experimental system that we are given, rather than precisely specifying the design that such a system must have.
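
As a concrete illustration, here is a minimal numerical sketch (in Python, with an illustrative chain length) of the standard one-dimensional solution: an XX spin chain whose nearest-neighbour couplings are engineered as J_n proportional to sqrt(n(N-n)), viewed in the single-excitation subspace. Evolving under this Hamiltonian for the right time moves an excitation perfectly from one end of the chain to the other.

    import numpy as np
    from scipy.linalg import expm

    N = 8                                  # chain length (illustrative choice)
    n = np.arange(1, N)
    J = 0.5 * np.sqrt(n * (N - n))         # engineered nearest-neighbour couplings

    # Single-excitation Hamiltonian: tridiagonal, J on the off-diagonals
    H = np.diag(J, 1) + np.diag(J, -1)

    psi0 = np.zeros(N); psi0[0] = 1.0      # excitation on the first site
    psiT = expm(-1j * np.pi * H) @ psi0    # evolve for the transfer time t = pi

    print(f"transfer fidelity: {abs(psiT[-1])**2:.6f}")   # ~1.0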

Quantum Memories

Error correction of quantum systems is one of the toughest tasks that an experimentalist faces. The principles of classical error correction are easy - one can just make many, many copies of the same data so that if some of it gets corrupted, you take a "majority vote" and, most likely, you get the correct answer. However, this works because, mathematically, the value of a classical bit is represented by a single observable. The essence of quantum computation is that quantum bits, or qubits, are characterised by two observables. Moreover, these two observables do not commute, so states cannot be simultaneous eigenstates of both. This means that it is not possible to just look at the state and replicate its value many times, as in the classical case. This is the "no-cloning theorem" of quantum mechanics, and it is closely related to Heisenberg's Uncertainty Principle.
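
For a quick illustration of this non-commutativity, take the two Pauli observables X and Z that characterise a qubit; a few lines of Python confirm that they anticommute rather than commute, so no state can be a simultaneous eigenstate of both:

    import numpy as np

    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    print(X @ Z)            # [[ 0 -1] [ 1  0]]
    print(Z @ X)            # [[ 0  1] [-1  0]]
    print(X @ Z + Z @ X)    # zero matrix: X and Z anticommute, so [X, Z] != 0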

Nevertheless, it turns out that quantum error correction is possible, although existing schemes are very complicated. Usually, to get good storage, one has to concatenate error correcting codes, i.e. encode within a code within a code, etc., such that the error tolerance gets a little better at each level. This is extremely resource intensive. A major recent development has been surface codes, which exhibit a finite fault-tolerant noise threshold (i.e. provided the error rate is below a certain threshold, error correction is successful, even when the gates used to implement the correction are also faulty) without concatenation, requiring only measurements on neighbouring qubits.
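
To see why concatenation is so resource intensive, here is a back-of-the-envelope sketch with purely illustrative numbers (a code-dependent constant c and a 7-qubit block size) of the usual scaling, in which each level of encoding maps a physical error rate p to roughly c*p**2. Below the threshold 1/c the error rate falls doubly exponentially with level, but the qubit overhead grows exponentially:

    c = 100.0      # illustrative code-dependent constant; threshold is 1/c
    n_block = 7    # illustrative qubits per block (e.g. a 7-qubit code)
    p = 1e-3       # physical error rate, below the threshold of 1e-2

    for level in range(5):
        print(f"level {level}: error rate {p:.3e}, "
              f"qubits per logical qubit {n_block**level}")
        p = c * p * p    # one more level of concatenation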

These surface codes already drastically reduce the experimental demands, but we want to go further. If you think about something like a USB stick or hard drive, it doesn't even use error correction. The data just sits there and is incredibly robust. Is there any way of inducing this degree of robustness in a quantum system? Once you give up any active intervention, the only option left is to pre-define the system's interactions (the Hamiltonian) in such a way that the structure itself prevents the accumulation of errors, for example by creating energy penalties. It turns out that surface codes are central to this idea, but we are still in the early days of trying to prove necessary and sufficient conditions for achieving a given storage time in the presence of relevant noise models.
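
For reference, a sketch of the canonical example behind this idea - the toric code Hamiltonian, the Hamiltonian counterpart of the surface codes above - where the scale Delta sets the energy penalty, A_v is a product of Pauli X operators around a vertex, and B_p a product of Pauli Z operators around a plaquette:

    H = -\Delta \Big( \sum_{v} A_v + \sum_{p} B_p \Big),
    \qquad A_v = \prod_{i \in v} X_i,
    \qquad B_p = \prod_{i \in p} Z_i

Any local error anticommutes with some of these terms, creating excitations at an energy cost proportional to Delta. Whether such penalties genuinely translate into long storage times under realistic noise models is exactly the open question described above.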

Quantum Error Correction & Fault Tolerance

The main difference that we envisage between the modest quantum computers we have today and those we expect in the future is error correction - having enough qubits available that we can afford to sacrifice some for the encoding needed to protect against faults during a computation. Ultimately, this should enable longer, more useful computations. Obviously, we'd like error correcting codes, and the computational procedures on encoded qubits, to be as simple as possible to maximise the useful work that we can extract from a given piece of quantum hardware.

At the hardware level, you may have many quantum gates available, such as Hadamard and controlled-NOT, but also rotations about arbitrary axes and by arbitrary angles. This lets one exactly decompose an arbitrary unitary using (relatively) short gate sequences. For example, to prepare an arbitrary two-qubit pure state, it is sufficient to use just one controlled-NOT gate and three single-qubit gates. When acting on an encoded qubit, however, we do not have the full range of gates available. Instead, we might only have a finite set with which we can approximate any gate to arbitrary accuracy (the higher the accuracy we want, the more gates we need to use). For a given gate set, target unitary and target accuracy, what is the shortest possible sequence of gates? If different gates have different implementation costs (some take longer or are less accurate), what is the smallest possible weighted sequence of gates?
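
Here is a minimal Python sketch of that two-qubit example, using the standard construction via the Schmidt decomposition (the random target state is purely illustrative): one rotation on the first qubit, a single CNOT to entangle, then one single-qubit gate on each qubit - three single-qubit gates and one CNOT in total.

    import numpy as np

    rng = np.random.default_rng(1)
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)             # random target two-qubit pure state

    # Schmidt decomposition via the SVD of the 2x2 coefficient matrix
    U, s, Vh = np.linalg.svd(psi.reshape(2, 2))
    theta = np.arctan2(s[1], s[0])         # Schmidt angle: (cos, sin) = (s0, s1)

    Ry = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    A, B = U, Vh.T                         # the two final single-qubit gates

    state = np.kron(Ry, np.eye(2)) @ np.array([1, 0, 0, 0], dtype=complex)
    state = CNOT @ state                   # entangle: cos(t)|00> + sin(t)|11>
    state = np.kron(A, B) @ state          # rotate into the Schmidt bases

    print(abs(np.vdot(psi, state)))        # 1.0, up to numerical precision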

Different error correcting codes make different gates more or less easy to implement. Many gates can be implemented "transversally", which is a particularly simple strategy. For example, a transversal Hadamard gate realises Hadamard on the logical qubits just by applying individual Hadamard gates to the constituent physical qubits of the code. However, the Eastin-Knill theorem says that it is impossible to implement a universal set of gates in this manner. All of the smallest codes struggle with a particular gate called T, and achieve its implementation via "magic state distillation". However, I am particularly interested in examining slightly larger codes for which the T gate is transversal. In exchange, another gate must be non-transversal; this is typically the Hadamard gate. But the magic state distillation equivalent for Hadamard is easier than it is for T. So, while we might be using more qubits, the gates themselves are easier. What sort of impact does this trade-off have?
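
As a toy illustration of transversality, take the 3-qubit repetition code (far simpler than the codes discussed here, but the principle is the same): applying X to every physical qubit enacts the logical X, flipping the logical states |000> and |111>.

    import numpy as np
    from functools import reduce

    X = np.array([[0, 1], [1, 0]])
    XXX = reduce(np.kron, [X, X, X])        # transversal X: one gate per physical qubit

    ket000 = np.zeros(8); ket000[0] = 1.0   # logical |0> = |000>
    ket111 = np.zeros(8); ket111[7] = 1.0   # logical |1> = |111>

    print(np.allclose(XXX @ ket000, ket111))   # True: logical |0> -> logical |1>
    print(np.allclose(XXX @ ket111, ket000))   # True: logical |1> -> logical |0>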