- Quantum Memories
- Quantum State Transfer
- Monogamy of Quantum Correlations
- Free Will
- Multipartite Entanglement & Graph states (technical introduction)
Error correction of quantum systems is one of the toughest tasks that an experimentalist faces. The principles of classical error correction are easy: one can simply make many, many copies of the same data so that if some of it gets corrupted, a "majority vote" will, most likely, recover the correct answer. This works because, mathematically, the value of a classical bit is represented by a single observable. The essence of quantum computation is that quantum bits, or qubits, are characterised by two observables. Moreover, these two observables do not commute, so states cannot be simultaneous eigenstates of both. This means that it is not possible simply to look at the state and replicate its value many times, as in the classical case.
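The obstruction above can be seen in a few lines of numpy (a minimal sketch, not from the original text): the two qubit observables, here taken to be the Pauli X and Z matrices, have a non-zero commutator, and measuring one of them on an eigenstate of the other yields a random outcome, so no single measurement reveals enough information to copy the state.

```python
import numpy as np

# The two Pauli observables characterising a qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Non-zero commutator: X and Z cannot be measured simultaneously
commutator = X @ Z - Z @ X
print(np.allclose(commutator, 0))  # False

# Measuring Z on the X eigenstate |+> gives outcome 0 or 1 at random,
# so "looking at the state" cannot tell us what to copy.
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
prob_z0 = abs(plus[0]) ** 2
print(np.isclose(prob_z0, 0.5))  # True: a 50/50 coin flip
```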
Nevertheless, it turns out that quantum error correction is possible, although existing schemes are very complicated. Usually, to get good storage, one has to concatenate error correcting codes, i.e. encode within a code within a code etc., such that the error tolerance gets a little better at each level. This is extremely resource intensive. A significant recent development has been surface codes, which exhibit a finite fault-tolerant noise threshold (i.e. provided the error rate is below a certain threshold, error correction is successful, even when the gates used to implement the correction are themselves faulty) without concatenation, requiring only measurements on qubits which sit next to each other.
These surface codes already drastically reduce the experimental demands, but we want to go further. If you think about something like a USB stick or hard drive, it doesn't even use error correction. The data just sits there and is incredibly robust. Is there any way of inducing this degree of robustness in a quantum system? Once you get rid of any active intervention, the only option you have is to pre-define the system's interactions (the Hamiltonian) and try to do it in such a way that the structure prevents the accumulation of errors by, for example, creating energy penalties. It turns out that the surface codes are central to this idea, but we are still in the early days of trying to prove necessary and sufficient conditions for realising a given storage time in the presence of relevant noise models.
In quantum state transfer, we study the task of transferring an unknown quantum state between two distant locations of a quantum computer. For solid state devices, this is a tricky problem because all the interactions that we have available to us involve nearest neighbours. While one can use these localised interactions to progressively swap a state from one qubit to the next, errors build up rather rapidly, thereby limiting the practical distance of transfer. Instead, we look at how to transfer the state without having to actively do anything, i.e. by pre-engineering a system Hamiltonian so that it perfectly transfers the state after a specific time interval. The analytic solutions for this problem are well understood in one-dimensional systems. Research continues into the robustness of these systems and their experimental implementation, as well as generalisation to transfer across networks which are not one-dimensional.
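One well-known analytic solution in one dimension uses an XY spin chain with couplings engineered as J_n ∝ √(n(N−n)); in the single-excitation subspace the Hamiltonian is a tridiagonal matrix, and evolving for the right time moves an excitation from one end of the chain to the other with unit fidelity. The sketch below (assuming numpy and scipy are available; the chain length N = 8 is an arbitrary choice) verifies this numerically:

```python
import numpy as np
from scipy.linalg import expm

# Single-excitation Hamiltonian of an N-site chain with engineered
# couplings J_n = sqrt(n*(N-n))/2, for which transfer is perfect at t = pi
N = 8
H = np.zeros((N, N))
for n in range(1, N):
    J = np.sqrt(n * (N - n)) / 2
    H[n - 1, n] = H[n, n - 1] = J

U = expm(-1j * np.pi * H)          # evolve for the transfer time t = pi
fidelity = abs(U[N - 1, 0]) ** 2   # probability of arriving at the far end
print(np.isclose(fidelity, 1.0))   # True: perfect transfer
```

A uniform chain (all J_n equal) would give a fidelity well below 1 for most lengths, which is why the couplings must be engineered.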
The correlations of quantum systems are monogamous, i.e. if two parties, Alice and Bob, are maximally (quantum) correlated, then neither can have any quantum correlations at all with a third party, Charlie. In collaboration with the group of Dagomir Kaszlikowski at the Centre for Quantum Technologies, National University of Singapore, we are investigating this monogamy: how to quantify it, and its consequences.
There is a close correspondence between this monogamy property and the inability to clone a quantum state (i.e. to make two perfect copies of an unknown input state). If perfect cloning were possible, non-monogamous relations would be possible, and vice versa. So, quantifying the extent of monogamy is equivalent to asking how well an input state can be cloned: if we make the quality of one output higher, by how much must the quality of the other be reduced? It turns out that this problem can be solved under a very general set of conditions to give the optimal trade-off between the qualities of the clones.
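A minimal numerical illustration of why perfect cloning fails (a sketch, not part of the original text): a CNOT gate copies the basis states |0⟩ and |1⟩ perfectly onto a blank qubit, but applied to the superposition |+⟩ it produces an entangled state rather than two copies, and the overlap with the ideal double copy drops to 1/2.

```python
import numpy as np

# CNOT "copier": duplicates |0> and |1> onto a blank qubit
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def clone_fidelity(psi):
    """Overlap of CNOT(psi tensor |0>) with the ideal copy psi tensor psi."""
    blank = np.array([1, 0], dtype=complex)
    out = CNOT @ np.kron(psi, blank)
    ideal = np.kron(psi, psi)
    return abs(np.vdot(ideal, out)) ** 2

zero = np.array([1, 0], dtype=complex)
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

print(np.isclose(clone_fidelity(zero), 1.0))  # True: basis states copy
print(np.isclose(clone_fidelity(plus), 0.5))  # True: superpositions do not
```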
The consequences are many and varied. By knowing how restricted certain correlations are, one can bound properties such as the ground state energy of a system. Perhaps more interestingly, we can use monogamy to interpret why we don't see quantum effects in the world of our everyday experience. There are only very limited tests that we know of that can genuinely prove that a system is behaving in a non-classical way. These are known as contextuality tests and Bell tests. The Bell tests in particular suffer from the effect of monogamy: the quantum correlations get diluted and overwhelmed by classical correlations (which are not monogamous), so that if one looks too coarsely at a system, it is impossible to see anything non-classical. If you would like more details, Dagomir wrote an article for the Scientific American Blog.
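For concreteness, the standard Bell test referred to here is the CHSH test, and its quantum violation is easy to reproduce numerically (a sketch under standard textbook settings, not taken from the original text): for a maximally entangled pair, the CHSH combination of correlators reaches 2√2 ≈ 2.83, beating the bound of 2 obeyed by any classical (local, non-monogamy-diluted) correlations.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

# Maximally entangled state (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

# Standard CHSH settings: Alice measures Z, X; Bob measures (Z +/- X)/sqrt(2)
A = [Z, X]
B = [(Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)]

def E(a, b):
    """Correlator <phi| a tensor b |phi>."""
    return phi @ np.kron(a, b) @ phi

S = E(A[0], B[0]) + E(A[0], B[1]) + E(A[1], B[0]) - E(A[1], B[1])
print(S > 2)  # True: the quantum value 2*sqrt(2) exceeds the classical bound
```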
There's also a sort of inverse to monogamy that quantum states exhibit, and it is really weird. Imagine two parties, Alice and Bob, share a quantum system which has no quantum correlations. Alice sends part of her system to Bob, but in such a way that the part she sends has no quantum correlations with anything else. When Bob receives the part that Alice sent him and combines it with his part of the quantum system, it turns out that, suddenly, Alice and Bob share some quantum correlations! If these correlations weren't there before, and they weren't sent from Alice to Bob, where did they come from? It turns out that this is actually an extremely common feature, although the amount of correlation that can be generated is rather small.
One of the bizarre things about quantum physics is that we cannot always predict the outcome of measurements. Sure, in the classical world there are features such as chaos which mean that we cannot perfectly predict an outcome, but that just comes down to a lack of knowledge of the initial conditions of the system. However, for quantum systems there are tests one can perform, known as Bell tests, which prove, subject to certain assumptions, that the outcomes are not predetermined (it is not merely that they are predetermined but we don't know the underlying rules).
An important aspect of these Bell tests is the random choice of which measurement to perform. But what if our choices are manipulated, i.e. we don't have perfect free will? In practice, of course, this means manipulation of a random number generator by a malicious party, rather than manipulation of choices that we actively make. If the amount of free will available is reduced, deterministic models can reproduce correlations that would otherwise be impossible, and so are no longer falsified by experiment. Just how much free will does one have to give up? What difference do correlations between subsequent choices make?
Viewed another way, these Bell tests constitute a falsification of assumptions about an underlying probability distribution. How can we systematically determine what assumptions can or can't be falsified? Is it ever possible to prove the converse? For example, is it ever possible to set up a Bell-type test whose answers give us a lower bound on the amount of free will we have (i.e. exclude the manipulation of an eavesdropper up to some minimal level)?
Can Bell-type tests be used to falsify other predicates?