Your smartphone knows where you are on the planet, all the time, often to within metres. How does it do this? With GPS.
Your smartphone knowing where you are on the planet is very useful if you are trying to find your way through an unfamiliar part of town - but if you pause for a moment to think about this, it is actually quite remarkable (and maybe a bit creepy?).
How does your phone do that?
The answer is, of course: via the global positioning system (GPS).
At any time there are between 24 and 32 working GPS satellites in orbit, at an altitude of roughly 20,000 kilometres, and if your phone can receive signals from at least four of them, it can work out where it is. That's because each satellite knows where it is relative to Earth, and broadcasts that position to your phone.
The time delay between a satellite sending its signal and your phone receiving it tells your phone how far away that satellite is: distance is the time delay times the speed of light. Knowing the distance to four satellites then allows your phone to pinpoint its position (strictly by trilateration, which works with distances, rather than triangulation, which works with angles); the fourth satellite is needed because your phone's own clock is not accurate enough, so its offset has to be worked out along the way. Simple!
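To make this concrete, here is a toy sketch of the calculation in Python, with made-up satellite positions and the phone's clock error ignored for simplicity. Subtracting the sphere equation of the first satellite from the others cancels the quadratic terms and leaves a small linear system for the position:

```python
# Toy trilateration: recover a receiver position from its distances to
# four satellites (all positions in km; clock error ignored for simplicity).

def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def solve3(A, b):
    """Solve the 3x3 system A x = b by Cramer's rule."""
    D = det3(A)
    x = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        x.append(det3(M) / D)
    return x

def trilaterate(sats, dists):
    """Find the point at the given distances from four known satellites.

    Subtracting the sphere equation of satellite 0 from satellites 1..3
    leaves a linear system in the position p:
        2 (s_i - s_0) . p = |s_i|^2 - |s_0|^2 + d_0^2 - d_i^2
    """
    s0, d0 = sats[0], dists[0]
    A, b = [], []
    for s, d in zip(sats[1:], dists[1:]):
        A.append([2 * (s[k] - s0[k]) for k in range(3)])
        b.append(sum(s[k] ** 2 - s0[k] ** 2 for k in range(3)) + d0 ** 2 - d ** 2)
    return solve3(A, b)

# Made-up satellite positions (km) and a known receiver position to test with.
sats = [(0.0, 0.0, 20000.0), (15000.0, 0.0, 20000.0),
        (0.0, 15000.0, 20000.0), (10000.0, 10000.0, 26000.0)]
receiver = (1200.0, -2300.0, 5600.0)
dists = [sum((s[k] - receiver[k]) ** 2 for k in range(3)) ** 0.5 for s in sats]

# Should recover the receiver position (up to floating-point rounding).
print([round(v, 3) for v in trilaterate(sats, dists)])
```

A real GPS receiver solves a very similar system, but with its own clock offset as a fourth unknown.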
However, let's think a bit more about the precision of GPS. Light travels about 30 cm in one nanosecond, so in order to get your location down to within a metre, the error in the time delay can only be a few nanoseconds.
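The arithmetic behind those numbers is short enough to write out:

```python
C = 299_792_458              # speed of light in m/s

metres_per_ns = C * 1e-9     # how far light travels in one nanosecond: ~0.30 m
error_for_3ns = 3e-9 * C     # position error from a 3 ns timing error: ~0.90 m

print(round(metres_per_ns, 3), round(error_for_3ns, 3))
```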
To get that precision, normal clocks are not sufficient and we need to use much more precise atomic clocks. These clocks monitor the transition between two hyperfine levels of the ground state of a caesium-133 atom, which absorbs and emits radiation with exactly 9,192,631,770 cycles per second. To calculate how the caesium atom jumps between these two states, and how we can read this out in a practical clock, we need quantum mechanics. So you know where you are on the planet every time you check your phone's map because of quantum mechanics!
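In effect, an atomic clock is a counter locked to that transition: count exactly 9,192,631,770 cycles of the radiation and one second has passed, which is how the SI second is defined. As a back-of-the-envelope sketch:

```python
F_CS = 9_192_631_770    # caesium-133 hyperfine transition frequency in Hz;
                        # this exact number defines the SI second

period = 1 / F_CS       # one cycle lasts about 1.1e-10 s
elapsed = 9_192_631_770 / F_CS   # counting that many cycles marks exactly 1 s

print(period, elapsed)
```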
Another way in which quantum mechanics impacts your life is via transistors. These are tiny devices, a few tens of nanometres across, typically made from silicon, gallium arsenide, or some other semiconducting material. Transistors are used as very fast current switches in microchips, and they can be made to perform logic operations.
Put enough of them on a chip and you can do very complicated computations, like playing Flappy Bird. A typical mobile phone chip has several billion transistors.
A semiconductor transistor is made up of two types of semiconducting material, called p-type and n-type. The type indicates whether the current in the semiconductor is carried by electrons (n-type) or holes (p-type).
A hole is a spot in the shell where an electron is supposed to sit but is missing; it behaves like a mobile positive charge. In a transistor we can either sandwich a layer of n-type between two layers of p-type, or the other way around, giving us pnp or npn transistors. Depending on the voltage that we apply to the middle layer, we can open or close the flow of current between the outer layers. What makes a semiconductor p-type or n-type? Again, we need quantum mechanics to fully understand this.
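A crude way to picture this switching behaviour is as a voltage-controlled switch: current flows between the outer layers only when the middle layer is held at a high enough voltage. In this toy model (the threshold value is made up for illustration), putting two such switches in series already gives AND-like behaviour:

```python
def npn_conducts(v_middle, threshold=0.7):
    """Toy npn transistor: conducts when the middle-layer voltage
    exceeds a threshold (the 0.7 V figure is illustrative only)."""
    return v_middle > threshold

def series_switches(v1, v2):
    """Two transistor switches in series: current flows only if both conduct."""
    return npn_conducts(v1) and npn_conducts(v2)

print(series_switches(5.0, 5.0))  # True: both switches closed, current flows
print(series_switches(5.0, 0.0))  # False: second switch open, no current
```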
While we are all happily tasting the fruits of quantum mechanics in our daily lives, quantum physicists are working out how to build the Next Big Thing. Even though the components of a computer operate on the principles of quantum mechanics, the actual logic carried out on it, the zeros and ones if you like, follows the standard rules formulated by George Boole in the nineteenth century.

The unit of information in a computer, the 'bit', is an abstract idea that represents a physical system that can be in two distinct states. For example, a light bulb can be on or off, a wire in a computer can carry a current or not, a capacitor can carry a charge or no charge. In every case, the system is in either one state or the other. I can apply logic to these states as follows: if I have three light bulbs, A, B, and C, and set up a circuit such that C is on whenever both A and B are on, then the state of light bulb C is the logical value of 'A and B'. Can you work out when C is on for 'A or B'?
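The light-bulb circuit can be tabulated directly; here is the 'A and B' case (the 'A or B' case is left to you, as above):

```python
# Truth table for the light-bulb circuit:
# C is on exactly when both A and B are on.
table = [(A, B, A and B) for A in (False, True) for B in (False, True)]
for A, B, C in table:
    print(f"A={A!s:5} B={B!s:5} -> C={C}")
```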
To see how quantum mechanics allows us to move beyond Boolean logic for computers, let's return to the caesium atom in the atomic clock. It has two states, the two hyperfine levels that give us the atomic transition. We can also call these two states zero and one, giving us a truly tiny bit. But it gives us more: the caesium atom can be in either of these two states, and also in any quantum superposition of the two. This is in fact how quantum mechanics allows us to calculate how the atom jumps between these states in an atomic clock.
Now we see that at the fundamental quantum mechanical level a bit is not just a system with two states (labelled zero and one), but also allows superpositions between zero and one. This gives us more room to play with the information stored in the system, and it turns out that computers built on these quantum principles are more powerful than ordinary computers.
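As a sketch of this idea: a classical bit is one of two values, while the quantum version is described by two complex amplitudes whose squared magnitudes give the probabilities of reading out zero or one. A minimal illustration (the state names and helper below are just for exposition):

```python
import math

# A quantum two-level state is a pair of complex amplitudes (a, b)
# with |a|^2 + |b|^2 = 1.
zero = (1 + 0j, 0 + 0j)                      # behaves like the classical bit 0
one = (0 + 0j, 1 + 0j)                       # behaves like the classical bit 1
plus = (1 / math.sqrt(2), 1 / math.sqrt(2))  # an equal superposition of both

def probabilities(state):
    """Probabilities of measuring 0 or 1: squared magnitudes of the amplitudes."""
    a, b = state
    return abs(a) ** 2, abs(b) ** 2

print(probabilities(zero))  # (1.0, 0.0): always reads zero
print(probabilities(plus))  # ~50/50: genuinely both until measured
```

A classical bit can only ever be `zero` or `one`; states like `plus` are the extra room that superposition provides.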
To distinguish the fundamental quantum systems in this new type of computer from the bits in regular computers, we call them 'quantum bits', or, more snappily, 'qubits'.
There are many ways in which we can construct qubits besides atomic levels: there are electron spin qubits, photon polarisation qubits, superconducting flux qubits, and so on. Currently, many research groups are trying to construct quantum computers based on these systems, and while it is too early to tell when the first full-scale quantum computer will become available, there are already working quantum computers with ten or twenty qubits. I wonder what new mass technology the qubit will eventually lead to.