
Quantum Control, Measurement, and Sensing


Creating quantum states on demand and controlling them is a critical component of developing practical quantum-based devices. Subsequent measurement of such states is also a challenge, because by definition, quantum superpositions collapse upon interaction, whether through intentional measurement or due to outside disturbances. Notably, the instability of a quantum state can also be used advantageously to create sensors. Quantum systems can be calibrated such that exposure to certain changing environmental conditions will force a switch from one quantum state to another. In some cases, a quantum phase of matter can abruptly change to a non-quantum phase of matter. Alterations to a quantum system can be monitored and detected, giving physicists information on the environment itself. JQI physicists are researching the many facets of quantum measurement and control, which has applications in areas such as precision spectroscopy and sensing.

Latest News

  • Interfering Waves

Scientists at the Joint Quantum Institute (JQI) have been steadily improving the performance of ion trap systems, a leading platform for future quantum computers. Now, a team of researchers led by JQI Fellows Norbert Linke and Christopher Monroe has performed a key experiment on five ion-based quantum bits, or qubits. They used laser pulses to simultaneously create quantum connections between different pairs of qubits—the first time these kinds of parallel operations have been executed in an ion trap. The new study, which is a critical step toward large-scale quantum computation, was published on July 24 in the journal Nature.  

“When it comes to the scaling requirements for a quantum computer, trapped ions check all of the boxes,” says Monroe, who is also the Bice Sechi-Zorn Professor in the UMD Department of Physics and co-founder of the quantum computing startup IonQ. “Getting these parallel operations to work further illustrates that advancing ion trap quantum processors is not limited by the physics of qubits and is instead tied to engineering their controllers.”

Ion traps are devices for capturing charged atoms and molecules, and they are commonly deployed for chemical analysis. In recent decades, physicists and engineers have combined ion traps with sophisticated laser systems to exert control over single atomic ions. Today, this type of hardware is one of the most promising for building a universal quantum computer.

The JQI ion trap used in this study is made from gold-coated electrodes, which carry the electric fields that confine ytterbium ions. The ions are caught in the middle of the trap where they form a line, each one separated from its neighbor by a few microns. This setup enables researchers to have fine control over individual ions and configure them as qubits.

Each ion has internal energy levels or quantum states that are naturally isolated from outside influences. This feature makes them ideal for storing and controlling quantum information, which is notoriously delicate. In this experiment, the research team uses two of these states, called “0” and “1”, as the qubit.

The researchers aim laser pulses at a string of qubits to execute programs on this small-scale quantum computer. The programs, also called circuits, are broken down into a set of single- and two-qubit gates. A single-qubit gate can, for instance, flip the state of an ion from 1 to 0. This is a straightforward task for a laser pulse. A two-qubit gate requires more sophisticated pulses because it involves tailoring the interactions between qubits. Certain two-qubit operations can create entanglement—a quantum connection necessary for quantum computation—between two qubits. 
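The gate language above can be made concrete with a toy simulation. The sketch below, written in Python with NumPy, is purely illustrative and has nothing to do with the actual laser-pulse physics: it shows a single-qubit flip as a small matrix, and a two-qubit circuit (a Hadamard followed by a CNOT, one standard way to entangle) turning the state |00⟩ into a Bell state, the textbook example of entanglement.

```python
import numpy as np

# Single-qubit gates as 2x2 matrices acting on the states |0> and |1>
X = np.array([[0, 1], [1, 0]])                # flips |0> <-> |1>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # creates an equal superposition
I = np.eye(2)

# A two-qubit entangling gate: CNOT flips qubit 1 only if qubit 0 is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# Start both qubits in |0>: the joint state |00>
state = np.kron([1, 0], [1, 0]).astype(float)

# A single-qubit gate is simple: flip qubit 0, giving the state |10>
flipped = np.kron(X, I) @ state

# An entangling circuit: Hadamard on qubit 0, then CNOT
bell = CNOT @ np.kron(H, I) @ state
# bell is (|00> + |11>)/sqrt(2): measuring one qubit fixes the other
```

In a trapped-ion processor these abstract matrices are realized by laser pulses, and the entangling step is typically a Mølmer-Sørensen-type interaction rather than a literal CNOT, but the circuit-level picture is the same.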

Until now, circuits in ion trap quantum computers have been limited to a sequence of individual gates, one after another. With this new demonstration, researchers can now do two-qubit gates in parallel, creating entanglement between different pairs of ions simultaneously. The research team achieved this by optimizing the laser pulse sequences used to perform operations, making sure to cancel out unwanted laser-qubit interactions. In this way, they were able to successfully implement simultaneous entangling gates on two separate ion pairs.

According to the authors, parallel entangling gates will enable programs to correct errors during a quantum computation—a near-certain requirement in quantum computers with many more qubits. In addition, a quantum computer that factors large numbers or simulates quantum physics will likely need parallel entangling operations to achieve a speed advantage over conventional computers. 

Story by E. Edwards

In addition to Monroe and Linke, Caroline Figgatt, a former JQI graduate student who is now a scientist at Honeywell, was the lead author of the research paper and provided background material for this news story. The paper was published simultaneously with similar work by Kihwan Kim, a former JQI postdoctoral researcher who is now a professor at Tsinghua University.

Animation on small-scale programmable quantum computing hardware 

Read More
  • High-resolution imaging technique maps out an atomic wave function
  • Overlapping laser beams offer a new way to extract a quantum system’s essential information.
  • May 17, 2019 PFC, Quantum Control Measurement and Sensing

From NIST News

JQI researchers have demonstrated a new way to obtain the essential details that describe an isolated quantum system, such as a gas of atoms, through direct observation. The new method gives information about the likelihood of finding atoms at specific locations in the system with unprecedented spatial resolution. With this technique, scientists can obtain details on a scale of tens of nanometers—smaller than the width of a virus.

The new experiments use an optical lattice—a web of laser light that suspends thousands of individual atoms—to determine the probability that an atom might be at any given location. Because each individual atom in the lattice behaves like all the others, a measurement on the entire group reveals the likelihood that an individual atom will be at a particular point in space.

Published in the journal Physical Review X, the technique (similar work was published simultaneously by a group at the University of Chicago) can yield the likelihood of the atoms’ locations at well below the wavelength of the light used to illuminate the atoms—50 times better than the limit of what optical microscopy can normally resolve. 

“It’s a demonstration of our ability to observe quantum mechanics,” says JQI Fellow and NIST physicist Trey Porto, one of the researchers behind the effort. “It hasn’t been done with atoms with anywhere near this precision.”

To understand a quantum system, physicists talk frequently about its “wave function.” It is not just an important detail; it’s the whole story. It contains all the information you need to describe the system.   

“It’s the description of the system,” says JQI Fellow and UMD physics professor Steve Rolston, another of the paper’s authors. “If you have the wave function information, you can calculate everything else about it—such as the object’s magnetism, its conductivity and its likelihood to emit or absorb light.”

While the wave function is a mathematical expression and not a physical object, the team’s method can reveal the behavior that the wave function describes: the probabilities that a quantum system will behave in one way versus another. In the world of quantum mechanics, probability is everything. 

Among the many strange principles of quantum mechanics is the idea that before we measure their positions, objects may not have a pinpointable location. The electrons surrounding the nucleus of an atom, for example, do not travel in regular planetlike orbits, contrary to the image some of us were taught in school. Instead, they act like rippling waves, so that an electron itself cannot be said to have a definite location. Rather, the electrons reside within fuzzy regions of space.

All objects can have this wavelike behavior, but for anything large enough for unaided eyes to see, the effect is imperceptible and the rules of classical physics are in force—we don’t notice buildings, buckets or breadcrumbs spreading out like waves. But isolate a tiny object such as an atom, and the situation is different because the atom exists in a size realm where the effects of quantum mechanics reign supreme. It’s not possible to say with certainty where it’s located, only that it will be found somewhere. The wave function provides the set of probabilities that the atom will be found in any given place. 

Quantum mechanics is well-enough understood—by physicists, anyway—that for a simple-enough system, experts can calculate the wave function from first principles without needing to observe it. Many interesting systems are complicated, though.

“There are quantum systems that can’t be calculated because they are too difficult,” Rolston says—such as molecules made of several large atoms. “This approach could help us understand those situations.”

As the wave function describes only a set of probabilities, how can physicists get a complete picture of its effects in short order? The team’s approach involves measuring a large number of identical quantum systems at the same time and combining the results into one overall picture. It’s sort of like rolling 100,000 pairs of dice at the same time—each roll gives a single result, and contributes a single point on the probability curve showing the values of all the dice. 

What the team observed were the positions of the roughly 100,000 atoms of ytterbium the optical lattice suspends in its lasers. The ytterbium atoms are isolated from their neighbors and restricted to moving back and forth along a one-dimensional line segment. To get a high-resolution picture, the team found a way to observe narrow slices of these line segments, and how often each atom showed up in its respective slice. After observing one region, the team measured another, until it had the whole picture.

Rolston says that while he hasn’t yet thought of a “killer app” that would take advantage of the technique, the mere fact that the team has directly imaged something central to quantum research fascinates him. 

“It’s not totally obvious where it will be used, but it’s a new technique that offers new opportunities,” he said. “We’ve been using an optical lattice to capture atoms for years, and now it’s become a new kind of measurement tool.” 

The original story was written by C. Boutin/NIST. Minor modifications were made for posting to this website. 

Read More

Electrons inside an atom whip around the nucleus like satellites around the Earth, occupying orbits determined by quantum physics. Light can boost an electron to a different, more energetic orbit, but that high doesn’t last forever. At some point the excited electron will relax back to its original orbit, causing the atom to spontaneously emit light that scientists call fluorescence.   

Scientists can play tricks with an atom’s surroundings to tweak the relaxation time for high-flying electrons, which then dictates the rate of fluorescence. In a new study, researchers at the Joint Quantum Institute observed that a tiny thread of glass, called an optical nanofiber, had a significant impact on how fast a rubidium atom releases light. The research, which appeared as an Editor’s Suggestion in Physical Review A, showed that the fluorescence depended on the shape of light used to excite the atoms when they were near the nanofiber.

“Atoms are kind of like antennas, absorbing light and emitting it back out into space, and anything sitting nearby can potentially affect this radiative process,” says Pablo Solano, the lead author on the study and a University of Maryland graduate student at the time this research was performed.  

To probe how the environment affects these atomic antennas, Solano and his collaborators surrounded a nanofiber with a cloud of rubidium atoms. Nanofibers are custom-made conduits that allow much of the light they carry to travel along the outside of the fiber, enhancing its interactions with atoms. The atoms closest to the nanofiber—within 200 nanometers—felt its presence the most. Some of the fluorescence from atoms in this region hit the fiber and bounced back to the atoms in an exchange that ultimately modified how long a rubidium atom’s electron stayed excited.

The researchers found that the electron lifetime and subsequent atomic emissions depended on the wave characteristics of the light. Light waves oscillate as they travel, sometimes slithering like a sidewinder snake and other times corkscrewing like a strand of DNA. The researchers saw that for certain light shapes the electron lingered in the excited state, and for others, it made a more abrupt exit.

“We were able to use the oscillation properties of light as a kind of knob to control how atomic fluorescence near the nanofiber turned on,” Solano says.  

The team originally set out to measure the effects the nanofiber had on atoms, and compare the results to theoretical predictions for this system. They found disagreements between their measurements and existing models that incorporate many of the complex details of rubidium’s internal structure. This new research paints a simpler picture of the atom-fiber interactions, and the team says more research is needed to understand the discrepancies. 

"We believe this work is an important step in the on-going quest for a better understanding of the interaction between light and atoms near a nanoscale light-guiding structure, such as the optical nanofiber we used here," says JQI Fellow and NIST scientist William Phillips, who is also one of the lead investigators on the study.   

Written by Emily Edwards


Solano is currently a postdoctoral researcher at the MIT-Harvard Center for Ultracold Atoms. In addition, the following researchers were authors on this study.

P. Barberis-Blostein, professor in the Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México and JQI Visiting Professor at the time of the study; J. A. Grover, former UMD graduate student in the Department of Physics and currently a physicist at Northrop Grumman; J. N. Munday, professor in the Department of Electrical and Computer Engineering and the Institute for Research in Electronics and Applied Physics, UMD; L. A. Orozco, JQI Fellow and professor in the Department of Physics, UMD; W. D. Phillips, JQI Fellow, NIST scientist, UMD College Park Professor and Distinguished University Professor; S. L. Rolston, JQI Fellow and professor in the Department of Physics, UMD; Y. Xu, former UMD graduate student in the Department of Electrical and Computer Engineering and the Institute for Research in Electronics and Applied Physics.

Read More

Transistors are tiny switches that form the bedrock of modern computing—billions of them route electrical signals around inside a smartphone, for instance.

Quantum computers will need analogous hardware to manipulate quantum information. But the design constraints for this new technology are stringent, and today’s most advanced processors can’t be repurposed as quantum devices. That’s because quantum information carriers, dubbed qubits, have to follow different rules laid out by quantum physics. 

Scientists can use many kinds of quantum particles as qubits, even the photons that make up light. Photons have added appeal because they can swiftly shuttle information over long distances and they are compatible with fabricated chips. However, making a quantum transistor triggered by light has been challenging because it requires that the photons interact with each other, something that doesn’t ordinarily happen on its own. 

Now, researchers at the Joint Quantum Institute (JQI), led by JQI Fellow Edo Waks, have cleared this hurdle and demonstrated the first single-photon transistor using a semiconductor chip. The device, described in the July 6 issue of Science, is compact: Roughly one million of these new transistors could fit inside a single grain of salt. It is also fast, able to process 10 billion photonic qubits every second.

“Using our transistor, we should be able to perform quantum gates between photons,” says Waks. “Software running on a quantum computer would use a series of such operations to attain exponential speedup for certain computational problems.”

The photonic chip is made from a semiconductor with numerous holes in it, making it appear much like a honeycomb. Light entering the chip bounces around and gets trapped by the hole pattern; a small crystal called a quantum dot sits inside the area where the light intensity is strongest. Analogous to conventional computer memory, the dot stores information about photons as they enter the device. The dot can effectively tap into that memory to mediate photon interactions—meaning that the actions of one photon affect others that later arrive at the chip.

“In a single-photon transistor the quantum dot memory must persist long enough to interact with each photonic qubit,” says Shuo Sun, the lead author of the new work who is a Postdoctoral Research Fellow at Stanford University*. “This allows a single photon to switch a bigger stream of photons, which is essential for our device to be considered a transistor.”

To test that the chip operated like a transistor, the researchers examined how the device responded to weak light pulses that usually contained only one photon. In a normal environment, such dim light might barely register. However, in this device, a single photon gets trapped for a long time, registering its presence in the nearby dot. 

The team observed that a single photon could, by interacting with the dot, control the transmission of a second light pulse through the device. The first light pulse acts like a key, opening the door for the second photon to enter the chip. If the first pulse didn’t contain any photons, the dot blocked subsequent photons from getting through. This behavior is similar to a conventional transistor, where a small voltage controls the passage of current through its terminals. Here, the researchers successfully replaced the voltage with a single photon and demonstrated that their quantum transistor could switch a light pulse containing around 30 photons before the quantum dot’s memory ran out.

Waks, who is also a professor in the University of Maryland Department of Electrical and Computer Engineering, says that his team had to test different aspects of the device’s performance prior to getting the transistor to work. “Until now, we had the individual components necessary to make a single photon transistor, but here we combined all of the steps into a single chip,” Waks says.

Sun says that with realistic engineering improvements their approach could allow many quantum light transistors to be linked together. The team hopes that such speedy, highly connected devices will eventually lead to compact quantum computers that process large numbers of photonic qubits. 

*Other contributors and affiliations

  • Edo Waks has affiliations with the University of Maryland Department of Electrical and Computer Engineering (ECE), Department of Physics, Joint Quantum Institute, and the Institute for Research in Electronics and Applied Physics (IREAP).
  • Shuo Sun was a UMD graduate student at the time of this research. He is now a postdoctoral research fellow at Stanford University.
  • JQI Fellow Glenn Solomon, a physicist at the National Institute of Standards and Technology, grew the sample used in this research.
  • Hyochul Kim was a postdoctoral researcher at UMD at the time of the research. He is now at Samsung Advanced Institute of Technology.
  • Zhouchen Luo is currently a UMD ECE graduate student.
Read More

Optical highways for light are at the heart of modern communications. But when it comes to guiding individual blips of light called photons, reliable transit is far less common. Now, a collaboration of researchers from the Joint Quantum Institute (JQI), led by JQI Fellows Mohammad Hafezi and Edo Waks, has created a photonic chip that both generates single photons, and steers them around. The device, described in the Feb. 9 issue of Science, features a way for the quantum light to seamlessly move, unaffected by certain obstacles.

"This design incorporates well-known ideas that protect the flow of current in certain electrical devices," says Hafezi. "Here, we create an analogous environment for photons, one that protects the integrity of quantum light, even in the presence of certain defects."

The chip starts with a photonic crystal, which is an established, versatile technology used to create roadways for light. They are made by punching holes through a sheet of semiconductor. For photons, the repeated hole pattern looks very much like a real crystal made from a grid of atoms. Researchers use different hole patterns to change the way that light bends and bounces through the crystal. For instance, they can modify the hole sizes and separations to make restricted lanes of travel that allow certain light colors to pass, while prohibiting others.

Sometimes, even in these carefully fabricated devices, there are flaws that alter the light’s intended route, causing it to detour into an unexpected direction. But rather than ridding their chips of every flaw, the JQI team mitigates this issue by rethinking the crystal’s hole shapes and crystal pattern. In the new chip, they etch out thousands of triangular holes in an array that resembles a bee’s honeycomb. Along the center of the device they shift the spacing of the holes, which opens a different kind of travel lane for the light. Previously, these researchers predicted that photons moving along that line of shifted holes should be impervious to certain defects because of the overall crystal structure, or topology. Whether the lane is a switchback road or a straight shot, the light’s path from origin to destination should be assured, regardless of the details of the road.

The light comes from small flecks of semiconductor—dubbed quantum emitters—embedded into the photonic crystal. Researchers can use lasers to prod this material into releasing single photons. Each emitter can gain energy by absorbing laser photons and lose energy by later spitting out those photons, one at time. Photons coming from the two most energetic states of a single emitter are different colors and rotate in opposite directions. For this experiment, the team uses photons from an emitter found near the chip’s center.

The team tested the capabilities of the chip by first changing a quantum emitter from its lowest energy state to one of its two higher energy states. Upon relaxing back down, the emitter pops out a photon into the nearby travel lane. They continued this process many times, using photons from the two higher energy states. They saw that photons emitted from the two states preferred to travel in opposite directions, which was evidence of the underlying crystal topology.

To confirm that the design could indeed offer protected lanes of traffic for single photons, the team created a 60 degree turn in the hole pattern. In typical photonic crystals, without built-in protective features, such a kink would likely cause some of the light to reflect backwards or scatter elsewhere. In this new chip, topology protected the photons and allowed them to continue on their way unhindered.

“On the internet, information moves around in packets of light containing many photons, and losing a few doesn’t hurt you too much,” says co-author Sabyasachi Barik, a graduate student at JQI. “In quantum information processing, we need to protect each individual photon and make sure it doesn't get lost along the way. Our work can alleviate some forms of loss, even when the device is not completely perfect.”

The design is flexible, and could allow researchers to systematically assemble pathways for single photons, says Waks. "Such a modular approach may lead to new types of optical devices and enable tailored interactions between quantum light emitters or other kinds of matter."

Written by E. Edwards

*Mohammad Hafezi is an Associate Professor in the University of Maryland (UMD) Departments of Electrical and Computer Engineering and Physics. Edo Waks is a Professor in the UMD Department of Electrical and Computer Engineering.

Read More

If you holler at someone across your yard, the sound travels on the bustling movement of air molecules. But over long distances your voice needs help to reach its destination—help provided by a telephone or the Internet. Atoms don’t yell, but they can share information through light. And they also need help connecting over long distances.

Now, researchers at the Joint Quantum Institute (JQI) have shown that nanofibers can provide a link between far-flung atoms, serving as a light bridge between them. Their research, which was conducted in collaboration with the Army Research Lab and the National Autonomous University of Mexico, was published last week in Nature Communications. The new technique could eventually provide secure communication channels between distant atoms, molecules or even quantum dots.

An excited atom—that is, one with some extra energy—emits light when it loses energy. Usually atoms spit this light out in random directions and at different times. But this random process can be tamed if excited atoms are bunched up close together. In that case, atoms can sync up their light emissions, like the rhythmic clapping of an appreciative audience. However, this synchronization effect, which is caused by light of different atoms joining together, doesn’t reach very far because the strength of light weakens drastically over short distances. While your neighbor might hear you yelling over several meters, atoms need to be really close to interact with each other—typically closer than one micron, which is a hundred times smaller than the width of a human hair.

Now, physicists have extended the range over which atoms can synchronize their light emission by using an optical nanofiber. In an experiment, the researchers immerse a nanofiber in a cloud of cold rubidium atoms and excite the atoms with a laser beam. As atoms in the cloud move around, they sometimes get very close to the fiber. If an atom emits light near the fiber, the glass thread can capture the light and pipe it to another atom, even if the atoms are far apart.

The team observed a group of atoms emitting light pulses at different rates than their ordinary, unsynchronized selves—one signature of these far-reaching interactions. The effect persisted even when physicists cleaved the atomic cloud in two so that atoms in separate clouds could only connect through the fiber, and not through other atoms in the cloud.

The atoms in this experiment are separated by a distance comparable to the thickness of a few sheets of paper, but the authors say that longer distances—meters or even kilometers—should be doable. “We have shown that optical nanofibers are excellent for connecting atoms that are quite far apart—if the atoms were the size of people, it would be a distance of more than 300 kilometers,” says Pablo Solano, the lead author of the paper and a former JQI graduate student.* “The question now is not whether the atoms interact, but how far can we push their optical-fiber-mediated connections.” On the scale of atoms, even a few meters is an enormous distance. But the authors say that a combination of optical nanofibers and regular fiber optics—technologies already deployed for long-distance phone calls, cable TV and the Internet—could extend the range of these atomic connections even farther.

* Solano is now a postdoctoral researcher at the MIT-Harvard Center for Ultracold Atoms.

Written by N. Beier

Read More

In Schrödinger's famous thought experiment, a cat seems to be both dead and alive—an idea that strains credulity. These days, cats still don't act this way, but physicists now regularly create analogues of Schrödinger's cat in the lab by smearing the microscopic quantum world over longer and longer distances.

Such "cat states" have found many homes, promising more sensitive quantum measurements and acting as the basis for quantum error-correcting codes—a necessary component for future error-prone quantum computers.

With these goals in mind, some researchers are eager to create better cat states with single ions. But, so far, standard techniques have imposed limits on how far their quantum nature could spread.

Recently, researchers at the Joint Quantum Institute developed a new scheme for creating single-ion cat states, detailing the results this week in Nature Communications. Their experiment places a single ytterbium ion into a superposition—a quantum combination—of two different states. Initially, these states move together in their common environment, sharing the same motion. But a series of carefully timed and ultrafast laser pulses apply different forces to the two ion states, pushing them in opposite directions. The original superposition persists, but the states end up oscillating out of phase with each other. 

Using this technique, the JQI team managed to separate the states by a distance of almost 300 nanometers, roughly twelve times further than previously possible. There's still just one ion, but its quantum nature now extends over a distance more than a thousand times larger than its original size. Such long-range superpositions are highly sensitive, and could enable precise atom interferometry measurements or robust quantum cryptographic techniques.

Read More

Optical fibers are ubiquitous, carrying light wherever it is needed. These glass tunnels are the high-speed railway of information transit, moving data at incredible speeds over tremendous distances. Fibers are also thin and flexible, so they can be immersed in many different environments, including the human body, where they are employed for illumination and imaging.

Physicists use fibers, too, particularly those who study atomic physics and quantum information science. Aside from shuttling laser light around, fibers can be used to create light traps for super-chilled atoms. Captured atoms can interact more strongly with light, much more so than if they were moving freely. This rather artificial environment can be used to explore fundamental physics questions, such as how a single particle of light interacts with a single atom. But it may also assist with developing future hybrid atom-optical technologies.

Now, researchers from the Joint Quantum Institute and the Army Research Laboratory have developed a fast-acting, non-invasive way to use fiber light to reveal information about fiber traps. This technique is reminiscent of biomedical and chemical sensors that use fibers to detect properties of nearby molecules. Fiber sensors are an attractive measurement tool because they can often extract information without totally disrupting interesting phenomena that may be going on. The research appeared as an Editor’s Pick in the journal Optics Letters. The team also published a review article about optical nanoscale fibers in the most recent volume of Advances in Atomic, Molecular, and Optical Physics.

Typical optical fibers, like the ones used in communications and medicine, have only a tiny amount of light near the outside surface, and that is not enough to capture atoms from a surrounding gas. Physicists can push more light to the outside by reshaping the fiber to look like a tiny hourglass instead of a tunnel. The waist of the hourglass is hundreds of nanometers across—hundreds of times thinner than a human hair—and too small to contain light waves that are propagating along the inside of the fiber. But instead of just stopping at the constriction, the light squeezes to the outside surface. When physicists inject light into both ends of such a fiber, the light waves combine to form a stationary ripple around the constriction. Atoms will be attracted to dips in the wave and line up like a row of eggs in a carton.

This trapping is an example of how light affects atoms, drawing them in. But the light-atom relationship is reciprocal: The presence of atoms can alter the light, too. Light waves, sent into one end of a nanoscale fiber, will pick up information about the atoms in the vicinity of the fiber and then convey it to a detector at the opposite end of the fiber.

Each trapped atom acts like a marble in a glass bowl. When pushed, a marble will roll up the side of the bowl, back down, and then up the other side. The speed of this cycle is related to the bowl’s curvature: Steeper walls cause faster cycles. Now imagine shining a flashlight through one side of the bowl. As it rolls back and forth, the marble will keep passing through the flashlight beam, and the beam signal will blink on and off at the rate at which the marble moves in the bowl. In other words, information about the marble's motion, and therefore the bowl’s shape, is encoded onto the flashlight beam.

In this research, the team uses laser light as the probe, analogous to the flashlight. A mere 70 nanowatts of power is injected into the fiber, gently kicking the atoms into motion. Like the marble's wobbles, the atoms rock back and forth in their bowl-shaped traps. Instead of causing the probe light to blink on and off, the atom motion affects the direction in which the light waves oscillate. The speed of the atoms' rocking, which is directly related to the shape of the trap, is imprinted on the light as faster or slower changes.
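
The readout idea can be illustrated with a toy numerical model: an atom oscillating at an assumed trap frequency modulates the probe, and a Fourier transform of the detector record recovers that frequency. The real experiment reads out changes in the light's oscillation direction rather than simple intensity blinking, and the numbers here are invented for illustration:

```python
import numpy as np

# Toy readout of a trap frequency from a modulated probe signal.
f_trap = 150e3                      # assumed 150 kHz trap frequency
dt = 1e-7                           # 10 MHz sampling
t = np.arange(0, 1e-3, dt)          # a 1 ms detector record

position = np.cos(2 * np.pi * f_trap * t)   # atom rocking in its trap
signal = 1.0 + 0.05 * position              # weakly modulated probe

# Fourier-transform the record; the strongest component sits at the
# trap frequency, revealing the shape of the trap.
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(t), d=dt)
print(f"recovered trap frequency: {freqs[spectrum.argmax()] / 1e3:.0f} kHz")
```

A millisecond-long record like this one is consistent with the fraction-of-a-millisecond measurement time the team reports.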

When the light waves complete their journey and exit the fiber, the team catches them with a detector to continuously monitor the atom-light oscillations. The process is fast, taking only a fraction of a millisecond, and it can be seamlessly integrated into an experimental sequence.

When it comes to measuring these atom trap properties, physicists want to avoid disturbances. This can be difficult because one of the most effective ways to probe atoms involves blasting them with light, which can heat the atoms and even release them from their traps. This conventional method is acceptable because scientists can simply recool and recapture the atoms. In contrast, the JQI-ARL technique uses very little light and works in situ, meaning it collects information while minimizing disruptions. This appealing alternative promises to streamline atom-fiber experiments.

by E. Edwards/JQI

Read More
Quantum Thermometer or Optical Refrigerator?
Versatile optomechanical beams have potential applications in biology, chemistry, and electronics.
June 23, 2017

From NIST News

In an arranged marriage of optics and mechanics, JQI-NIST physicists have created microscopic structural beams that have a variety of powerful uses when light strikes them. Able to operate in ordinary, room-temperature environments, yet exploiting some of the deepest principles of quantum physics, these optomechanical systems can act as inherently accurate thermometers, or conversely, as a type of optical shield that diverts heat. 

Described in a pair of new papers in Science and Physical Review Letters, the potential applications include chip-based temperature sensors for electronics and biology that would never need to be adjusted since they rely on fundamental constants of nature; tiny refrigerators that can cool state-of-the-art microscope components for higher-quality images; and improved “metamaterials” that could allow researchers to manipulate light and sound in new ways.

Made of silicon nitride, a widely used material in the electronics and photonics industries, the beams are about 20 microns (20 millionths of a meter) in length. They are transparent, with a row of holes drilled through them to enhance their optical and mechanical properties. “You can send light down this beam because it’s a transparent material. You can also send sound waves down the beam,” explained Tom Purdy, a NIST physicist who is an author on both papers. The researchers believe the beams could lead to better thermometers, which are now ubiquitous in our devices, including cell phones.

“Essentially we're carrying a bunch of thermometers around with us all the time,” said JQI Fellow Jake Taylor, senior author of the new papers. “Some provide temperature readings, and others let you know if your chip is too hot or your battery is too cold. Thermometers also play a crucial role in transportation systems—airplanes, cars—and tell you if your engine oil is overheating.”

But the problem is that these thermometers are not accurate off the shelf. They need to be calibrated, or adjusted, to some standard. The design of the silicon nitride beam avoids this situation by relying on fundamental physics. To use the beam as a thermometer, researchers must be able to measure the tiniest possible vibrations in the beam. The amount that the beam vibrates is proportional to the temperature of its surroundings.

The vibrations can come from two kinds of sources. The first are ordinary “thermal” sources such as gas molecules buffeting the beam or sound waves passing through it. The second source of vibration comes purely from the world of quantum mechanics, the theory that governs behavior of matter at the atomic scale. The quantum behavior occurs when the researchers send particles of light, or photons, down the beam.

Struck by light, the mechanical beam reflects the photons, and recoils in the process, creating small vibrations in the beam. Sometimes these quantum-based effects are described using the Heisenberg uncertainty relationship—the photon bounce leads to information about the beam’s position, but because it imparts vibrations to the beam, it adds uncertainty to the beam’s velocity.

“The quantum mechanical fluctuations give us a reference point because essentially, you can't make the system move less than that,” Taylor said. By plugging in values of Boltzmann’s constant and Planck’s constant, the researchers can calculate the temperature. And given that reference point, when the researchers measure more motion in the beam, such as from thermal sources, they can accurately extrapolate the temperature of the environment. However, the quantum fluctuations are a million times fainter than the thermal vibrations; detecting them is like hearing a pin drop in the middle of a shower.
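
A back-of-the-envelope comparison, assuming a representative megahertz-scale mechanical mode, shows why the quantum reference point is roughly a million times fainter than thermal motion at room temperature:

```python
import math

# Compare thermal and zero-point motion of a nanomechanical mode.
# The 6 MHz mode frequency is an assumed, representative value.
kB = 1.380649e-23        # Boltzmann's constant, J/K
hbar = 1.054571817e-34   # reduced Planck's constant, J*s

T = 300.0                # room temperature, K
f = 6e6                  # assumed mechanical mode frequency, Hz
omega = 2 * math.pi * f

# Mean thermal phonon number in the high-temperature limit (kT >> hbar*omega),
# versus the irreducible zero-point level of half a phonon.
n_thermal = kB * T / (hbar * omega)
print(f"thermal phonons: {n_thermal:.2e}, zero-point level: 0.5")
print(f"ratio: {n_thermal / 0.5:.1e}")
```

For these assumed numbers the thermal occupation outweighs the zero-point level by about a factor of a million, matching the pin-drop-in-a-shower comparison.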

In their experiments, the researchers used a state-of-the-art silicon nitride beam built by Karen Grutter and Kartik Srinivasan at NIST’s Center for Nanoscale Science and Technology. By shining high-quality photons at the beam and analyzing photons emitted from the beam shortly thereafter, “we see a little bit of the quantum vibrational motion picked up in the output of light,” Purdy explained. Their measurement approach is sensitive enough to see these quantum effects all the way up to room temperature for the first time, and is published in this week’s issue of Science.

Although the experimental thermometers are in a proof-of-concept phase, the researchers envision they could be particularly valuable in electronic devices, as on-chip thermometers that never need calibration, and in biology. “Biological processes, in general, are very sensitive to temperature, as anyone who has a sick child knows. The difference between 37 and 39 degrees Celsius is pretty large,” Taylor said. He foresees applications in biotechnology, where researchers want to measure temperature changes in “as small an amount of product as possible,” he said.

The researchers go in the opposite direction in a second proposed application for the beams, described in a theoretical paper published in Physical Review Letters. Instead of letting heat hit the beam and allowing it to serve as a temperature probe, the researchers propose using the beam to divert heat away from, for example, a sensitive part of an electromechanical device. In the proposed setup, the researchers enclose the beam in a cavity, a pair of mirrors that bounce light back and forth. They use light to control the vibrations of the beam so that the beam cannot re-radiate incoming heat in its usual direction, toward a colder object.

For this application, Taylor likens the behavior of the beam to a tuning fork. When you hold a tuning fork and strike it, it radiates pure sound tones instead of allowing that motion to turn into heat, which would travel down the fork and into your hand. “A tuning fork rings for a long time, even in air,” he said. The two prongs of the fork vibrate in opposite directions, he explained, and cancel out the pathway by which energy would leave through the bottom of the fork and into your hand.

The researchers even imagine using an optically controlled silicon nitride beam as the tip of an atomic force microscope (AFM), which detects forces on surfaces to build up atom-scale images. An optically controlled AFM tip would stay cool—and perform better. “You’re removing thermal motion, which makes it easier to see signals,” Taylor explained.

This technique also could be put to use to make better metamaterials, complex composite objects that manipulate light or sound in new ways and could be used to make better lenses or even so-called “invisibility cloaks” that cause certain wavelengths of light to pass through an object rather than bouncing from it.

“Metamaterials are our answer to the question, ‘How do we make materials that capture the best properties for light and sound, or for heat and motion?’” Taylor said. “It's a technique that has been widely used in engineering, but combining the light and sound together remains still a bit open on how far we can go with it, and this provides a new tool for exploring that space.”

written by Ben Stein

Read More

Optical fibers are the backbone of modern communications, shuttling information from A to B through thin glass filaments as pulses of light. They are used extensively in telecommunications, allowing information to travel at near the speed of light virtually without loss.

These days, biologists, physicists and other scientists regularly use optical fibers to pipe light around inside their labs. In one recent application, quantum research labs have been reshaping optical fibers, stretching them into tiny tapers (see JQI News on Nanofibers and designer light traps). For these nanometer-scale tapers, or nanofibers, the injected light still makes its way from A to B, but some of it is forced to travel outside the fiber’s exterior surface. The exterior light, or evanescent field, can capture atoms and then carry information about that light-matter interaction to a detector (See JQI News on Thermometry using optical nanofibers).  

Fine-tuning such evanescent light fields is tricky and requires tools for characterizing both the fiber and the light. To this end, researchers from JQI, the Army Research Laboratory (ARL), and the Naval Research Laboratory (NRL) have developed a novel method to measure how light propagates through a nanofiber, allowing them to determine the nanofiber’s thickness to a precision finer than the width of an atom. The technique, described in the January 20, 2017 issue of the journal Optica, is direct and fast, and, unlike the standard imaging method, it preserves the integrity of the fiber. As a result, the probe can be used in situ with the nanofiber fabrication equipment, which will streamline implementation in quantum optics and quantum information experiments. Developing reliable and precise tools for this platform may enable nanofiber technology for sensing and metrology applications.

Light waves have a characteristic size called the wavelength. For visible light, the wavelength is roughly 100 times smaller than a human hair. Light can also take on different shapes, such as a solid circle, a ring, a clover, and more (see image below). Fibers restrict the ways light waves can travel, and twisting or bending a fiber will alter the light’s characteristics. Nanofibers are made by reshaping a normal fiber into an hourglass-like design, which further affects the guided light waves.

Examples of light shapes. Each panel shows a 3D (top) and 2D (bottom) intensity profile. The red (blue) areas indicate more (less) light intensity. The effect of the fiber appears in the 3D images as a sharp cutout; in 2D the fiber interface looks like a ring-shaped edge. (Images are calculations courtesy of P. Solano and L. Orozco)

In this experiment, researchers inject a combination of light shapes into a nanofiber. The light passes down a thinning taper, squeezes through a narrow waist, and then exits out the other side of the taper. The changing fiber size distorts the light waves, and multiple patterns emerge from the interfering light shapes (See JQI News on Collecting lost light). This is analogous to musical notes, or sound waves, beating together to form a complex chord.

The researchers make direct measurements of the interference patterns (beats). To do this, they employ a second micron-sized fiber that acts as a non-invasive sensor. The nanofiber is on a moving stage and crosses the probe fiber at an oblique angle. At the touching point, a tiny fraction of nanofiber light evanescently enters the second fiber and travels to a detector. As they scan the probe along the length of the nanofiber, the probe detector collects information about the evolving patterns of nanofiber light. The researchers simultaneously monitor the light transmitting through the nanofiber to ensure that the probe process is harmless.
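
A toy model, with invented propagation constants, illustrates the beating the probe picks up: two guided modes interfering along the fiber produce a pattern whose beat period is set by the difference of their propagation constants:

```python
import numpy as np

# Toy model of two-mode beating along a nanofiber. The propagation
# constants are assumed, illustrative values, not measured ones.
beta1 = 7.40e6   # rad/m
beta2 = 7.30e6   # rad/m

z = np.linspace(0, 500e-6, 5001)   # scan 500 microns along the fiber

# Two equal-amplitude modes interfere: I(z) ~ 1 + cos((beta1 - beta2) z),
# a slow spatial "beat" on top of the guided light.
intensity = 1 + np.cos((beta1 - beta2) * z)

beat_period_expected = 2 * np.pi / (beta1 - beta2)

# Recover the beat period from successive maxima of the sampled pattern,
# as the scanning probe fiber effectively does.
interior = (intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])
maxima = z[1:-1][interior]
print(f"beat period: {np.diff(maxima).mean() * 1e6:.1f} um "
      f"(expected {beat_period_expected * 1e6:.1f} um)")
```

Because the beat period depends sensitively on the propagation constants, which in turn depend on the fiber radius, fitting patterns like this one is what pins down the waist so precisely.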

The team can achieve a high level of precision with this technique because they are not imaging the fiber with a camera, which would have a spatial resolution limited by the collected light’s wavelength. UMD graduate student Pablo Solano explains, “We are actually seeing the different light modes mix together and that sets the limits on determining the fiber waist—in this case sub-angstrom.” A standard tool known as scanning electron microscopy (SEM) can also measure fiber dimensions with nanoscale resolution. This, however, has a comparative disadvantage, says Eliot Fenton, a UMD undergraduate student working on the project, “With our new method, we can avoid using SEM, which destroys the fiber with imaging chemicals and heating.” Other techniques involve collecting randomly scattered light from the fiber, which is less direct and susceptible to errors. Solano summarizes how researchers can benefit from this new tool, “By directly and sensitively measuring the interference (beating) of light without destroying the fiber, we can know exactly the kind of electromagnetic field that we would apply to atoms.”

Read More

When is a traffic jam not a traffic jam? When it's a quantum traffic jam, of course. Only in quantum physics can traffic be standing still and moving at the same time.

A new theoretical paper from scientists at the National Institute of Standards and Technology (NIST) and the University of Maryland suggests that intentionally creating just such a traffic jam out of a ring of several thousand ultracold atoms could enable precise measurements of motion. If implemented with the right experimental setup, the atoms could provide a measurement of gravity, possibly even at distances as short as 10 micrometers—about a tenth of a human hair's width.

While the authors stress that a great deal of work remains to show that such a measurement would be attainable, the potential payoff would be a clarification of gravity's pull at very short length scales. Anomalies could provide major clues on gravity’s behavior, including why our universe appears to be expanding at an accelerating rate.

In addition to potentially answering deep fundamental questions, these atom rings may have practical applications, too. They could lead to motion sensors far more precise than previously possible, or serve as switches for quantum computers, with 0 represented by atomic gridlock and 1 by moving atom traffic.

The authors of the paper are affiliated with the Joint Quantum Institute and the Joint Center for Quantum Information and Computer Science, both of which are partnerships between NIST and the University of Maryland.

Over the past two decades, physicists have explored an exotic state of matter called a Bose-Einstein condensate (BEC), which exists when atoms overlap one another at frigid temperatures a smidgen of a degree away from absolute zero. Under these conditions, a tiny cloud of atoms can essentially become one large quantum “superatom,” allowing scientists to explore potentially useful properties like superconductivity and superfluidity more easily.

Theoretical physicists Stephen Ragole and Jake Taylor, the paper’s authors, have now suggested that a variation on the BEC idea could be used to sense rotation or even explore gravity over short distances, where other forces such as electromagnetism generally overwhelm gravity's effects. The idea is to use laser beams—already commonly used to manipulate cold atoms—to string together a few thousand atoms into a ring 10 to 20 micrometers in diameter.

Once the ring is formed, the lasers would gently stir it into motion, making the atoms circulate around it like cars traveling one after another down a single-lane beltway. And just as car tires spin as they travel along the pavement, the atoms' properties would pick up the influence of the world around them—including the effects of gravity from masses just a few micrometers away.

The ring would take advantage of one of quantum mechanics' counterintuitive behaviors to help scientists actually measure what its atoms pick up about gravity. The lasers could stir the atoms into what is called a "superposition," meaning, in effect, they would be both circulating about the ring and simultaneously at a standstill. This superposition of flow and gridlock would help maintain the relationships among the ring's atoms for a few crucial milliseconds after removing their laser constraints, enough time to measure their properties before they scatter.

Not only might this quantum traffic jam overcome a difficult gravity measurement challenge, but it might help physicists discard some of the many competing theories about the universe—potentially helping clear up a longstanding traffic jam of ideas.

One of the great mysteries of the cosmos, for example, is why it is expanding at an apparently accelerating rate. Physicists have suggested an outward force, dubbed “dark energy,” causes this expansion, but they have yet to discover its origin. One among many theories is that in the vacuum of space, short-lived virtual particles constantly appear and wink out of existence, and their mutual repulsion creates dark energy's effects. While it's a reasonable enough explanation on some levels, physicists calculate that these particles would create so much repulsive force that it would immediately blow the universe apart. So how can they reconcile observations with the virtual particle idea?

"One possibility is that the basic fabric of space-time only responds to virtual particles that are more than a few micrometers apart," Taylor said, "and that's just the sort of separation we could explore with this ring of cold atoms. So if it turns out you can ignore the effect of particles that operate over these short length scales, you can account for a lot of this unobserved repulsive energy. It would be there, it just wouldn't be affecting anything on a cosmic scale."

The research appears in the journal Physical Review Letters.

This story, originally published as a news item by NIST, was written by Chad T. Boutin.

Read More

From credit card numbers to bank account information, we transmit sensitive digital information over the internet every day. Since the 1990s, though, researchers have known that quantum computers threaten to disrupt the security of these transactions.

That’s because quantum physics predicts that these computers could do some calculations far faster than their conventional counterparts. This would let a quantum computer crack a common internet security system called public key cryptography.

This system lets two computers establish private connections hidden from potential hackers. In public key cryptography, every device hands out copies of its own public key, which is a piece of digital information.  Any other device can use that public key to scramble a message and send it back to the first device. The first device is the only one that has another piece of information, its private key, which it uses to decrypt the message. Two computers can use this method to create a secure channel and send information back and forth.
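
The idea can be seen in a toy textbook-RSA example with deliberately tiny primes (wholly insecure, for illustration only): anyone can encrypt with the public pair (n, e), while only the holder of the private exponent d can decrypt:

```python
# Toy textbook RSA with tiny numbers, purely to illustrate the
# public/private key split. Real systems use enormous primes.
p, q = 61, 53
n = p * q                   # part of the public key
phi = (p - 1) * (q - 1)
e = 17                      # public exponent, coprime with phi
d = pow(e, -1, phi)         # private key: modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
print(message, ciphertext, recovered)
```

Recovering d from the public key requires factoring n, which is infeasible for large numbers on conventional computers but would be fast on a quantum computer running Shor's algorithm.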

A quantum computer could quickly calculate another device’s private key and read its messages, putting every future communication at risk. But many scientists are studying how quantum physics can fight back and help create safer communication lines.

One promising method is quantum key distribution, which allows two parties to directly establish a secure channel with a single secret key. One way to generate the key is to use pairs of entangled photons—particles of light with a shared quantum connection. The entanglement guarantees that no one else can know the key, and if someone tries to eavesdrop, both parties will be tipped off.

Tobias Huber, a recently arrived JQI Experimental Postdoctoral Fellow, has been investigating how to reliably generate the entangled photons necessary for this secure communication. Huber is a graduate of the University of Innsbruck in Austria, where he was supervised by Gregor Weihs. They have frequently collaborated with JQI Fellow Glenn Solomon, who spent a semester at Innsbruck as a Fulbright Scholar. Over the past couple of years, they have been studying a particular source of entangled photons, called quantum dots.

A quantum dot is a tiny area in a semiconductor, just nanometers wide, that is embedded in another semiconductor. This small region behaves like an artificial atom. Just like in an atom, electrons in a quantum dot occupy certain discrete energy levels. If the quantum dot absorbs a photon of the right color, an electron can jump to a higher energy level. When it does, it leaves behind an open slot at the lower energy, which physicists call a hole. Eventually, the electron will decay to its original energy, emitting a photon and filling in the hole. The intermediate combination of the excited electron and the hole is called an exciton, and two excited electrons and two holes are called a biexciton. A biexciton will decay in a cascade, emitting a pair of photons.

Huber, Weihs, Solomon and several colleagues have developed a way to directly excite biexcitons in quantum dots using a sequence of laser pulses. The pulses make it possible to encode information in the pair of emitted photons, creating a connection between them known as time-bin entanglement. It’s the best type of entanglement for transmitting quantum information through optical fibers because it doesn’t degrade as easily as other types over long distances. Huber and his colleagues are the first to directly produce time-bin entangled photons from quantum dots.

In their latest work, published in Optics Express, they investigated how the presence of material imperfections surrounding the quantum dots influences this entanglement generation. Imperfections have their own electron energy levels and can steal an electron from a dot or donate an electron to fill a hole. Either way, the impurity prevents an exciton from decaying and emitting a photon, decreasing the number of photons that are ultimately released. To combat this loss, the team used a second laser to fill up the electron levels of the impurities and showed that this increased the number of photons released without compromising the entanglement between them.

The team says the new work is a step in the right direction to make quantum dots a viable source of entangled photons. Parametric down-conversion, a competitor that uses crystals to split the energy of one photon into two, occasionally produces two pairs of entangled photons instead of one. This could allow an eavesdropper to read an encrypted message without being detected. The absence of this drawback makes quantum dots an excellent candidate for producing entangled photons for quantum key distribution.

The advent of quantum computing brings new security challenges, but tools like quantum key distribution are taking those challenges head-on. It’s possible that, one day, we could have not only quantum computers, but quantum-secure communication lines, free from prying eyes.

Read More

For the first time, a team including scientists from the National Institute of Standards and Technology (NIST) and JQI has used neutron beams to create holograms of large solid objects, revealing their interior details in ways that ordinary holograms do not.

Holograms—flat images that look like three-dimensional objects—owe their striking look to interfering waves. Both matter and light behave like waves at the smallest scales, and just like water waves traveling on the surface of the pond, waves of matter or light can combine to create information-rich interference patterns.

Illuminating an object with a laser can create an optical hologram. But instead of merely photographing the light reflected from the object, a hologram records how the reflected light waves interfere with each other. The resulting patterns, based on the waves’ phase differences, or relative positions of their peaks and valleys, contain far more information about an object’s appearance than a simple photo. Generally, though, they don’t reveal much about its interior.

It’s the interior that neutron scientists explore. Neutrons are great at penetrating metals and many other solid materials, making neutron beams useful for scientists who create a new substance and want to investigate its properties. But neutrons have limitations, too. Neutron beams typically probe average properties—fine for objects with repeating structures like a crystal, but not as good for spotting fine-grained details.

But what if we could have the best of both worlds? New research has found a way.

Previous work performed at the NIST Center for Neutron Research (NCNR) involved shooting neutrons through a cylinder of aluminum that had a tiny “spiral staircase” carved into one of its circular faces. The cylinder’s shape imparted a twist to the whole passing beam, but the beam’s individual neutrons also collected individual twists depending on the section of the cylinder that they passed through: the thicker the section, the greater the twist. Researchers realized this was the information needed to create holograms of objects’ innards, and the new paper details their method.

The discovery won’t change anything about interstellar chess games, but it adds to the palette of techniques scientists have to explore solid materials. The team has shown that all it takes is a beam of neutrons and an interferometer—a detector that measures interference patterns—to create direct visual representations of an object and reveal details about specific points within it.

"Other techniques measure small features as well, only they are limited to measuring surface properties," says team member Michael Huber of NIST’s Physical Measurement Laboratory. "This might be a more prudent technique for measuring small, 10-micron size structures and buried interfaces inside the bulk of the material."

The research was a multi-institutional collaboration that included scientists from NIST and JQI, as well as North Carolina State University and Canada’s University of Waterloo.

Story by Chad Boutin. The original story, along with several animations, was posted at NIST's news site.

Read More

Scientists have created a crystal structure that boosts the interaction between tiny bursts of light and individual electrons, an advance that could be a significant step toward establishing quantum networks in the future.

Today’s networks use electronic circuits to store information and optical fibers to carry it, and quantum networks may benefit from a similar framework. Such networks would transmit qubits – quantum versions of ordinary bits – from place to place and would offer unbreakable security for the transmitted information. But researchers must first develop ways for qubits that are better at storing information to interact with individual packets of light called photons that are better at transporting it, a task achieved in conventional networks by electro-optic modulators that use electronic signals to modulate properties of light.

Now, researchers in the group of Edo Waks, a fellow at JQI and an Associate Professor in the Department of Electrical and Computer Engineering at the University of Maryland, have struck upon an interface between photons and single electrons that makes progress toward such a device. By pinning a photon and an electron together in a small space, the electron can quickly change the quantum properties of the photon and vice versa. The research was reported online Feb. 8 in the journal Nature Nanotechnology.

“Our platform has two major advantages over previous work,” says Shuo Sun, a graduate student at JQI and the first author of the paper. “The first is that the electronic qubit is integrated on a chip, which makes the approach very scalable. The second is that the interactions between light and matter are fast. They happen in only a trillionth of a second – 1,000 times faster than previous studies.”


The new interface utilizes a well-studied structure known as a photonic crystal to guide and trap light. These crystals are built from microscopic assemblies of thin semiconductor layers and a grid of carefully drilled holes. By choosing the size and location of the holes, researchers can control the properties of the light traveling through the crystal, even creating a small cavity where photons can get trapped and bounce around.

“These photonic crystals can concentrate light in an extremely small volume, allowing devices to operate at the fundamental quantum limit where a single photon can make a big difference,” says Waks.

The results also rely on previous studies of how small, engineered nanocrystals called quantum dots can manipulate light. These tiny regions behave as artificial atoms and can also trap electrons in a tight space. Prior work from the JQI group showed that quantum dots could alter the properties of many photons and rapidly switch the direction of a beam of light.

The new experiment combines the light-trapping of photonic crystals with the electron-trapping of quantum dots. The group used a photonic crystal punctuated by holes just 72 nanometers wide, but left three holes undrilled in one region of the crystal. This created a defect in the regular grid of holes that acted like a cavity, and only photons with a certain energy could enter and leave.

Inside this cavity, embedded in layers of semiconductors, a quantum dot held one electron. The spin of that electron – a quantum property of the particle that is analogous to the motion of a spinning top – controlled what happened to photons injected into the cavity by a laser. If the spin pointed up, a photon entered the cavity and left it unchanged. But when the spin pointed down, any photon that entered the cavity came out with a reversed polarization – the direction that light’s electric field points. The interaction worked the opposite way, too: A single photon prepared with a certain polarization could flip the electron’s spin.
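
This conditional behavior can be written down as a simple matrix model: treating the spin and the polarization as two qubits, the switch acts like a controlled-NOT, so a spin prepared in a superposition leaves the cavity entangled with the photon. The states below are idealized, not the experiment's measured ones:

```python
import numpy as np

# Idealized spin-photon switch: spin up leaves the photon's polarization
# unchanged, spin down flips it, i.e. a controlled-NOT on the combined
# spin-and-polarization state.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])           # polarization flip

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
switch = np.kron(np.outer(up, up), I2) + np.kron(np.outer(down, down), X)

spin = (up + down) / np.sqrt(2)                   # spin in a superposition
photon = np.array([1.0, 0.0])                     # one fixed input polarization
state = switch @ np.kron(spin, photon)
print(np.round(state, 3))   # weight only on |up, unchanged> and |down, flipped>
```

The output state has equal amplitude on the two correlated combinations and none on the others, which is exactly the kind of spin-photon entanglement the researchers aim to demonstrate next.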

Both processes are examples of quantum switches, which modify the qubits stored by the electron and photon in a controlled way. Such switches will be the coin of the realm for proposed future quantum computers and quantum networks.


Those networks could take advantage of the strengths that photons and electrons offer as qubits. In the future, for instance, electrons could be used to store and process quantum information at one location, while photons could shuttle that information between different parts of the network.

Such links could enable the distribution of entanglement, the enigmatic connection that groups of distantly separated qubits can share. And that entanglement could enable other tasks, such as performing distributed quantum computations, teleporting qubits over great distances or establishing secret keys that two parties could use to communicate securely.

Before that, though, Sun says that the light-matter interface he and his colleagues have created must be shown to entangle the electron and photon qubits, a demonstration that will require more accurate measurements.

“The ultimate goal will be integrating photon creation and routing onto the chip itself,” Sun says. “In that manner we might be able to create more complicated quantum devices and quantum circuits.”

In addition to Waks and Sun, the paper has two additional co-authors: Glenn Solomon, a JQI fellow, and Hyochul Kim, a post-doctoral researcher in the Department of Electrical and Computer Engineering at the University of Maryland.

"Creating a quantum switch" credit: S. Kelley/JQI

Read More

Harnessing quantum systems for information processing will require controlling large numbers of basic building blocks called qubits. The qubits must be isolated, and in most cases cooled such that, among other things, errors in qubit operations do not overwhelm the system, rendering it useless. Led by JQI Fellow Christopher Monroe, physicists have recently demonstrated important steps towards implementing a proposed type of gate that does not rely on super-cooling their ion qubits. This work, published as an Editor’s Suggestion in Physical Review Letters, implements the ultrafast sensing and control of an ion's motion that is required to realize these hot gates. Notably, the experiment demonstrates thermometry over an unprecedented range of temperatures, from the zero-point to room temperature.

Graduate student and first author Kale Johnson explains how this research could be applied, “Atomic clock states found in ions make the most pristine quantum bits, but the speed at which we have been able to access them in a useful way for quantum information processing is slower than it could be. We are changing that by making each operation on the qubit faster while eliminating the need to cool the ion to the ground state after each operation.”

In the experiment the team begins with a single trapped atomic ion. The ion can be thought of as a bar magnet that can be oriented with its north pole ‘up’ or ‘down’ or any combination between the two poles (pointing horizontally along an imaginary equator is the combination up + down). Physicists can use lasers and microwave radiation to control this orientation. The individual laser pulses are a mere ten picoseconds in length—a time scale that is a tiny fraction of how long it takes for the ion to undergo appreciable motion in the trap. Operating in this regime is precisely what allows researchers to have superior sensing and ultimately control over the ion motion. The speed enables the team to extract the motional behavior of an ion using a technique that works independently of the energy in the motion itself. In other words, the measurement is equally sensitive to a fast or a very slow atom.

The researchers use a method that is based on Ramsey interferometry, named for the Nobel Laureate Norman Ramsey who pioneered it back in 1949. Known then as his “method of separated oscillatory fields,” it is used throughout atomic physics and quantum information science.   
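In the textbook Ramsey sequence, two pulses separated by a free-evolution time T yield a spin-up probability that oscillates with the phase accumulated between them, P = (1 + cos(2πδT))/2 for a detuning δ. A minimal numerical sketch of this generic formula (not the exact signal measured in the experiment):

```python
import math

def ramsey_probability(detuning_hz, wait_time_s):
    """Spin-up probability after an ideal two-pulse Ramsey sequence:
    P = (1 + cos(2*pi*detuning*T)) / 2."""
    phase = 2 * math.pi * detuning_hz * wait_time_s
    return 0.5 * (1 + math.cos(phase))

# Fringes vs. wait time for a hypothetical 1 kHz detuning
for t_us in (0, 250, 500):  # wait times in microseconds
    p = ramsey_probability(1e3, t_us * 1e-6)
    print(f"T = {t_us:>3} us -> P(up) = {p:.2f}")
```

Scanning the wait time maps out these fringes, which is what makes the method so sensitive to small frequency shifts and, here, to the ion's motion.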

Laser pulses are carefully divided and then reunited to achieve control over the ion’s spin and motion. The researchers call these laser-ion interactions ‘spin-dependent kicks’ (SDK) because each series of specially tailored laser pulses flips the spin, while simultaneously giving the ion a push (this is depicted in the illustration below). With each fast kick, the atom’s quantum wave packet is split into two parts in under three nanoseconds. Those halves are then re-combined at different points in space and time, and the signal from the unique overlap pattern reveals how the population is distributed between the two spin states. In this experimental sequence, that distribution depends on parameters such as the number of SDKs, the time between kicks, and the initial position and speed of the ion. The team repeats this experiment to extract the average motion of the ion, or its effective temperature.


In order to realize proposed two-ion quantum gates that do not require cooling the system into its quantum mechanical ground state, multiple spin dependent kicks must be employed with high accuracy such that errors remain manageable. Here the team was able to clearly demonstrate the necessary high-quality spin dependent kicks. More broadly, this protocol shows that adding ultrafast pulsed laser technology to the ion-trapping toolbox gives physicists ultimate quantum control over what can be a limiting, noise-inducing parameter: the motion.

Read More

The concept of temperature is critical in describing many physical phenomena, such as the transition from one phase of matter to another.  Turn the temperature knob and interesting things can happen.  But other knobs might be just as important for studying some phenomena.  One such knob is chemical potential, a thermodynamic parameter first introduced in the nineteenth century by scientists for keeping track of potential energy absorbed or emitted by a system during chemical reactions.

In these reactions, different atomic species rearranged themselves into new configurations while conserving the overall inventory of atoms. That is, atoms could change their partners, but the total number and identity of the atoms remained invariant.

Chemical potential fits a general pattern: an imbalance in some quantity drives a flow. An imbalance in temperature results in a flow of energy. An imbalance in electrical potential results in a flow of charged particles. Likewise, an imbalance in chemical potential results in a flow of particles, and an imbalance in chemical potential for light would result in a flow of photons.

Can the concept of chemical potential apply to light? At first the answer would seem to be no, since particles of light, photons, are regularly absorbed when they interact with ordinary matter; the number of photons present is not preserved. But recent experiments have shown that under special conditions photon number can be conserved, clearing the way for the use of chemical potential for light.

Now three JQI scientists offer a more generalized theoretical description of chemical potential (usually denoted by the Greek letter mu) for light and show how mu can be controlled and applied in a number of physics research areas.

A prominent experimental demonstration of chemical potential for light took place at the University of Bonn (*) in 2010.  It consisted of quanta of light (photons) bouncing back and forth inside a reflective cavity filled with dye molecules.  The dye molecules, acting as a tunable energy bath (a parametric bath), would regularly absorb photons (seemingly ruling out the idea of photon number being conserved) but would re-emit the light.  Gradually the light warmed the molecules and the molecules cooled the light until they were all at thermal equilibrium.  This was the first time photons had been successfully “thermalized” in this way.  Furthermore, at still colder temperatures the photons collapsed into a single quantum state; this was the first photonic Bose-Einstein condensate (BEC).

In a paper published in the journal Physical Review B, the JQI theorists describe a generic approach to chemical potential for light. They illustrate their ideas by showing how a chemical-potential protocol can be implemented in a microcircuit array. Instead of crisscrossing a single cavity, the photons are set loose in an array of microwave transmission lines. And instead of interacting with a bath of dye molecules, the photons here interact with a network of tuned circuits.

“One likely benefit in using chemical potential as a controllable parameter will be carrying out quantum simulations of actual condensed-matter systems,” said Jacob Taylor, one of the JQI theorists taking part in the new study.  In what some call a prototype for future full-scale quantum computing, quantum simulations use tuned interactions in a small microcircuit setup to arrive at a numerical solution to calculations that (in their complexity) would defeat a normal digital computer.

In the scheme described above, for instance, the photons, carefully put in a superposition of spin states, could serve as qubits. The qubits can be programmed to perform special simulations. The circuits, including the transmission lines, act as the coupling mechanism whereby photons can be respectively up- or down-converted to lower or higher energy by obtaining energy from or giving energy to excitations of the circuits.

(*) J. Klaers, J. Schmitt, F. Vewinger, and M. Weitz, Nature 468, 545 (2010)

Read More

From NIST-PML — Precise measurements of optical power enable activities from fiber-optic communications to laser manufacturing and biomedical imaging — anything requiring a reliable source of light. This situation calls for light-measuring (radiometric) standards that can operate over a wide range of power levels.

Currently, however, different methods for calibrating optical power measurements are required for different light levels. At high levels, existing radiometric standards employ analog detectors, diodes that generate a current proportional to the incident light intensity, but become imprecise at low levels. Low-power detectors, by contrast, very accurately measure discrete (usually very small) numbers of photons, but cannot handle light at higher levels. Because of the incommensurate scales and incompatible technologies, comparison between the two kinds of measurements isn't easy, resulting in long calibration chains to span the difference.

Linking standards for widely different powers requires extending the dynamic range of detection to cover the region between the two measurement regimes. There are two options for bridging this gap: a "top-down" approach using analog detectors and a "bottom-up" method that starts with counting individual photons.

Exploring the second option, a team of JQI scientists, along with colleagues from NIST's Physical Measurement Laboratory (PML), has demonstrated a technique for extending the range of photon-counting detectors by employing optical attenuators, devices that block controlled fractions of incoming light. The results, recently published in Optics Express, could lead to improved standards to cover a much wider range of optical power.

The benefit of anchoring standards to detectors capable of counting single photons is a matter of precision, explains team member Boris Glebov.

“Measuring frequency of light is probably the most precise measurement science can perform," Glebov says. "Thus, if you have a way to link power measurements to photon counts and frequency measurements, the possible precision is incredibly high."

Knowing the energy of each photon (a function of its frequency) and the number of incident photons enables an extremely accurate determination of optical power. This is because photon counting has inherently low uncertainty, says JQI fellow and Quantum Optics Group leader Alan Migdall.

“A single-photon detection scheme means we are counting discrete things, so in principle the error goes away," Migdall says. "Either we have a count or we don’t.”

Over the past few years, Migdall's group has focused considerable effort on developing better ways to count individual photons. In particular, they have worked on improving the performance of a superconducting detector called a transition edge sensor (TES), using devices developed and produced by Sae Woo Nam and colleagues from PML's Quantum Electronics and Photonics Division at NIST's Boulder, Colo., campus.

A transition edge sensor contains a tiny superconducting circuit. When a photon strikes the superconductor, its energy is absorbed in the form of heat. The rise in temperature causes an increase in electrical resistance and a corresponding drop in current, which registers in the detector electronics. The devices provide excellent photon-number-resolving capabilities and can operate over a wide range of frequencies, from radio waves to gamma rays. However, the number of photons they can resolve has been limited, typically to about 20 or fewer. In order to use TES devices at higher optical power levels, the operating range needs to be extended.

Previously, the scientists approached this problem by modeling the relaxation time (the time it takes for the sensor to cool down after absorbing photons) and developing certain algorithms for better processing the output signal from the device. This has enabled them to extend the devices' sensitivity range to as high as 6,000,000 photons in a single pulse.

To extend it even further, the scientists devised a method in which a TES is used to calibrate its own input attenuator. This device provides variable optical attenuation — the selective reduction in the light power that passes through it. Controlled attenuation of high-power light merged with a photon-counting detector could connect the high precision offered by photon-counting measurements to measurements made at higher illumination levels.

To perform the calibration, pulsed laser light is directed through a variable attenuator, which is gradually stepped through a series of attenuation values. The resulting signals from the TES are processed by an improved version of the group's algorithm*, enabling accurate statistical determinations of the photon number at each value. Comparing the change in the measured photon number as the input attenuator is adjusted allows the attenuator to be calibrated in place. Significantly, the approach doesn't require knowledge of the power of the light source, which means no external calibration is necessary.

Since the ratio by which an attenuator reduces the power of a signal is independent of input power (up to some limit), measurements of attenuation made at the few-photon level should agree with those made at much higher intensities. To confirm this, the researchers compared the values obtained with a TES to those obtained with a conventional analog power meter. In every case, the measurements agreed within a small statistical uncertainty.
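The power-independence that makes this work can be illustrated numerically: simulating Poisson photon counting behind a hypothetical 10% attenuator gives the same inferred transmission whether the source runs at a few photons per pulse or ten times that (a sketch with made-up numbers, not the experiment's data):

```python
import math
import random

random.seed(1)
TRUE_TRANSMISSION = 0.10  # hypothetical attenuator passing 10% of the light

def poisson(lam):
    """Draw a Poisson-distributed photon count (Knuth's method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def mean_count(mean_photons_in, pulses=100_000):
    """Average detected photons per pulse after the attenuator."""
    lam = mean_photons_in * TRUE_TRANSMISSION
    return sum(poisson(lam) for _ in range(pulses)) / pulses

# The inferred transmission agrees at the few-photon level and at 10x the power
low = mean_count(5.0) / 5.0
high = mean_count(50.0) / 50.0
print(f"low-power estimate: {low:.3f}, high-power estimate: {high:.3f}")
```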

“Even though we calibrate at the few-photon level, these attenuators can be used at higher powers, extending the utility of a TES well beyond its own operational range,” Glebov says. This means a TES-calibrated attenuator can be used to compare detectors, regardless of the optical power they are designed to operate at. In essence, the low uncertainty now associated with the calibrated attenuator can be transferred to other devices, enabling comparisons between standards through relative measurements.

A TES could also be used to calibrate a series of attenuators with only a small increase in combined uncertainty, enabling an even larger range of operation. The ability to dynamically extend the operating range of a TES in situ — without reliance on external standards or needing to reset optical components — could prove useful in situations that by necessity operate at the few-photon level, for instance in quantum key distribution. Aside from extending the operating range of TES detectors, improved determination of optical attenuation could help when characterizing materials that react differently to high- and low-light levels or with samples that can survive only low-light levels, for instance when analyzing sensitive biological samples.

* The Poisson-influenced K-Means algorithm (PIKA). This is a modified version of the K-means clustering algorithm, a popular method of performing data cluster analysis.
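The clustering idea behind such an algorithm can be sketched with plain 1-D k-means on simulated pulse heights: pulses group into peaks, one per photon number (this is ordinary k-means for illustration, not PIKA itself, and the numbers are hypothetical):

```python
import random

random.seed(0)

# Simulated TES pulse heights: peaks at 0, 1, and 2 (in units of one photon's
# energy) broadened by Gaussian readout noise -- hypothetical numbers.
samples = [n + random.gauss(0, 0.1) for n in random.choices([0, 1, 2], k=3000)]

def kmeans_1d(data, centers, iterations=20):
    """Plain 1-D k-means: assign each point to the nearest center,
    then move each center to the mean of its assigned points."""
    for _ in range(iterations):
        clusters = {i: [] for i in range(len(centers))}
        for x in data:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in clusters.items()]
    return centers

centers = kmeans_1d(samples, [0.2, 0.8, 1.6])
print([round(c, 2) for c in centers])  # cluster centers near the photon-number peaks
```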

Read More

Experimental quantum physics often resides in the coldest regimes found in the universe, where the lack of large thermal disturbances allows quantum effects to flourish. A key ingredient in these experiments is being able to measure just how cold the system of interest is. Laboratories that produce ultracold gas clouds have a simple and reliable method to do this: take pictures! The temperature of a gas depends on the spread of velocities among the particles, that is, on how much the slowest- and fastest-moving particles differ. If all the atoms evolve for the same amount of time, the velocity distribution gets imprinted in the positions of the atoms. This is analogous to a marathon where all the runners start together: you cannot immediately tell who is fastest, but after some time you can discern by eye who is faster or slower based on their location.
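The marathon analogy is the standard time-of-flight method: after release, the cloud's width grows with the thermal velocity spread according to sigma(t)^2 = sigma0^2 + (kB*T/m)*t^2, so comparing widths at two expansion times yields the temperature. A minimal round-trip sketch with hypothetical numbers (rubidium-87 chosen only as an illustrative species):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_RB87 = 1.443e-25   # mass of a rubidium-87 atom, kg (illustrative species)

def cloud_width(t, sigma0, temperature):
    """RMS cloud size after free expansion for time t:
    sigma(t)^2 = sigma0^2 + (kB*T/m) * t^2."""
    return math.sqrt(sigma0**2 + (K_B * temperature / M_RB87) * t**2)

def temperature_from_widths(t1, w1, t2, w2):
    """Invert the expansion law using widths measured at two times."""
    velocity_variance = (w2**2 - w1**2) / (t2**2 - t1**2)  # = kB*T/m
    return velocity_variance * M_RB87 / K_B

# Round-trip check at a hypothetical 20 microkelvin
T = 20e-6
w1 = cloud_width(5e-3, 100e-6, T)    # width after 5 ms of expansion
w2 = cloud_width(15e-3, 100e-6, T)   # width after 15 ms of expansion
print(f"recovered T = {temperature_from_widths(5e-3, w1, 15e-3, w2)*1e6:.1f} uK")
```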

In some experiments, however, the cloud is so well-hidden that snapshots are near impossible. A new technique developed by JQI researchers and published in Physical Review A as an Editor’s Suggestion, circumvents this issue by inserting an optical nanofiber (ONF) into a cold atomic cloud.

ONFs are like the normal optical fibers that form the global telecommunications network, except that they are much thinner – only a few hundred nanometers in diameter (about 1/200th the width of a human hair). This small size allows ONFs to be integrated with another, much larger system without disturbing it. Moreover, light can actually couple into the ONF through its so-called evanescent field. When an electromagnetic field, like laser light, cannot propagate from one medium (e.g. air or glass) to another, it does not just reflect or disappear at the interface. The field must be continuous, and it gradually decays as it extends into the new medium – this spatially decaying field is called the evanescent field. The effect has everyday analogues, such as when an ocean wave breaks on the shore and the water seeps only so far into the sand. Notably, due to its narrow size, light traveling down an ONF has a significant fraction of its energy residing outside the fiber in the form of an evanescent field. Additionally, the laws of physics do not forbid the reverse process, so light that originates outside the ONF can couple into and propagate along the ONF.

In this experiment, laser-cooled atoms slowly move around the ONF and “blink” randomly as they absorb and re-emit photons from a laser. The probability of such a photon coupling into the ONF depends directly on the amplitude of the evanescent field, and hence on the position of the atom relative to the fiber. Once a photon enters the ONF, it travels down the optical fiber and is recorded with a sensitive single-photon detector as a “click.” Tallying up how many times two clicks occur in different time windows gives the authors a picture of how the atoms move near the ONF. The width of the resulting signal is a measure of the average amount of time the atoms interact with the ONF, so that narrower (wider) signals correspond to faster (slower) atoms. Using these times, the authors were able to calculate the temperature of the cloud. When they compared the result to the well-known method of taking pictures, they found good agreement.
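Tallying pairs of clicks by their time separation is a correlation histogram. A minimal sketch on simulated timestamps (the burst structure and time scales below are made up for illustration):

```python
import random

random.seed(2)

def correlation_histogram(click_times, bin_width, max_lag):
    """Histogram of time differences between all pairs of clicks, up to
    max_lag; its width reflects how long atoms linger near the fiber."""
    nbins = int(max_lag / bin_width)
    hist = [0] * nbins
    clicks = sorted(click_times)
    for i, t1 in enumerate(clicks):
        for t2 in clicks[i + 1:]:
            dt = t2 - t1
            if dt >= max_lag:
                break  # clicks are sorted, so later pairs are farther apart
            hist[int(dt / bin_width)] += 1
    return hist

# Simulated clicks (times in microseconds): short bursts of photons while an
# atom transits the evanescent field, one transit every 50 us
clicks = []
for start in range(0, 1000, 50):
    clicks += [start + random.uniform(0, 2) for _ in range(5)]  # ~2 us bursts
hist = correlation_histogram(clicks, bin_width=1.0, max_lag=10.0)
print(hist)  # coincidences pile up in the first bins, mirroring transit time
```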

This technique could be applied to systems where access for traditional imaging systems is limited or even impossible, such as in some types of hybrid quantum systems. One example would be experiments that seek to trap a cloud of rubidium atoms near a superconducting device, all housed within a dilution refrigerator. Operation of the dilution refrigerator requires careful shielding of optical and thermal radiation, preventing the use of the standard imaging temperature measurement. Additionally, other types of nanophotonic systems that use evanescent waves to link to atoms may also benefit from this type of thermometry.

This research summary was written by authors J. Grover and P. Solano. 

Read More

Optical fibers are hair-like threads of glass used to guide light. Fibers of exceptional purity have proved an excellent way of sending information over long distances and are the foundation of modern telecommunication systems. Transmission relies on what’s called total internal reflection, wherein the light propagates by effectively bouncing back and forth off of the fiber’s internal surface. Though the word “total” implies light remains entirely trapped in the fiber, the laws of physics dictate that some of the light, in the form of what’s called an evanescent field, also exists outside of the fiber. In telecommunications, the fiber core is more than ten times larger than the wavelength of light passing through. In this case, the evanescent fields are weak and vanish rapidly away from the fiber. Nanofibers have a diameter smaller than the wavelength of the guided light. Here, all of the light field cannot fit inside of the nanofiber, yielding a significant enhancement in the evanescent fields outside of the core. This allows the light to trap atoms (or other particles) near the surface of a nanofiber.

JQI researchers in collaboration with scientists from the Naval Research Laboratory have developed a new technique for visualizing light propagation through an optical nanofiber, detailed in a recent Optica paper. The result is a non-invasive measurement of the fiber size and shape and a real-time view of how light fields evolve along the nanofiber. Direct measurement of the fields in and around an optical nanofiber offers insight into how light propagates in these systems and paves the way for engineering customized evanescent atom traps.  

In this work, researchers use a sensitive camera to collect light from what’s known as Rayleigh scattering, demonstrating the first in-situ measurements of light moving through an optical nanofiber. Rayleigh scattering happens when light bounces, or scatters, off of particles much smaller than the wavelength of the light. In fibers, these particles can be impurities or density fluctuations in the glass, and the light scattered from them is ejected from the fiber.  This allows one to view the propagating light from the side, in much the same way as one can see a beam of sunlight through fog. Importantly, the amount of light ejected depends on the polarization, or the orientation of oscillation of the light, and intensity of the field at each point, which means that capturing this light is a way to view the field.

The researchers here are interested in understanding the propagation of the field when the light waves are composed of what are known as higher-order modes. Instead of having a uniform spatial profile, like that of a laser pointer, these modes can look like a doughnut, a cloverleaf, or another more complicated pattern. Higher-order modes offer some advantages over the lowest-order or “fundamental” mode. Due to their complexity, the evanescent field can have comparatively more light intensity in the region of interest — locally, just outside the fiber. These higher-order modes can also be used to make different types of optical patterns. Nanofibers aren’t yet standardized, and thus careful and complete characterization of both the fiber and the light passing through it is a necessary step towards making nanofibers a more practical and adaptable tool for research applications.

This research team had previously developed techniques for controlling the fiber manufacture process in order to support extremely pure higher-order modes. Mode quality depends on things like the width of the fiber core and how this width changes over the length of the fiber. Small deviations in the fiber diameter and other imperfections can cause undesirable combinations and the potential loss of certain modes. By analyzing how the transmitted light changes as the fiber is stretched into a nanofiber, they could infer how the modes change while propagating through the fiber. However, until now there was no way to directly measure the intensity of the field along the fiber, which would offer far more insight and control over how the evanescent fields are shaped at the location of the trapped atoms. This could be useful for analyzing fibers where the propagation conditions change multiple times, or in the case where a fiber undergoes strain or bending during use.

By collecting images of the Rayleigh scattering, the scientists can directly see how the field changes throughout a nanofiber, as well as the effects of changing the pattern of light injected into the fiber. In addition, the team was able to use the imaging information to feed back to the system and create desired combinations of modes in the nanofiber — demonstrating a high level of control. The same technique can be used to measure the profile and width of the fiber itself. In this case, they were able to estimate a fiber radius of 370 nanometers and variations in the waist down to 3 nm. Notably, this type of visualization is done in situ with relatively standard optics and does not require destroying the fiber integrity with the special coatings that are necessary when using a scanning electron microscope. This also means these characterizing measurements can be used to optimize the fields that interact with atoms during experiments. “An advantage of this technique is that it can be applied to fibers that are already installed in an apparatus,” explains Fredrik Fatemi, a research physicist at the Naval Research Laboratory and an author on the paper. “One could even probe fibers or other nanophotonic structures designed for fundamental modes by using shorter optical wavelengths.”

To further refine this approach, the researchers plan to modify the optics in order to capture the entire length of the nanofiber in a single image. Currently, the images are made by stitching several high-resolution images together, as in the image seen above.  

Read More

From NIST TechBeat

In another advance at the far frontiers of timekeeping by National Institute of Standards and Technology (NIST) researchers, the latest modification of a record-setting strontium atomic clock has achieved precision and stability levels that now mean the clock would neither gain nor lose one second in some 15 billion years*—roughly the age of the universe. Precision timekeeping has broad potential impacts on advanced communications, positioning technologies (such as GPS) and many other technologies. Besides keeping future technologies on schedule, the clock has potential applications that go well beyond simply marking time. Examples include a sensitive altimeter based on changes in gravity and experiments that explore quantum correlations between atoms.

As described in Nature Communications,** the experimental strontium lattice clock at JILA, a joint institute of NIST and the University of Colorado Boulder, is now more than three times as precise as it was last year, when it set the previous world record.*** Precision refers to how closely the clock approaches the true resonant frequency at which the strontium atoms oscillate between two electronic energy levels. The clock's stability—how closely each tick matches every other tick—also has been improved by almost 50 percent, another world record.

The JILA clock is now good enough to measure tiny changes in the passage of time and the force of gravity at slightly different heights. Einstein predicted these effects in his theories of relativity, which mean, among other things, that clocks tick faster at higher elevations. Many scientists have demonstrated this, but with less sensitive techniques.****

"Our performance means that we can measure the gravitational shift when you raise the clock just 2 centimeters on the Earth's surface," JILA/NIST Fellow Jun Ye says. "I think we are getting really close to being useful for relativistic geodesy."

Relativistic geodesy is the idea of using a network of clocks as gravity sensors to make precise 3D measurements of the shape of the Earth. Ye agrees with other experts that, when clocks can detect a gravitational shift at 1 centimeter differences in height—just a tad better than current performance—they could be used to achieve more frequent geodetic updates than are possible with conventional technologies such as tidal gauges and gravimeters.

In the JILA/NIST clock, a few thousand atoms of strontium are held in an "optical lattice," a 30-by-30 micrometer column of about 400 pancake-shaped regions formed by intense laser light. JILA and NIST scientists detect strontium's "ticks" (430 trillion per second) by bathing the atoms in very stable red laser light at the exact frequency that prompts the switch between energy levels.

The JILA group made the latest improvements with the help of researchers at NIST's Maryland headquarters and the Joint Quantum Institute (JQI). Those researchers contributed improved measurements and calculations to reduce clock errors related to heat from the surrounding environment, called blackbody radiation. The electric field associated with the blackbody radiation alters the atoms' response to laser light, adding uncertainty to the measurement if not controlled.

To help measure and maintain the atoms' thermal environment, NIST's Wes Tew and Greg Strouse calibrated two platinum resistance thermometers, which were installed in the clock's vacuum chamber in Colorado. Researchers also built a radiation shield to surround the atom chamber, which allowed clock operation at room temperature rather than much colder, cryogenic temperatures.

"The clock operates at normal room temperature," Ye notes. "This is actually one of the strongest points of our approach, in that we can operate the clock in a simple and normal configuration while keeping the blackbody radiation shift uncertainty at a minimum."

In addition, JQI theorist Marianna Safronova used the quantum theory of atomic structure to calculate the frequency shift due to blackbody radiation, enabling the JILA team to better correct for the error.

Overall, the clock's improved performance tracks NIST scientists' expectations for this area of research, as described in "A New Era in Atomic Clocks." The JILA research is supported by NIST, the Defense Advanced Research Projects Agency and the National Science Foundation.

* For this figure, NIST converts an atomic clock's systematic or fractional total uncertainty to an error expressed as 1 second accumulated over a certain minimum length of time. That is calculated by dividing 1 by the clock's systematic uncertainty, and then dividing that result by the number of seconds in a year (31.5 million) to find the approximate minimum number of years it would take to accumulate 1 full second of error. The JILA clock has reached a higher level of precision (smaller uncertainty) than any other clock.
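The footnote's conversion can be checked directly for the 2 × 10⁻¹⁸ total uncertainty reported in the paper (the 31.5-million-seconds-per-year figure is the footnote's own rounding):

```python
SECONDS_PER_YEAR = 31.5e6  # rounded value used in the footnote

def years_per_second_of_error(fractional_uncertainty):
    """Minimum time, in years, for a clock with the given systematic
    uncertainty to accumulate one full second of error:
    (1 / uncertainty) seconds, converted to years."""
    return (1 / fractional_uncertainty) / SECONDS_PER_YEAR

# For the JILA clock's 2e-18 uncertainty: ~1.6e10 years, i.e. the quoted
# "some 15 billion years"
print(f"{years_per_second_of_error(2e-18):.2e} years")
```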

** T.L. Nicholson, S.L. Campbell, R.B. Hutson, G.E. Marti, B.J. Bloom, R.L. McNally, W. Zhang, M.D. Barrett, M.S. Safronova, G.F. Strouse, W.L. Tew and J. Ye. "Systematic evaluation of an atomic clock at 2 × 10⁻¹⁸ total uncertainty." Nature Communications. April 21, 2015.

**** Another NIST group demonstrated this effect by raising the quantum logic clock, based on a single aluminum ion, about 1 foot. See the 2010 NIST news release, "NIST Pair of Aluminum Atomic Clocks Reveal Einstein's Relativity at a Personal Scale."

Read More

For University of Maryland researchers, the last year has marked a series of new discoveries and innovations. UMD will honor nine nominees for the most promising new inventions at the Celebration of Innovation and Partnerships event on April 29, 2015. UMD’s Office of Technology Commercialization, part of the Division of Research, received a total of 187 disclosures in 2014. The nine nominees for Invention of the Year were selected based on their potential impact on science, society and the open market. Winners will be announced in three categories: life sciences, physical sciences and information sciences.

A single-photon detection system developed by researchers from JQI and the Jet Propulsion Laboratory at Caltech was among the nominees. The co-inventors are:

Alessandro Restelli, JQI-UMD

Josh Bienfang, JQI-NIST

Alan Migdall, JQI-NIST

William Farr, Jet Propulsion Lab

The group developed a single-photon avalanche diode (SPAD) detection system that is so sensitive that it detects photons that arrive at times well before a readout gate is applied, thus increasing the system’s detection duty cycle. This invention represents a new mode of operation for SPADs, similar to charge-coupled devices (CCD), in which single-photon signals may be accumulated within the detector and read out some time later. This increases the duration of time during which the detector is sensitive to single-photon signals. This new mode of operation will expand the usefulness of SPADs in the areas of Light Detection and Ranging (LIDAR) and quantum cryptography. 

Source: CMNS with modifications for JQI website made by E. Edwards

Read More

The word “defect” doesn’t usually have a good connotation, often indicating failure. But for physicists, one common defect known as a nitrogen-vacancy center (NV center) has applications in both quantum information processing and ultra-sensitive magnetometry, the measurement of exceedingly faint magnetic fields. In an experiment recently published in Science, JQI Fellow Vladimir Manucharyan and colleagues at Harvard University used NV centers in diamond to sense the properties of magnetic field noise tens of nanometers away from silver samples.

Diamond, which is a vast array of carbon atoms, can contain a wide variety of defects. An NV center defect is formed when a nitrogen atom substitutes for a carbon atom and is adjacent to a vacancy, or missing carbon atom, in the lattice. NV centers have discrete, atom-like energy levels that can be probed using green laser light. Like atomic systems, the NV centers can be used as a qubit. In this experiment, physicists harness the sensitivity of these isolated quantum systems to characterize electron motion.

A conductive silver sample is deposited onto a diamond substrate that contains NV centers. While these defects can occur naturally, the team here purposefully creates them approximately 15 nanometers away from the silver layer. At temperatures above absolute zero, the electrons inside the silver layer (or any conductor) bounce around and generate random currents--a phenomenon known as Johnson noise. Since electrons are charged particles, their motion results in fluctuating magnetic fields, which extend outside the conductor.

Typically, changing magnetic fields can wreak all sorts of havoc, including for the nearby NV centers. Here, each NV center is used as a sensor that can be thought of as switching between two states, 1 and 0. The sensor can be calibrated in the presence of a constant magnetic field such that it is in state 1. If it experiences an oscillating magnetic field, the sensor switches to state 0. There is one more important feature of this sensor--it can detect magnetic field strength as well. For weak magnetic field fluctuations, the NV sensor will slowly decay to state 0; for stronger fluctuations, it will decay much faster from 1 to 0. By detecting different decay times, physicists can precisely measure the fluctuating magnetic fields, which tells them about electron behavior at a very small length scale. Like any good sensor, the NV centers are almost completely non-invasive--their read-out with laser light does not disturb the sample they are sensing.
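The decay-time readout can be caricatured with a toy relaxation model. The exponential form and the linear coupling between noise power and decay rate below are illustrative assumptions for the sketch, not parameters from the experiment.

```python
import math

def survival_probability(noise_power, t, coupling=1.0):
    # Toy model: the NV sensor starts in state 1 and decays toward state 0
    # at a rate proportional to the magnetic noise power it experiences.
    # `coupling` and all units here are illustrative.
    decay_rate = coupling * noise_power
    return math.exp(-decay_rate * t)

# Stronger field fluctuations empty state 1 faster, which is what lets
# the measured decay time serve as a gauge of the local noise.
weak = survival_probability(noise_power=0.5, t=1.0)
strong = survival_probability(noise_power=5.0, t=1.0)
print(weak > strong)  # True
```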

The team studied the scaling of the magnetic noise with parameters such as temperature and distance from the silver surface and found excellent agreement with theoretical predictions. In addition, by changing the nature of the silver sample from “polycrystalline” to “single-crystalline,” they were able to observe a dramatic difference in the behavioral trends of the magnetic field noise, particularly as the sample was cooled. In polycrystalline samples, atoms are not arranged in a regular repeating lattice over long distances, so electrons don’t travel very far--roughly 10 nanometers or less--before scattering off an obstacle. These frequent collisions are the main source of field noise in polycrystalline materials. In contrast, a single crystal is uniform at these length scales, and electrons can travel over 100 times farther. The electron movement, and the corresponding magnetic field noise, from the single silver crystal departs from the so-called Ohmic predictions of the polycrystalline case, and the team was able to explore both of these extremes non-invasively.

These results demonstrate that single NV centers can be used to directly study electron behavior inside of a conductive material on the nanometer length scale. Notably, this technique does not require electrical leads, applied voltages, or even physical contact with the sample of interest, thus enabling the measurement of much smaller or more fragile samples. Future applications of this technique include the study of complex condensed matter phenomena, as well as metrology for commercial materials science.

This was written by E. Edwards in collaboration with S. Kolkowitz and V. Manucharyan.

Read More

The 2014 chemistry Nobel Prize recognized important microscopy research that enabled greatly improved spatial resolution. This innovation, resulting in nanometer resolution, was made possible by making the source (the emitter) of the illumination quite small and by moving it quite close to the object being imaged.  One problem with this approach is that in such proximity, the emitter and object can interact with each other, blurring the resulting image.  Now, a new JQI study has shown how to sharpen nanoscale microscopy (nanoscopy) even more by better locating the exact position of the light source.


Traditional microscopy is limited by the diffraction of light around objects.  That is, when a light wave from the source strikes the object, the wave will scatter somewhat.  This scattering limits the spatial resolution of a conventional microscope to no better than about one-half the wavelength of the light being used.  For visible light, diffraction limits the resolution to no better than a few hundred nanometers.
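The half-wavelength rule translates into a quick back-of-the-envelope estimate. This is a simplified Abbe-style limit that ignores lens numerical aperture:

```python
def diffraction_limit_nm(wavelength_nm):
    # Conventional microscopy: resolution no better than about half
    # the wavelength of the illuminating light.
    return wavelength_nm / 2.0

# Visible light spans roughly 400-700 nm, so the best-case resolution
# sits at a few hundred nanometers, as stated above.
for wavelength in (400, 550, 700):
    print(wavelength, diffraction_limit_nm(wavelength))
```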

How, then, can microscopy using visible light attain a resolution down to several nanometers?  By using tiny light sources that are no larger than a few nanometers in diameter.  Examples of such light sources are fluorescent molecules, nanoparticles, and quantum dots.  The JQI work uses quantum dots, which are tiny crystals of a semiconductor material that can emit single photons of light. If such tiny light sources are close enough to the object meant to be mapped or imaged, nanometer-scale features can be resolved.  This type of microscopy, called “super-resolution imaging,” surmounts the standard diffraction limit. 


JQI Fellow Edo Waks and his colleagues have performed nanoscopic mappings of the electromagnetic field profile around silver nanowires by positioning quantum dots (the emitters) nearby (previous work summarized in an earlier press release).  They discovered that sub-wavelength imaging suffered from a fundamental problem, namely that an “image dipole” induced in the surface of the nanowire was distorting knowledge of the quantum dot’s true position. This uncertainty in the position of the quantum dot translates directly into a distortion of the electromagnetic field measurement of the object.

The distortion results from the fact that an electric charge positioned near a metallic surface will produce an electric field just as if a ghostly negative charge were located as far beneath the surface as the original charge is above it.  This is analogous to the image you see when looking at yourself in a mirror; the mirror image appears to be as far behind the mirror as you are in front.  The quantum dot does not have a net electrical charge, but it does have a net electrical dipole, a slight displacement of positive and negative charge within the dot. 

Thus when the dot approaches the wire, the wire develops an “image” electrical dipole whose emission can interfere with the dot’s own emission.  Since the measured light from the dot is the substance of the imaging process, the presence of light coming from the “image dipole” can interfere with light coming directly from the dot.  This distorts the perceived position of the dot by an amount that is 10 times larger than the expected spatial accuracy of the imaging technique (as if the nanowire were acting as a sort of funhouse mirror).

The JQI experiment successfully measured the image-dipole effect and showed that it can be corrected under appropriate circumstances.  The resulting work provides a more accurate map of the electromagnetic fields surrounding the nanowire.

The JQI scientists published their results in the journal Nature Communications.

Lead author Chad Ropp (now a postdoctoral fellow at the University of California, Berkeley) says that the main goal of the experiment was to produce better super-resolution imaging: “Any time you use a nanoscale emitter to perform super-resolution imaging near a metal or high-dielectric structure image-dipole effects can cause errors. Because these effects can distort the measurement of the nano-emitter’s position they are important to consider for any type of super-resolved imaging that performs spatial mapping.”

“Historically scientists have assumed negligible errors in the accuracy of super-resolved imaging,” says Ropp.  “What we are showing here is that there are indeed substantial inaccuracies and we describe a procedure on how to correct for them."

Read More

Measuring faint magnetic fields is a trillion-dollar business.  Gigabytes of data, stored and quickly retrieved from chips the size of a coin, are at the heart of consumer electronics.   Even higher data densities can be achieved by enhancing magnetic detection sensitivity---perhaps down to nano-tesla levels.

Greater magnetic sensitivity is also useful in many scientific areas, such as the identification of biomolecules such as DNA or viruses.  This research must often take place in a warm, wet environment, where clean conditions or low temperatures are not possible.  JQI scientists address this concern by developing a diamond sensor that operates in a fluid environment.  The sensor makes magnetic maps (with a 17 micro-tesla sensitivity) of small particles (a stand-in for actual bio-molecules) with a spatial resolution of about 50 nm.  This is probably the most sensitive magnetic measurement conducted at room temperature in microfluidics.

The results of the new experiment, conducted by JQI scientist Edo Waks (a professor at the University of Maryland) and his associates, appear in the journal Nano Letters.


At the heart of the sensor is a tiny diamond nano-crystal.  This diamond, when brought close to a magnetic particle while simultaneously being bathed in laser light and a subtle microwave signal, will fluoresce in a manner proportional to the strength of the particle’s own magnetic field.  Thus light from the diamond is used to map magnetism.

How does the diamond work and how is the particle maneuvered near enough to the diamond to be scanned?

The diamond nanocrystal is made by the same process used to form synthetic diamonds, called chemical vapor deposition.  Some of the diamonds have tiny imperfections, including nitrogen atoms occasionally substituting for carbon atoms.  Sometimes a carbon atom is missing altogether from the otherwise tightly-coordinated diamond structure.  In those cases where the nitrogen (N) and the vacancy (V) are next to each other, an interesting optical effect can occur.  The NV combination acts as a sort of artificial atom called an NV color center.  If prompted by the right kind of green laser, the NV center will shine.  That is, it will absorb green laser light and emit red light, one photon at a time. 

The NV emission rate can be altered in the presence of magnetic fields at the microscopic level. For this to happen, though, the internal energy levels of the NV center have to be just right, and this comes about when the center is exposed to signals from the radio-frequency source (shown at the edge of the figure) and the fields emitted by the nearby magnetic particle itself.
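A toy model of this optically detected magnetic resonance (ODMR) readout: fluorescence dips when the drive frequency matches the NV resonance, and the resonance shifts with the local field. The 2.87 GHz zero-field splitting and roughly 28 MHz/mT Zeeman shift are standard textbook NV values; the contrast and linewidth below are purely illustrative.

```python
def odmr_signal(drive_freq_ghz, b_field_mt, contrast=0.2, linewidth_ghz=0.01):
    # Fluorescence (normalized to 1) dips by `contrast` on resonance.
    # NV zero-field splitting ~2.87 GHz; Zeeman shift ~28 MHz per mT.
    center = 2.87 + 0.028 * b_field_mt
    lorentzian = linewidth_ghz**2 / ((drive_freq_ghz - center)**2 + linewidth_ghz**2)
    return 1.0 - contrast * lorentzian

# On resonance the NV glows less; off resonance it glows at full strength.
# Sweeping the drive frequency therefore maps out the particle's field.
print(odmr_signal(2.87, 0.0) < odmr_signal(2.95, 0.0))  # True
```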

The particle floats in a shallow lake of de-ionized-water-based solution in a setup called a microfluidic chip.  The diamond is attached firmly to the bottom of this lake.  The particle moves, and is steered around the chip, as electrodes positioned in the channels coax ions in the liquid into forming gentle currents.  Like a ship sailing to Europe with the help of the Gulf Stream, the particle rides these currents with sub-micron control.  The particle can even be maneuvered in the vertical direction by an external magnetic coil (not shown in the drawing).

“We plan to use multiple diamonds in order to do complex vectorial magnetic analysis,” says graduate student Kangmook Lim, the lead author on the publication.  “We will also use floating diamonds instead of stationary ones, which would be very useful for scanning the nano-magnetism of biological samples.”

REFERENCE PUBLICATION:  “Scanning localized magnetic fields in a microfluidic device with a single nitrogen vacancy center,” Kangmook Lim, Chad Ropp, Benjamin Shapiro, Jacob M. Taylor, Edo Waks, Nano Letters, 5 February 2015.

Read More

A new experiment conducted at the University of California at Berkeley used quantum information techniques for a precision test of a cornerstone principle of physics, namely Lorentz invariance.  This precept holds that the results of a physics experiment do not depend on its absolute spatial orientation.  The work uses quantum-correlated electrons within a pair of calcium ions to look for shifts in quantum energy levels with unprecedented sensitivity.  JQI Adjunct Fellow and University of Delaware professor Marianna Safronova, who contributed a theoretical analysis of the data, said that the experiment was able to probe Lorentz symmetry violation at a level comparable to the ratio of the electroweak and Planck energy scales.  These correspond, respectively, to the energy scale of the universe at which the electromagnetic and weak forces become comparable in strength, and the scale at which gravity becomes comparable in strength to the other physical forces.

Lorentz symmetry is fundamental to both the standard model of particle physics and general relativity. However, the theoretical effort aimed at unifying gravity with the other fundamental interactions suggests that Lorentz invariance may not be an exact symmetry. Moreover, it may be possible to detect minuscule Lorentz-violating effects at experimentally accessible energy scales.  Thus, Lorentz symmetry tests such as the one carried out at Berkeley may provide a low-energy window into possible scenarios of theories beyond the standard model and general relativity.  Safronova notes that tabletop experiments such as the Berkeley effort complement direct searches for new physics conducted at high-energy labs such as the Large Hadron Collider.

The Berkeley experiment did not detect any telltale shifts in energy levels.  However, the importance of probing the efficacy of the Lorentz principle is so great that even a null result at high sensitivity is notable.  The scientists used the data to establish that no shifts in the behavior of electrons (and hence no evidence of Lorentz-violating effects) were observed at a sensitivity of one part in 10¹⁸.  This is some 100 times better than the best previous measurements. The experiment also improved by a factor of five the assertion that the speed of light is isotropic (equal in all directions).  All these results of the Berkeley experiment are published in the 29 January issue of the journal Nature.

The authors include the UC Berkeley team of Hartmut Häffner, Thaned Pruttivarasin (now at the Quantum Metrology Laboratory, RIKEN, Japan), Michael Ramm, and Michael Hohensee (now at the Lawrence Livermore National Lab); Sergey Porsev of the University of Delaware and PNPI, Russia; Ilya Tupitsyn of St. Petersburg State University in Russia; and Marianna Safronova of JQI and the University of Delaware.



The Berkeley experiment imposed stringent limits on Lorentz symmetry violation in the same way that the classic experiment conducted by Albert A. Michelson and Edward W. Morley in 1887 ruled out the existence of subtle “aether” fields.  In those years scientists supposed that light waves, like all waves then known, had to propagate through an underlying medium---as ocean waves roll through water and sound waves are pressure waves moving through air.  To test for the aether, Michelson and Morley broke a pulse of light into two parts, which then took equal but perpendicular paths.  Reflecting from mirrors, these pulses were recombined to form an interference pattern.  The apparatus (riding on the Earth through its orbit) moving through the presumed stationary aether would impose a slightly different pathway on the two beams.  This in turn would shift the interference pattern, heralding the aether.  No shift was discovered, supporting Albert Einstein’s later assertion that the aether does not exist.

The Berkeley experiment does the same thing for electron waves, for an apparatus that rotates along with the Earth in its daily rotation.  According to modern field theory, all particles, including atoms and the atoms’ own constituents such as electrons, can be thought of as fields, variations in the likelihood of quantum energy being present at various places and times.  Lorentz invariance might, in principle, have a different standing for each of the known fields.  The Berkeley experiment may be interpreted as a test of Lorentz invariance for either electrons or light. 



The electrons in question are the outer electrons of calcium ions maintained in an electric enclosure called a Paul trap.  In quantum information experiments, the energy levels of calcium ions serve as the basis for quantum bits, or qubits, as depicted in Figure 1a.

Such atomic qubits can be manipulated with laser beams into residing in a superposition of two discrete levels simultaneously.  In the trap, two ions 16 microns apart are exposed to a static magnetic field.  In a process called the Zeeman effect, the magnetic field causes the internal quantum levels of the ions to split into finely spaced sub-levels.  These subsidiary levels are designated by the possible z-components of the magnetic moment, labelled mJ.  For a calcium ion, the outermost electron is in a state called D5/2, meaning that the electron is in a D orbital (the probability of finding the electron is highest along a dumbbell-shaped surface) and the total electron angular momentum J has a value of 5/2 units.  Turning on a magnetic field splits what was a single quantum energy level into six sub-states identified by the component of the electron’s magnetic moment along the direction singled out by the external magnetic field, as shown in Figure 1b.  The six values are designated as +5/2, +3/2, +1/2, -1/2, -3/2, and -5/2.
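The sub-level bookkeeping can be spelled out explicitly. In the linear Zeeman regime each mJ sub-level shifts by gJ · mJ · μB · B, and the textbook Landé g-factor for an LS-coupled D5/2 level (L = 2, S = 1/2, J = 5/2) works out to 6/5. This is a standard-formula sketch, not a calculation from the paper.

```python
from fractions import Fraction

# The six mJ values of a J = 5/2 level: -5/2, -3/2, -1/2, +1/2, +3/2, +5/2
m_j_values = [Fraction(m, 2) for m in range(-5, 6, 2)]
print(len(m_j_values))  # 6 sub-levels, as in Figure 1b

# Linear Zeeman shifts in units of (mu_B * B), using the Lande g-factor 6/5
g_j = Fraction(6, 5)
shifts = [g_j * m_j for m_j in m_j_values]
print(shifts[-1])  # the +5/2 sub-level shifts by 3 mu_B B
```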

One of the biggest potential sources of uncertainty in the measurement of the transition between the two states is introduced by tiny fluctuations in the strength of the external magnetic field.  Since the energy spacing of the sub-levels is directly proportional to the magnetic field, any noise in the field translates directly into noise, or uncertainty, in the measured energy levels. To counteract magnetic noise and to make their measurements as sensitive as possible, the physicists use not a single ion but two correlated ions in a quantum superposition of the (-1/2, +1/2) and (-5/2, +5/2) states, as shown in Figure 2. 

By using two quantum-correlated ions, magnetic noise can be minimized, since fluctuations produce cancelling effects on such pairings of qubit configurations.  The construction of such “decoherence-free states” was originally developed for quantum information applications.

As a result, the electron wavefunction (denoted by the Greek letter psi) of the two-ion object (or wave packet) is in a superposition of ±1/2 and ±5/2 states.  This condition is allowed to last for about 93 milliseconds.  During this time, because they have different spatial orientations, the 1/2 part and the 5/2 part of the wave packet will evolve differently if a spatially-anisotropic, Lorentz-violating effect is at work.  This in turn will subtly alter the nature of the superposition of the two-ion state. That fact, coupled with the motion of the atom trap through space as the Earth rotates over its daily period, would create an interference effect in measurements of the status of the ions if a spatial-orientation-dependent force were present.  The lack of any observed interference suggested that there was no Lorentz-violating effect, at a sensitivity down to a level of 11 millihertz (times Planck’s constant).  This translates into a Lorentz-violation sensitivity of one part in 10¹⁸.

Indiana University physicist Alan Kostelecky, an expert on the subject of Lorentz violations, had this to say of the experiment in his “News and Views” article published in the same issue of Nature: “This represents a milestone sensitivity, because it is smaller than the dimensionless ratio of about 10⁻¹⁷ between the strengths of the electroweak and gravitational forces that could naturally be expected to govern violations of Lorentz invariance arising in unified theories of quantum physics and gravity.  The authors' experiment is thus the first to delve into this realm of sensitivity for electrons.”

Could the Lorentz-violating effects be there after all, but at a harder-to-measure magnitude?  Scientists are compelled to keep looking, as such an observation would be unambiguous evidence for new physics.   “We believe we can achieve much higher sensitivity, maybe by a factor of 10000,” said Safronova, who is an Adjunct Fellow of the Joint Quantum Institute and a physics professor at the University of Delaware.   “This could happen by using ions that are more sensitive to Lorentz violation, such as Yb+ or certain highly-charged ions, and by recording measurements over a longer period.”  JQI’s Christopher Monroe is using trapped Yb+ ions for his forefront quantum information research.

Read More

We want data.  Lots of it.  We want it now.  We want it to be cheap and accurate.

Researchers try to meet the inexorable demands made on the telecommunications grid by improving various components.  In October 2014, for instance, scientists at the Eindhoven University of Technology in The Netherlands did their part by setting a new record for transmission down a single optical fiber: 255 terabits per second.

Alan Migdall and Elohim Becerra and their colleagues at the Joint Quantum Institute do their part by attending to the accuracy at the receiving end of the transmission process.  They have devised a detection scheme with an error rate 25 times lower than the fundamental limit of the best conventional detector.  They did this not by passive detection of incoming light pulses; instead, the light is split up and measured numerous times.

 The new detector scheme is described in a paper published in the journal Nature Photonics.

“By greatly reducing the error rate for light signals, we can lessen the amount of power needed to send signals reliably,” says Migdall.  “This will be important for a lot of practical applications in information technology, such as using less power in sending information to remote stations.  Alternatively, for the same amount of power, the signals can be sent over longer distances.”

Phase Coding

Most information comes to us nowadays in the form of light, whether radio waves sent through the air or infrared waves sent up a fiber.  The information can be coded in several ways.  Amplitude modulation (AM) maps analog information onto a carrier wave by momentarily changing its amplitude.  Frequency modulation (FM) maps information by changing the instantaneous frequency of the wave.  On-off modulation is even simpler: quickly turn the wave off (0) and on (1) to convey a desired pattern of binary bits.

 Because the carrier wave is coherent---for laser light this means a predictable set of crests and troughs along the wave---a more sophisticated form of encoding data can be used.  In phase modulation (PM) data is encoded in the momentary change of the wave’s phase; that is, the wave can be delayed by a fraction of its cycle time to denote particular data.  How are light waves delayed?  Usually by sending the waves through special electrically controlled crystals.

Instead of using just the two states (0 and 1) of binary logic, in Migdall’s experiment the waves are modulated to provide four states (1, 2, 3, 4), which correspond respectively to the wave being un-delayed, delayed by one-fourth of a cycle, delayed by two-fourths of a cycle, and delayed by three-fourths of a cycle.  The four phase-modulated states are more usefully depicted as four positions around a circle (figure 2).  The radius of each position corresponds to the amplitude of the wave, or equivalently the number of photons in the pulse of waves at that moment.  The angle around the graph corresponds to the signal’s phase delay.
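This four-state phase encoding (quadrature phase-shift keying) maps directly onto points in the phase plane. A minimal sketch, with the mean photon number chosen arbitrarily for illustration:

```python
import cmath
import math

def phase_encoded_state(symbol, mean_photons):
    # Symbols 1..4 delay the wave by 0, 1/4, 2/4, and 3/4 of a cycle.
    # Represent each as a point in the phase plane whose radius is the
    # square root of the mean photon number.
    amplitude = math.sqrt(mean_photons)
    phase = (symbol - 1) * math.pi / 2  # quarter-cycle steps
    return cmath.rect(amplitude, phase)

# Four constellation points evenly spaced on a circle, as in figure 2
points = [phase_encoded_state(s, mean_photons=4) for s in (1, 2, 3, 4)]
print(round(abs(points[0]), 6))                    # radius = sqrt(4) = 2.0
print(round(cmath.phase(points[2]) / math.pi, 6))  # state 3: half-cycle delay
```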

 The imperfect reliability of any data encoding scheme reflects the fact that signals might be degraded or the detectors poor at their job.  If you send a pulse in the 3 state, for example, is it detected as a 3 state or something else?  Figure 2, besides showing the relation of the 4 possible data states, depicts uncertainty inherent in the measurement as a fuzzy cloud.  A narrow cloud suggests less uncertainty; a wide cloud more uncertainty.  False readings arise from the overlap of these uncertainty clouds.  If, say, the clouds for states 2 and 3 overlap a lot, then errors will be rife.

In general, the accuracy will go up if n, the mean number of photons (comparable to the intensity of the light pulse), goes up.  This principle is illustrated in the figure to the right, where the clouds are farther apart than in the left panel.  This means there is less chance of mistaken readings.  More intense beams require more power, but this mitigates the chance of overlapping blobs.
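A toy version of the overlapping-clouds picture: treat two neighboring symbols as one-dimensional Gaussian distributions and compute how often a midpoint-threshold detector confuses them. The Gaussian model and unit noise width are illustrative, not the actual quantum-limit calculation.

```python
import math

def misread_probability(cloud_separation, sigma=1.0):
    # Two symbols modeled as Gaussian "clouds" of width sigma, a distance
    # `cloud_separation` apart; a threshold midway between them misreads
    # with probability 0.5 * erfc(d / (2 * sqrt(2) * sigma)).
    return 0.5 * math.erfc(cloud_separation / (2 * math.sqrt(2) * sigma))

# Brighter pulses (larger mean photon number) push the clouds apart,
# so the overlap, and with it the error rate, shrinks rapidly.
print(misread_probability(2.0) > misread_probability(6.0))  # True
```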

Twenty Questions

So much for the sending of information pulses.  How about detecting and accurately reading that information?  Here the JQI detection approach resembles “20 questions,” the game in which a person identifies an object or person by asking question after question, thus eliminating all things the object is not.

In the scheme developed by Becerra (who is now at the University of New Mexico), the arriving information is split by a special mirror that typically sends part of the waves in the pulse into detector 1.  There the waves are combined with a reference pulse.  If the reference pulse phase is adjusted so that the two wave trains interfere destructively (that is, they cancel each other out exactly), the detector will register nothing.  This answers the question “what state was that incoming light pulse in?” When the detector registers nothing, the phase of the reference light provides that answer, … probably.

That last caveat is added because it could also be the case that the detector (whose efficiency is less than 100%) would not fire even with incoming light present. Conversely, perfect destructive interference might have occurred, and yet the detector still fires---an eventuality called a “dark count.”  Still another possible glitch: because of optics imperfections, even with a correct reference-phase setting, the destructive interference might be incomplete, allowing some light to hit the detector.

The way the scheme handles these real world problems is that the system tests a portion of the incoming pulse and uses the result to determine the highest probability of what the incoming state must have been. Using that new knowledge the system adjusts the phase of the reference light to make for better destructive interference and measures again. A new best guess is obtained and another measurement is made.

As the process of comparing portions of the incoming information pulse with the reference pulse is repeated, the estimate of the incoming signal’s true state gets better and better.  In other words, the probability of being wrong decreases.

By encoding millions of pulses with known information values and then comparing them to the measured values, the scientists can measure the actual error rates.  Moreover, the error rates can be determined as the input laser is adjusted so that the information pulse comprises a larger or smaller number of photons.  (Because of the uncertainties intrinsic to quantum processes, one never knows precisely how many photons are present, so the researchers must settle for knowing the mean number.) 

A plot of the error rates shows that for a range of photon numbers, the error rates fall below the conventional limit, agreeing with results from Migdall’s experiment from two years ago. But now the error curve falls even more below the limit and does so for a wider range of photon numbers than in the earlier experiment. The difference with the present experiment is that the detectors are now able to resolve how many photons (particles of light) are present for each detection.  This allows the error rates to improve greatly.

For example, at a photon number of 4, the expected error rate of this scheme (how often does one get a false reading) is about 5%.  By comparison, with a more intense pulse, with a mean photon number of 20, the error rate drops to less than a part in a million.

The earlier experiment achieved error rates 4 times better than the “standard quantum limit,” a level of accuracy expected using a standard passive detection scheme.  The new experiment, using the same detectors as in the original experiment but in a way that could extract some photon-number-resolved information from the measurement, reaches error rates 25 times below the standard quantum limit.

“The detectors we used were good but not all that heroic,” says Migdall.  “With more sophistication the detectors can probably arrive at even better accuracy.”

The JQI detection scheme is an example of what would be called a “quantum receiver.”  Your radio receiver at home also detects and interprets waves, but it doesn’t merit the adjective quantum.  The difference here is that single-photon detection and an adaptive measurement strategy are used.  A stable reference pulse is required. In the current implementation that reference pulse has to accompany the signal from transmitter to detector.

Suppose you were sending a signal across the ocean in the optical fibers under the Atlantic.  Would a reference pulse have to be sent along that whole way?  “Someday atomic clocks might be good enough,” says Migdall, “that we could coordinate timing so that the clock at the far end can be read out for reference rather than transmitting a reference along with the signal.”

Read More
Thermal interference

Observing the quantum behavior of light is a big part of Alan Migdall’s research at the Joint Quantum Institute.  Many of his experiments depend on observing light in the form of photons---the particle counterpart of light waves---and sometimes only one photon at a time, using “smart” detectors that can count the number of individual photons in a pulse.  Furthermore, to observe quantum effects, it is normally necessary to use a beam of coherent light, light for which knowing the phase or intensity of one part of the beam allows you to know things about distant parts of the same beam.

In a new experiment, however, Migdall and his JQI colleagues perform an experiment using incoherent light, where the light is a jumble of waves.  And they use what Migdall calls “stupid” detectors that, when counting the number of photons in a light pulse, can really only count up to zero: anything more than zero befuddles these detectors and is registered simply as a number known only to be more than zero.

Basically the surprising result is this: using incoherent light (with a wavelength of 800 nm) sent through a double-slit baffle, the JQI scientists obtain an interference pattern with fringes (the characteristic series of dark and light stripes denoting respectively destructive and constructive interference) as narrow as 30 nm.

This represents a new extreme in the degree to which sub-wavelength interference (to be defined below) has been pushed using thermal light and small-photon-number light detection.  The physicists were surprised that they could so easily obtain such a sharp interference effect using standard light detectors.  The importance of achieving sub-wavelength imaging is underscored by the awarding of the 2014 Nobel Prize for chemistry to scientists who had done just that.

The results of Migdall’s new work appear in the journal Applied Physics Letters (1).  Achieving this kind of sharp interference pattern could be valuable for performing a variety of high-precision physics and astronomy measurements.


When they pass through a hole or past a material edge, light waves will diffract---that is, a portion of the light will fan out as if the edge were a source of waves itself.  This diffraction limits the sharpness of any imaging performed with the light.  Indeed, the diffraction limit is one of the traditional features of classical optical science, dating back to the mid-19th century.  What this principle says is that, using light of a certain wavelength (denoted by the Greek letter lambda), an object can in general be imaged with a spatial resolution roughly no finer than lambda.  One can improve resolution somewhat by increasing lens diameters, but unless you can switch to light of shorter lambda, you are stuck with the imaging resolution you’ve got.  And since the wavelengths of visible light span only about a factor of two, gaining much resolution by switching wavelengths requires exotic sources and optics.

The advent of quantum optics and the use of “nonclassical light” dodged the diffraction limit.  It did this, in certain special circumstances, by considering light as consisting of particles and using the correlations between those particles.

The JQI experiment starts out with a laser beam, but it purposely degrades the coherence of the light by sending it through a moving disk of ground glass.  Thereafter the light waves propagating toward the measuring apparatus downstream originate from a number of places across the profile of the rough disk and are no longer coordinated in space and time (in contrast to laser light).  Experiments more than a decade ago, however, showed that “thermal” light (not unlike the light emitted haphazardly by an incandescent bulb) made this way, while incoherent over long times, is coherent for times shorter than some value easily controlled by the speed of the rotating ground glass disk.

Why should the JQI researchers use such thermal light if laser light is available?  Because in many measurement environments (such as light coming from astronomical sources) coherent light is not available, and one would nevertheless like to make sharp imaging or interference patterns.  And why use “stupid” detectors?  Because they are cheaper to use. 


In the case of coherent light, a coordinated train of waves approaches a baffle with two openings (figure, top).  The light waves passing through will interfere, creating a characteristic pattern as recorded by a detector, which is moved back and forth to record the arrival of light at various points.  The interference of coherent light yields a fixed pattern (right top in the figure).  By contrast, incoherent light waves passing through the slits will also interfere (lower left), but will not create a fixed pattern.  Instead the pattern will change from moment to moment.

In the JQI experiment, the waves coming through the slits meet a beam splitter, a thin layer of material that reflects roughly half the waves at an angle of 90 degrees and transmits the other half straight ahead.  Each of these two portions of light strikes a movable detector that scans sideways across the light field.  If the detectors could record a whole pattern, they would show that the pattern changes from moment to moment.  Adding up all these patterns washes out the result.  That is, no fringes would appear.

Things are different if you record not just the instantaneous interference pattern but rather a correlation between the two movable detectors.  Correlation, in this case, means answering this question: when detector 1 observes light at a coordinate x1 how often does detector 2 observe light at a coordinate x2?

Plotting such a set of correlations between the two detectors does result in an interference-like pattern, but it is important to remember that this is not a pattern of light and dark regions.  Instead, it is a higher order effect that tells you the probability of finding light “here” given that you found it “over there.”  Because scientists want to record those correlations over a range of separations between “here” and “over there” that includes separations that pass through zero, there is a problem. If the two locations are too close, the detectors would run into each other.

To avoid that, a simple partially silvered mirror, commonly called a beam splitter, effectively makes two copies of the light field.  That way the two detectors can simultaneously sample the light from virtual positions that can be as close as desired and can even pass through each other.
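The wash-out of the averaged pattern and its recovery in the correlations can be illustrated with a minimal toy model (our simplification, not the exact JQI apparatus): on each short exposure, thermal light gives the two slits a random relative phase, and detector position is expressed directly as the interference phase theta = k·d·x/L.

```python
import numpy as np

rng = np.random.default_rng(0)

# Detector positions, expressed as the interference phase theta = k*d*x/L
# (slit separation d, screen distance L). Purely illustrative units.
theta = np.linspace(-3 * np.pi, 3 * np.pi, 301)

n_shots = 20000
phi = rng.uniform(0, 2 * np.pi, n_shots)   # random relative slit phase per shot

# Instantaneous double-slit intensity for each shot: 1 + cos(theta + phi).
I = 1 + np.cos(theta[None, :] + phi[:, None])

# Averaging over shots washes the fringes out completely...
mean_I = I.mean(axis=0)

# ...but correlating each position with a fixed reference detector at
# theta = 0 recovers a fringe pattern: <I(theta) I(0)> = 1 + 0.5*cos(theta).
ref = I[:, theta.size // 2]
G2 = (I * ref[:, None]).mean(axis=0)

print(mean_I.std())        # near zero: the averaged pattern is flat
print(G2.max(), G2.min())  # near 1.5 and 0.5: fringes appear in the correlation
```

The fringes thus live in the two-detector correlation, not in the light intensity itself, which is exactly the distinction drawn above.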

And what about the use of stupid detectors, those for which each “click” denoting an arrival tells us only that more than zero photons have arrived?  Here the time structure of the incoming light pulse becomes important in clarifying the measurement.  If we look at a short enough time window, we can arrange that the probability of more than one photon arriving is very low, so a click tells us with good accuracy that indeed just one photon has arrived.  If we then design the light so that its limited coherence time is longer than the recovery time of our stupid detectors, the detector can tell us that a specific number of photons were recorded, perhaps 3 or 10, not just the uninformative “more than zero” answer.  “In this way, we get dumb detectors to act in a smart way,” says Migdall.
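A hedged sketch of this counting trick, using a simple Poisson model rather than the actual apparatus: slice a pulse into time bins short enough that the mean photon number per bin is small, and the number of bins that click approximates the total photon number.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: a click detector reports only "zero" or "more than zero"
# photons per time bin. With many short bins per pulse (possible when the
# light's coherence time exceeds the detector's recovery time), the chance
# of two photons landing in one bin is small, so clicks ~ photon number.
mu_total = 5.0                 # mean photons per pulse (illustrative)
n_bins = 100                   # short time bins per pulse (illustrative)
mu_bin = mu_total / n_bins

trials = 20000
photons = rng.poisson(mu_bin, size=(trials, n_bins))
clicks = (photons > 0).sum(axis=1)     # what the click detector reports

true_mean = photons.sum(axis=1).mean()
click_mean = clicks.mean()
print(true_mean, click_mean)   # the click count only slightly undercounts
```

The small undercount comes from the rare bins that receive two photons but yield one click; shrinking the bins shrinks that error.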

This improved counting of photons---or, equivalently, of the intensity of the light at various places on the measuring screen---ensures that the set of correlations between the two detectors does result in an interference-like pattern in those correlations.  Not only that, but the fringes of this correlation pattern---the distance between successive peaks---can be as small as 30 nm.

So while an interference pattern could not be seen directly with dumb detectors, it could be obtained by engineering the properties of the light source to accommodate the detectors’ limitations and then accumulating a pattern of correlations between the two detectors.

Considering that the incoming light has a wavelength of 800 nm, the pattern is sharper by a factor of 20 or more than what you would expect if the diffraction limit were at work.  The fact that the light used is thermal in nature, and not coherent, makes the achievement more striking.

This correlation method is not the same as imaging an object.  But the ease and the degree to which the conventional diffraction resolution limit could be surmounted will certainly encourage a look for specific applications that might take advantage of that remarkable feature. 

Read More
Superfluid interference

In certain exotic situations, a collection of atoms can transition to a superfluid state, flouting the normal rules of liquid behavior. Unlike a normal, viscous fluid, the atoms in a superfluid flow unhindered by friction. This remarkable free motion is similar to the movement of electron pairs in a superconductor, the prefix ‘super’ in both cases describing the phenomenon of resistanceless flow. Harnessing this effect is of particular interest in the field of atomtronics, since superfluid atom circuits can recreate the functionality of superconductor circuits, with atoms zipping about instead of electrons. Now, JQI scientists have added an important technique to the atomtronics arsenal, a method for analyzing a superfluid circuit component known as a ‘weak link’. The result, detailed in the online journal Physical Review X, is the first direct measurement of the current-phase relationship of a weak link in a cold atom system.

“What we have done is invented a way to characterize a particular circuit element [in a superfluid atomtronic circuit],” says Stephen Eckel, lead author of the paper. “This is similar to characterizing a component in an ordinary electrical circuit, where one measures the current that flows through the component vs. the voltage across it.”

Properly designing an electronic circuit means knowing how each component in the circuit affects the flow of electrons. Otherwise, your circuit won’t function as expected, and in the worst case will torch your components into uselessness. This is similar to the plumbing in a house, where the shower, sink, toilet, etc. all need the proper amount of water and water pressure to operate. Measuring the current-voltage relationship, or how the flow of current changes based on a voltage change, is an important way to characterize a circuit element. For instance, a resistor will have a different current-voltage relationship than a diode or capacitor. In a superfluid atom circuit, an analogous measurement of interest is the current-phase relationship, basically how a particular atomtronic element changes the flow of atoms.

Interferometric Investigations

The experiment, which took place at a JQI lab on the NIST-Gaithersburg campus, involves cooling roughly 800,000 sodium atoms down to an extremely low temperature, around a decidedly chilly hundred billionths of a degree above absolute zero. At these temperatures, the atoms behave as matter waves, overlapping to form something called a Bose-Einstein condensate (BEC). The scientists confine the condensate between a sheet-like horizontal laser and a target shaped vertical laser. This creates two distinct clouds, the inner one shaped like a disc and the outer shaped like a ring. The scientists then apply another laser to the outer condensate, slicing the ring vertically. This laser imparts a repulsive force to the atoms, driving them apart and creating a low density region known as a weak link (Related article on this group's research set-up).

The weak link used in the experiment is like the thin neck between reservoirs of sand in an hourglass, constricting the flow of atoms across it. Naturally, you might expect that a constriction would create resistance. Consider pouring syrup through a straw instead of a bottle -- this would be a very impractical method of syrup delivery. However, due to the special properties of the weak link, the atoms can flow freely across the link, preserving superfluidity. This doesn’t mean the link has no influence: when rotated around the ring, the weak link acts kind of like a laser ‘spoon’, ‘stirring’ the atoms and driving an atom current.

After stirring the ring of atoms, the scientists turn off all the lasers, allowing the two BECs to expand towards each other. Like ripples on a pond, these clouds interfere both constructively and destructively, forming intensity peaks and valleys. The researchers can use the resulting interference pattern to discern features of the system, a process called interferometry.

Gleaning useful data from an interference pattern means having a reference wave. In this case, the inner BEC serves as a phase reference. A way to think of phase is in the arrival of a new day. A person who lives on the other side of the planet from you experiences a new day at the same frequency as you do, once every 24 hours. However, the arrival of the day is offset in time, that is to say there is a phase difference between your day and the other person's day.

As the two BECs interfere, the position of the interference fringes (peaks in the wave) depends on the relative phase between the two condensates. If a current is present in the outer ring-shaped BEC, the relative phase is changing as a function of the position of the ring, and the interference fringes assume a spiral pattern. By tracing a single arm of the spiral a full 360 degrees and measuring the radial difference between the beginning and end of the trace, the researchers can extract the magnitude of the superfluid current present in the ring.
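The spiral pattern works because a persistent current of winding number n advances the relative phase by 2π·n over one trip around the ring. A minimal sketch of extracting that winding number from a sampled phase profile (toy data, not the group’s actual image analysis):

```python
import numpy as np

# Toy phase profile around the ring: a current of winding number n_true
# advances the relative phase by 2*pi*n_true over one full loop, which is
# what turns the interference fringes into an n-armed spiral.
n_true = 2
angles = np.linspace(0, 2 * np.pi, 500)        # one full trip around the ring
phase = n_true * angles + 0.3                  # plus an arbitrary global offset

# A measurement only gives the phase modulo 2*pi...
wrapped = np.angle(np.exp(1j * phase))

# ...so unwrap it and count the number of full 2*pi turns accumulated.
unwrapped = np.unwrap(wrapped)
winding = round((unwrapped[-1] - unwrapped[0]) / (2 * np.pi))
print(winding)  # recovers n_true = 2
```

The same unwrap-and-count idea corresponds to tracing one spiral arm through 360 degrees and measuring the radial offset, as described above.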

They now know the current, so what about the phase across the weak link? The same interferometry process can be applied to the two sides of the weak link, again yielding a phase difference. When coupled with the measured current, the scientists now have a measure of how much current flows through the weak link as a function of the phase difference across the link, the current-phase relationship. For their system, the group found this dependence to be roughly linear (in agreement with their model).

A different scenario, where the weak link has a smaller profile, might produce a different current response, one where non-linear effects play a larger role. Extending the same methods makes it possible to characterize these weak links as well, and could be used to verify a type of weak link called a Josephson junction, an important superconducting element, in a cold atom system. Characterizing the current-phase relationship of other atomtronic components should also be possible, broadening the capabilities of researchers to analyze and design new atomtronic systems.

This same lab, led by JQI fellow Gretchen Campbell, had recently employed a weak link to demonstrate hysteresis, an important property of many electronic systems, in a cold atom circuit. Better characterizing the weak link itself may help realize more complex circuits.  “We’re very excited about this technique,” Campbell says, “and hope that it will help us to design and understand more complicated systems in the future."

This article was written by S. Kelley/JQI.

Read More

Atomtronics is an emerging technology whereby physicists use ensembles of atoms to build analogs to electronic circuit elements. Modern electronics relies on utilizing the charge properties of the electron. Using lasers and magnetic fields, atomic systems can be engineered to have behavior analogous to that of electrons, making them an exciting platform for studying and generating alternatives to charge-based electronics.

Using a superfluid atomtronic circuit, JQI physicists, led by Gretchen Campbell, have demonstrated a tool that is critical to electronics: hysteresis. This is the first time that hysteresis has been observed in an ultracold atomic gas. This research is published in the February 13 issue of Nature magazine, whose cover features an artistic impression of the atomtronic system.

Lead author Stephen Eckel explains, “Hysteresis is ubiquitous in electronics. For example, this effect is used in writing information to hard drives as well as other memory devices.  It’s also used in certain types of sensors and in noise filters such as the Schmitt trigger.” Here is an example demonstrating how this common trigger is employed to provide hysteresis.  Consider an air-conditioning thermostat, which contains a switch to regulate a fan. The user sets a desired temperature. When the room air exceeds this temperature, a fan switches on to cool the room. When does the fan know to turn off? The fan actually brings the temperature lower to a different set-point before turning off. This mismatch between the turn-on and turn-off temperature set-points is an example of hysteresis and prevents fast switching of the fan, which would be highly inefficient.
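The thermostat example above can be sketched in a few lines of code; the set-point temperatures are illustrative choices, not from the article.

```python
def make_thermostat(turn_on=25.0, turn_off=23.0):
    """Fan control with hysteresis, like a Schmitt trigger: the fan switches
    on above turn_on and off only below turn_off. Temperatures between the
    two set-points leave the fan in whatever state it is already in, which
    prevents rapid on/off switching near a single threshold."""
    state = {"fan_on": False}

    def update(temp):
        if temp > turn_on:
            state["fan_on"] = True
        elif temp < turn_off:
            state["fan_on"] = False
        return state["fan_on"]

    return update

fan = make_thermostat()
readings = [22.0, 24.0, 26.0, 24.0, 23.5, 22.5, 24.0, 26.0]
history = [fan(t) for t in readings]
print(history)
# The reading 24.0 occurs three times, yet the fan state differs between
# visits: the output depends on history, the defining mark of hysteresis.
```
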

In the above example, the hysteresis is programmed into the electronic circuit. In this research, physicists observed hysteresis that is an inherent natural property of a quantum fluid. 400,000 sodium atoms are cooled to condensation, forming a type of quantum matter called a Bose-Einstein condensate (BEC), which has a temperature around 0.000000100 Kelvin (0 Kelvin is absolute zero). The atoms reside in a doughnut-shaped trap that is only marginally bigger than a human red blood cell. A focused laser beam intersects the ring trap and is used to stir the quantum fluid around the ring.

While BECs are made from a dilute gas of atoms less dense than air, they have unusual collective properties, making them more like a fluid—or in this case a superfluid.  What does this mean? First discovered in liquid helium in 1937, this form of matter, under some conditions, can flow persistently, undeterred by friction. A consequence of this behavior is that the fluid flow or rotational velocity around the team’s ring trap is quantized, meaning it can only spin at certain specific speeds. This is unlike a non-quantum (classical) system, where its rotation can vary continuously and the viscosity of the fluid plays a substantial role.

Because of the characteristic lack of viscosity in a superfluid, stirring this system induces drastically different behavior. Here, physicists stir the quantum fluid, yet the fluid does not speed up continuously. At a critical stir-rate the fluid jumps from having no-rotation to rotating at a fixed velocity. The stable velocities are a multiple of a quantity that is determined by the trap size and the atomic mass.
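The quantized velocities mentioned above follow from the superfluid carrying circulation in units of h/m, so the rotation rate of a ring of radius R steps by ħ/(mR²). A back-of-the-envelope calculation for sodium, with an illustrative ring radius (not the experiment’s exact geometry):

```python
# Quantized rotation rates of a superfluid ring: Omega_n = n * hbar / (m * R^2).
hbar = 1.054571817e-34       # reduced Planck constant, J*s
m_na = 23 * 1.66053907e-27   # approximate mass of sodium-23, kg
R = 20e-6                    # assumed ring radius, m (order of the trap size)

omega_step = hbar / (m_na * R**2)
for n in range(4):
    print(f"n={n}: Omega = {n * omega_step:.1f} rad/s")
```

The step size of a few radians per second is set entirely by the trap size and the atomic mass, matching the statement above that the stable velocities are multiples of such a quantity.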

This same laboratory has previously demonstrated persistent currents and this quantized velocity behavior in superfluid atomic gases. Now they have explored what happens when they try to stop the rotation, or reverse the system back to its initial velocity state. Without hysteresis, they could achieve this by reducing the stir-rate back below the critical value, causing the rotation to cease. In fact, they observe that they have to go far below the critical stir-rate, and in some cases reverse the direction of stirring, to see the fluid return to the lower quantum velocity state.

Controlling this hysteresis opens up new possibilities for building a practical atomtronic device. For instance, there are specialized superconducting electronic circuits that are precisely controlled by magnetic fields and in turn, small magnetic fields affect the behavior of the circuit itself. Thus, these devices, called SQuIDs (superconducting quantum interference devices), are used as magnetic field sensors. “Our current circuit is analogous to a specific kind of SQuID called an RF-SQuID,” says Campbell. “In our atomtronic version of the SQuID, the focused laser beam induces rotation when the speed of the laser beam ‘spoon’ hits a critical value. We can control where that transition occurs by varying the properties of the ‘spoon.’ Thus, the atomtronic circuit could be used as an inertial sensor.”

This two-velocity state quantum system has the ingredients for making a qubit. However, this idea has some significant obstacles to overcome before it could be a viable choice. Atomtronics is a young technology and physicists are still trying to understand these systems and their potential. One current focus for Campbell’s team includes exploring the properties and capabilities of the novel device by adding complexities such as a second ring.

This research was supported by the NSF Physics Frontier Center at JQI. 

This article was written by E. Edwards/JQI

Read More