Quantum Computer Creates Particle That Can Remember Its Past

In a significant advancement for quantum computing, a recent report by New Scientist reveals that a quantum computer has successfully generated a particle known as an anyon, which possesses the ability to retain its past states. This groundbreaking development carries the potential to enhance the capabilities of quantum computing systems.

Unlike conventional particles, anyons have the unusual ability to maintain a form of memory of their previous locations. First proposed theoretically in the 1970s, anyons exist only in two dimensions and are quasiparticles: collective vibrations that behave like particles.

Of particular interest is what happens when anyons swap positions: they retain a record of the number of swaps they have undergone, which influences their vibrational patterns. This intriguing quality makes them a compelling avenue for quantum computing. However, until now, experimental confirmation of their existence had remained elusive.
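
For readers who want the standard textbook statement of that "memory" (background physics, not something taken from the New Scientist report): when two ordinary identical particles are swapped, the quantum state can only pick up a factor of +1 (bosons) or -1 (fermions), so two swaps always return the original state. Anyons are not so constrained, which is what lets the state keep count of the swaps:

```latex
% Exchange statistics: the effect of one swap of two identical particles on the state \psi
\psi \xrightarrow{\text{swap}}
\begin{cases}
  +\,\psi           & \text{bosons}\\[2pt]
  -\,\psi           & \text{fermions}\\[2pt]
  e^{i\theta}\,\psi & \text{Abelian anyons (any phase } \theta\text{)}\\[2pt]
  U\,\psi           & \text{non-Abelian anyons (a unitary matrix } U\text{)}
\end{cases}
% After n swaps an Abelian anyon has accumulated the phase e^{i n \theta},
% so the state effectively counts the swaps; for non-Abelian anyons the
% product U_n \cdots U_1 also depends on the order in which the swaps occur.
```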

Enter Henrik Dreyer and his team at the quantum computing company Quantinuum. They have made a remarkable breakthrough with the development of a cutting-edge quantum processor called H2. This quantum processor can generate qubits, the fundamental units of quantum information, and also produce anyons on its surface, a significant achievement in the field.

With this advancement, the potential for leveraging anyons in quantum computing systems takes a significant leap forward. The ability of anyons to retain and manipulate information from previous states holds tremendous promise for enhancing the computational power and efficiency of future quantum computers.

A Kagome Lattice

The team did this by entangling the qubits in a formation called a kagome lattice, a pattern of interlocking stars familiar from traditionally woven Japanese baskets, which gives the qubits quantum mechanical properties identical to those predicted for anyons.
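
For readers unfamiliar with the geometry, the lattice itself is easy to sketch. The snippet below is a purely illustrative Python sketch, not Quantinuum's code: it generates the site coordinates of a small kagome patch, with three sites per unit cell arranged into corner-sharing triangles.

```python
# Illustrative sketch: coordinates of a small kagome lattice patch.
import math

a1 = (1.0, 0.0)                       # lattice vector 1
a2 = (0.5, math.sqrt(3) / 2)          # lattice vector 2
basis = [(0.0, 0.0), (0.5, 0.0), (0.25, math.sqrt(3) / 4)]  # 3 sites per unit cell

def kagome_sites(nx, ny):
    """Return (x, y) coordinates for an nx-by-ny patch of kagome sites."""
    sites = []
    for i in range(nx):
        for j in range(ny):
            for bx, by in basis:
                sites.append((i * a1[0] + j * a2[0] + bx,
                              i * a1[1] + j * a2[1] + by))
    return sites

print(len(kagome_sites(3, 3)))  # 27 sites: 3 per cell, 9 cells
```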

“This is the first convincing test that’s been able to do that, so this would be the first case of what you would call non-Abelian topological order,” Steven Simon at the University of Oxford told New Scientist.

MIT Scientists Create More Powerful, Dense Computer Chips

The demand for more powerful and denser computer chips keeps growing with the rise of electronic gadgets and data centers. Traditional methods for making these chips rely on bulky 3D materials, which make stacking difficult. However, a team of interdisciplinary MIT researchers has developed a new technique that can grow transistors from ultrathin 2D materials directly on top of fully fabricated silicon chips.

The researchers published their findings in the peer-reviewed scientific journal Nature Nanotechnology. The new process involves growing smooth and uniform layers of 2D materials across 8-inch wafers, which can be critical for commercial applications where larger wafer sizes are typical.

The team focused on using molybdenum disulfide, a flexible and transparent 2D material with powerful electronic and photonic properties. Typically, these thin films are grown using metal-organic chemical vapor deposition (MOCVD) at temperatures above 1,022 degrees Fahrenheit (550 degrees Celsius), which can degrade silicon circuits.

To overcome this, the researchers designed and built a new furnace with two chambers: a low-temperature front region, where the silicon wafer is placed, and a high-temperature rear region. Vaporized molybdenum and sulfur compounds are then pumped into the furnace. The molybdenum compound stays and decomposes in the cooler front region, while the sulfur compound flows into the hotter rear, decomposes, and then flows back into the front to react and grow molybdenum disulfide on the surface of the wafer.
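
The division of labor between the two zones can be summarized in a short sketch. The temperatures below are placeholder assumptions, not figures from the Nature Nanotechnology paper; the point is only that the wafer never experiences the high-temperature zone.

```python
# Illustrative sketch of the two-zone furnace idea described above.
# All numbers are assumed placeholders, not values from the paper.

SILICON_DAMAGE_C = 400   # assumed rough limit for finished silicon circuits
front_zone_c = 300       # cooler zone holding the wafer (assumed value)
rear_zone_c = 550        # hotter zone that cracks the sulfur precursor (assumed value)

def grow_mos2(front_c, rear_c):
    """Sketch of one growth cycle in the two-zone furnace."""
    assert front_c < SILICON_DAMAGE_C, "wafer zone must stay cool enough for the circuits"
    return [
        "molybdenum precursor decomposes in the cooler front zone, near the wafer",
        f"sulfur precursor flows to the rear and decomposes at ~{rear_c} C",
        "decomposed sulfur flows back to the front and reacts with the molybdenum",
        "molybdenum disulfide grows directly on the pre-fabricated wafer surface",
    ]

for step in grow_mos2(front_zone_c, rear_zone_c):
    print("-", step)
```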

This innovative technique is a significant advancement in the development of more powerful and denser computer chips. With this breakthrough, the researchers were able to construct multistory building-like structures, significantly increasing the density of integrated circuits. In the future, the team hopes to fine-tune their technique and explore growing 2D materials on everyday surfaces like textiles and paper, potentially revolutionizing the industry.

A Computer Breakthrough Helps Solve a Complex Math Problem 1 Million Times Faster

Reservoir computing, a machine learning algorithm that mimics the workings of the human brain, is revolutionizing how scientists tackle the most complex data processing challenges. Now, researchers have discovered a new technique that can make it up to a million times faster on specific tasks while using far fewer computing resources and less input data.

With the next-generation technique, the researchers were able to solve a complex computing problem in less than a second on a desktop computer. Such complex problems, like forecasting the evolution of dynamical systems such as the weather, which change over time, are exactly why reservoir computing was developed in the early 2000s.

These systems can be extremely difficult to predict, with the “butterfly effect” being a well-known example. The concept, which is closely associated with the work of mathematician and meteorologist Edward Lorenz, essentially describes how a butterfly fluttering its wings can influence the weather weeks later. Reservoir computing is well suited to learning such dynamical systems and can provide accurate projections of how they will behave in the future; however, the larger and more complex the system, the more computing resources, the larger the network of artificial neurons, and the more time are required to obtain accurate forecasts.
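
To make the butterfly effect concrete, the Lorenz equations, the toy atmospheric model behind that image, can be integrated in a few lines of Python. The sketch below is a generic illustration, not code from the study: two trajectories that start almost identically end up completely different after a short time.

```python
# Minimal Euler integration of the Lorenz system, the classic chaotic model
# associated with Edward Lorenz and the "butterfly effect".
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

# Two trajectories that start almost identically...
a, b = (1.0, 1.0, 1.0), (1.0, 1.0, 1.000001)
for _ in range(5000):
    a = lorenz_step(*a)
    b = lorenz_step(*b)

# ...diverge completely: sensitive dependence on initial conditions.
print(a)
print(b)
```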

Until now, however, researchers have known only that reservoir computing works, not what goes on inside it. The artificial neural networks in reservoir computing are constructed on mathematics, and it turns out that all the system needed to run more efficiently was a simplification. A team of researchers led by Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University, was able to do just that, dramatically reducing the need for computing resources and saving significant time.

When the concept was put to the test on a forecasting task, it was discovered that the next-generation reservoir computing technique was clearly superior to others, according to the study published in the journal Nature Communications.

Depending on the data, the new approach proved to be 33 to 163 times faster. When the objective was changed to favor accuracy, however, the new model was 1 million times faster. This increase in speed was made possible by the fact that next-generation reservoir computing requires less warm-up and training than previous generations.

“For our next-generation reservoir computing, there is almost no warming time needed,” explained Gauthier, in a press release. “Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points.”

Furthermore, the new technique was able to attain the same accuracy with only 28 neurons, as opposed to the 4,000 required by the current-generation model.
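
For readers curious what "one or two or three data points" of warm-up and a few dozen neurons can look like in practice, the next-generation approach amounts to a nonlinear vector autoregression: a short feature vector built from the current sample, a couple of delayed samples, and their pairwise products, mapped to the next step by a single ridge regression. The NumPy sketch below is a generic illustration of that idea; the variable names, hyperparameters, and toy data are assumptions, not the authors' code or settings.

```python
# Sketch of a next-generation-reservoir-computing-style forecaster:
# features = [constant, current + delayed samples, their pairwise products],
# fitted with one ridge regression. Illustrative only.
import numpy as np

def features(window, k=2):
    """Constant + linear terms (current and delayed samples) + quadratic cross terms."""
    lin = np.concatenate([window[i] for i in range(k)])
    quad = np.array([lin[i] * lin[j]
                     for i in range(len(lin)) for j in range(i, len(lin))])
    return np.concatenate(([1.0], lin, quad))

def train(series, k=2, ridge=1e-6):
    """Fit W so that W @ features(last k samples) predicts the next sample."""
    cols, targets = [], []
    for t in range(k - 1, len(series) - 1):
        window = series[t - k + 1:t + 1][::-1]   # index 0 = current sample, 1 = delayed
        cols.append(features(window, k))
        targets.append(series[t + 1])
    X = np.column_stack(cols)                    # features x samples
    Y = np.array(targets).T                      # outputs  x samples
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(X.shape[0]))

# Toy usage on a one-dimensional signal; the paper's benchmarks are chaotic systems.
t = np.linspace(0, 20, 400)
series = np.sin(t).reshape(-1, 1)
W = train(series)
warmup = series[-2:][::-1]             # only k = 2 samples of "warm-up" are needed
print(W @ features(warmup))            # one-step-ahead forecast of the next value
```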

“What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient,” Gauthier stated. And it looks like this is only the beginning. The researchers plan to test the super-efficient neural network against more difficult tasks in the future, expanding the work to even more complex computer issues, such as fluid dynamics forecasting.

“That’s an incredibly challenging problem to solve,” Gauthier said. “We want to see if we can speed up the process of solving that problem using our simplified model of reservoir computing.”