
Georgia Tech Unveils Tennis Robot ESTHER: Conquer Your Opponents

Dr. Matthew Gombolay, an assistant professor of Interactive Computing at Georgia Institute of Technology, has introduced ESTHER (Experimental Sport Tennis Wheelchair Robot), a groundbreaking robotic tennis partner. Unlike traditional static ball machines, ESTHER mimics human opponents, providing athletes with realistic match conditions to enhance their skills and performance.

With a deep understanding of tennis, Dr. Gombolay recognized the need for a robot that could adapt to different playing styles and exploit weaknesses in players’ games. Drawing inspiration from wheelchair tennis design, his team successfully tackled the challenge of maneuvering ESTHER on the court.

After two years of dedicated work with over 20 students, Dr. Gombolay achieved a remarkable breakthrough. ESTHER can now locate an incoming tennis ball and consistently execute return shots, although it doesn’t match the skill level of renowned wheelchair tennis player Esther Vergeer.

ESTHER relies on two DC motors and a gearbox to swiftly cover both sides of the tennis court. The main hurdle lies in pathfinding, as the robot must anticipate the ball’s trajectory and determine the optimal interception path. To address this, the team employs a network of high-resolution cameras strategically positioned around the court. Computer vision algorithms process the camera data, allowing the robot to recognize the incoming ball, predict its trajectory, and respond accordingly using triangulation from multiple camera angles.
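To make that concrete, here is a minimal sketch of how such a perception pipeline might be structured, assuming calibrated cameras with known projection matrices and a simple ballistic model that ignores air drag. The function names and parameters are illustrative assumptions, not details from the Georgia Tech system.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, with z pointing up

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one ball detection seen by two
    calibrated cameras. P1 and P2 are 3x4 projection matrices; uv1 and
    uv2 are the pixel coordinates of the ball in each image."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous coordinates -> 3D point in metres

def fit_ballistic(times, positions):
    """Fit p(t) = p0 + v0*t + 0.5*g*t^2 to a handful of triangulated ball
    positions (least squares over p0 and v0), ignoring drag for simplicity."""
    t = np.asarray(times)
    pos = np.asarray(positions) - 0.5 * GRAVITY * t[:, None] ** 2
    A = np.column_stack([np.ones_like(t), t])  # unknowns: p0, v0
    coeffs, *_ = np.linalg.lstsq(A, pos, rcond=None)
    p0, v0 = coeffs
    return p0, v0

def predict_interception(p0, v0, court_y=0.0, dt=0.005, horizon=2.0):
    """Step the fitted trajectory forward and return the first time and point
    at which the ball crosses the robot's side of the court (y <= court_y)."""
    for t in np.arange(0.0, horizon, dt):
        p = p0 + v0 * t + 0.5 * GRAVITY * t * t
        if p[1] <= court_y:
            return t, p
    return None, None
```

In a real system the predicted interception point would then be handed to the drivetrain controller, which plans a path for the two DC motors to reach it before the ball arrives.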

Future applications and the potential to revolutionize training

Although ESTHER’s current capabilities are limited to hitting back-and-forth rallies, Dr. Gombolay and his team have ambitious plans for the robot’s future development. The next phase involves teaching ESTHER to strategize its shot selection, elevating its playing ability and enhancing its value as a training tool.

By incorporating reinforcement learning methods, the robot would be capable of autonomously improving its decision-making and shot execution, becoming more aggressive and effective in winning games.
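As a rough illustration of what that could look like, the sketch below frames shot selection as tabular Q-learning over a coarse discretization of opponent positions and shot types. The states, shots, rewards, and hyperparameters are all hypothetical and far simpler than anything the team would actually deploy.

```python
import random
from collections import defaultdict

# Hypothetical discretization: where the opponent stands and which shot to play.
OPPONENT_ZONES = ["deep_left", "deep_right", "net_left", "net_right"]
SHOTS = ["cross_court", "down_the_line", "drop_shot", "lob"]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration
q_table = defaultdict(float)            # (state, shot) -> estimated value

def choose_shot(state):
    """Epsilon-greedy shot selection over the learned values."""
    if random.random() < EPSILON:
        return random.choice(SHOTS)
    return max(SHOTS, key=lambda shot: q_table[(state, shot)])

def update(state, shot, reward, next_state):
    """One Q-learning backup after observing a shot's outcome, e.g.
    reward = +1 for winning the point, -1 for an error, 0 otherwise."""
    best_next = max(q_table[(next_state, s)] for s in SHOTS)
    target = reward + GAMMA * best_next
    q_table[(state, shot)] += ALPHA * (target - q_table[(state, shot)])
```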

The implications of ESTHER’s technology extend far beyond tennis, according to Zulfiqar Zaidi, a lead student on the project. “While tennis is a great starting point, a system that can play tennis well can have applications in other fields that similarly require fast dynamic movements, accurate perception, and the ability to safely move around humans.”

“This technology could be useful in manufacturing, construction, or any other field that requires a robot to interact with humans while performing fast and precise movements,” he added.

The revolutionary aspect of ESTHER lies in its potential to transform athletic training and preparation. Athletes could hone their skills by playing against the robot, which would mimic the styles and tactics of specific opponents.

The day may not be far when robots like ESTHER become integral partners for athletes across various sports, pushing the boundaries of performance and redefining how we train.

Robots Evolve: Empowering AI to Comprehend Material Composition

Robots are rapidly advancing in intelligence and capabilities with each passing day. However, there remains a significant challenge that they continue to face – comprehending the materials they interact with.

Consider a scenario where a robot in a car garage needs to handle various items made from the same material. It would greatly benefit from being able to discern which items share similar compositions, enabling it to apply the appropriate amount of force.

Material selection, the ability to identify objects based on their material, has proven to be a difficult task for machines. Factors such as object shape and lighting conditions complicate the task further, because the same material can look different from one setting to the next.

Nevertheless, researchers from MIT and Adobe Research have made remarkable progress by leveraging the power of artificial intelligence (AI). They have developed a groundbreaking technique that empowers AI to identify all pixels in an image that represent a specific material.

What sets this method apart is its exceptional accuracy, even when faced with objects of varying shapes, sizes, and lighting conditions that may deceive human perception. None of these factors trick the machine-learning model.

This significant breakthrough brings us closer to a future where robots possess a profound understanding of the materials they interact with. Consequently, their capabilities and precision are substantially enhanced, paving the way for more efficient and effective robotic applications.

The development of the model 

To train their model, the researchers used “synthetic” data—computer-generated images created by modifying 3D scenes to produce many variations of objects with different material appearances. Surprisingly, the developed system seamlessly works with natural indoor and outdoor settings, even those it has never encountered before.

Moreover, this technique isn’t limited to images but can also be applied to videos.

For example, once a user identifies a pixel representing a specific material in the first frame, the model can subsequently identify objects made from the same material throughout the rest of the video.
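The paper’s architecture is not described here, but the selection step can be illustrated with a generic feature-similarity approach: if some backbone network produces a feature vector for every pixel, then pixels whose features are close to the clicked pixel’s feature can be grouped as the same material. Everything below, including the cosine-similarity threshold, is an assumed simplification rather than the authors’ method.

```python
import numpy as np

def select_same_material(features, query_xy, threshold=0.8):
    """Given an H x W x D array of per-pixel feature embeddings (from whatever
    backbone the system uses) and the (x, y) of the pixel the user clicked,
    return a boolean H x W mask of pixels whose features are cosine-similar
    to the query above `threshold`."""
    normed = features / np.linalg.norm(features, axis=-1, keepdims=True)
    query = normed[query_xy[1], query_xy[0]]   # (x, y) -> [row, col]
    similarity = normed @ query                # cosine similarity per pixel
    return similarity >= threshold

def select_in_video(video_features, query_xy, threshold=0.8):
    """For video, the query feature from frame 0 is simply compared against
    every later frame, so one click yields a material mask for the clip."""
    first = video_features[0]
    query = first[query_xy[1], query_xy[0]]
    query = query / np.linalg.norm(query)
    masks = []
    for frame in video_features:
        normed = frame / np.linalg.norm(frame, axis=-1, keepdims=True)
        masks.append((normed @ query) >= threshold)
    return np.stack(masks)
```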

The potential applications of this research are vast and exciting.

Beyond its benefits in scene understanding for robotics, this technique could enhance image editing tools, allowing for more precise manipulation of materials.

Additionally, it could be integrated into computational systems that deduce material parameters from images, opening up new possibilities in fields such as material science and design.

One intriguing application is material-based web recommendation systems. For example, imagine a shopper searching for clothing from a particular fabric.

By leveraging this technique, online platforms could provide tailored recommendations based on the desired material properties.

Prafull Sharma, an electrical engineering and computer science graduate student at MIT and the lead author of the research paper, emphasizes the importance of knowing the material with which robots interact.

Even though two objects may appear similar, they can possess different material properties.

Sharma explains that their method enables robots and AI systems to select all other pixels in an image made from the same material, empowering them to make informed decisions.

As AI advances, we can look forward to a future where robots are intelligent and perceptive of the materials they encounter.

The collaboration between MIT and Adobe Research has brought us closer to this exciting reality.

Berkeley Researcher Deploys Robots and AI to Increase Pace Of Research By 100 Times

A research team led by Yan Zeng, a scientist at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab), has built a new material research laboratory where robots do the work and artificial intelligence (AI) can make routine decisions. This allows work to be conducted around the clock, thereby accelerating the pace of research.

Research facilities and instrumentation have come a long way over the years, but the nature of research remains the same. At the center of each experiment is a human doing the measurements, making sense of data, and deciding the next steps to be taken. At the A-Lab set up at Berkeley, the researchers led by Zeng aim to break past this pace of research by using robotics and AI.

What is the A-Lab?

The A-Lab is a 600-square-foot space equipped with three robotic arms and eight furnaces. Work on the concept began in 2020, and with funding from the Department of Energy, construction began in 2022 and was completed in a little over a year.

The role of the A-Lab is to synthesize novel materials that can be used to build a range of new products such as solar cells, fuel cells, and thermoelectrics, which generate energy from temperature differences, as well as other technologies in the clean energy sector.

Scientists have been using computational methods to predict novel materials for many years, but synthesizing and testing those materials has remained a major bottleneck because it is a slow process. At the A-Lab, that process can run as much as 100 times faster than it would with a human alone.

To do this, the researchers or the AI select a target material to be synthesized, and a robotic arm begins by weighing and mixing the ingredients needed to create it. These can be various metals and their oxides, available in powdered form, which are mixed in a solvent to distribute them evenly before being moved to a furnace.

A crucible containing the mixture is then placed in a furnace by the robot, where it can be heated to 2,200 degrees Fahrenheit (1,200 degrees Celsius) while gases are injected as required. The AI controls the entire process, setting the reaction ingredients and conditions according to the desired result.

Another robotic arm recovers the new material from the crucible and moves it onto a slide, where it is analyzed with an X-ray diffractometer and an electron microscope. The results are fed back to the AI, which rapidly iterates the process until the desired novel material is achieved.
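The pattern the article describes is a closed synthesis loop: propose a recipe, run it on the hardware, score the product against the target, and feed the score back into the next proposal. The sketch below shows that loop in outline only; the recipe parameters, the random scoring stub, and the simple hill-climbing proposer are placeholders for the real robots, diffractometer, and planning AI.

```python
import random

def propose_recipe(history):
    """Stand-in for the AI planner: pick precursor ratios and a furnace
    temperature, nudged around the best attempt seen so far."""
    if not history:
        return {"ratio_A": 0.5, "ratio_B": 0.5, "temp_c": 900}
    best = max(history, key=lambda h: h["score"])["recipe"]
    return {
        "ratio_A": min(1.0, max(0.0, best["ratio_A"] + random.uniform(-0.05, 0.05))),
        "ratio_B": min(1.0, max(0.0, best["ratio_B"] + random.uniform(-0.05, 0.05))),
        "temp_c": best["temp_c"] + random.choice([-50, 0, 50]),
    }

def run_synthesis(recipe):
    """Placeholder for the robotic steps: weighing and mixing powders, firing
    the crucible, and recovering the product. Here it returns a dummy record."""
    return {"recipe": recipe}

def score_product(product, target_pattern):
    """Placeholder for matching the measured X-ray diffraction pattern against
    the predicted pattern of the target phase; returns a score in [0, 1]."""
    return random.random()

def autonomous_loop(target_pattern, max_attempts=50, good_enough=0.95):
    """Closed loop: propose, synthesize, analyze, and iterate until the
    product matches the target closely enough or attempts run out."""
    history = []
    for _ in range(max_attempts):
        recipe = propose_recipe(history)
        product = run_synthesis(recipe)
        score = score_product(product, target_pattern)
        history.append({"recipe": recipe, "score": score})
        if score >= good_enough:
            break
    return history
```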

Humans keep an eye on the process through video feeds and system-generated alerts. The team is currently fine-tuning the laboratory to add the ability to restock supplies and change precursors, and to allow liquids to be mixed and heated as part of the process.

If you have been wondering what the A stands for in the nomenclature of the laboratory, it is ambiguous on purpose.

Like a Human, a Robot Also Collapsed from Tiredness after a Long Day of Work

After a long day of work, a robot can be seen collapsing in an unsettling video that has gone viral. The footage has sparked questions about robots’ need for rest and about the toll that round-the-clock automation takes on these machines.

The manufacturing robot was doing repetitious work when it abruptly stopped and tumbled to the floor. It couldn’t stand back up and seemed to have lost all vigour.

This incident makes it clear that automation needs to be handled with more care. Robots can be extremely effective and productive, but, like any other machine, they need upkeep and downtime. Neglecting this can lead to failures, malfunctions, and even accidents, which is no small matter.

This is not the first time a robot has “passed out from exhaustion.” Numerous incidents of robots overheating, malfunctioning, and failing after extended use have surfaced in recent years, and some of these accidents have led to injuries or fatalities.

The video was posted by “Videos that will surprise you” with the title, “Video of a robot collapsing in a scene that seemed to fall from fatigue after a long day’s work.” In the clip, a robot is seen loading plastic containers onto a conveyor belt.

The time-lapse footage shows the robot labouring at the task for a long stretch. In the last few frames, the machine picks up a container and then promptly collapses.

As we continue to rely on automation for increasingly complex tasks, it is crucial to keep in mind that these robots are still machines: for them to operate correctly and safely, they need routine upkeep and attention.

The video of the fallen robot serves as a sharp reminder of the value of responsible automation. As we continue to incorporate these machines into our workplaces and homes, we need to give them the same respect and care we would give any other employee or piece of equipment. That means providing the downtime and upkeep they need to operate effectively and to avoid overwork-related failures.

The video has been shared on Twitter and, since it was posted, has received 9,790 views and 86 likes. One user commented, “I love how everyone is standing around watching. Could someone please assist him?”

New Sensors that Can Control Robots Through the Brain

A recent study published in the journal ACS Applied Nano Materials supports the idea that robots can be controlled directly by the human brain via 3D-patterned sensors. According to the scientists, these new sensors do not rely on sticky conductive gels: the “dry” sensors can capture the brain’s electrical activity despite the hair and the bumps and curves of the head.

Researchers at UTS Developing the New Sensors:

Researchers at the University of Technology Sydney (UTS) have built biosensor technology that will allow robots and machines to be operated entirely by mind control. The improved brain-computer interface was developed by Distinguished Professor Chin-Teng Lin and Professor Francesca Iacopi of the UTS Faculty of Engineering and IT, in collaboration with the Australian Army and the Defence Centre.

Applications of the new technology:

In addition to military applications, the technology has enormous potential in areas such as advanced manufacturing, aerospace, and healthcare, where it might enable those with impairments to operate wheelchairs or prostheses.

Doctors use electroencephalography (EEG) to monitor electrical impulses from the brain by implanting specialized electrodes or placing them on the surface of the head. In addition to aiding in the diagnosis of neurological disorders, EEG may also be utilized in “brain-machine interfaces,” which use brain waves to control an external item such as a prosthetic limb, robot, or video game.

The majority of non-invasive variants use “wet” sensors that attach to the scalp using a thick gel that might irritate the scalp and occasionally induce allergic reactions. Therefore, researchers have been developing “dry” sensors that do not require gels as a substitute. However, none have performed as well as the “wet” sensors that are the industry standard. Nanomaterials such as graphene may be a feasible option, but their flat and frequently flaky nature renders them incompatible with the irregular contours of the human skull, particularly over extended durations.

3D graphene-based sensor

To tackle this, Francesca Iacopi and her colleagues set out to create a sensor based on polycrystalline 3D graphene that could reliably record brain activity without any adhesive gel. The researchers built several graphene-coated three-dimensional structures, each about 10 micrometres thick. On the curved, hairy surface of the occipital region, the area at the base of the skull where the brain’s visual cortex resides, a hexagonal pattern performed the best of the patterns evaluated. Eight of these sensors were then attached to the back of the head with an elastic headband.

When paired with an augmented reality headset that provided visual cues, the electrodes could identify which cue the wearer was looking at and, working with a computer, transform those signals into commands that controlled the movement of a four-legged robot, without the need for any physical controls.
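The article does not spell out how the visual cues are decoded, but a common approach in such interfaces is the steady-state visually evoked potential (SSVEP), where each cue flickers at its own frequency and the decoder picks the frequency with the strongest power in the occipital EEG. The sketch below illustrates that generic idea; the sampling rate, flicker frequencies, and command mapping are assumptions, not the UTS team’s actual pipeline.

```python
import numpy as np

SAMPLE_RATE = 256  # Hz, assumed EEG sampling rate

# Hypothetical mapping: each AR cue flickers at its own frequency and
# corresponds to one command for the quadruped robot.
CUE_FREQS = {7.0: "forward", 9.0: "backward", 11.0: "turn_left", 13.0: "turn_right"}

def band_power(signal, freq, bandwidth=0.5):
    """Power of the EEG signal in a narrow band around `freq`, via FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    mask = (freqs >= freq - bandwidth) & (freqs <= freq + bandwidth)
    return spectrum[mask].sum()

def decode_command(eeg_window):
    """Average the occipital channels (channels x samples), then pick the cue
    frequency with the strongest evoked power and return its robot command."""
    signal = np.mean(eeg_window, axis=0)
    powers = {cmd: band_power(signal, f) for f, cmd in CUE_FREQS.items()}
    return max(powers, key=powers.get)
```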

The new electrodes still did not perform quite as well as the wet sensors, but the researchers believe the study is a first step toward developing durable, easily deployable dry sensors that will help expand the applications of brain-machine interfaces.