The Frame is the latest innovation from Brilliant Labs, integrating augmented reality (AR) and artificial intelligence (AI) seamlessly into your daily life. These groundbreaking eyeglasses, designed to look like ordinary eyewear, are equipped with advanced AI capabilities that can translate languages, identify objects, fetch information from the internet, and much more. Meet Noa, the AI assistant embedded in the Frame, ready to answer your queries using state-of-the-art language models.
Unlocking Superpowers
Brilliant Labs has joined forces with industry leaders such as OpenAI and Perplexity, and draws on tools like OpenAI’s Whisper, to imbue the Frame with extraordinary capabilities. Through cutting-edge engineering, these glasses empower users with what can only be described as superpowers. In a captivating video demonstration, the Frame showcases its ability to display real-time insights directly in front of the user’s eyes, thanks to OpenAI’s GPT model.
Enhanced Functionality
With Whisper’s speech recognition and transcription, the Frame can seamlessly translate conversations in different languages. Moreover, powered by Perplexity AI, the glasses serve as your research assistant, providing accurate and sourced information on demand. Brilliant Labs co-founder and CEO Bobak Tavangar envisions a future where users can switch between AI models effortlessly, catering to their specific needs and preferences.
The Ultimate AR Experience
Brilliant Labs doesn’t stop at the Frame; the company has also introduced the Monocle, the world’s smallest AR device, which can be clipped onto existing glasses. The Frame boasts advanced display and camera technology, featuring a micro OLED display for vibrant visuals and a high-definition camera for detailed visual recognition and AR applications. Comparing it to the revolutionary impact of multitouch on smartphones, Tavangar emphasizes the transformative potential of the Frame in shaping the future of technology.
Pre-order Now
Available for pre-order in black, gray, and transparent hues, the Frame AI glasses are priced at $349 each, with an option for prescription glasses at $448. Be among the first to experience the future of augmented reality and artificial intelligence with the Frame, set to ship in April. Don’t miss out on this opportunity to embark on a journey into the world of tomorrow, right from the comfort of your own eyes.
After an extensive four-year research endeavor, the engineers at the Istituto Italiano di Tecnologia (IIT) in Genova have introduced a groundbreaking innovation – the iCub3 avatar system. This technological marvel, developed by IIT’s Artificial and Mechanical Intelligence (AMI) lab, incorporates advanced avatar technologies that promise to redefine human interaction and virtual presence globally. In this blog post, we’ll delve into the key features of the iCub3 system and its successful real-world applications.
Advanced Avatar Technology:
The iCub3 avatar system is a testament to the IIT’s commitment to advancing humanoid robot capabilities. Designed to facilitate the seamless embodiment of humanoid robots by human operators, the system encompasses critical aspects such as locomotion, manipulation, voice, and facial expressions. It provides comprehensive sensory feedback, including visual, auditory, haptic, weight, and touch modalities, making it a versatile and immersive experience.
Key Components:
At the core of the iCub3 avatar system is the iCub3 humanoid robot, an evolved version of its predecessor born two decades ago. This innovative robot is complemented by wearable technologies named iFeel, developed in collaboration with the Italian National Institute for Insurance against Accidents at Work (INAIL). The meticulous design ensures that human operators can seamlessly embody humanoid robots, opening up a realm of possibilities for applications in various fields.
Real-World Tests and Success Stories:
The IIT research group subjected the iCub3 avatar system to rigorous tests in diverse real-world scenarios, showcasing its adaptability and robust capabilities.
Biennale di Venezia Trial (November 2021): In this initial trial, a human operator in Genoa remotely controlled the iCub3 avatar at the Biennale di Venezia in Venice, 300 kilometers away. Overcoming challenges of stable communication and cautious interaction in the delicate art exhibition, the iFeel suit tracked the operator’s motions, allowing the iCub3 robot to replicate precise movements, including fingers and eyelids.
We Make Future Show Festival (June 2022): At the We Make Future Show festival, the iCub3 avatar system showcased its prowess as the Genova operator controlled a Rimini-based robot, performing tasks on stage before an audience of 2000 amid electromagnetic interference. Specialized haptic devices conveyed the robot’s weight perception, enhancing its expressive abilities for audience engagement.
ANA Avatar XPrize (November 2022): The iCub3 system’s presentation at the ANA Avatar XPrize in Los Angeles demonstrated its adaptability and potential applications in space exploration. Operated by individuals outside the research team, the robot faced time-sensitive, diverse tasks, revealing its robust capabilities. Sensorized skin on the robot’s hands allowed texture perception, and intuitive control enabled direct manipulation.
Future Developments:
The experience gained from the iCub3 avatar system has paved the way for a new robot, the ergoCub, currently under development at IIT. The ergoCub robot is designed to maximize acceptability within work environments, minimizing risk and fatigue in collaborative tasks for industries and healthcare.
Microsoft has recently been granted a patent for augmented reality (AR) glasses that boast a groundbreaking feature: a swappable battery. This development could position Microsoft as a frontrunner in the AR glasses market once the product becomes available. The patent, which was made public just last week, has garnered attention for its potential to revolutionize the way we interact with wearable AR technology.
The concept of AR glasses has long been seen as the next frontier in mobile technology, with the potential to eventually replace the ubiquitous smartphone. About a decade ago, Google made an early foray into this field with its Google Glass, but the high costs and limited functionality of the device ultimately led to its downfall, despite the enduring appeal of the concept.
Another tech giant, Apple, has been eyeing the AR glasses market with great interest, but it has held off on entering the fray, believing that the technology needed to create a truly compelling product is not yet fully matured. Microsoft, it seems, is gearing up to join the competition and is actively seeking innovative ways to set its AR glasses apart from the rest.
Modular Design: Swappable Batteries
Microsoft can draw on its experience in the mixed reality (MR) market: it introduced the HoloLens in 2016, followed by an updated version four years later. However, the company faced challenges in gaining widespread adoption for the product. The weight of the device, largely attributable to the battery pack, made it uncomfortable for extended use. Additionally, the limited battery life required frequent recharging, disrupting the user experience.
Microsoft’s solution to these issues lies in the patent-approved design of their AR glasses. These glasses feature a modular design, allowing the battery to be positioned in the temple section of the frames or even in a detachable earpiece worn by the user. This innovative approach enables users to continue wearing the glasses while a second battery is charging, significantly extending the device’s usability throughout the day, if not longer. Furthermore, a smaller battery reduces the overall weight of the glasses, making them more comfortable for extended wear.
What’s particularly intriguing is that Microsoft’s ambitions extend beyond just battery placement. The company is exploring the possibility of connecting the glasses to other accessories, including necklaces and backpacks. These connections could be established using technologies such as Wi-Fi or Li-Fi, potentially allowing the computational and storage components to be located elsewhere while the glasses focus solely on displaying and gathering information.
Given the recent surge in remote work arrangements, AR glasses could open up entirely new realms of virtual interaction, enabling people to collaborate in virtual environments and complete tasks without the need for physical presence.
If Microsoft can bring these innovations to fruition in the near future, it could become the top choice for users seeking to transition away from traditional smartphones in favor of more immersive and versatile AR glasses.
Meta has made an announcement regarding the arrival of Roblox on Meta Quest VR headsets. The introduction will begin with an open beta, which will soon be accessible through App Lab. This move comes as no surprise, as Roblox, a prominent player in the metaverse domain, has been speculated to become available on Meta Quest VR headsets this year.
Players will have the opportunity to enjoy the Quest version of Roblox on the Quest 2 and Quest Pro headsets. Additionally, it will be compatible with the forthcoming Quest 3.
According to Meta, the open beta phase will allow the Roblox developer community to optimize their existing games for the Quest platform, as well as create new ones specifically for virtual reality. They will receive valuable feedback from the Quest community during this process, enabling them to experiment and refine VR experiences prior to the full release of Roblox on the Meta Quest Store.
In a blog post, Meta mentioned, “Roblox is automatically publishing some experiences that use default player scripts to support VR devices. They’ve found that those experiences typically run well in VR without modifications, so they’re seeding the Roblox VR library with great content from day one. And because Roblox is cross-platform, you’ll be able to connect, play, and hang out with friends across Xbox, iOS, Android, and desktop—helping to make VR more social than ever before.”
Roblox CEO Dave Baszucki had already hinted at this development during a 2021 call with investors, stating that Quest would be a natural fit for Roblox and signaling the company’s intention to bring the platform to Meta Quest in the future.
Seeding the Roblox VR Library:
While Roblox is currently compatible with various VR headsets such as Oculus Rift and HTC Vive, users need to connect their PCs to a VR headset to play. The Quest version of Roblox will offer a much more convenient and accessible experience, especially since it will eventually be downloadable directly from the Meta Quest Store.
Roblox on Quest will be accessible for individuals aged 13 and above. Meta plans to share more details as the open beta approaches.
Meta Quest 3 Features and Pricing:
Priced at $499, the Meta Quest 3 is set to be released this fall, carrying a $100 higher price tag than its 2020 predecessor. Meta has emphasized that the headset features high-resolution color mixed reality and is 40% thinner than the Quest 2. It boasts the latest Snapdragon chipset, higher-resolution displays, and twice the GPU performance.
ServiceNow today announced its latest generative AI solution, Now Assist for Virtual Agent, with the aim of revolutionizing self-service by offering intelligent and relevant conversational experiences. The new capability expands on ServiceNow’s strategy of integrating generative AI capabilities into its Now Platform, which helps customers to streamline digital workflows and optimize productivity.
This tool utilizes generative AI to deliver direct and contextually accurate responses to user inquiries. Integrated with the Now Platform, it will enable users to swiftly access relevant information and connect with digital workflows tailored to their needs. Now Assist provides user assistance with internal code snippets, product images or videos, document links and summaries of knowledge base articles.
According to the company, this self-service capability will help users obtain quick and accurate solutions, even when they need guidance on whom to approach or where to begin. The company believes that by enhancing self-solve rates and accelerating issue resolution, the feature significantly boosts productivity.
“One of the key goals of our new offering is to unlock additional productivity without added complexity by providing direct, relevant conversational responses,” Jeremy Barnes, VP for platform product AI at ServiceNow, told VentureBeat. “By connecting exchanges to automated workflows, customers can get the information they need within the context of their organization.”
ServiceNow’s launch of Now Assist aligns with the introduction of their Generative AI Controller, which serves as the foundation for all generative AI functionality on the Now Platform. In addition, the company has also collaborated with Nvidia to develop customized large language models (LLMs) for workflow automation.
Leveraging generative AI to streamline user inquiries
Now Assist for Virtual Agent can be easily configured using Virtual Agent Designer in a low-code, drag-and-drop environment. Additionally, users can create and deploy conversational self-service with the tool’s diagram drag-and-drop designer, which incorporates natural language understanding (NLU).
ServiceNow says this integration can be easily incorporated into an organization so it can begin automating and streamlining digital workflows to achieve faster responses.
“Now Assist allows organizations to easily connect across a company’s internal knowledge base, and then supplement answers with general purpose LLMs like Microsoft Azure OpenAI Service LLM and OpenAI API,” said Barnes.
In partnership with Nvidia, the company is actively developing custom LLMs tailored specifically for ServiceNow. These LLMs will be readily available and integrated into the Now Platform.
Barnes highlighted that the company’s strategy encompasses supporting both general-purpose LLMs and providing domain-specific LLMs. The ongoing collaboration with Nvidia aims to address a broad spectrum of customer requirements with custom LLMs.
Custom LLMs built with Nvidia
The company is developing custom LLMs using Nvidia’s software, services and infrastructure, trained on data specifically for the ServiceNow Platform, Barnes explained.
“We believe there will be many more exciting advances as we continue to strengthen workflow automation and increase productivity,” he said.
Barnes explained that if an organization’s knowledge base lacks sufficient information to provide a contextual response to a general question, Now Assist will establish a connection with general-purpose LLMs to augment the answer.
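As a minimal sketch of that fallback pattern (the function names, knowledge-base interface, and threshold below are hypothetical illustrations, not ServiceNow’s actual APIs), the logic amounts to answering from internal content when a sufficiently relevant match exists and otherwise handing the question to a general-purpose LLM:

```python
CONFIDENCE_THRESHOLD = 0.7  # hypothetical cutoff for trusting a knowledge-base match


def answer_query(query, knowledge_base, llm):
    """Try the internal knowledge base first; fall back to a general-purpose LLM."""
    # Hypothetical search call returning the best-matching article and a relevance score.
    article, score = knowledge_base.search(query)
    if article is not None and score >= CONFIDENCE_THRESHOLD:
        # Summarize the matched article so the user gets a direct, contextual answer.
        return llm.summarize(article, question=query)
    # No sufficiently relevant internal content: augment with a general-purpose model.
    return llm.complete(prompt=query)
```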
“If a user doesn’t know who to ask or where to start, our new solution will help them quickly determine the most relevant answer without having to scroll through endless links or knowledge base articles,” Barnes added. “For our customers, this is about simplification and not having to slow down to understand how and where to get the help you need — but to be able to get it at the speed of your work.”
The company said that Now Assist for Virtual Agent and Now Assist for Search are presently accessible to a select group of customers and are anticipated to be widely available in ServiceNow’s Vancouver release scheduled for September 2023.
What’s next for ServiceNow?
Barnes said that ServiceNow is actively exploring future use cases of generative AI to enhance productivity across various business functions, such as IT, employee experience and customer service.
“We are exploring additional future use cases to help agents more quickly resolve a broad range of user questions and support requests with purpose-built AI chatbots that use LLMs and focus on defined IT tasks,” he said. “Internally, ServiceNow is exploring how AI can be used to generate and document code and scripts as well as evaluating how it can help employees find information faster for things like benefits, PTO policies, opening incidents and more.”
The company aims to integrate all workflows with generative AI and low code. By doing so, ServiceNow believes it will unlock new use cases that effectively leverage the technology’s potential across industries and enable the creation of new revenue streams.
“We’re incredibly excited about enterprise AI,” said Barnes. “There are hundreds of use cases where generative AI — applied to a business problem you’re solving for — can radically transform the productivity curve.”
In the wake of Apple’s recent launch of the Vision Pro AR/VR headset, Meta CEO Mark Zuckerberg made it clear he was underwhelmed, noting that it is “seven times more expensive” and “requires excessive energy.” The comparison was with Meta’s own headset, the Quest 3, which was unveiled just two days prior to Apple’s announcement, possibly timed strategically ahead of Apple’s WWDC event.
Zuckerberg, speaking to his team on June 8, expressed curiosity about what Apple had brought to the table under the leadership of Tim Cook. However, he appeared unfazed, as reported by The Verge. Addressing a group of Meta employees, he conveyed that Apple had no exclusive solutions to the limitations imposed by the laws of physics, asserting that Meta’s teams had already explored and contemplated these challenges extensively.
While some may consider Apple’s Vision Pro and Meta’s Quest 3 as direct competitors, given their close release dates and shared mixed reality functionality, early reviews suggest that both headsets become uncomfortable for wearers after just 30 minutes of use. However, one notable distinction sets them apart: Vision Pro comes with a hefty price tag of $3,499, whereas Quest 3 is priced at a more affordable $499. This significant price difference may dissuade AR/VR enthusiasts seeking budget-friendly technology from opting for Apple’s offering.
Even Tesla CEO Elon Musk took a ‘trippy’ swipe at Vision Pro’s cost in a tweet.
‘There’s a real philosophical difference’
Apart from the price difference, Zuckerberg sees a difference in the values held by Meta and Apple. Sharing his vision, he said he wants the technology to be affordable and accessible, and that the company has already sold tens of millions of Quests.
“By contrast, every demo that they showed was a person sitting on a couch by themself,” he said. “I mean, that could be the vision of the future of computing, but like, it’s not the one that I want.”
He said, “Our device is about being active and doing things,” calling the metaverse “fundamentally social.”
However, Zuckerberg did take note that Apple’s Vision Pro is using a higher-resolution display than Meta’s Quest 3, with the former delivering 23 million pixels across two displays, more pixels for each eye than a 4K TV.
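For context (assuming the standard 4K UHD resolution of 3840 × 2160), a 4K TV has roughly 8.3 million pixels, while 23 million pixels split across two displays works out to about 11.5 million per eye, which is where the “more than a 4K TV per eye” comparison comes from.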
Meta isn’t playing around
Zuckerberg is not faffing around when it comes to making dents in the AR/VR space. Meta is funneling big bucks, to the tune of billions of dollars a year, into Reality Labs, its research unit. This spending has been a cause of worry for Meta’s investors amid the economic downturn and major economies slipping into recession.
But undaunted, Zuckerberg is sprinting to push virtual reality while keeping an eye on artificial intelligence. In the same meeting, Meta announced that it will bring generative AI to Facebook and Instagram in the coming months. Earlier reports have also suggested that Meta may be planning to embed an AI chatbot in Instagram.
Every year, the Worldwide Developers Conference (WWDC) draws eager attention from tech enthusiasts worldwide. It is the moment when Apple shares its latest product updates, operating system software advancements, and offers developers new avenues for innovation and app development.
However, this year’s WWDC has generated exceptional excitement and anticipation. All eyes are on Apple as it prepares to enter the realm of mixed reality—an immersive blend of virtual and augmented realities (VR/AR). Industry experts have already labeled this event as the potential “iPhone moment” for the VR/AR industry, sparking soaring expectations of yet another groundbreaking leap by Apple that could revolutionize the technology landscape.
As Apple’s offerings remain veiled before their public unveiling, the world eagerly awaits the company’s announcement. The tech giant’s track record of transformative innovations has raised hopes for a significant leap forward in the VR/AR field. With its penchant for redefining technology, Apple has the potential to reshape the future of mixed reality and captivate audiences with a new generation of immersive experiences.
Stay tuned as WWDC unfolds, revealing Apple’s foray into mixed reality and potentially altering the course of technological innovation once again.
Apple’s Mixed Reality Headset
For years, Apple has been rumored to be working on a virtual reality headset, though it has never really revealed its progress. Based on these rumors, it appears that Apple will unveil a mixed reality headset at WWDC with an iOS-like interface and high-resolution immersive displays.
Dubbed Reality One or Reality Pro, the device is expected to let users switch between VR and AR using a dial and control it with hand and eye movements, making controllers obsolete. The headset could also have an outward-facing display to show the user’s facial expressions, rather than making them look like some form of RoboCop.
With CEO Tim Cook stressing the need for “connection” and “communication” when it comes to uses of AR, the headset could also allow users to FaceTime with full face and body renders in addition to access to various games and apps on the device.
Can Apple succeed where others have failed?
The headset is believed to have been built only after overcoming multiple technical challenges, and the price of the final product is estimated to be around $3,000. At such a steep ask, the device must exceed expectations and deliver something that no other headset manufacturer has managed before.
One can trust Apple to do both of these things with relative ease. After all, it did this with the iPhone and the Apple Watch, and it knows very well how to enter the market at the right time with the right product. The company has time and again entered a market after the initial hype around a device has subsided, and its mixed reality bet seems no different.
Google, Microsoft, and Magic Leap have also taken shots at the prize in the past decade and failed to make a product that attracts the masses. Mark Zuckerberg’s Meta has made some sizable progress in the area, but not all its products get the same traction.
Just last week, the company made another effort to woo users to mixed reality with its Quest 3 headset, but the Meta Quest Pro, a high-end device launched last year, was quite a dud. Among headset manufacturers, Meta is the clear leader in absolute sales numbers, but its products are far from the tool for engaging in a new digital universe that Zuckerberg promised.
Since Apple builds not just products but the entire ecosystem around them, the WWDC event offers a glimpse of what the device could really be capable of and how developers could leverage it to make exciting products that users simply do not want to miss.
Apart from the $3,000 price tag, Apple also needs to counter the usability challenges of VR/AR headsets that the likes of Microsoft, Meta, and even Sony have failed to crack so far. Whether Apple comes out glorious as always or falls severely short of expectations will be known in a few hours.
Apple’s success might also define if the tech world will continue to talk about artificial intelligence (AI) over the next few months or if there will be a new buzzword in town.
In a strategic move just days before Apple’s anticipated entry into the mixed reality headset market, Mark Zuckerberg took the stage to unveil Meta’s Quest 3 headset. As the dominant player in the virtual reality headset space, Meta faces the prospect of formidable competition from Apple this year.
While Zuckerberg has been vocal about Meta’s ambitions to shape the future of the internet through the metaverse, Apple has quietly been working on its own vision for the digital world.
Reports indicate that Apple has faced two delays in announcing its mixed reality headset due to the device not meeting the company’s stringent standards for design and functionality.
As tech enthusiasts and Apple fans eagerly await the upcoming Worldwide Developers Conference (WWDC) next week, Meta is strategically aiming to capture attention and maintain a competitive edge with the unveiling of its Quest 3 headset.
What to expect from Meta’s Quest 3?
With Apple looking set to jump into the AR/VR segment, Meta needs to up its game, and the Quest 3 is an obvious attempt to do exactly that. For starters, the device is 40 percent thinner than its predecessor, while its graphics performance has been doubled.
Meta is moving away from being just a VR headset company by adding three cameras on the front, giving users a connection to the real world around them. Smartly, it will also leverage these cameras to let users play virtual games on a tabletop, increasing the ways the headset can be used.
The company has added a depth sensor to this headset and dropped the halos around the controllers to make them feel more natural. The device is priced at $499 and Meta is dropping the prices of its other headsets to stir up demand, after seeing a dip in sales over the past year.
Meta is perhaps hopeful that the rumored price of $3,000 for Apple’s upcoming headset will serve as a deterrent for many buyers, who will pick its pocket-friendly offering instead.
However, Apple’s track record of entering the market with the right offering at the right time makes one wonder whether Meta is missing a trick by trying to keep its device light on the pocket. Reports suggest that Apple could pack 4K displays inside its headset, leaving nothing to chance when it comes to user experience.
WWDC will perhaps give the world the first glimpse of Apple’s vision for mixed-reality headsets, and it would not be a surprise if others are found falling severely short of what Apple achieves. It did so with the smartphone and the smartwatch, and mixed reality could be its next big offering.
The gaming industry has undergone remarkable transformations, progressing from classics like Pong and Space Invaders to immersive modern titles like Minecraft and The Legend of Zelda. Graphics, technology, and gameplay mechanics have advanced significantly, contributing to the overall evolution of gaming.
Within this ever-evolving landscape, gaming peripherals have played a pivotal role in elevating the gaming experience. These devices act as extensions of the player, enabling interactive and immersive experiences with precise control. From the iconic joystick to the cutting-edge VR gloves, peripherals have become indispensable tools for gamers.
The introduction of game controllers allowed players to fully immerse themselves in virtual worlds and explore new realms. Additionally, motion controllers have provided intuitive physical interactions, enabling players to swing swords, cast spells, and throw punches using their own movements.
Gaming peripherals have transformed gaming from a mere form of entertainment into an experiential journey, blurring the lines between reality and fantasy. In this exploration, we delve into the history of gaming peripherals, tracing their evolution from joysticks to VR gloves, and contemplate the future of the gaming experience.
Early gaming peripherals
Some of the early gaming peripherals aren’t as alien as we might think. They were vital in shaping the gaming landscape as they gave better control to the player. One such innovation was the joystick, which quickly became the go-to input device for early gaming systems. But they were initially developed as early as 1908 to allow aircraft pilots to control ailerons and elevators. Their use in gaming systems came much later.
A version of the joystick was first used for gaming in 1972 when Ralph Baer and his team created the Magnavox Odyssey console. The knob-shaped controller could be used to move a spot’s horizontal and vertical positions on the screen.
Since then, many different versions and types of joysticks have been developed that allow the player to maneuver characters and objects in a game with great precision. As gaming continued to evolve, the introduction of paddles and trackballs expanded control options.
Paddles offered rotational control, enabling players to manipulate objects or characters with more nuance. Trackballs, on the other hand, provided a versatile input method, allowing for both precise and rapid movements. Atari brought such controls to gaming, with the arcade version of Pong in 1972 and the single-button joystick used with the Atari 2600, released in 1977.
Light guns were also becoming popular in gaming around that time, despite their use in arcade games as early as 1936. These devices later brought the thrill of arcade-style shooting games to home consoles. The Magnavox Odyssey had the first light gun shooting game, which used a replica pump-action shotgun. This was followed by the NES Zapper, also known as the Video Shooting Series light gun, a pistol-style light gun released in Japan in 1984.
Light guns allowed players to aim and shoot at targets on the screen, immersing them in exciting shooting experiences. With light guns, players could enjoy the adrenaline rush of battling enemies or hitting targets with impressive accuracy.
The early gaming peripherals set the stage for future innovations and demonstrated the potential for enhanced control and interactivity.
Advancements in input devices
With the continued advancements in technology, newer peripherals like gamepads, mice, keyboards, racing wheels, and flight sticks have emerged as new ways to enjoy and experience gaming.
Gamepads, mice, and keyboards became standard input devices due to their familiarity and ease of use. Tennis for Two, from 1958, used one of the earliest dedicated game controllers, a precursor to the gamepad. Since then, gamepads have come a long way: they now enable users to perform complex actions, such as pulling off martial arts moves in Mortal Kombat or making Mario leap across buildings in Super Mario.
On the other hand, the mouse and keyboard combo is perfect for strategy and shooting games such as Counter-Strike, thanks to its greater precision.
To make the gaming experience more hands-on, manufacturers also started developing racing wheels and flight sticks to replicate the thrill of experiences like racing and flying. With force feedback and realistic steering controls, racing wheels provide an immersive and authentic racing simulation. Similar, although much more advanced, systems are used by Formula One drivers to practice when they are not racing on track.
Similarly, flight sticks, similar to joysticks, are designed to mimic aircraft controls, allowing players to navigate the skies in flight simulation games such as the Microsoft Flight Simulator released in 2020.
These advancements have allowed gaming to be enjoyed by a more diverse group of people than before, when only joysticks, light guns, and paddles were available.
Innovative peripherals for immersive experiences
With the rise of virtual reality (VR) and augmented reality (AR), games have reached another level of immersion. VR peripherals, such as head-mounted displays, motion-tracking sensors, haptics, and handheld controllers, aim to create a completely immersive experience, allowing players to explore virtual worlds and make them feel like they are actually there.
VR peripherals are designed to take you into the world of the game, whereas AR peripherals bring the digital world into the real one. AR peripherals such as AR glasses or headsets overlay virtual or digital content onto real-life things, allowing players to see the virtual world blend seamlessly into their surroundings.
Before AR and VR peripherals, other gaming devices had already introduced immersive experiences, such as motion controllers and haptic feedback devices. These peripherals revolutionized gaming by providing a more tactile experience.
The Wii Remote, developed for the Nintendo Wii in 2006, popularized motion controllers, allowing players to throw punches and swing swords (or tennis rackets) using physical gestures. These motion controllers track movements and convert them into in-game actions.
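As a rough illustration of the idea (the threshold, function names, and game hook below are hypothetical, not the Wii’s actual API), a motion controller’s accelerometer readings can be thresholded to recognize a swing gesture and mapped to an in-game action:

```python
import math

SWING_THRESHOLD = 2.5  # acceleration magnitude (in g) treated as a swing; hypothetical value


def is_swing(ax, ay, az):
    """Return True if the controller's acceleration looks like a swing gesture."""
    magnitude = math.sqrt(ax ** 2 + ay ** 2 + az ** 2)
    return magnitude > SWING_THRESHOLD


def on_motion_sample(ax, ay, az, game):
    """Convert a raw accelerometer sample into an in-game action."""
    if is_swing(ax, ay, az):
        game.swing_racket()  # hypothetical game hook: swing the tennis racket
```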
Haptic feedback devices, such as rumble-enabled joysticks, VR gloves, and steering wheels, provide vibration, force feedback, thermal cues, and other tactile sensations, adding another layer of immersion to the gaming experience. From feeling the rumble of a car engine to experiencing the impact of a sword swing, haptic feedback devices create a more immersive and sensory-rich gaming experience.
Wireless and mobile gaming peripherals
Wireless and mobile gaming peripherals have also opened up new avenues for gaming. These peripherals offer increased freedom and convenience, allowing gamers to enjoy their favorite games with enhanced mobility.
Wireless controllers liberate gamers from the constraints of cords and cables, giving them greater freedom of movement without the worry of tangling wires. From console controllers to dedicated gaming controllers for PCs, they all enhance the gaming experience, allowing the player to focus solely on the game. Some of the most popular are the Xbox and PlayStation controllers.
Mobile gaming controllers have been a game changer for gaming, offering a console-like experience on smartphones and tablets while giving freedom. These controllers come with familiar layouts, including buttons, analog sticks, and triggers, providing precise control for a wide range of games.
In addition to mobile gaming controllers, AR and VR mobile peripherals add an element of immersive experience, whether the game takes place on a plane or a remote island. These peripherals leverage mobile devices’ processing powers and capabilities to deliver stunning visuals and interactive experiences, enabling gamers to take their gaming adventures with them on the go.
It is unbelievable how far gaming peripherals have come, and it is exciting to see where they might go!
What does the future hold?
As technology develops, so will gaming peripherals. New advances in AR and VR technologies are set to revolutionize the gaming landscape. Headsets and glasses will become lighter and more sophisticated, offering better resolution, lower latency, and larger fields of view. Motion-tracking sensors and controllers will become more accurate, giving users a more seamless gaming experience.
One of the most exciting developments in gaming peripherals is the integration of AI. AI-powered peripherals can personalize the gaming experience by analyzing the player’s input, and present unique challenges, resulting in a more specialized and unique gaming experience. These don’t yet exist, but they are coming.
Cloud gaming is a relatively new gaming experience where the games are streamed and processed on remote servers. With cloud gaming, peripheral requirements may change as processing power shifts to the cloud, reducing the need for high-end hardware. This could mean lightweight consoles that can be transported anywhere!
We don’t know which changes will revolutionize the gaming industry or if the next advances will come from something that hasn’t even been invented yet!
Conclusion
From game controllers to custom-built setups for gaming, the world of gaming peripherals has undergone many exciting developments, and there are many more to look forward to.
It is not only the controllers that enhance the gaming experience; things such as gaming monitors and chairs are equally important. High-resolution monitors with low response times and high refresh rates allow for smooth visuals.
The future of gaming looks very promising with the advancement of not just the gameplay and software but also the hardware that allows the user to experience the game like never before.
Meta is reportedly in talks with Magic Leap over a partnership that could help Meta develop its augmented reality (AR) headsets in the future.
According to the Financial Times, the two are negotiating a multi-year intellectual property (IP) and manufacturing alliance. The report’s timing is significant for a few reasons.
Meta is facing investor pressure to demonstrate the results of its substantial investments in pursuing CEO Mark Zuckerberg’s vision for the future of computing, namely the so-called “Metaverse.” And this, many experts believe, could become a huge thing in the future.
“Facebook has been pushing the use case for its social possibilities in particular, whereby groups of friends can ‘meet up’ and watch a film together or watch a live performer. You’re able to see the live movements and reactions of your friends around you, and as the AR, VR, and haptic technologies improve, the level of definition on that will mean it really will feel like you’re sitting together as a group. So that stands to be something of a game-changer.”
The company does not anticipate generating profits from its metaverse projects for a few more years. Meanwhile, it is spending approximately $10 billion each year on its “Reality Labs” division. Additionally, many anticipate Apple to enter the AR headset market during its upcoming WWDC developer conference next month.
This, among other things, could well be the driving force behind this development. There is limited information available about the negotiations, but sources suggest that a partnership between Magic Leap and Meta could soon be a reality. However, it is unlikely that the two companies will jointly develop a headset. Instead, the deal may involve Magic Leap sharing some of its optical technology with Meta. Additionally, there is a possibility that Meta could receive assistance from Magic Leap in the manufacturing of their devices.
This partnership would enable Meta to produce more VR headsets domestically, which is becoming increasingly important as U.S. companies aim to reduce their reliance on China. Magic Leap told the Financial Times that partnerships were becoming a “significant line of business and growing opportunity for Magic Leap.”
Last year, Magic Leap’s CEO Peggy Johnson wrote a blog post entitled “What’s Next for Magic Leap,” where she shared the company’s future plans. In it, she said that the company had “received an incredible amount of interest from across the industry to license our IP and utilize our patented manufacturing process to produce optics for others seeking to launch their own mixed-reality technology.”