Apple Introduces ReALM: Advancing Contextual Understanding in AI

Apple unveils ReALM, a revolutionary AI system poised to transform contextual understanding in voice assistants. Explore the innovative approach of ReALM, its practical applications, and Apple’s strategic moves to stay competitive in the rapidly evolving AI landscape.

Revolutionizing Contextual Understanding

Apple researchers introduce ReALM, an AI system adept at deciphering ambiguous references and context. Leveraging large language models, ReALM converts reference resolution into a language modeling problem, achieving significant performance gains compared to existing methods. With a focus on screen-based references, ReALM reconstructs visual layouts to enable more natural interactions with voice assistants.
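
Apple has not released ReALM's code or its exact prompt format. As a rough illustration of the paper's central idea, the sketch below serializes on-screen entities into plain text, in reading order, so a language model can resolve a reference like "the second number" against them; the entity fields and numbered layout are assumptions, not Apple's actual format.

```python
# Hypothetical sketch of ReALM-style screen encoding: on-screen entities are
# flattened into text, ordered top-to-bottom and left-to-right, so a language
# model can resolve references against them. Field names and the prompt
# layout are illustrative assumptions, not Apple's actual format.

entities = [
    {"id": 1, "label": "Contact: Alice", "x": 10, "y": 20},
    {"id": 2, "label": "Phone: 555-0134", "x": 10, "y": 60},
    {"id": 3, "label": "Phone: 555-0178", "x": 10, "y": 100},
]

def encode_screen(entities):
    """Serialize entities in reading order into a numbered text layout."""
    ordered = sorted(entities, key=lambda e: (e["y"], e["x"]))
    return "\n".join(f'[{e["id"]}] {e["label"]}' for e in ordered)

user_request = "Call the second number"
prompt = (
    "Screen contents:\n"
    f"{encode_screen(entities)}\n\n"
    f"User: {user_request}\n"
    "Which entity id does the user mean? Answer with the id only."
)
print(prompt)  # This prompt would then be scored by the fine-tuned LLM.
```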

Enhancing Voice Assistants

By enabling users to issue queries about on-screen elements, ReALM enhances the conversational experience with voice assistants. The system’s ability to understand context, including references, marks a crucial milestone in achieving true hands-free interactions. With impressive performance surpassing GPT-4, ReALM sets a new standard for contextual understanding in AI.

Practical Applications and Limitations

While ReALM demonstrates remarkable capabilities, its authors acknowledge limitations, particularly in handling complex visual references. Incorporating computer vision and multi-modal techniques may be necessary for addressing more intricate tasks. Despite these challenges, ReALM signifies Apple’s commitment to making Siri and other products more conversant and context-aware.

Apple’s AI Ambitions

Amidst fierce competition in the AI landscape, Apple accelerates its AI research efforts. Despite trailing rivals, Apple’s steady stream of breakthroughs underscores its commitment to AI innovation. As it gears up for the Worldwide Developers Conference, Apple is expected to unveil new AI-powered features across its ecosystem, signaling its determination to close the AI gap.

Conclusion: Shaping the Future of AI

As Apple navigates the evolving AI landscape, ReALM stands as a testament to its ongoing advancements in contextual understanding. With the race for AI supremacy intensifying, Apple’s strategic initiatives underscore its ambition to shape the future of ubiquitous, truly intelligent computing. As June approaches, all eyes will be on Apple to see how its AI endeavors unfold.

OpenAI Launches ChatGPT App for Apple Vision Pro: A Glimpse into the Future of Human-AI Interaction

OpenAI, a leading research organization in artificial intelligence, has unveiled a groundbreaking ChatGPT app tailored for Apple Vision Pro, the innovative augmented reality headset recently introduced by Apple. This new app leverages OpenAI’s cutting-edge GPT-4 Turbo model, enabling users to engage in natural language interactions, obtain information, and even generate content seamlessly within the app. In this blog post, we explore the significance of this release and its implications for the future of human-AI interaction.

Revolutionizing Human-AI Interaction with ChatGPT

Embracing Natural Language Processing

The ChatGPT app for Vision Pro represents a significant stride in natural language processing, empowering users to converse, seek guidance, and explore various topics effortlessly. By integrating GPT-4 Turbo, OpenAI continues to redefine the boundaries of human-AI interaction, offering a glimpse into a more intuitive and immersive future.

Multimodal AI Capabilities

Beyond text-based communication, ChatGPT for Vision Pro embraces multimodal AI, enabling seamless processing of inputs across different modes such as text, speech, images, and videos. This versatility enhances the app’s adaptability, paving the way for complex problem-solving and innovative content generation.
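
OpenAI has not documented the Vision Pro app's internals, but the same multimodal behavior is exposed through its public chat API. The sketch below shows one way a client might submit mixed text-and-image input to GPT-4 Turbo; the image URL is a placeholder, not a real asset.

```python
# Minimal sketch: sending combined text + image input to GPT-4 Turbo through
# OpenAI's chat completions API. Requires OPENAI_API_KEY in the environment;
# the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What could I cook with what's in this fridge?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/fridge.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Speech input would typically be transcribed to text first before entering the same chat flow; the request shape above stays the same.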

Vision Pro: Redefining Digital Experiences

Unveiling visionOS and Its Features

ChatGPT’s debut on Apple’s visionOS platform underscores the platform’s capabilities in delivering immersive digital experiences. Leveraging features like Optic ID for biometric authentication, Spatial Audio for realistic sound effects, and VisionKit for advanced sensory functionalities, visionOS sets a new standard for augmented reality interaction.

A Paradigm Shift in App Development

With over 600 new apps introduced for visionOS, including ChatGPT, Apple propels the industry towards a new era of app development. These apps leverage Vision Pro’s capabilities to offer users unparalleled experiences, blurring the lines between digital and real-world interactions.

Unlocking Endless Possibilities with ChatGPT

Enhanced User Experience

ChatGPT for Vision Pro offers users a seamless interface for communication and content creation. From troubleshooting automotive issues to planning meals based on fridge contents, users can leverage ChatGPT’s multimodal AI to tackle diverse challenges effortlessly.

Subscription Options and Accessibility

Available for free on visionOS, ChatGPT also offers a subscription-based ChatGPT Plus option, providing access to advanced features and faster response times powered by GPT-4. This ensures accessibility while catering to varying user needs and preferences.

Conclusion: Shaping the Future of AI-Powered Interaction

In conclusion, OpenAI’s ChatGPT app for Apple Vision Pro heralds a new era in human-AI interaction. By seamlessly integrating advanced AI capabilities with augmented reality, ChatGPT redefines how users engage with technology, opening doors to unprecedented possibilities. As users embrace ChatGPT’s intuitive interface and multimodal functionalities, the boundaries between reality and virtuality blur, propelling us towards a future where AI seamlessly enhances our daily lives. Explore the transformative potential of ChatGPT on visionOS today, and embark on a journey into the future of human-AI synergy.

Apple’s AI Breakthrough: Affordable Language Models Redefine the Game

Language models serve as indispensable tools for various tasks, from summarization to translation and essay writing. However, their high training and operational costs often pose challenges, particularly for specialized domains requiring precision and efficiency. In a significant stride, Apple’s latest AI research unveils a breakthrough that promises high-level performance at a fraction of the usual cost. With its paper titled “Specialized Language Models with Cheap Inference from Limited Domain Data,” Apple pioneers a cost-efficient approach to AI development, offering newfound opportunities for businesses operating under tight budgets.

Unveiling Apple’s AI Engineering Triumph

A Paradigm Shift in AI Development

Apple’s groundbreaking research marks a pivotal moment in AI engineering. By devising language models that excel in performance while remaining cost-effective, Apple extends a lifeline to businesses navigating the financial complexities of sophisticated AI technologies. The paper’s publication garners swift recognition, including a feature in Hugging Face’s Daily Papers, underscoring its significance within the AI community.

Navigating Cost Arenas

The research tackles the multifaceted challenge of AI development by dissecting key cost arenas. Through strategic management of pre-training, specialization, inference budgets, and in-domain training set size, Apple offers a roadmap for building AI models that balance affordability with effectiveness.
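
To make the inference-budget arena concrete, a common back-of-the-envelope estimate puts a transformer's forward-pass cost at roughly 2 × parameters FLOPs per token. The sketch below applies that rule of thumb to compare serving costs for a large generalist versus a small specialist; the model sizes and daily token volume are illustrative assumptions, not figures from the paper.

```python
# Back-of-the-envelope inference cost comparison, using the common
# approximation of ~2 * parameters FLOPs per generated token.
# Model sizes and daily token volume are illustrative assumptions.

def inference_flops(params: float, tokens: float) -> float:
    return 2 * params * tokens

daily_tokens = 50e6   # assumed daily token volume
models = [
    ("70B generalist", 70e9),
    ("1.3B specialist", 1.3e9),
]

for name, params in models:
    print(f"{name}: {inference_flops(params, daily_tokens):.2e} FLOPs/day")
# The specialist needs roughly 50x less compute per day, which is the kind
# of saving the paper targets, provided in-domain accuracy holds up.
```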

The Blueprint for Budget-Conscious Language Processing

Two Distinct Pathways

In response to the cost dilemma, Apple’s research presents two distinct pathways tailored to different budget scenarios. Hyper-networks and mixtures of experts cater to environments with generous pre-training budgets, while smaller, selectively trained models offer viable solutions for tighter budget constraints.
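
The small-model pathway hinges on picking the right slice of training data for the target domain. Apple's exact selection procedure is not reproduced here; the sketch below illustrates the general idea with a toy vocabulary-overlap score that keeps only documents close to a seed set of in-domain text, standing in for the importance-sampling methods used in practice.

```python
# Toy illustration of in-domain data selection for training a small
# specialized model. The relevance score is a simple vocabulary-overlap
# heuristic, a stand-in for real importance-sampling methods.

def vocab(text: str) -> set[str]:
    return set(text.lower().split())

seed_domain = [
    "the patient presented with acute myocardial infarction",
    "dosage was titrated based on renal clearance",
]
seed_vocab = set().union(*(vocab(t) for t in seed_domain))

corpus = [
    "the patient was discharged after dosage adjustment",
    "the quarterly earnings call is scheduled for thursday",
    "renal function improved after the infarction was treated",
]

def relevance(doc: str) -> float:
    words = vocab(doc)
    return len(words & seed_vocab) / len(words)

# Keep documents whose overlap with the seed vocabulary crosses a threshold.
selected = [doc for doc in corpus if relevance(doc) > 0.3]
print(selected)  # the two medical sentences survive; the finance one is dropped
```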

Empirical Findings and Practical Guidelines

Drawing from extensive empirical evaluations across biomedical, legal, and news domains, the research identifies optimal approaches for various settings. Practical guidelines provided within the paper empower developers to select the most suitable method based on domain requirements and budget constraints.

Redefining Industry Standards with Cost-Effective Models

Fostering Accessibility and Utility

Apple’s research contributes to a growing body of work aimed at enhancing the efficiency and adaptability of language models. Collaborative efforts, such as Hugging Face’s initiative with Google, further accelerate progress by facilitating the creation and sharing of specialized language models across diverse domains and languages.

Striking a Balance: Efficiency vs. Precision

While deliberating between retraining large AI models and adapting smaller, efficient ones, businesses face critical trade-offs. Apple’s research underscores that precision in AI outcomes is not solely determined by model size but by its appropriateness for the given task and context.

Conclusion: Shaping the Future of AI Accessibility

In conclusion, Apple’s AI breakthrough signals a transformative shift towards accessible and cost-effective language models. By democratizing AI development, Apple paves the way for innovation across industries previously hindered by financial barriers. As businesses embrace budget-conscious models, the narrative shifts from the biggest to the most fitting language model for optimal results. With Apple’s pioneering research, the future of AI accessibility and utility looks brighter than ever.

Apple’s AI Chief Says iOS 17 Update Gives Users Choice of Search Engine

Former high-ranking Google executive John Giannandrea recently highlighted a significant alteration in the latest iPhone software update, iOS 17, which was unveiled on September 25. This update introduces a noteworthy change that allows users to opt for a search engine other than Google when navigating in private mode.

In the wake of growing privacy concerns among users, Google, the tech behemoth, has found itself under increased scrutiny from the public regarding issues of user choice and competition within the search engine market.

The iOS 17 software release has introduced a pivotal feature by adding a second setting that empowers iPhone users to seamlessly switch between Google and alternative search engines. This development was emphasized by the head of Apple’s artificial intelligence division during his testimony in a federal court in Washington as part of the Justice Department’s antitrust lawsuit against Alphabet Inc.’s Google.

This newly added feature simplifies the process of changing search engines with a single tap, a move aimed at addressing concerns surrounding Google’s alleged monopoly in online search. This issue has gained prominence in light of the U.S. government’s antitrust lawsuit, which contends that Google has been unlawfully maintaining its dominant position through agreements with web browsers and mobile device manufacturers, including Apple.

Google initially denied these allegations, asserting in its opening statement that users can easily switch search engines in a matter of seconds. However, Gabriel Weinberg, the CEO of rival search engine DuckDuckGo, testified on September 28 that Google’s default status on browsers is perceived as a barrier to users changing their preferences, citing a convoluted process.

Furthermore, Google’s default position as the search engine in Apple’s Safari, the web browser for Apple devices, is a result of contractual obligations between the two tech giants. As part of this arrangement, Google shares a portion of its advertising revenue with Apple, although the exact sum remains confidential. According to reports, the Justice Department has indicated that Google pays Apple an annual amount estimated to be between $4 billion and $7 billion.

Giannandrea clarified in his testimony that Google will continue to be the default search engine for Safari in private mode, which does not store browsing history. However, the new update offers users the flexibility to choose from a range of search engines, including Yahoo Inc., Microsoft Corp.’s Bing, DuckDuckGo, and Ecosia, for their private browsing experience.

John Giannandrea, currently leading Apple’s AI division, previously worked at Google from 2010 to 2018 in the role of Senior Vice President of Engineering. In his current capacity, Giannandrea is spearheading machine learning initiatives at Apple and driving AI-powered endeavors for the company.

Apple’s Foray into Generative AI: Introducing “Apple GPT”

With every new Apple product launch, the tech world takes notice, and the introduction of artificial intelligence (AI) tools was no exception. Amidst the AI frenzy, it was only a matter of time before the iPhone giant made its mark with something groundbreaking.

Rumors have been circulating that Apple is developing its own large language model (LLM), a direct competitor to OpenAI’s GPT-3 and GPT-4, Google’s BERT and LaMDA, and Meta’s LLaMA and Llama 2, among others. According to insider sources and a Bloomberg report, Apple’s LLM, codenamed “Ajax,” has also led to the creation of a chatbot service akin to ChatGPT, aptly named “Apple GPT.”

The Ajax project, a machine learning development initiative, was established by Apple last year and has since gained momentum. Currently, the Apple GPT chatbot is undergoing in-house testing, assisting Apple employees in product prototyping. The chatbot demonstrates the ability to summarize text and respond to questions based on its training data.

While Apple might seem to have joined the AI race later than its competitors, it is crucial for the company to stay ahead in this field. The success of its AR/VR headset, Vision Pro, which was unveiled well after Meta’s progress in the space, illustrates the importance of timely innovation in maintaining its $320 billion revenue.

Recognizing the significance of AI, Apple has multiple AI-focused teams actively collaborating on the project. Google, too, felt the pressure, establishing a task force to develop AI products shortly after OpenAI’s ChatGPT launch.

Tim Cook’s Vision: Weaving AI into Future Products

Apple CEO Tim Cook had previously hinted at incorporating AI into future products, and the company has already integrated AI features into various devices. For instance, its watches have Fall Detection, and Siri serves as its AI voice assistant. Some iPhone and Apple Watch models also offer features like Crash Detection and ECG functionality, which captures the timing and intensity of the heart’s electrical signals.

Despite its somewhat conservative approach to AI, Apple is dedicated to defining the consumer-facing aspects of generative AI while addressing privacy concerns, an issue flagged by other AI firms like OpenAI.

Although Apple may have entered the generative AI realm later than others, the company’s efforts have certainly captured our curiosity. As the tech giant delves deeper into AI, the future promises exciting advancements in the world of Apple products.

Mosyle Introduces Generative AI to Apple Mobile Device Management

While Apple itself may not have directly integrated generative AI into its hardware platform, other vendors are stepping up to fill the gap. Today, mobile device management (MDM) vendor Mosyle announced an innovative approach that leverages generative AI to enhance the management, security, and compliance capabilities of Apple macOS-powered hardware. This exciting development is part of an update to the Mosyle Apple Unified Platform, which was made widely available in May 2022, coinciding with a remarkable $196 million funding round secured by the company. By combining MDM with endpoint security, the Mosyle Apple Unified Platform empowers organizations to seamlessly deploy and manage their Apple devices.

Traditionally, enterprise administrators heavily rely on complex scripts to manage Apple devices, enabling them to identify specific usage patterns and deployment characteristics for individual devices. For instance, a script could be designed to detect encounters with a particular WiFi access point. Until now, script creation has been primarily the domain of experts. However, the landscape is rapidly changing, largely due to the game-changing capabilities of generative AI.
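
To give a flavor of what such a script does, the sketch below checks the current Wi-Fi network on macOS using the built-in `networksetup` tool. The interface name and target SSID are assumptions that vary per fleet, and Python is used here only to keep the examples in one language; fleet scripts are more commonly shell-based.

```python
# Illustrative sketch of the kind of check an MDM script might run on macOS:
# detect whether the device is currently joined to a particular Wi-Fi network.
# "en0" and the target SSID are assumptions; both vary across fleets.
import subprocess

TARGET_SSID = "Corp-HQ"  # hypothetical office network name

def current_ssid(interface: str = "en0") -> str | None:
    result = subprocess.run(
        ["networksetup", "-getairportnetwork", interface],
        capture_output=True, text=True,
    )
    # Expected output: "Current Wi-Fi Network: <SSID>"
    if ":" in result.stdout:
        return result.stdout.split(":", 1)[1].strip()
    return None

if current_ssid() == TARGET_SSID:
    print("Device is on the office network; apply on-site policy.")
else:
    print("Device is off-site; apply remote policy.")
```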

Mosyle CEO Alcyr Araujo explained in an exclusive interview with VentureBeat, “The idea here is really to help customers have access to that very specific layer of Mac management that is scripting. We see Mac admins reach the highest level when they can really take advantage of scripting, where they can basically automate anything on the fleet.”

With Mosyle’s new generative AI integration, Apple device administrators gain unprecedented ease and efficiency in managing their fleets, enabling automation of various tasks that were previously arduous and time-consuming. The breakthrough technology heralds a new era in Apple device management, empowering organizations to harness the full potential of their macOS-powered hardware while ensuring streamlined operations and enhanced security.

How Mosyle AIScript automates Apple management

The path toward generative AI for Mosyle was not a straight line.

Araujo explained that his team had been working on developing a script catalog to help make it easier for users to find and select the right scripts to automate MDM functions. Not coincidentally, Mosyle Script Catalog is a new feature that is also part of the company’s latest platform update.

Then ChatGPT happened in late 2022, and suddenly every technology vendor (and nearly every user) was aware of the power of generative AI. Araujo recounted that he started testing gen AI with ChatGPT tooling for Mosyle’s own internal needs first, to potentially make support more efficient by finding answers more quickly.

In addition to being the CEO of Mosyle, Araujo is the company’s IT administrator. One day he was looking to create a specific script that was needed for macOS. That need led to the revelation that by combining gen AI with the script catalog project, a user could use natural language queries to rapidly find, or even create, a script to execute a specific task.

OpenAI is under the hood, with more generative AI support to come

The first release of Mosyle AIScript relies on OpenAI’s GPT models. But Araujo emphasized that his goal is to have an open approach, where multiple large language models (LLMs) for gen AI could be chosen.

Mosyle isn’t simply connecting OpenAI’s API to its own MDM technology. Araujo explained that numerous steps taken on the Mosyle side help ensure privacy of user data as well as accuracy of the generated script output.

Araujo explained that with Mosyle AIScript, the system first attempts to understand what a user query for a script really means. If needed, Mosyle then adds elements to better define the script to get the desired output. On top of that, Mosyle validates the generated script to make sure that it will run as expected on Apple hardware.

“There is a lot of polishing there in terms of making sure we’re guiding the requests in the correct way and understanding the result before showing it to the customer,” he said.
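
Mosyle has not published AIScript's internals, but the flow Araujo describes (refine the request, generate, validate before surfacing) maps onto a familiar pattern. The sketch below shows it with an OpenAI-style client and a bash syntax check standing in for Mosyle's validation step; the prompt wording, model name, and checks are all assumptions, not Mosyle's actual pipeline.

```python
# Hypothetical sketch of the generate-then-validate flow Araujo describes:
# 1) wrap the admin's request with constraints, 2) ask an LLM for a script,
# 3) syntax-check the result before showing it. The prompt and the bash -n
# check are stand-ins for Mosyle's actual (unpublished) pipeline.
import subprocess
import tempfile
from openai import OpenAI

client = OpenAI()

def generate_macos_script(request: str) -> str:
    prompt = (
        "Write a bash script for macOS that does the following. "
        "Output only the script, no explanations.\n\n" + request
    )
    response = client.chat.completions.create(
        model="gpt-4-turbo",  # assumed model; Mosyle's choice is unpublished
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def syntax_ok(script: str) -> bool:
    """Validate with bash's no-execute mode before surfacing to the admin."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh") as f:
        f.write(script)
        f.flush()
        return subprocess.run(["bash", "-n", f.name]).returncode == 0

script = generate_macos_script("Report whether FileVault is enabled")
if syntax_ok(script):
    print(script)
else:
    print("Generated script failed validation; a retry step would go here.")
```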

Apple’s Mixed Reality Headset Unveiling Today Amidst High Anticipation

Every year, the Worldwide Developers Conference (WWDC) draws eager attention from tech enthusiasts worldwide. It is the moment when Apple shares its latest product updates and operating system advancements, and offers developers new avenues for innovation and app development.

However, this year’s WWDC has generated exceptional excitement and anticipation. All eyes are on Apple as it prepares to enter the realm of mixed reality: an immersive blend of virtual and augmented realities (VR/AR). Industry experts have already labeled this event as the potential “iPhone moment” for the VR/AR industry, sparking soaring expectations of yet another groundbreaking leap by Apple that could revolutionize the technology landscape.

As Apple’s offerings remain veiled before their public unveiling, the world eagerly awaits the company’s announcement. The tech giant’s track record of transformative innovations has raised hopes for a significant leap forward in the VR/AR field. With its penchant for redefining technology, Apple has the potential to reshape the future of mixed reality and captivate audiences with a new generation of immersive experiences.

Stay tuned as WWDC unfolds, revealing Apple’s foray into mixed reality and potentially altering the course of technological innovation once again.

Apple’s Mixed Reality Headset

For years, Apple has been rumored to be working on a virtual reality headset, never really revealing its progress. Based on these rumors, it appears Apple will unveil a mixed reality headset at WWDC with an iOS-like interface and high-resolution immersive displays.

Dubbed Reality One or Reality Pro, the device will reportedly allow users to switch between VR and AR using a dial and control it with hand and eye movements, making controllers obsolete. The headset could also have an outward-facing display to show the user’s facial expressions, so they don’t appear like some form of RoboCop.

With CEO Tim Cook stressing the need for “connection” and “communication” when it comes to uses of AR, the headset could also allow users to FaceTime with full face and body renders in addition to access to various games and apps on the device.

Can Apple succeed where others have failed?

The headset is also believed to have been built after overcoming multiple technical challenges, and the price of the final product is estimated to be around $3,000. At such a steep ask, the device must exceed expectations and deliver something that no other headset manufacturer has managed before.

One can trust Apple to do both of these things with some ease. After all, it did this with the iPhone and the Apple Watch, and it knows very well how to enter the market at the right time with the right product. The company has time and again entered the market after the initial hype around a device has subsided, and its mixed reality bet seems no different.

Google, Microsoft, and Magic Leap have all taken shots at the prize in the past decade and failed to make a product that attracts the masses. Mark Zuckerberg’s Meta has made some sizable progress in the area, but not all its products get the same traction.

Just last week, the company made another effort to woo users to mixed reality with its Quest 3 headsets, but its Meta Quest Pro, a high-end device launched last year, was quite a dud. Among headset manufacturers, Meta is the clear leader in absolute sales numbers, but its products are far from the tool for engaging with a new digital universe that Zuckerberg promised.

Since Apple builds not just products but entire ecosystems around them, the WWDC event offers a glimpse of what the device could really be capable of and how developers could leverage it to make exciting products that users simply won’t want to miss.

Apart from the $3,000 price tag, Apple also needs to counter the challenges of using VR/AR headsets that the likes of Microsoft, Meta, and even Sony have failed to crack so far. Whether Apple comes out glorious as always or falls severely short of expectations will be known in a few hours.

Apple’s success might also define if the tech world will continue to talk about artificial intelligence (AI) over the next few months or if there will be a new buzzword in town.

Russia’s FSB Alleges NSA Used Malware to Exploit Apple Phones

In a statement issued Thursday, the Russian Federal Security Service (FSB) claimed to have uncovered a covert operation by the U.S. National Security Agency (NSA) to infiltrate Apple phones using previously unknown malware. The FSB says the alleged plot, as reported by Reuters, was aimed at exploiting specially crafted “back door” vulnerabilities.

FSB Uncovers Plot

The FSB, the primary successor agency to the Soviet KGB, estimates several thousand iPhones, including those owned by Russian citizens, have been compromised. In addition, in a move that underscores the global implications of this alleged operation, the FSB reports that phones belonging to foreign diplomats stationed in Russia and former Soviet territories were also targeted. These reportedly include devices owned by representatives from NATO member countries, Israel, Syria, and China.

“The FSB has uncovered an intelligence action of the American special services using Apple mobile devices,” the agency stated. As of this report, Apple and the NSA have yet to respond to requests for comment.

Russia’s foreign ministry also chimed in, saying that the plot demonstrates the tight-knit relationship between the NSA and Apple. It claimed that the clandestine data collection was conducted “through software vulnerabilities in US-made mobile phones.”

The foreign ministry further accused U.S. intelligence services of using I.T. corporations for mass data collection, often without the knowledge of the targeted individuals. “The U.S. intelligence services have been using I.T. corporations for decades to collect large-scale data of internet users without their knowledge,” the ministry said.

This revelation comes at a time when the United States is regarded as the world’s top cyber power in terms of intent and capability, according to Harvard University’s Belfer Centre Cyber 2022 Power Index.

Global Cybersecurity Implications

These allegations come amidst heightened tensions. For example, after Russian troops moved into Ukraine last year, U.S. and British intelligence went public with information suggesting President Vladimir Putin planned the invasion. The source of this intelligence, however, remains unclear.

As Western intelligence agencies have accused Russia of constructing an advanced domestic surveillance structure, Russian officials have continually questioned the security of U.S. technology. Putin has said he does not own a smartphone but uses the internet occasionally.

Earlier this year, the Kremlin reportedly directed officials involved in preparation for Russia’s 2024 presidential election to cease using Apple iPhones due to concerns about potential vulnerability to Western intelligence agencies.

The FSB’s recent findings surface a narrative of alleged cooperation between tech giants and intelligence agencies, which will undoubtedly stir debates regarding data privacy, surveillance ethics, and the geopolitics of cyber warfare.

Apple Introduces Personal Voice Feature for Individuals with Disabilities

Apple unveiled a range of new features aimed at enhancing accessibility for individuals with mobility, cognitive, vision, or hearing impairments. Among these advancements, the standout feature is Personal Voice, specifically designed for those who may experience a loss of speech ability.

With the Personal Voice function, users can generate a synthesized voice that closely resembles their own, facilitating easier communication with friends and family. By reading a series of text prompts aloud on their iPhone or iPad for approximately 15 minutes, users can create their unique Personal Voice. They can then input their desired message, which will be vocalized using their personalized synthesized voice through the Live Speech integration.

To prioritize privacy and security, Apple emphasizes that this feature utilizes on-device machine learning, ensuring that personal data remains confidential.

Enhanced Assistive Access and Magnifier Functionality

In addition to the Personal Voice function, Apple is introducing condensed versions of its major applications through a feature called Assistive Access. This feature aims to simplify experiences and applications, lightening the cognitive load for individuals with cognitive disorders.

Another notable improvement is the new detection option within the Magnifier function, which benefits users who are blind or visually impaired. By pointing the device’s camera at physical objects with text labels, such as a microwave keypad, users can have the labels read aloud as they navigate their touch across each number or setting on the appliance. This enables a seamless interaction experience.
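
Apple implements this detection with its own on-device frameworks, which are not public in this form. As a rough illustration of the underlying OCR-then-speak idea, the sketch below pairs open-source stand-ins (pytesseract for text recognition, pyttsx3 for speech); the image filename is a hypothetical camera frame.

```python
# Illustrative only: Apple's Magnifier feature runs on-device with Apple's
# own frameworks. This sketch shows the general OCR-then-speak idea using
# open-source stand-ins (pytesseract for OCR, pyttsx3 for speech).
# Requires: pip install pytesseract pyttsx3 pillow, plus the tesseract binary.
from PIL import Image
import pytesseract
import pyttsx3

def read_labels_aloud(image_path: str) -> None:
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    for line in text.splitlines():
        if line.strip():  # speak each detected label in turn
            engine.say(line.strip())
    engine.runAndWait()

read_labels_aloud("microwave_keypad.jpg")  # hypothetical camera frame
```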

These accessibility features from Apple demonstrate the company’s commitment to inclusivity and ensuring that individuals with disabilities have equal access to technology and its benefits.

Improvements for Mac Users

Apple also revealed a number of other accessibility improvements for Mac users. Individuals who are deaf or hard of hearing will be able to connect their Made for iPhone hearing aids to a Mac, increasing their accessibility options. Apple is also adding a simpler way to change the text size in Mac apps including Finder, Messages, Mail, Calendar, and Notes.

The upcoming improvements will also let users pause GIFs in Messages and Safari, give Siri different speaking speeds, and use Voice Control to get phonetic suggestions while editing text. These additions build on Apple’s existing accessibility features, such as Live Captions, VoiceOver, and Door Detection.

Apple’s wide range of new accessibility features highlights the company’s continued dedication to inclusivity and to making its products usable by people with a variety of needs. By harnessing technology, Apple continues to empower and improve the lives of people with disabilities. These developments are expected to become available later this year, possibly as part of iOS 17, further establishing Apple’s status as a pioneer in accessibility innovation.

“Accessibility is part of everything we do at Apple,” Sarah Herrlinger, Apple’s senior director of global accessibility policy and initiatives, said in a statement. “These groundbreaking features were designed with feedback from members of disability communities every step of the way, to support a diverse set of users and help people connect in new ways.”

Meet Apple’s M3 Chipset: A 12-Core CPU and 18-Core GPU Monster

According to various news outlets, including Bloomberg, Apple is currently testing its latest chipset, the M3. The new chipset, it is claimed, will come with a mighty 12-core processor and an 18-core graphics processing unit (GPU). Bloomberg says the information comes from an App Store developer log obtained by its reporter, showing the chip running on an unannounced MacBook Pro with macOS 14.

If true, Bloomberg speculates that the new M3 chip is likely the base-level M3 Pro that Apple plans to release sometime in 2024. This is interesting as Apple is about to introduce its new M2 Macs. Apple’s latest silicon technology, the M2 chip, boasts improved speed and power efficiency compared to its predecessor, the M1 chip.

The 8-core CPU offers increased processing power, enabling faster task completion. The 10-core GPU is ideal for creating stunning images and animations. Moreover, users can work with multiple 4K and 8K ProRes video streams thanks to the powerful media engine. The cherry on top, according to Apple, is the impressive battery life of up to 18 hours, allowing users to work or play uninterrupted throughout the day.

The M3 series is anticipated to benefit from Taiwan Semiconductor Manufacturing Company’s (TSMC’s) upcoming 3nm node process. The increase in core counts would be enabled by the switch from 5nm to 3nm, which packs transistors more densely. Recall that the M1 Pro and M2 Pro have 14- and 16-core GPUs and eight- and 10-core processors, respectively.

In other words, the M3 Pro is said to have 50 percent more CPU cores than its first-generation forerunner. Bloomberg also claims that Apple chose to have an equal number of high-performance and efficiency cores on the new silicon. The report further notes that the chip was spotted with 36 GB of RAM installed. To put things in perspective, the M2 Pro comes standard with 16 GB of memory, though it can be upgraded to 32 GB.

Naturally, Apple must release the M3 processor in its standard form before announcing the M3 Pro. According to Bloomberg’s report, “the first Macs with M3 chips will start showing up toward the end of the year or early next year.” The long-rumored 15-inch MacBook Air is anticipated to be unveiled by Apple at WWDC 2023 in the interim.