
OpenAI Launches ChatGPT Enterprise: Unleashing GPT-4’s Power for Businesses

OpenAI, led by Sam Altman, has introduced ChatGPT Enterprise, a significant milestone following the initial launch of its conversational AI, ChatGPT. The new enterprise-level tool gives businesses unlimited access to GPT-4, with performance up to twice as fast as earlier versions, according to a report by CNBC.

Last year, OpenAI gained widespread recognition with the introduction of ChatGPT, giving millions of users their first hands-on experience of generative artificial intelligence. Within two months, ChatGPT surpassed 100 million monthly active users, reaching that milestone faster than popular platforms like Instagram and Spotify.

Subsequently, OpenAI captured attention through its deepening collaboration with Microsoft, which provided substantial financial backing in exchange for access to OpenAI’s advanced AI models to enhance its own suite of tools. Notably, ChatGPT Enterprise is OpenAI’s first product launch since the ChatGPT Plus subscription service, which offered enhanced access to the tool’s features.

Empowering Enterprises

According to CNBC’s report, OpenAI developed the enterprise version in under a year, collaborating with more than 20 companies of diverse industries and sizes before the official launch. The launch gives enterprises access to GPT-4 along with Application Programming Interface (API) credits. OpenAI asserts that an impressive 80 percent of Fortune 500 companies already use ChatGPT. The Enterprise edition lets these companies train custom models on their own data, aiming to alleviate concerns about sensitive information inadvertently being shared with OpenAI through ChatGPT usage.

Addressing these concerns, OpenAI denies training its models on customer data. To strengthen data security, the Enterprise version adds a further layer of encryption for client data. The pricing structure for this enhanced offering, however, remains undisclosed at this time.

Racing Ahead

On the customer front, ChatGPT Enterprise has already signed up clients such as Block, led by Jack Dorsey, and the investment group Carlyle. While an official launch date remains unspecified, OpenAI also plans to introduce a Business version tailored to smaller companies and teams. Notably, this strategic move puts OpenAI in direct competition with its primary backer, Microsoft. Microsoft’s Azure OpenAI service has enabled businesses to access ChatGPT, but OpenAI’s standalone offering could save businesses money by removing the need for a Microsoft Azure subscription.

OpenAI’s extensive operations, particularly its management of ChatGPT, involve substantial financial expenditure due to the sheer volume of requests processed each month. This prompts OpenAI to seek innovative revenue streams to sustain these services and continue refining their product line. Amidst intensifying competition in the generative AI sector, as exemplified by Anthropic’s upgraded AI model Claude and rumors of Amazon’s prospective AI offering, OpenAI is positioning itself for the enduring competition that lies ahead.

The pivotal question remains whether businesses are inclined to embrace GPT-powered decision-making in the immediate future.

Navigating Challenges in Assessing AI Success

Generative artificial intelligence, especially in the form of systems like ChatGPT and LaMDA, is dominating conversations across various sectors. These applications have triggered significant disruptions, holding the potential to reshape our interactions with technology and the way we conduct our work.

A central aspect distinguishing AI from conventional software is its non-deterministic behavior. Unlike traditional software, which consistently produces the same output for a given input, a generative AI system can produce different results each time it runs. While this variability underpins AI’s remarkable possibilities, it also introduces challenges, particularly when evaluating the effectiveness of AI-driven applications.

Outlined below are the complexities tied to these challenges, along with potential strategies that strategic research and development (R&D) management can employ to address them.

The Unique Traits of AI Applications

AI applications differ from conventional software in their behavior. Traditional software thrives on predictability and repetition, vital for functionality. In contrast, the non-deterministic nature of AI applications means they don’t yield consistent, predictable outcomes for the same inputs. This variability is intentional and pivotal for the appeal of AI — for instance, ChatGPT’s allure stems from its ability to provide novel responses, not repetitive ones.

This unpredictability is a result of the algorithms underpinning machine learning and deep learning. These algorithms rely on intricate neural networks and statistical models. AI systems continually learn from data, leading to diverse outputs based on factors like context, training input, and model configurations.
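The non-determinism described above can be illustrated with a minimal sketch: generative language models typically sample each output token from a temperature-scaled softmax over the model’s scores, so the same input can yield different outputs on different runs. The function below is an illustrative toy, not any particular model’s actual decoder; the logits and temperature values are made up for demonstration.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample an index from raw scores via a temperature-scaled softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: pick the first index whose cumulative mass exceeds r.
    r = rng.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# The same "input" (a fixed set of logits) yields varying tokens across calls.
logits = [2.0, 1.5, 0.5, 0.1]
samples = [sample_token(logits, temperature=1.0) for _ in range(1000)]
print({i: samples.count(i) for i in range(4)})
```

Lowering the temperature concentrates probability on the highest-scoring token (approaching deterministic behavior), while raising it spreads probability out, which is exactly why evaluation cannot assume one fixed output per input.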

The Challenge of Evaluation

Given their probabilistic outputs, algorithms built to handle uncertainty, and reliance on statistical models, defining a clear measure of success against predefined expectations becomes challenging for AI applications. In essence, AI can learn and generate output in ways that resemble human reasoning, but validating the correctness of that output is intricate.

Furthermore, data quality and diversity exert a significant influence. AI models heavily rely on the quality, relevance, and diversity of their training data. To succeed, these models must be trained on diverse data encompassing various scenarios, including edge cases. The adequacy and accuracy of training data become pivotal for gauging the overall success of an AI application. However, since AI is relatively new and standards for data quality and diversity are yet to be established, outcomes vary widely across applications.

In certain instances, it’s the role of the human mind, specifically contextual interpretation and human bias, that complicates success measurement in AI. Human assessment is often necessary to adapt these applications to different situations, user biases, and subjective factors. Consequently, measuring success becomes intricate, involving user satisfaction, subjective evaluations, and user-specific outcomes that may lack easy quantification.

Navigating the Challenges

To devise strategies for enhancing success evaluation and optimizing AI performance, grasping the root of these challenges is crucial. Here are three strategies to consider:

  1. Develop Probabilistic Success Metrics. Given the inherent uncertainty of AI outcomes, assessing success necessitates novel metrics designed to capture probabilistic results. Metrics suitable for conventional software systems are ill-suited for AI. Instead of fixating on deterministic metrics like accuracy, introducing probabilistic measures such as confidence intervals or probability distributions can offer a more comprehensive view of success.
  2. Strengthen Validation and Evaluation. Establishing robust validation and evaluation frameworks is paramount for AI applications. This encompasses comprehensive testing, benchmarking against relevant sample datasets, and conducting sensitivity analyses to gauge system performance under varying conditions. Regularly updating and retraining models to adapt to evolving data patterns is crucial for maintaining accuracy and dependability.
  3. Prioritize User-Centric Evaluation. AI success isn’t confined to algorithmic outputs alone. The effectiveness of these outputs from the user’s perspective holds equal significance. Incorporating user feedback and subjective assessments is vital, particularly for consumer-facing tools. Insights from surveys, user studies, and qualitative assessments can offer valuable insights into user satisfaction, trust, and perceived utility. Balancing objective performance metrics with user-centric output evaluations yields a more comprehensive success assessment.
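As a concrete sketch of what a probabilistic success metric from point 1 might look like, the snippet below computes a Wilson score confidence interval for an evaluation accuracy, reporting a range rather than a single point estimate. The 870-correct-out-of-1,000-prompts figures are hypothetical, chosen only to show the calculation.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. eval accuracy.

    z=1.96 corresponds to a 95% confidence level.
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z * z / trials
    center = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(
        p * (1 - p) / trials + z * z / (4 * trials * trials)
    )
    return center - half, center + half

# Hypothetical evaluation: 870 acceptable answers out of 1000 prompts.
low, high = wilson_interval(870, 1000)
print(f"accuracy 0.870, 95% CI ({low:.3f}, {high:.3f})")
```

Reporting the interval rather than the bare 87% makes the sampling uncertainty explicit, and rerunning the evaluation (with a non-deterministic model) should usually land inside it if the system is stable.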

Evaluating for Triumph

Assessing the success of any AI tool demands a nuanced approach that acknowledges the probabilistic nature of its outputs. Stakeholders involved in AI development and fine-tuning, especially from an R&D viewpoint, must recognize the challenges introduced by inherent uncertainty. Only through defining suitable probabilistic metrics, rigorous validation, and user-centric evaluations can the industry effectively navigate the dynamic landscape of artificial intelligence.

Navigating the Intersection of AI and Operational Technology (OT)

In recent times, the spotlight has been cast on artificial intelligence (AI), with particular emphasis on generative AI applications like ChatGPT and Bard. This surge in interest commenced around November 2022, triggering discussions and debates about the immense potential of AI as well as its ethical and practical implications. This article examines the growing dominance of AI, especially within operational technology (OT), shedding light on its impact, testing, and reliability.

The Phenomenon of Generative AI

Generative AI, or “gen AI,” has impressively ventured into diverse creative domains such as songwriting, image generation, and even email composition. However, along with its remarkable achievements come valid concerns about its ethical utilization and possible misuse. Introducing gen AI to the OT landscape raises profound inquiries about its potential consequences, methods of rigorous testing, and its safe and effective implementation.

Implications, Testing, and Trustworthiness in OT

Operational technology revolves around consistency and repetition, aiming to predict outcomes based on established input-output relationships. In this realm, human operators are ready to make swift decisions when unpredictability arises, especially in critical infrastructures. Unlike the relatively lesser consequences of errors in information technology, OT errors could result in loss of life, environmental harm, and extensive liability, amplifying the need for accurate crisis-time decisions.

AI relies on extensive data to make informed choices and formulate logic for appropriate responses. In OT, incorrect decisions by AI could lead to far-reaching negative effects and unresolved liability concerns. Addressing these issues, Microsoft has proposed a comprehensive framework for the public governance of AI, advocating for government-led safety frameworks and safety mechanisms in AI systems overseeing critical infrastructure.

Enhancing Resilience through Red Team and Blue Team Exercises

Drawing from the “red team” and “blue team” strategies originating in military contexts, cybersecurity experts collaboratively test and fortify systems. The red team simulates attacks to reveal vulnerabilities, while the blue team focuses on defense. These exercises offer valuable insights to bolster security.

Applying AI to these exercises could narrow the skill gap and mitigate resource limitations. AI may uncover hidden vulnerabilities or suggest alternative defense strategies, thereby illuminating new methods to safeguard production systems and enhance overall security.

Unveiling Potential with Digital Twins and AI

Leading organizations have embraced the concept of digital twins, creating virtual replicas of their OT environments for testing and optimization. These replicas allow for safe exploration of potential changes and optimizations, aided by AI-driven stress testing. However, the transition from the digital realm to the real world entails considerable risk, necessitating meticulous testing and risk management.

AI’s Role in SOC and Noise Mitigation

AI’s utilization extends to security operations centers (SOC), where it aids in anomaly detection and interpretation of rule sets. Leveraging AI in this context mitigates noise in alarm systems and asset visibility tools, enhancing operational efficiency and enabling staff to focus on priority tasks.

Anticipating the AI-OT Convergence

As AI increasingly permeates information technology (IT), its influence on OT also grows. Incidents like the Colonial Pipeline ransomware attack underscore the interconnectedness of these domains. To balance innovation and safety, AI adoption in OT should commence cautiously in lower-impact areas. This measured approach necessitates robust checks and internal testing.

Striking a Balance

While the potential of AI in enhancing efficiency and safety is undeniable, a balanced approach is paramount. Ensuring safety and reliability in the realm of OT is crucial as AI and machine learning continue to evolve. By embracing these technologies responsibly, the industry can harness their benefits while safeguarding against potential risks.

Opera’s iOS Browser Introduces AI Assistant Aria

Opera’s iOS web browser app is set to receive a boost with the integration of an AI assistant named Aria. In a recent announcement, Opera unveiled its collaboration with OpenAI, bringing Aria directly into the iOS web browser, providing this AI-powered product to all users at no cost.

Aria originally launched on Opera’s desktop version and Android browser, amassing over a million users. Now, with its inclusion in Opera for iOS, Aria boasts compatibility across all major platforms, encompassing Mac, Windows, Linux, Android, and iOS.

It’s worth noting that engaging with Aria is at the user’s discretion; there’s no compulsion to use the AI service. After opting in, Aria extends an array of intelligent insights, thoughtful suggestions, and responsive voice commands. To utilize Aria, users are required to log into their Opera accounts. For newcomers, creating an account directly from the app is an option.

Composer Foundation: Powering Aria’s AI Capabilities

The foundation of Aria lies in Opera’s proprietary “Composer” infrastructure, which seamlessly integrates with OpenAI’s GPT technology. This connectivity empowers Aria with the ability to access various AI models, setting the stage for future advancements in search and AI services. These expansions include delving deeper into generative AI, among other revelations that Opera plans to unveil in due course.

Functioning like other AI-driven search companions, Opera’s iOS version incorporates a chatbot-style interface that lets users pose queries and receive responses as an alternative to conventional web searches. Voice interaction is also supported: by opening the “more” menu in the far-right tab of the bottom navigation bar in the Opera iOS app, users can speak to Aria instead of typing.

Following the recent milestone of Aria achieving one million users, Opera highlighted the significant impact that AI integration had on various metrics. Lin Song, co-CEO of Opera, expressed satisfaction with both the initial adoption and the quality of engagement with Aria, noting an increase in overall time spent on the platform, accompanied by heightened search activity and pageviews per session.

Notably, Aria isn’t Opera’s inaugural foray into AI solutions. The company had previously introduced AI Prompts, a feature enabling users to swiftly initiate conversations with generative AI services. This facilitated the summarization or elucidation of articles, tweet generation, and requesting pertinent content based on highlighted text.

Opera’s iOS browser is accessible as a complimentary download and encompasses an array of beneficial attributes, such as built-in ad blocking, a VPN service, tracking prevention, a cryptocurrency wallet, private browsing capabilities, and more.

IBM Introduces Innovative Analog AI Chip That Works Like a Human Brain

IBM has taken the wraps off a groundbreaking analog AI chip prototype, designed to mimic the cognitive abilities of the human brain and excel at intricate computations across diverse deep neural network (DNN) tasks.

This novel chip’s potential extends beyond its capabilities. IBM asserts that this cutting-edge creation has the potential to revolutionize artificial intelligence, significantly enhancing its efficiency and diminishing the power drain it imposes on computers and smartphones.

Unveiling this technological marvel in a publication from IBM Research, the company states, “The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units.”

A Paradigm Shift in AI Computing

Fabricated at IBM’s Albany NanoTech Complex, the new analog AI chip comprises 64 analog in-memory compute cores. Drawing inspiration from the way neural networks operate in biological brains, IBM has incorporated compact, time-based analog-to-digital converters into every tile, or core, enabling seamless transitions between the analog and digital domains.

Furthermore, each tile, or core, is equipped with lightweight digital processing units adept at executing uncomplicated nonlinear neuronal activation functions and scaling operations, as elaborated upon in an August 10 blog post by IBM.

A Potential Substitution for Existing Digital Chips

In the not-so-distant future, IBM’s prototype chip may very well take the place of the prevailing chips propelling resource-intensive AI applications in computers and mobile devices. Elucidating this perspective, the blog post continues, “A global digital processing unit is integrated into the middle of the chip that implements more complex operations that are critical for the execution of certain types of neural networks.”

As the market witnesses a surge in foundational models and generative AI tools, the efficacy and energy efficiency of conventional computing methods upon which these models rely are confronting their limits.

IBM has set its sights on bridging this gap. The company contends that many contemporary chips exhibit a segregation between their memory and processing components, consequently stymying computational speed. This dichotomy forces AI models to be stored within discrete memory locations, necessitating constant data shuffling between memory and processing units.

Drawing a parallel with traditional computers, Thanos Vasilopoulos, a researcher based at IBM’s Swiss research laboratory, underscores the potency of the human brain. He emphasizes that the human brain achieves remarkable performance while consuming minimal energy.

According to Vasilopoulos, the heightened energy efficiency of the IBM chip could usher in an era where “hefty and intricate workloads could be executed within energy-scarce or battery-constrained environments,” such as automobiles, mobile phones, and cameras.

He further envisions that cloud providers could leverage these chips to curtail energy expenditures and reduce their ecological footprint.

Enhancements to Microsoft Services Agreement Introduce AI Usage Restrictions

Microsoft’s updated Terms of Service, set to become effective on September 30, include new regulations and limitations governing their AI offerings. These alterations, which were made public on July 30, encompass a segment that outlines the concept of “AI Services.” This term is defined within the section as referring to “services designated, described, or identified by Microsoft as incorporating, utilizing, driven by, or constituting an Artificial Intelligence (‘AI’) system.”

The section homes in on five rules and restrictions for Microsoft AI services, saying:

  1. Reverse Engineering. You may not use the AI services to discover any underlying components of the models, algorithms, and systems. For example, you may not try to determine and remove the weights of models.
  2. Extracting Data. Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services.
  3. Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.
  4. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
  5. Third party claims. You are solely responsible for responding to any third-party claims regarding Your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during Your use of the AI services).

Microsoft Services Agreement Updates Amidst AI-Focused Changes

The alterations to the Microsoft Services Agreement arrive during a period where shifts in terms of service, specifically those concerning artificial intelligence (AI), are capturing significant attention. A notable instance involves Zoom, a provider of video conferencing and messaging services, which encountered substantial criticism due to inconspicuous adjustments made to its Terms of Service (TOS) in March, centering around AI. These modifications have spurred fresh inquiries into matters of customer privacy, autonomy, and reliance. Recent reports disseminated widely highlighted Zoom’s TOS changes, indicating the company’s capacity to employ user data for training AI without an available opt-out mechanism.

Zoom’s Evolving Position on TOS Amendments and AI

In response to these developments, Zoom has issued a subsequent statement pertaining to its updated TOS and corresponding blog post. The statement affirmed that, following feedback, Zoom has chosen to revise its Terms of Service to underscore that it refrains from utilizing user-generated content—such as audio, video, chat, screen sharing, attachments, and interactive elements—for the training of either Zoom’s proprietary AI models or those of third-party entities. Notably, the policy modifications were designed to enhance transparency and provide users with clarity regarding Zoom’s approach.

The New York Times’ Stance on AI-Related Terms of Service

Recently, The New York Times also revised its Terms of Service in an effort to forestall AI companies from scraping its content for various purposes. The updated clause explicitly delineates that non-commercial usage excludes actions like employing content to develop software programs or AI systems through machine learning. Furthermore, it prohibits the provision of archived or cached data sets containing content to external parties.

Clarification and Intent Behind The New York Times’ Updates

A representative from The New York Times Company communicated with VentureBeat, affirming that the company’s terms of service had consistently disallowed the use of their content for AI training and development. The recent revisions, in part, were undertaken to reinforce and make unequivocal this existing prohibition, aligning with the company’s stance on responsible content utilization in the AI landscape.

US Penalizes Chinese Firm Misusing AI in Recruitment

In what serves as a stark reminder against the unlawful application of AI in business operations, a significant settlement has been reached in the US, marking the inaugural resolution involving AI-driven recruitment tools. The Equal Employment Opportunity Commission (EEOC) successfully resolved a dispute with a Chinese online education platform, underscoring the growing importance of ethical AI practices in the hiring domain.

The focal point of the settlement is iTutorGroup, an entity that came under scrutiny in 2020 for allegedly employing AI tools to engage in discriminatory practices during the recruitment process. The platform, which hires online educators across a range of subjects, was accused of segregating older and younger job applicants through its AI-powered processes.

The EEOC, in its complaint filed in 2022, stated, “Three interconnected enterprises offering English-language tutoring services under the ‘iTutorGroup’ brand in China violated federal law by programming their online recruitment software to automatically dismiss older candidates based on their age.”

Having launched an initiative in 2021 aimed at ensuring that artificial intelligence software used by US employers complies with anti-discrimination legislation, the EEOC underscored its commitment to scrutinizing and addressing instances of AI misuse. According to a report by the Economic Times, the EEOC made clear that it would focus its enforcement efforts on companies found to be misusing AI capabilities.

The culmination of this effort resulted in a settlement agreement, with iTutorGroup agreeing to pay $365,000 to over 200 ‘senior’ job applicants whose applications were purportedly disregarded due to their age. The settlement, documented in a joint submission to the New York federal court and reported by Reuters, encompasses remedies such as back pay and liquidated damages.

Central to the allegations against iTutorGroup was its AI software’s systematic exclusion of female candidates aged 55 and older and male candidates aged 60 and older, contravening the provisions of the Age Discrimination in Employment Act (ADEA). This case exemplifies the importance of fair and lawful application of AI in HR processes.

Interestingly, a parallel lawsuit levels similar accusations at another company. In that action, Derek Mobley sued Workday, claiming that its AI-powered screening software helps employers single out applicants based on characteristics such as race, age, and disability. Mobley, a Black man over 40 who grapples with anxiety and depression, alleged that Workday’s software worked against him as he applied for positions at organizations utilizing Workday’s recruitment screening tool.

The unfolding scenario highlights the imperative for automated AI systems, designed to assist HR departments, to be equitable and secure. Noteworthy players like Accenture and Lloyds Banking Group have already incorporated innovative techniques like virtual reality games into their hiring processes. With the rise of AI in recruitment, a report by Aptitude Research revealed that 55% of companies are augmenting their investments in recruitment automation. This underscores the need for a thoughtful, ethical, and legal approach to AI utilization in the employment sphere.

Nvidia Unveils GH200 Grace Hopper: Next-Gen Superchips for Complex AI Workloads

In a recent press release, Nvidia, the world’s foremost supplier of chips for artificial intelligence (AI) applications, introduced its latest breakthrough: a next-generation superchip designed to tackle the most intricate generative AI workloads. The new platform, named GH200 Grace Hopper, boasts an unprecedented feature: the world’s first HBM3e processor.

Combining Power: The Birth of the GH200 Grace Hopper

Nvidia’s GH200 Grace Hopper superchip merges two distinct platforms: Hopper, which houses the graphics processing unit (GPU), and the Grace CPU platform, which handles general processing. Both take their names from computer programming pioneer Grace Hopper, and they have been combined into a single superchip in homage to her legacy.

From Graphics to AI: The Evolution of GPUs

Historically, GPUs have been synonymous with high-end graphic processing in computers and gaming consoles. However, their immense computational capabilities have found new applications in fields like cryptocurrency mining and AI model training.

Powering AI through Collaborative Computing

Notably, Microsoft’s Azure and OpenAI have harnessed Nvidia’s chips to build substantial computing systems. By employing Nvidia’s A100 chips and creating infrastructures to distribute the load of large datasets, Microsoft facilitated the training of GPT models, exemplified by the popular ChatGPT.

Nvidia’s Pursuit of AI Dominance

Nvidia, the driving force behind chip production, now seeks to independently construct large-scale data processing systems. The introduction of the Nvidia MGX platform empowers businesses to internally train and deploy AI models, underscoring Nvidia’s commitment to AI advancement.

The GH200 Grace Hopper: A Leap Forward in Superchip Technology

Nvidia’s achievement in crafting the GH200 superchip can be attributed to its proprietary NVLink technology, which facilitates chip-to-chip (C2C) interconnections. This innovation grants the GPU unfettered access to the CPU’s memory, resulting in a robust configuration that offers a substantial 1.2 TB of high-speed memory.

Unveiling the HBM3e Processor

The GH200 Grace Hopper is distinguished by the inclusion of the world’s first HBM3e processor, whose HBM3e memory is 50 percent faster than its predecessor, HBM3. A dual-configuration server featuring 144 Arm Neoverse cores can deliver a staggering eight petaflops of AI performance. With a combined bandwidth of 10TB/sec, the GH200 platform offers 3.5 times more memory capacity and 3 times more bandwidth than the previous-generation Nvidia platform, allowing it to run substantially larger AI models.

Nvidia’s Unrivaled Market Position

Having briefly entered the $1 trillion valuation echelon earlier in the year, Nvidia commands over 90% of the market share in chip supply for AI and related applications. The demand for GPUs extends beyond training AI models to their operational execution, and this demand is poised to escalate as AI integration becomes commonplace. Evidently, not only chip manufacturers such as AMD, but also tech giants like Google and Amazon, are actively developing their offerings in this burgeoning sector.

Charting a Technological Course: GH200’s Arrival

The unveiling of the GH200 Grace Hopper superchip solidifies Nvidia’s status as the premier technology provider. Anticipated to be available in Q2 2024, these chips promise to reshape the landscape of AI processing, further establishing Nvidia’s dominance in the industry.

Spotify Expands its AI-powered DJ Feature Globally

After successfully debuting its AI-powered DJ feature in North America six months ago, Spotify is now rolling out this innovative tool to numerous international markets.

Accessible via the “music” feed section within the Spotify mobile app, the DJ function personalizes users’ listening experiences by curating a selection of music. This selection is accompanied by spoken-word commentary, brought to life by a synthetic voice. The commentary encompasses playful conversations and contextual insights, referencing specific songs and artists that the user has previously enjoyed.

In essence, it’s akin to having a personalized radio DJ who customizes their show for each individual listener.

Spotify initially introduced DJ to audiences in the United States and Canada in February. Subsequently, the company expanded its availability to the United Kingdom and Ireland three months later. Although DJ will continue to be in beta testing, it is now accessible to premium subscribers across approximately 50 markets worldwide. These markets include countries such as Sweden, Australia, New Zealand, Ghana, Nigeria, Pakistan, Singapore, and South Africa.

However, it’s important to note that a large portion of the European Union will not yet have access to this feature. Furthermore, in the newly added markets, DJ will only be offered in the English language.

Great Wall Motor and Baidu Team Up for AI-Integrated Cars

Great Wall Motor (GWM), the Chinese automaker, is introducing Baidu’s AI system, similar to ChatGPT, into its mass-market cars to enable seamless conversation between drivers and vehicles, according to a report by the South China Morning Post (SCMP). This collaboration between GWM and Baidu aims to make cars more intelligent and user-friendly.

Baidu’s AI model, known as Ernie Bot, is positioned as a Chinese competitor to OpenAI’s ChatGPT. GWM stated that they have been testing innovative features in their mass-produced vehicles, and these features will gradually be incorporated into commercial use on a wider scale.

Baidu has been heavily investing in AI, with a particular focus on the development of its language model, Ernie. The company announced a substantial investment of $140 million (1 billion yuan) to support Chinese startups working on generative AI.

In their pursuit of enhancing the in-car experience, Baidu recently revealed that their Ernie 3.5 beta has shown significant progress, outperforming both ChatGPT (3.5) and GPT-4 in various Chinese language skills.

GWM and Baidu have been working together using the latest iteration of the Ernie model to research applications of this advanced language model in intelligent in-car interactions. They have already identified many novel features that can be implemented in their upcoming vehicle models.

During the Shanghai Auto Show in April 2023, Baidu Apollo, the autonomous driving solutions platform of the Chinese search giant, showcased various intelligent driving technologies based on the Ernie model. These applications included journey planning, in-car entertainment, knowledge Q&A, and AI sketching.

The demand for intelligent solutions in the automotive industry is growing rapidly, as consumers and manufacturers alike seek more intuitive interfaces, expanded functionalities, and smoother driving experiences. Other Chinese manufacturers, such as Lynk and Smart, have also expressed their intentions to incorporate Ernie Bot technology into their vehicles.

However, GWM has not disclosed which specific car models will first include the built-in Ernie Bot technology or provided a timeline for its release. Additionally, Baidu is actively exploring opportunities to integrate Ernie Bot into other businesses, including its cloud services, to compete with Western rivals like OpenAI, Google, Microsoft, and Apple in the AI market.