Microsoft Set to Unveil Its Latest AI Chip, Codenamed ‘Athena,’ Next Month

After years of development, Microsoft is on the cusp of revealing its highly anticipated AI chip, codenamed ‘Athena,’ at its annual ‘Ignite’ event next month. The unveiling marks a significant milestone for the tech giant, signaling a potential shift away from its reliance on GPUs made by NVIDIA, the dominant player in the semiconductor industry.

Microsoft has meticulously crafted its Athena chip to empower its data center servers, tailoring it specifically for training and running large-scale language models. The motivation behind this endeavor stems from the ever-increasing demand for NVIDIA chips to fuel AI systems. However, NVIDIA’s chips are notorious for being both scarce and expensive, with its most powerful AI offering, the H100 chip, commanding a hefty price tag of $40,000.

By venturing into in-house chip production, Microsoft aims to curb costs and bolster its cloud computing service, Azure. Notably, Microsoft had been covertly working on Athena since 2019, coinciding with its $1 billion investment in OpenAI, the organization behind ChatGPT. Over the years, Microsoft has allocated nearly $13 billion to support OpenAI, further deepening the collaboration.

Athena’s Arrival: Microsoft’s In-House AI Chip Ready for the Spotlight

Besides advancing its own AI aspirations, Microsoft’s chip could potentially help OpenAI meet its GPU requirements. OpenAI has recently expressed interest in developing its own AI chip, or potentially acquiring a chipmaker capable of crafting chips tailored to its unique needs.

This development holds promise for OpenAI, especially considering the colossal expenses associated with scaling ChatGPT. A Reuters report highlights that expanding ChatGPT to a tenth of Google’s search scale would necessitate an expenditure of approximately $48.1 billion for GPUs, along with an annual $16 billion investment in chips. Sam Altman, the CEO of OpenAI, has previously voiced concerns about GPU shortages affecting the functionality of his products.

To date, ChatGPT has relied on a fleet of 10,000 NVIDIA GPUs integrated into a Microsoft supercomputer. As ChatGPT transitions from being a free service to a commercial one, its demand for computational power is expected to skyrocket, requiring over 30,000 NVIDIA A100 GPUs.

Microsoft’s Athena: A Potential Game-Changer in the Semiconductor Race

The global chip supply shortage has only exacerbated the soaring prices of NVIDIA chips. In response, NVIDIA has announced the upcoming launch of the GH200 chip, featuring the same GPU as the H100 but with triple the memory capacity. Systems equipped with the GH200 are slated to debut in the second quarter of 2024.

Microsoft’s annual gathering of developers and IT professionals, ‘Ignite,’ sets the stage for this momentous revelation. The event, scheduled from November 14 to 17 in Seattle, promises to showcase vital updates across Microsoft’s product spectrum.

Llama 2 Long: Redefining AI for Handling Complex User Queries

Meta Platforms has unveiled a groundbreaking AI model that may have slipped under the radar during its annual Meta Connect event in California. While the tech giant showcased numerous AI-powered features for its popular apps like Facebook, Instagram, and WhatsApp, the real standout innovation is Llama 2 Long, an extraordinary AI model designed to provide coherent and relevant responses to extensive user queries, surpassing some of the leading competitors in the field.

Llama 2 Long is an extension of the previously introduced Llama 2, an open-source AI model from Meta known for its versatility in tasks ranging from coding and mathematics to language comprehension, common-sense reasoning, and conversational abilities. What sets Llama 2 Long apart is its capacity to handle more substantial and complex inputs, making it a formidable rival to models like OpenAI’s GPT-3.5 Turbo and Claude 2, which struggle with extended contextual information.

The inner workings of Llama 2 Long are a testament to Meta’s dedication to pushing the boundaries of AI technology. Meta’s research team used varying versions of Llama 2, spanning from 7 billion to 70 billion parameters, which are the adjustable values that govern how the AI model learns from data. They augmented the model with an additional 400 billion tokens of data containing longer texts compared to the original Llama 2 dataset.

Furthermore, the architecture of Llama 2 underwent subtle alterations, primarily in how it encodes the position of each token within a sequence. The use of Rotary Positional Embedding (RoPE) proved pivotal: rather than tagging each token with an absolute position, RoPE encodes position as a rotation of the token’s embedding, so the relationship between any two tokens depends on their relative distance. This enhances the model’s accuracy and efficiency while reducing its memory demands compared with other positional-encoding techniques.
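The rotation idea behind RoPE can be shown with a toy sketch (illustrative code, not Meta’s implementation): each (even, odd) pair of embedding dimensions is rotated by an angle that grows with the token’s position, and the dot product between two rotated vectors then depends only on how far apart the tokens are.

```python
import math

def rope_rotate(vec, position, base=10000.0):
    """Rotate consecutive (even, odd) pairs of a vector by an angle
    that grows with the token's position -- the core idea of Rotary
    Positional Embedding (RoPE)."""
    dim = len(vec)
    out = []
    for i in range(0, dim, 2):
        # Lower dimension pairs rotate faster than higher ones.
        theta = position / (base ** (i / dim))
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out.append(x * cos_t - y * sin_t)
        out.append(x * sin_t + y * cos_t)
    return out
```

Because rotations compose, the dot product of a query rotated to position m and a key rotated to position n depends only on m − n, which is what lets the model reason about relative token distance.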

The researchers took the innovative step of reducing the rotation angle of the RoPE encoding from Llama 2 to Llama 2 Long, enabling the model to accommodate more distant or less frequent tokens in its knowledge base. Additionally, they employed reinforcement learning from human feedback (RLHF) and synthetic data generated by Llama 2 itself to fine-tune the model’s performance across various tasks.

The paper detailing Llama 2 Long’s capabilities asserts that the model can generate high-quality responses to user queries containing up to 200,000 characters, equivalent to approximately 40 pages of text. The paper provides illustrative examples of Llama 2 Long’s responses across a range of subjects, including history, science, literature, and sports.

Meta’s researchers regard Llama 2 Long as a significant stride towards the development of more versatile and general AI models capable of addressing diverse and intricate user needs. They also acknowledge the ethical and societal implications of such models, emphasizing the need for further research and dialogue to ensure their responsible and beneficial utilization.

In conclusion, Meta’s introduction of Llama 2 Long represents a remarkable advancement in the realm of AI, with the potential to revolutionize how AI models handle complex and extensive user queries while also underlining the importance of ethical considerations in their deployment.

Apple’s AI Chief Says iOS 17 Update Gives Users a Choice of Search Engine

Former high-ranking Google executive John Giannandrea recently highlighted a significant alteration in the latest iPhone software update, iOS 17, which was unveiled on September 25. This update introduces a noteworthy change that allows users to opt for a search engine other than Google when navigating in private mode.

In the wake of growing privacy concerns among users, Google, the tech behemoth, has found itself under increased scrutiny from the public regarding issues of user choice and competition within the search engine market.

The iOS 17 software release has introduced a pivotal feature by adding a second setting that empowers iPhone users to seamlessly switch between Google and alternative search engines. This development was emphasized by the head of Apple’s artificial intelligence division during his testimony in a federal court in Washington as part of the Justice Department’s antitrust lawsuit against Alphabet Inc.’s Google.

This newly added feature simplifies the process of changing search engines with a single tap, a move aimed at addressing concerns surrounding Google’s alleged monopoly in online search. This issue has gained prominence in light of the U.S. government’s antitrust lawsuit, which contends that Google has been unlawfully maintaining its dominant position through agreements with web browsers and mobile device manufacturers, including Apple.

Initially, Google denied these allegations, asserting in its opening statement that users can easily switch search engines in a matter of seconds. However, Gabriel Weinberg, the CEO of rival search engine DuckDuckGo, testified on September 28 that Google’s default status on browsers acts as a barrier to users changing their preferences, citing a convoluted process.

Furthermore, Google’s default position as the search engine in Apple’s Safari, the web browser for Apple devices, is a result of contractual obligations between the two tech giants. As part of this arrangement, Google shares a portion of its advertising revenue with Apple, although the exact sum remains confidential. According to reports, the Justice Department has indicated that Google pays Apple an annual amount estimated to be between $4 billion and $7 billion.

Giannandrea clarified in his testimony that Google will continue to be the default search engine for Safari in private mode, which does not store browsing history. However, the new update offers users the flexibility to choose from a range of search engines, including Yahoo Inc., Microsoft Corp.’s Bing, DuckDuckGo, and Ecosia, for their private browsing experience.

John Giannandrea, currently leading Apple’s AI division, previously worked at Google from 2010 to 2018 in the role of Senior Vice President of Engineering. In his current capacity, Giannandrea is spearheading machine learning initiatives at Apple and driving AI-powered endeavors for the company.

OpenAI’s ChatGPT Unveils New Voice and Image Features for Enhanced User Interaction

OpenAI’s ChatGPT, the AI-powered language model, is unveiling a set of exciting new features, allowing users to “see, hear, and speak.” These enhancements are designed to make ChatGPT more user-friendly and versatile, offering a variety of ways for users to interact with the AI model.

OpenAI has announced a phased rollout of voice and image capabilities within ChatGPT over the next two weeks. These features are intended to empower users to engage in voice conversations and visually convey their queries to ChatGPT, making the AI experience even more interactive and accessible.

The primary goal behind these updates is to enhance the utility and user-friendliness of ChatGPT. According to MIT Technology Review, OpenAI has been diligently refining its technology with the aim of providing a comprehensive AI solution through the ChatGPT Plus app. This puts it in direct competition with virtual assistants like Siri, Google Assistant, and Alexa.

OpenAI emphasized the significance of these new features, stating, “Voice and image give you more ways to use ChatGPT in your life. Snap a picture of a landmark while traveling and have a live conversation about what’s interesting about it.” The voice feature will be available on both iOS and Android platforms, with the option to opt-in through your settings, while the image feature will be functional across all platforms.

OpenAI went on to explain how users can leverage these capabilities: “You can now use voice to engage in a back-and-forth conversation with your assistant. Speak with it on the go, request a bedtime story for your family, or settle a dinner table debate.”

The image feature had been hinted at earlier in March when GPT-4, the model powering ChatGPT, was introduced. However, it was not accessible to the general public at the time. Now, users can upload images to the app and inquire about the content of those images, expanding the AI’s versatility.

MIT Technology Review also noted that this announcement follows the recent integration of DALL-E 3, OpenAI’s image-generation model, into ChatGPT. This integration allows users to instruct the chatbot to generate images based on their input.

Additionally, OpenAI has partnered with Be My Eyes, enabling users to ask ChatGPT questions based on images, further expanding its practical applications.

The voice feature of ChatGPT is powered by Whisper, OpenAI’s speech-to-text model, which converts spoken words into text that ChatGPT can then process, enabling voice interactions with the AI software. Joanne Jang, a product manager at OpenAI, mentioned that the synthetic voices were created by training the text-to-speech model on the voices of hired actors. OpenAI is also considering allowing users to create their own custom voices in the future.

OpenAI is taking privacy, safety, and accessibility concerns seriously with the introduction of these features. They have outlined a multifaceted approach to address these issues, including content moderation, responsible data handling, clear user guidelines, restrictions on sensitive topics, and a strong focus on ethical software use. Furthermore, OpenAI is actively collaborating with external organizations, researchers, and experts to conduct audits and assessments of the system, ensuring that ChatGPT remains a responsible and reliable tool for users.

OpenAI Launches ChatGPT Enterprise: Unleashing GPT-4’s Power for Businesses

OpenAI, led by Sam Altman, has introduced ChatGPT Enterprise, marking a significant milestone following the initial launch of their conversational AI, ChatGPT. This new enterprise-level tool heralds a major advancement, granting businesses unrestricted access to GPT-4 at speeds up to twice as fast as previous versions, according to a report by CNBC.

In the preceding year, OpenAI gained widespread recognition with the introduction of ChatGPT. This AI marvel allowed numerous users to experience the capabilities of generative artificial intelligence firsthand. Within a few months, ChatGPT garnered over 100 million active monthly users, outpacing popular platforms like Instagram and Spotify in this remarkable achievement.

Subsequently, OpenAI captured attention through its deepening collaboration with Microsoft, which generously provided substantial financial support in exchange for access to OpenAI’s advanced AI model to enhance its own suite of tools. Notably, the unveiling of ChatGPT Enterprise marks OpenAI’s first product launch since the ChatGPT Plus subscription service, which offered enhanced access to the tool’s features.

Empowering Enterprises

Delving into the specifics of ChatGPT Enterprise, as detailed in CNBC’s report, OpenAI diligently crafted this enterprise version over the span of less than a year. Collaborating with over 20 companies spanning diverse industries and sizes, OpenAI officially launched this version, bestowing enterprises with access to GPT-4 and Application Programming Interface (API) credits. OpenAI asserts that an impressive 80 percent of Fortune 500 companies currently utilize ChatGPT. The Enterprise iteration empowers these enterprises to leverage their own data for training custom models, aiming to alleviate concerns about sensitive information inadvertently being shared with OpenAI through ChatGPT usage.

Addressing these concerns, OpenAI denies allegations that it trains its models on user data. To enhance data security, the Enterprise version incorporates an additional layer of encryption for client data. However, the pricing structure for this enhanced offering remains undisclosed at this time.

Racing Ahead

In terms of competitors, ChatGPT Enterprise has already garnered clients such as Block, led by Jack Dorsey, and investment group Carlyle. While an official launch date remains unspecified, OpenAI also has plans to introduce a Business version tailored to smaller companies and teams. Notably, this strategic move positions OpenAI in direct competition with its primary financier, Microsoft. The Azure OpenAI service from Microsoft has enabled businesses to access ChatGPT, but OpenAI’s independent offering could potentially save businesses costs by negating the need for a Microsoft Azure subscription.

OpenAI’s extensive operations, particularly its management of ChatGPT, involve substantial financial expenditure due to the sheer volume of requests processed each month. This prompts OpenAI to seek innovative revenue streams to sustain these services and continue refining their product line. Amidst intensifying competition in the generative AI sector, as exemplified by Anthropic’s upgraded AI model Claude and rumors of Amazon’s prospective AI offering, OpenAI is positioning itself for the enduring competition that lies ahead.

The pivotal question remains whether businesses are inclined to embrace GPT-powered decision-making in the immediate future.

Navigating Challenges in Assessing AI Success

Generative artificial intelligence, especially in the form of systems like ChatGPT and LaMDA, is dominating conversations across various sectors. These applications have triggered significant disruptions, holding the potential to reshape our interactions with technology and the way we conduct our work.

A central aspect distinguishing AI from conventional software is its non-deterministic behavior. Unlike traditional software that consistently produces the same output for a given input, AI generates diverse results with each computation iteration. While this aspect contributes to the remarkable possibilities of AI, it also introduces challenges, particularly when evaluating the effectiveness of AI-driven applications.

Outlined below are the complexities tied to these challenges, along with potential strategies that strategic research and development (R&D) management can employ to address them.

The Unique Traits of AI Applications

AI applications differ from conventional software in their behavior. Traditional software thrives on predictability and repetition, vital for functionality. In contrast, the non-deterministic nature of AI applications means they don’t yield consistent, predictable outcomes for the same inputs. This variability is intentional and pivotal for the appeal of AI — for instance, ChatGPT’s allure stems from its ability to provide novel responses, not repetitive ones.

This unpredictability is a result of the algorithms underpinning machine learning and deep learning. These algorithms rely on intricate neural networks and statistical models. AI systems continually learn from data, leading to diverse outputs based on factors like context, training input, and model configurations.
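A minimal illustration of this variability is temperature-based sampling, the mechanism most text generators use to pick the next token: the same logits can yield different outputs on every run, while a temperature of zero collapses to deterministic, greedy selection. The toy function below is a generic sketch, not any particular model’s code.

```python
import math, random

def sample_token(logits, temperature, rng):
    """Sample an index from logits; temperature controls how spread
    out the resulting probability distribution is."""
    if temperature == 0:  # greedy decoding: fully deterministic
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling from the categorical distribution.
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(logits) - 1
```

Running the sampler repeatedly at a nonzero temperature produces a spread of tokens rather than one fixed answer, which is exactly why the deterministic pass/fail tests used for conventional software do not transfer cleanly to AI applications.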

The Challenge of Evaluation

Given their probabilistic outputs, algorithms built to handle uncertainty, and reliance on statistical models, determining a clear measure of success based on predefined expectations becomes challenging with AI applications. In essence, AI systems learn and generate in ways that resemble human cognition, but validating the correctness of their output is intricate.

Furthermore, data quality and diversity exert a significant influence. AI models heavily rely on the quality, relevance, and diversity of their training data. To succeed, these models must be trained on diverse data encompassing various scenarios, including edge cases. The adequacy and accuracy of training data become pivotal for gauging the overall success of an AI application. However, since AI is relatively new and standards for data quality and diversity are yet to be established, outcomes vary widely across applications.

In certain instances, it’s the role of the human mind, specifically contextual interpretation and human bias, that complicates success measurement in AI. Human assessment is often necessary to adapt these applications to different situations, user biases, and subjective factors. Consequently, measuring success becomes intricate, involving user satisfaction, subjective evaluations, and user-specific outcomes that may lack easy quantification.

Navigating the Challenges

To devise strategies for enhancing success evaluation and optimizing AI performance, grasping the root of these challenges is crucial. Here are three strategies to consider:

  1. Develop Probabilistic Success Metrics. Given the inherent uncertainty of AI outcomes, assessing success necessitates novel metrics designed to capture probabilistic results. Metrics suitable for conventional software systems are ill-suited for AI. Instead of fixating on deterministic metrics like accuracy, introducing probabilistic measures such as confidence intervals or probability distributions can offer a more comprehensive view of success.
  2. Strengthen Validation and Evaluation. Establishing robust validation and evaluation frameworks is paramount for AI applications. This encompasses comprehensive testing, benchmarking against relevant sample datasets, and conducting sensitivity analyses to gauge system performance under varying conditions. Regularly updating and retraining models to adapt to evolving data patterns is crucial for maintaining accuracy and dependability.
  3. Prioritize User-Centric Evaluation. AI success isn’t confined to algorithmic outputs alone. The effectiveness of these outputs from the user’s perspective holds equal significance. Incorporating user feedback and subjective assessments is vital, particularly for consumer-facing tools. Insights from surveys, user studies, and qualitative assessments can offer valuable insights into user satisfaction, trust, and perceived utility. Balancing objective performance metrics with user-centric output evaluations yields a more comprehensive success assessment.
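As a concrete example of the first strategy, a bootstrap confidence interval turns a raw pass/fail tally from an evaluation run into a probabilistic success range rather than a single point estimate. The sketch below is a generic illustration of the technique, not a prescribed evaluation framework.

```python
import random

def bootstrap_success_interval(outcomes, n_resamples=2000, alpha=0.05, seed=0):
    """Estimate a (1 - alpha) confidence interval for a success rate
    by resampling the observed pass/fail outcomes with replacement."""
    rng = random.Random(seed)
    n = len(outcomes)
    rates = []
    for _ in range(n_resamples):
        # Draw n outcomes with replacement and record the success rate.
        resample = [outcomes[rng.randrange(n)] for _ in range(n)]
        rates.append(sum(resample) / n)
    rates.sort()
    lo = rates[int((alpha / 2) * n_resamples)]
    hi = rates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

Reporting “the model passed between 72% and 88% of cases with 95% confidence” conveys far more about a non-deterministic system than a single accuracy number does.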

Evaluating for Triumph

Assessing the success of any AI tool demands a nuanced approach that acknowledges the probabilistic nature of its outputs. Stakeholders involved in AI development and fine-tuning, especially from an R&D viewpoint, must recognize the challenges introduced by inherent uncertainty. Only through defining suitable probabilistic metrics, rigorous validation, and user-centric evaluations can the industry effectively navigate the dynamic landscape of artificial intelligence.

Navigating the Intersection of AI and Operational Technology (OT)

In recent times, the spotlight has been cast on artificial intelligence (AI), with particular emphasis on generative AI applications like ChatGPT and Bard. This surge in interest commenced around November 2022, triggering discussions and debates about the immense potential of AI as well as its ethical and practical implications. This article examines the growing dominance of AI, especially within operational technology (OT), shedding light on its impact, testing, and reliability.

The Phenomenon of Generative AI

Generative AI, or “gen AI,” has impressively ventured into diverse creative domains such as songwriting, image generation, and even email composition. However, along with its remarkable achievements come valid concerns about its ethical utilization and possible misuse. Introducing gen AI to the OT landscape raises profound inquiries about its potential consequences, methods of rigorous testing, and its safe and effective implementation.

Implications, Testing, and Trustworthiness in OT

Operational technology revolves around consistency and repetition, aiming to predict outcomes based on established input-output relationships. In this realm, human operators are ready to make swift decisions when unpredictability arises, especially in critical infrastructures. Unlike the relatively lesser consequences of errors in information technology, OT errors could result in loss of life, environmental harm, and extensive liability, amplifying the need for accurate crisis-time decisions.

AI relies on extensive data to make informed choices and formulate logic for appropriate responses. In OT, incorrect decisions by AI could lead to far-reaching negative effects and unresolved liability concerns. Addressing these issues, Microsoft has proposed a comprehensive framework for the public governance of AI, advocating for government-led safety frameworks and safety mechanisms in AI systems overseeing critical infrastructure.

Enhancing Resilience through Red Team and Blue Team Exercises

Drawing from the “red team” and “blue team” strategies originating in military contexts, cybersecurity experts collaboratively test and fortify systems. The red team simulates attacks to reveal vulnerabilities, while the blue team focuses on defense. These exercises offer valuable insights to bolster security.

Applying AI to these exercises could narrow the skill gap and mitigate resource limitations. AI may uncover hidden vulnerabilities or suggest alternative defense strategies, thereby illuminating new methods to safeguard production systems and enhance overall security.

Unveiling Potential with Digital Twins and AI

Leading organizations have embraced the concept of digital twins, creating virtual replicas of their OT environments for testing and optimization. These replicas allow for safe exploration of potential changes and optimizations, aided by AI-driven stress testing. However, the transition from the digital realm to the real world entails considerable risk, necessitating meticulous testing and risk management.

AI’s Role in SOC and Noise Mitigation

AI’s utilization extends to security operations centers (SOC), where it aids in anomaly detection and interpretation of rule sets. Leveraging AI in this context mitigates noise in alarm systems and asset visibility tools, enhancing operational efficiency and enabling staff to focus on priority tasks.
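A minimal illustration of the kind of noise filtering described here is a statistical outlier check over alert volumes: buckets whose counts deviate sharply from the baseline get surfaced, and the rest are suppressed. Real SOC tooling is far more sophisticated; the function below is purely illustrative.

```python
import statistics

def flag_anomalies(counts, threshold=3.0):
    """Flag time buckets whose alert count deviates from the mean by
    more than `threshold` standard deviations -- a simple noise filter
    of the kind an AI-assisted SOC pipeline might start from."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []  # perfectly flat signal: nothing anomalous
    return [i for i, c in enumerate(counts)
            if abs(c - mean) / stdev > threshold]
```

Filtering routine fluctuations this way is what frees analysts to concentrate on the handful of alerts that genuinely warrant investigation.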

Anticipating the AI-OT Convergence

As AI increasingly permeates information technology (IT), its influence on OT also grows. Instances like the Colonial Pipeline ransomware attack underscore the interconnectedness of these domains. To balance innovation and safety, AI adoption in OT should commence cautiously in lower-impact areas. This measured approach necessitates robust checks and internal testing.

Striking a Balance

While the potential of AI in enhancing efficiency and safety is undeniable, a balanced approach is paramount. Ensuring safety and reliability in the realm of OT is crucial as AI and machine learning continue to evolve. By embracing these technologies responsibly, the industry can harness their benefits while safeguarding against potential risks.

Opera’s iOS Browser Introduces AI Assistant Aria

Opera’s iOS web browser app is set to receive a boost with the integration of an AI assistant named Aria. In a recent announcement, Opera unveiled its collaboration with OpenAI, bringing Aria directly into the iOS web browser, providing this AI-powered product to all users at no cost.

Aria originally launched on Opera’s desktop version and Android browser, amassing over a million users. Now, with its inclusion in Opera for iOS, Aria boasts compatibility across all major platforms, encompassing Mac, Windows, Linux, Android, and iOS.

It’s worth noting that engaging with Aria is at the user’s discretion; there’s no compulsion to use the AI service. After opting in, Aria extends an array of intelligent insights, thoughtful suggestions, and responsive voice commands. To utilize Aria, users are required to log into their Opera accounts. For newcomers, creating an account directly from the app is an option.

Composer Foundation: Powering Aria’s AI Capabilities

The foundation of Aria lies in Opera’s proprietary “Composer” infrastructure, which seamlessly integrates with OpenAI’s GPT technology. This connectivity empowers Aria with the ability to access various AI models, setting the stage for future advancements in search and AI services. These expansions include delving deeper into generative AI, among other revelations that Opera plans to unveil in due course.

Functioning like other AI-driven search companions, Opera’s iOS version incorporates a chatbot-style interface through which users can pose queries and receive responses as an alternative to conventional web searches. Users can also interact with the AI by voice, without typing, by tapping the “more” menu in the far-right tab of the bottom navigation bar within the Opera iOS app.

Following the recent milestone of Aria achieving one million users, Opera highlighted the significant impact that AI integration had on various metrics. Lin Song, co-CEO of Opera, expressed satisfaction with both the initial adoption and the quality of engagement with Aria, noting an increase in overall time spent on the platform, accompanied by heightened search activity and pageviews per session.

Notably, Aria isn’t Opera’s inaugural foray into AI solutions. The company had previously introduced AI Prompts, a feature enabling users to swiftly initiate conversations with generative AI services. This facilitated the summarization or elucidation of articles, tweet generation, and requesting pertinent content based on highlighted text.

Opera’s iOS browser is accessible as a complimentary download and encompasses an array of beneficial attributes, such as built-in ad blocking, a VPN service, tracking prevention, a cryptocurrency wallet, private browsing capabilities, and more.

IBM Introduces Innovative Analog AI Chip That Works Like a Human Brain

IBM has taken the wraps off a groundbreaking analog AI chip prototype, designed to mimic the cognitive abilities of the human brain and excel at intricate computations across diverse deep neural network (DNN) tasks.

This novel chip’s potential extends beyond its capabilities. IBM asserts that this cutting-edge creation has the potential to revolutionize artificial intelligence, significantly enhancing its efficiency and diminishing the power drain it imposes on computers and smartphones.

Unveiling this technological marvel in a publication from IBM Research, the company states, “The fully integrated chip features 64 AIMC cores interconnected via an on-chip communication network. It also implements the digital activation functions and additional processing involved in individual convolutional layers and long short-term memory units.”

A Paradigm Shift in AI Computing

Fashioned at IBM’s Albany NanoTech Complex, the new analog AI chip comprises 64 analog in-memory compute cores. Drawing inspiration from the operational principles of neural networks within biological brains, IBM has incorporated compact, time-based analog-to-digital converters into every tile, or core. This design enables seamless transitions between the analog and digital domains.

Furthermore, each tile, or core, is equipped with lightweight digital processing units adept at executing uncomplicated nonlinear neuronal activation functions and scaling operations, as elaborated upon in an August 10 blog post by IBM.
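The multiply-accumulate that such an in-memory crossbar core performs can be modeled digitally in a few lines (a conceptual sketch of the physics, not IBM’s design): weights are stored as conductances, inputs arrive as voltages, and by Ohm’s and Kirchhoff’s laws each column’s output current is the voltage-conductance products summed in place, so the matrix-vector product happens where the weights live instead of shuttling data to a separate processor.

```python
def analog_mvm(conductances, voltages):
    """Model a crossbar's analog matrix-vector multiply: each column's
    output current is the sum over rows of voltage * conductance."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    return [sum(voltages[r] * conductances[r][c] for r in range(n_rows))
            for c in range(n_cols)]
```

Performing the product inside the memory array is precisely what sidesteps the memory-processor data shuffling that the following paragraphs identify as the bottleneck of conventional chips.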

A Potential Substitution for Existing Digital Chips

In the not-so-distant future, IBM’s prototype chip may very well take the place of the prevailing chips propelling resource-intensive AI applications in computers and mobile devices. Elucidating this perspective, the blog post continues, “A global digital processing unit is integrated into the middle of the chip that implements more complex operations that are critical for the execution of certain types of neural networks.”

As the market witnesses a surge in foundational models and generative AI tools, the efficacy and energy efficiency of conventional computing methods upon which these models rely are confronting their limits.

IBM has set its sights on bridging this gap. The company contends that many contemporary chips exhibit a segregation between their memory and processing components, consequently stymying computational speed. This dichotomy forces AI models to be stored within discrete memory locations, necessitating constant data shuffling between memory and processing units.

Drawing a parallel with traditional computers, Thanos Vasilopoulos, a researcher based at IBM’s Swiss research laboratory, underscores the potency of the human brain. He emphasizes that the human brain achieves remarkable performance while consuming minimal energy.

According to Vasilopoulos, the heightened energy efficiency of the IBM chip could usher in an era where “hefty and intricate workloads could be executed within energy-scarce or battery-constrained environments,” such as automobiles, mobile phones, and cameras.

He further envisions that cloud providers could leverage these chips to curtail energy expenditures and reduce their ecological footprint.

Updates to Microsoft Services Agreement Introduce AI Usage Restrictions

Microsoft’s updated Terms of Service, effective September 30, include new rules and limitations governing its AI offerings. The changes, published on July 30, add a section defining “AI Services” as “services designated, described, or identified by Microsoft as incorporating, utilizing, driven by, or constituting an Artificial Intelligence (‘AI’) system.”

The section lists five rules and restrictions for Microsoft AI services:

  1. Reverse Engineering. You may not use the AI services to discover any underlying components of the models, algorithms, and systems. For example, you may not try to determine and remove the weights of models.
  2. Extracting Data. Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services.
  3. Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.
  4. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
  5. Third party claims. You are solely responsible for responding to any third-party claims regarding Your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during Your use of the AI services).

Microsoft Services Agreement Updates Amidst AI-Focused Changes

The changes to the Microsoft Services Agreement come at a time when AI-related revisions to terms of service are drawing close scrutiny. A notable example is Zoom, the video conferencing and messaging provider, which faced substantial criticism over inconspicuous AI-related changes made to its Terms of Service (TOS) in March. The changes raised fresh questions about customer privacy, autonomy, and trust, with widely circulated reports noting that Zoom’s TOS appeared to allow the company to use customer data to train AI without offering an opt-out.

Zoom’s Evolving Position on TOS Amendments and AI

In response, Zoom issued a follow-up statement on its updated TOS and accompanying blog post. The statement affirmed that, following feedback, Zoom had revised its Terms of Service to make clear that it does not use customer content—such as audio, video, chat, screen sharing, attachments, and other interactive elements—to train either Zoom’s own AI models or those of third parties. The changes, Zoom said, were intended to improve transparency and give users clarity about its approach.

The New York Times’ Stance on AI-Related Terms of Service

Recently, The New York Times also revised its Terms of Service to prevent AI companies from scraping its content. The updated clause specifies that non-commercial use excludes employing its content to develop software programs or AI systems through machine learning, and it prohibits providing archived or cached data sets containing its content to third parties.

Clarification and Intent Behind The New York Times’ Updates

A representative for The New York Times Company told VentureBeat that the company’s terms of service have always prohibited using its content for AI training and development, and that the recent revisions were made in part to reinforce and make that existing prohibition explicit, in line with the company’s stance on responsible content use in the AI landscape.