
Microsoft Set to Unveil Its Latest AI Chip, Codenamed ‘Athena,’ Next Month

After years of development, Microsoft is on the cusp of revealing its highly anticipated AI chip, codenamed ‘Athena,’ at the upcoming annual ‘Ignite’ event next month. This unveiling marks a significant milestone for the tech giant, as it signals a potential shift away from its reliance on GPUs manufactured by NVIDIA, the dominant player in the semiconductor industry.

Microsoft has meticulously crafted its Athena chip to empower its data center servers, tailoring it specifically for training and running large-scale language models. The motivation behind this endeavor stems from the ever-increasing demand for NVIDIA chips to fuel AI systems. However, NVIDIA’s chips are notorious for being both scarce and expensive, with its most powerful AI offering, the H100 chip, commanding a hefty price tag of $40,000.

By venturing into in-house GPU production, Microsoft aims to curb costs and bolster its cloud computing service, Azure. Notably, Microsoft had been covertly working on Athena since 2019, coinciding with its $1 billion investment in OpenAI, the visionary organization behind ChatGPT. Over the years, Microsoft has allocated nearly $13 billion to support OpenAI, further deepening their collaboration.

Athena’s Arrival: Microsoft’s In-House AI Chip Ready for the Spotlight

Besides advancing its own AI aspirations, Microsoft’s chip could potentially aid OpenAI in addressing its own GPU requirements. OpenAI has recently expressed interest in developing its own AI chips, or potentially acquiring a chipmaker capable of crafting chips tailored to its unique needs.

This development holds promise for OpenAI, especially considering the colossal expenses associated with scaling ChatGPT. A Reuters report highlights that expanding ChatGPT to a tenth of Google’s search scale would necessitate an expenditure of approximately $48.1 billion for GPUs, along with an annual $16 billion investment in chips. Sam Altman, the CEO of OpenAI, has previously voiced concerns about GPU shortages affecting the functionality of his products.

To date, ChatGPT has relied on a fleet of 10,000 NVIDIA GPUs integrated into a Microsoft supercomputer. As ChatGPT transitions from being a free service to a commercial one, its demand for computational power is expected to skyrocket, requiring over 30,000 NVIDIA A100 GPUs.

Microsoft’s Athena: A Potential Game-Changer in the Semiconductor Race

The global chip supply shortage has only exacerbated the soaring prices of NVIDIA chips. In response, NVIDIA has announced the upcoming launch of the GH200 chip, featuring the same GPU as the H100 but with triple the memory capacity. Systems equipped with the GH200 are slated to debut in the second quarter of 2024.

Microsoft’s annual gathering of developers and IT professionals, ‘Ignite,’ sets the stage for this momentous revelation. The event, scheduled from November 14 to 17 in Seattle, promises to showcase vital updates across Microsoft’s product spectrum.

Enhancements to Microsoft Services Agreement Introduce AI Usage Restrictions

Microsoft’s updated Terms of Service, set to become effective on September 30, include new regulations and limitations governing their AI offerings. These alterations, which were made public on July 30, encompass a segment that outlines the concept of “AI Services.” This term is defined within the section as referring to “services designated, described, or identified by Microsoft as incorporating, utilizing, driven by, or constituting an Artificial Intelligence (‘AI’) system.”

The section homes in on five rules and restrictions for Microsoft AI services, saying:

  1. Reverse Engineering. You may not use the AI services to discover any underlying components of the models, algorithms, and systems. For example, you may not try to determine and remove the weights of models.
  2. Extracting Data. Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services.
  3. Limits on use of data from the AI Services. You may not use the AI services, or data from the AI services, to create, train, or improve (directly or indirectly) any other AI service.
  4. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.
  5. Third party claims. You are solely responsible for responding to any third-party claims regarding Your use of the AI services in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to content output during Your use of the AI services).

Microsoft Services Agreement Updates Amidst AI-Focused Changes

The alterations to the Microsoft Services Agreement arrive at a time when changes to terms of service concerning artificial intelligence (AI) are capturing significant attention. A notable instance involves Zoom, a provider of video conferencing and messaging services, which drew substantial criticism over inconspicuous AI-related adjustments made to its Terms of Service (TOS) in March. These modifications spurred fresh inquiries into matters of customer privacy, autonomy, and trust. Widely disseminated reports highlighted Zoom’s TOS changes, indicating the company’s capacity to employ user data for training AI without an available opt-out mechanism.

Zoom’s Evolving Position on TOS Amendments and AI

In response to these developments, Zoom has issued a subsequent statement pertaining to its updated TOS and corresponding blog post. The statement affirmed that, following feedback, Zoom has chosen to revise its Terms of Service to underscore that it refrains from utilizing user-generated content—such as audio, video, chat, screen sharing, attachments, and interactive elements—for the training of either Zoom’s proprietary AI models or those of third-party entities. Notably, the policy modifications were designed to enhance transparency and provide users with clarity regarding Zoom’s approach.

The New York Times’ Stance on AI-Related Terms of Service

Recently, The New York Times also revised its Terms of Service in an effort to forestall AI companies from scraping its content for various purposes. The updated clause explicitly delineates that non-commercial usage excludes actions like employing content to develop software programs or AI systems through machine learning. Furthermore, it prohibits the provision of archived or cached data sets containing content to external parties.

Clarification and Intent Behind The New York Times’ Updates

A representative from The New York Times Company communicated with VentureBeat, affirming that the company’s terms of service had consistently disallowed the use of their content for AI training and development. The recent revisions, in part, were undertaken to reinforce and make unequivocal this existing prohibition, aligning with the company’s stance on responsible content utilization in the AI landscape.

Box Partners with Microsoft 365 Copilot for Advanced AI Integration

Box, a leading provider of secure cloud content management solutions, is taking significant strides in its generative AI endeavors by announcing a new collaboration with Microsoft 365 Copilot.

This integration marks a significant expansion of Box’s efforts to leverage AI for empowering enterprise users in understanding and maximizing the value of their content within the Box platform. Earlier, the company had unveiled its Box AI initiative, which seamlessly integrates gen AI into the Box user experience to enable data querying and summarization.

Seamless Integration with Microsoft 365 Copilot

With the new plugin, Box is extending the reach of its AI capabilities by allowing organizations to incorporate Microsoft 365 Copilot into their Box content. Microsoft 365 Copilot is another remarkable gen AI technology that facilitates content creation and querying across Word, Excel, PowerPoint, and Teams for Microsoft 365 users.

Aaron Levie, the CEO and co-founder of Box, explained that this partnership with Microsoft 365 Copilot is their first venture into this domain. The plugin enables users to seamlessly access their Box documents and make queries related to their Box content from within Microsoft Copilot.

Considering the vast scope of Box’s customer base, which includes over 110,000 enterprise clients and tens of millions of individual users, the ability to enable Box content for Microsoft 365 Copilot holds significant applicability. Since many of these customers are likely to be using Microsoft tools extensively, this integration streamlines their workflow and enhances productivity.

Future Prospects of Federated AI

As AI becomes pervasive in the enterprise environment, a potential challenge that companies will face is determining which AI solutions to use. Both Box AI and Microsoft 365 Copilot share similar goals of empowering users to query, summarize, and generate content, but their primary operating environments differ.

Levie highlighted that in an AI-dominated software landscape, where various applications integrate AI functionalities, interactions between different AI systems are inevitable. He drew a parallel with Salesforce Einstein and ServiceNow, anticipating similar scenarios in the future.

While the exact path to a unified solution remains uncertain, Levie believes that federated AI across multiple providers might be a possibility. Regardless of how the future unfolds, Box aims to provide exceptional value propositions within the Box ecosystem and ensure seamless data utilization for users working with different software applications.

Overall, the collaborative efforts of Box and Microsoft 365 Copilot signify an exciting era in the AI landscape, where user expectations and integration possibilities will shape the future of AI-driven productivity. Box is determined to remain at the forefront of this revolution, delivering seamless experiences and maximizing data potential for its customers, regardless of the software they use.

Introducing TypeChat: Microsoft’s Library for Creating Natural Language Interfaces

Microsoft has introduced TypeChat, an innovative library aimed at simplifying the development of natural language interfaces for large language models (LLMs) using types. This open source library, available on GitHub, leverages TypeScript and generative AI to connect natural language, application schema, and APIs seamlessly. By utilizing type definitions in your application, TypeChat enables the retrieval of structured AI responses that are type-safe.

Prior to the release of TypeChat, developers faced challenges in creating natural language interfaces for applications, often relying on complex decision trees to determine user intent and gather the necessary inputs to trigger actions. However, with TypeChat, this process becomes more streamlined. Instead of prompt engineering, TypeChat adopts schema engineering, allowing developers to define types that represent the supported intents in their natural language application. These types can be as simple as sentiment categorization or more sophisticated, such as types tailored for a shopping cart or a music application.

Once the developer has defined the types, TypeChat constructs a prompt for the LLM using those types and verifies that the LLM’s response adheres to the specified schema. If the output does not conform, TypeChat interacts further with the language model to repair it. TypeChat then summarizes the resulting instance and confirms that it aligns with the user’s intent.
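The schema-engineering pattern can be sketched in a few lines of TypeScript. Note that this is an illustrative sketch, not TypeChat’s actual API: the `SentimentResponse` type and the `validateSentiment` helper below are hypothetical stand-ins for the schema validation TypeChat performs internally.

```typescript
// The developer defines a type describing the intents the app supports.
type SentimentResponse = {
  sentiment: "negative" | "neutral" | "positive";
};

// A minimal runtime check standing in for TypeChat's schema validation:
// the LLM's JSON output is parsed and checked against the declared shape.
function validateSentiment(json: string): SentimentResponse {
  const obj = JSON.parse(json);
  if (
    typeof obj === "object" &&
    obj !== null &&
    ["negative", "neutral", "positive"].includes(obj.sentiment)
  ) {
    return obj as SentimentResponse;
  }
  // In TypeChat, a non-conforming response triggers a repair round-trip
  // with the language model; here we simply reject it.
  throw new Error("Response does not match the SentimentResponse schema");
}

// A well-formed model response passes validation and is fully typed.
const ok = validateSentiment('{"sentiment": "positive"}');
console.log(ok.sentiment); // "positive"
```

The payoff is that downstream application code only ever sees values that already conform to the declared type, rather than raw model text.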

To make it easy for developers to use TypeChat, it can be installed through NPM:

```bash
npm install typechat
```

Additionally, developers have the option to build TypeChat from the source:

```bash
npm run build
```

The creators of TypeChat emphasized its significance in the context of the recent enthusiasm surrounding LLMs, particularly in chat assistants. However, they acknowledged that integrating these models into existing app interfaces raised questions for developers. TypeChat is specifically designed to address these concerns, enabling the augmentation of traditional user interfaces with natural language interfaces and facilitating the conversion of user requests into a format that applications can process using AI.

In summary, TypeChat by Microsoft is an open source library that leverages TypeScript and generative AI to enable the straightforward development of natural language interfaces for large language models. By utilizing types, developers can streamline the process of integrating AI into their applications and enhance user experiences through seamless natural language interactions.

Microsoft Unveils New AI-Powered Shopping Tools in Bing and Edge for Enhanced User Experience

Microsoft has introduced an array of innovative AI-driven shopping tools for its revamped Bing search engine and the Bing AI chatbot integrated into the Edge sidebar. Unlike previous shopping features incorporated into Edge, which failed to garner much enthusiasm, this fresh set of tools offers practical functionality.

One notable addition is Bing’s GPT-powered ability to generate buying guides. For search queries like “college supplies,” Bing will automatically curate products within each category, provide detailed specifications for easy item comparison, and guide users to the appropriate purchasing locations. Microsoft will earn affiliate fees for each successful purchase.

Considering the abundance of websites dedicated to producing buying guides, it remains to be seen how they will respond to this development. If Microsoft implements this in Bing, it is likely that other search engines, including Google, will follow suit. While users won’t lament the departure of low-quality, SEO-optimized shopping content often encountered during product comparisons, legitimate editorial operations may face some challenges.

The buying guides are currently available in the United States, with a global rollout planned for the Edge platform.

Another worldwide feature introduced by Microsoft is AI-generated review summaries. By simply requesting Bing Chat in Edge to summarize product reviews, users can obtain concise overviews of what people are saying about a particular item.

Furthermore, Microsoft introduces Price Match, a tool designed to facilitate price match requests from retailers even after a price drop occurs. Microsoft has already partnered with leading U.S. retailers that have existing price match policies, with plans to collaborate with additional retailers in the future. Specific retailer names were not disclosed.

Overall, Microsoft’s latest enhancements aim to provide users with an enhanced shopping experience, leveraging AI technologies to simplify product research, review analysis, and price matching.

Microsoft Unveils Advanced AI Fabric for Moody’s Utilizing Generative Technology

Moody’s, a renowned global player in financial risk assessment, has joined forces with Microsoft to integrate generative AI into its enterprise operations. The collaboration will leverage the Microsoft Azure OpenAI service to unlock research information and enhance risk assessment capabilities at Moody’s. One of the initial deployments will be Moody’s CoPilot, an internal tool that empowers the company’s 14,000 employees worldwide to easily access and query data and research using large language models (LLMs).

In addition to AI advancements, Moody’s is embracing the Microsoft Fabric data management platform, introduced recently, to streamline data management for AI and analytics purposes.

Nick Reed, Chief Product Officer at Moody’s, emphasized the benefits of the new generative AI tools, stating, “Users will leverage the technology to access tailored risk data and insights drawn from across Moody’s vast body of risk data, analytics, and research.”

Moody’s decision to adopt generative AI aligns with a growing trend in various industries, including financial services. Just last month, JPMorgan revealed plans for an investment service similar to ChatGPT.

Reed explained that Moody’s has already integrated traditional AI technologies into its solutions to scale and accelerate informed decision-making in risk assessment. However, the evaluation of generative AI became necessary when the rapid advancements in the field indicated that it could further harness the power of Moody’s proprietary data, analytics, and research to deliver new value and opportunities to customers.

By combining their extensive knowledge and opinions with Microsoft’s generative AI technology, Moody’s CoPilot aims to seamlessly merge insights from different risk areas, such as credit risk, ESG exposure, and supply chain management. The goal is to eliminate existing silos and provide users with comprehensive risk information and insights.

Compliance, security, and enterprise AI are vital considerations in Moody’s partnership with Microsoft. Bill Borden, Corporate VP of Worldwide Financial Services at Microsoft, highlighted the importance of integrating generative AI with existing processes while meeting the strict security and compliance requirements of Moody’s. Microsoft’s established foundation in these areas positions it well to support financial service firms in their digital transformation journey, with a deep understanding of global regulations and robust controls and governance models.

In addition to leveraging the Microsoft Azure OpenAI service, Moody’s is utilizing the newly announced Microsoft Fabric data technology. Fabric enables Moody’s users to simplify data viewing and analysis by consolidating multiple data sources. With a wide range of proprietary risk data in areas such as credit, ESG, commercial real estate, and supply chain, Moody’s is exploring various use cases of Fabric to optimize its data strategies.

Microsoft’s Fabric serves as an integrated platform for horizontal data capabilities, providing customers with enhanced data management, governance, data cataloging, and valuable insights to support their data strategies in industries like banking, capital markets, and insurance.

OpenAI’s ChatGPT App Employs Bing for Web Searches

OpenAI has announced the addition of a new feature called Browsing to its premium chatbot, ChatGPT Plus. Subscribers can now utilize the Browsing feature on the ChatGPT app, enabling the chatbot to search Bing for answers to questions.

To activate Browsing, users can navigate to the New Features section in the app settings, select “GPT-4” in the model switcher, and choose “Browse with Bing” from the drop-down menu. The Browsing functionality is available on both the iOS and Android versions of the ChatGPT app.

OpenAI highlights that Browsing is particularly useful for inquiries related to current events and other information that extends beyond ChatGPT’s original training data. By enabling Browsing, users can access more up-to-date information, as ChatGPT’s knowledge base is limited to information available up until 2021 when Browsing is disabled.

While the introduction of Browsing enhances ChatGPT’s capabilities and makes it a more valuable research assistant, there are concerns about the restriction to Bing as the sole search engine. OpenAI’s close partnership with Microsoft, which has invested over $10 billion in the startup, likely plays a role in this choice. However, Bing is not regarded as the definitive search engine, and past analyses have raised questions about its fairness and the presence of disinformation in search results.

Users may see the limitation to Bing as a user-hostile move, since they have no alternatives to choose from when Bing’s search results fall short. Although Microsoft continues to refine Bing’s algorithms, the lack of diversity in search options raises concerns about access to unbiased and comprehensive information.

In other news related to the ChatGPT app, OpenAI has also implemented a feature that allows users to directly access specific points in the conversation by tapping on search results. This improvement, alongside the introduction of Browsing, will be rolled out this week, according to OpenAI.

Microsoft Unveils the World’s First Analog Optical Computer to Solve Optimization Problems

Microsoft Research Lab in Cambridge has unveiled the world’s first analog optical computer which promises to solve optimization problems at a lightning-fast pace, a press release said. The computer uses photons and electrons to process continuous value data instead of crunching them to binary bits using transistors.

Optimization problems are all around us, whether one considers managing electricity on the grid or delivering goods from a seller’s warehouse to your doorstep. Optimizing means using the fewest resources to maximize the returns of a process. However, even the world’s fastest computers can spend years solving such problems once the problem size grows.

The Traveling Salesman Problem is a classic example. It involves finding an optimal route that visits a set of cities exactly once before returning to the starting point. For five cities, there are only 12 possible routes. However, as the number of cities grows, the number of potential routes grows factorially, quickly making exhaustive computation infeasible.
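The blow-up is easy to quantify: fixing the start city and ignoring travel direction, n cities admit (n - 1)! / 2 distinct round-trip tours. A short TypeScript sketch (the `tourCount` helper is ours, for illustration):

```typescript
// Number of distinct round-trip tours for n cities: fix the start city
// and divide by 2 for direction, giving (n - 1)! / 2.
function tourCount(n: number): number {
  let f = 1;
  for (let i = 2; i < n; i++) f *= i; // accumulates (n - 1)!
  return f / 2;
}

console.log(tourCount(5));  // 12
console.log(tourCount(10)); // 181440
console.log(tourCount(20)); // ~6.08e16, already infeasible to enumerate
```

Going from 5 cities to 20 takes the search space from a dozen routes to tens of quadrillions, which is why exact enumeration stops being an option so quickly.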

The Analog Iterative Machine

Researchers have used heuristic algorithms, which can provide approximate solutions to such problems. However, even when paired with custom hardware, this approach has not yielded a practical alternative to conventional computers, which are limited by their binary abstraction of problems.

[Figure: Illustration of how AIM works. Credit: Microsoft]

The research team at Microsoft suggests a more expressive abstraction that allows the use of mixed variables, both binary and continuous, to solve optimization problems. The team achieved this using an analog optical computer they call the Analog Iterative Machine (AIM).

The team leverages the fact that photons do not interact with one another, but do interact with the matter through which they travel, to perform simple mathematical operations like addition and multiplication.

By constructing a physical system that uses optics and electronics to perform vector-matrix multiplications, the team has found a way to efficiently and swiftly execute calculations needed to find solutions to optimization problems.
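As a rough illustration of why fast vector-matrix multiplication matters here, the sketch below minimizes a small quadratic objective by repeated matrix-vector products, the core operation AIM performs optically. This is our simplified stand-in, not Microsoft’s published AIM algorithm; the matrix `Q`, vector `b`, step size, and iteration count are made-up example values.

```typescript
// Minimize f(x) = (1/2) x^T Q x - b^T x by gradient descent.
// Each step is dominated by one vector-matrix multiplication,
// the operation an optical system can perform in a single pass.

const Q = [
  [2, 0.5],
  [0.5, 1],
];
const b = [1, 1];
const step = 0.1;

function matVec(M: number[][], v: number[]): number[] {
  return M.map(row => row.reduce((s, m, j) => s + m * v[j], 0));
}

let x = [0, 0];
for (let iter = 0; iter < 200; iter++) {
  const grad = matVec(Q, x).map((g, i) => g - b[i]); // gradient = Qx - b
  x = x.map((xi, i) => xi - step * grad[i]);
}

// x converges toward the solution of Qx = b.
console.log(x.map(v => v.toFixed(3)));
```

Digitally, each multiplication costs time proportional to the matrix size; performing the same product with light is what lets an analog machine iterate toward a solution far faster than a step-by-step binary computer.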

Further, the components of this system have been miniaturized to fit tiny centimeter-scale chips, making the AIM no bigger than a rack enclosure.

[Figure: Centimeter-sized chips mean AIM is no bigger than a rack enclosure. Credit: Microsoft]

Real-world applications

Last year, the company built the first-generation AIM computer, which delivered an accuracy of up to seven bits. Now, in an attempt to test it in the real world, Microsoft has teamed up with Barclays, a UK-based bank, to test applications in financial markets.

Interbank transactions are settled at clearing houses, which process hundreds of thousands of transactions on a daily basis. As banking transactions scale, the settlements take increasingly longer to complete, which makes them a real-world optimization problem.

The Microsoft team has already applied a basic version of AIM to the settlement problem and has solved it accurately in tests so far. The team is now working to scale up the computer to handle a larger number of variables and more data.

Microsoft believes that an optical computer can address the two major issues with silicon-based computing: first, the diminishing returns of Moore’s Law, as gains in computing capacity per dollar have slowed in recent years; and second, the limitations of computing in binary.

An optical computer could literally open a spectrum of options for researchers while also reducing resources spent on performing complex calculations.

Microsoft Has Completed The First Step To Building A Quantum Supercomputer

Microsoft is leading the race in artificial intelligence (AI) models and has also set its eye on the future of computing. In an announcement made on Wednesday, the Redmond, Washington-headquartered company unveiled a roadmap where it plans to build a quantum supercomputer in the next 10 years.

Quantum computing has been in the news in recent weeks for beating supercomputers at complex math and computing at speeds far beyond conventional machines. Scientists have acknowledged, however, that these achievements relied on noisy physical qubits, which are not error-free.

Microsoft refers to today’s quantum computers as those belonging to the foundational level. According to the software giant, these computers need upgrades in the underlying technology, much like early computing machines did as they moved from vacuum tubes to transistors and then to integrated circuits before taking their current form.

Logical qubits

In its roadmap, Microsoft suggests that as an industry, quantum computing needs to move on from noisy physical qubits to reliable logical qubits since the former cannot reliably run scaled applications.

Microsoft suggests bundling hundreds to thousands of physical qubits into one logical qubit to increase redundancy and reduce error rates. Since qubits are prone to interference from their environment, efforts must be made to increase their stability, which will aid in increasing their reliability.
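The intuition behind bundling can be illustrated with a classical repetition code: a logical bit encoded in n noisy copies fails only when a majority of them flip, so the logical error rate falls off sharply as n grows. Real quantum error correction is far more involved than this; the TypeScript sketch below is only an analogy, and the per-qubit error probability is an assumed example value.

```typescript
// Logical error rate of an n-copy repetition code with majority voting:
// the probability that more than half of n independent bits flip,
// when each flips with probability p. This classical sketch only
// illustrates why bundling many noisy physical qubits can yield one
// far more reliable logical qubit.

function binomial(n: number, k: number): number {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - i + 1)) / i;
  return r;
}

function logicalErrorRate(n: number, p: number): number {
  let total = 0;
  for (let k = Math.floor(n / 2) + 1; k <= n; k++) {
    total += binomial(n, k) * p ** k * (1 - p) ** (n - k);
  }
  return total;
}

const p = 0.001; // assumed per-physical-qubit error probability
console.log(logicalErrorRate(1, p));   // 0.001 (no redundancy)
console.log(logicalErrorRate(11, p));  // dramatically smaller
console.log(logicalErrorRate(101, p)); // effectively zero
```

The design trade-off is the one the roadmap describes: each logical qubit consumes hundreds to thousands of physical qubits, but in exchange its error rate can be driven down by orders of magnitude.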

Reliable logical qubits can then be scaled up to tackle complex problems that urgently need solving. However, since there has been no measure of how reliable quantum computations are, the company has proposed a new metric, reliable Quantum Operations Per Second (rQOPS), to quantify this.

Microsoft claims that the Majorana-based qubit announced last year is highly stable but also difficult to create. The company has published its progress in the peer-reviewed journal Physical Review B.

Platform to accelerate discovery

[Figure: When quantum computing will reach the supercomputer stage. Credit: Microsoft]

Microsoft estimates that the first quantum supercomputer will need to deliver at least one million rQOPS with an error rate of 10⁻¹², or one error in every trillion operations, to provide valuable input for solving scientific problems. However, today’s quantum computers deliver an rQOPS value of zero, meaning the industry as a whole has a long way to go before we see the first quantum supercomputer.

Instead of decades, Microsoft wants to build this supercomputer in a matter of years and has now launched its Azure Quantum Elements platform to accelerate scientific discovery. The platform will enable organizations to leverage the latest breakthroughs in high-performance computing (HPC), AI, and quantum computing to make advances in chemistry and material science to build the next generation of quantum computers.

The company is also extending its Copilot services to Azure Quantum, where researchers will be able to use natural language processing to solve complex problems of chemistry and materials science. Copilot can help researchers query quantum computers and visualize data using an integrated browser.

Microsoft’s competitors in this space are Google and IBM, which have also unveiled their quantum capabilities.

Microsoft Adds AI Voice Chat to Bing on Desktop

Microsoft has expanded its voice capabilities by introducing voice support for Bing Chat on desktop. Users can now interact with the search engine’s chatbot on Edge for PCs, utilizing OpenAI’s GPT-4 technology. This feature was initially available for Bing’s AI chatbot on mobile apps and has now been extended to desktop users. By simply tapping on the microphone icon in the Bing Chat box, users can engage in voice conversations with the AI-powered bot.

In its latest Bing preview release notes, Microsoft acknowledged the popularity of voice input for chat on mobile devices and highlighted the addition of voice support to the desktop version. Currently, the feature supports English, Japanese, French, German, and Mandarin, with plans to expand language support in the future. Users can now ask Bing questions verbally and receive text-to-speech responses from the chatbot, which can also answer questions using its own voice. For instance, Microsoft suggested asking Bing Chat, “What’s the toughest tongue twister you know?” and receiving a spoken response.

The introduction of voice support for Bing Chat on desktop comes shortly after Microsoft’s announcement about discontinuing the standalone Cortana app for Windows, which functions as a voice assistant. Microsoft emphasized that users will still have access to powerful AI capabilities in Windows and Edge, mentioning Bing Chat and Microsoft 365 Copilot as examples. Bing Chat, combined with AI capabilities, provides users with voice interaction and productivity features, while Microsoft 365 Copilot utilizes artificial intelligence to generate content within the company’s applications.

With voice support now available on Bing Chat for desktop, users can enjoy a more interactive and convenient search experience, enabling them to converse with the AI chatbot using voice input and receive spoken responses.