In a significant development, Microsoft has announced that it will provide government agencies in the U.S. with access to OpenAI’s artificial intelligence (AI) models, including the highly regarded GPT-4 and its predecessor. This move aims to empower government offices by harnessing the capabilities of AI and leveraging the benefits of Azure OpenAI services.
At the forefront of Microsoft's new Bing search engine, OpenAI's GPT-4 has proven to be a formidable force, capturing the attention of companies seeking to optimize their data utilization through AI-driven insights. With over 4,500 customers already benefiting from Azure OpenAI services since its launch in January, major corporations such as Mercedes, Volvo, Ikea, and Shell have embraced this technology to enhance employee productivity and data analysis.
While private companies have eagerly adopted AI to revolutionize their operations, government agencies have often lagged behind in adopting these transformative technologies. However, Microsoft’s latest offering breaks down those barriers, extending the opportunity for government offices to leverage powerful AI models effectively.
By integrating OpenAI’s AI models into their operations, government agencies can tap into the immense potential of AI, unlocking opportunities for improved decision-making, enhanced efficiency, and increased productivity. The availability of Azure OpenAI services to government entities signifies a pivotal step toward enabling data-driven insights and advanced analysis in the public sector.
Through this collaboration, government agencies will gain the means to harness the capabilities of OpenAI’s cutting-edge AI models, enabling them to derive valuable insights from vast amounts of data and make informed decisions. This technological leap promises to propel government operations into a new era of efficiency and effectiveness, ultimately benefiting the public at large.
With the convergence of Microsoft’s Azure OpenAI services and OpenAI’s powerful AI models, government agencies now have a transformative tool at their disposal. This development marks an important milestone in bridging the gap between private and public sectors in the adoption of AI, paving the way for innovation and progress in government operations.
What is Microsoft offering?
Microsoft will allow government agencies to access GPT-4, GPT-3, and Embeddings from OpenAI through the Azure OpenAI service. Embeddings measure the relatedness of text strings and are useful in operations such as search, clustering, anomaly detection, and classification, according to OpenAI's website.
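Relatedness between embeddings is usually scored with cosine similarity. A minimal sketch in pure Python, using invented toy vectors rather than real model outputs:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot(a, b) / (|a| * |b|); values near 1.0 mean "more related".
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real models return vectors with
# hundreds or thousands of dimensions).
cat = [0.9, 0.1, 0.0, 0.2]
kitten = [0.85, 0.15, 0.05, 0.25]
invoice = [0.0, 0.8, 0.6, 0.1]

# The semantically related pair scores higher than the unrelated one.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # prints True
```

Search, clustering, and classification all reduce to comparisons like this one, run over many stored vectors.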
These services are aimed at helping government agencies “improve efficiency, enhance productivity and unlock new insights from their data,” Microsoft wrote in a blog post. Users of the service can use the REST API, the Python SDK, or the web-based interface in Azure AI Studio to adapt AI models to specific tasks.
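For a rough sense of what the REST route looks like, here is a sketch of a chat-completion request body. The resource name, deployment name, and API version below are placeholders, and the exact endpoint for Azure Government may differ; the snippet only builds the request, it does not send it:

```python
import json

# Hypothetical values: your resource name, deployment name, and API version
# come from your own Azure OpenAI (or Azure Government) configuration.
resource = "my-agency-resource"
deployment = "gpt-4"
api_version = "2023-05-15"

url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/chat/completions?api-version={api_version}"
)

payload = {
    "messages": [
        {"role": "system", "content": "You summarize incident reports."},
        {"role": "user", "content": "Summarize the attached log excerpt."},
    ],
    "temperature": 0.2,
}

# A real call would POST `payload` to `url` with an `api-key` header;
# here we only show the request shape.
print(json.dumps(payload, indent=2))
```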
Using the service is expected to help government agencies accelerate content generation, reduce the time and effort required for research and analysis, generate summaries of logs, and rapidly analyze long reports while also facilitating enhanced information discovery, a Microsoft blog post stated.
Users will also be able to build custom applications to query data models and generate code documentation, processes which have historically been very time-consuming.
Ensuring the security of government data
Since most government agencies handle sensitive information that requires a high level of security, Microsoft will provide these services through Azure Government, which uses stringent security and compliance standards.
Government agencies will use the AI services on the Azure Government network, which peers directly with the commercial Azure network over Microsoft's own backbone. Through this architecture, Microsoft guarantees that government applications and data environments remain on Azure Government.
Additionally, Microsoft encrypts all Azure traffic using the AES-128 block cipher and ensures that the traffic stays within Microsoft's networks and never crosses the public internet. Microsoft also clarified in the blog post that it will not use government data to train or improve the AI models.
Specifically, Azure Government users will not have access to ChatGPT, the conversational chatbot commonly accessed by users on the internet, a Microsoft spokesperson confirmed to Bloomberg.
This should put to rest concerns about government or individual data being accidentally released to the public through a misstep by a state or federal employee, as happened at Samsung.
Intel showcased details of its upcoming Meteor Lake line of PC processors during Microsoft’s Build 2023 conference. With a “chiplet” system-on-chip (SoC) design, Intel aims to deliver advanced intellectual properties (IPs) and segment-specific performance while keeping power consumption low. Meteor Lake will introduce Intel’s first PC platform with a built-in neural VPU, enabling efficient AI model execution.
The integrated VPU will collaborate with existing AI accelerators on the CPU and GPU, allowing for accessible and impactful AI features for PC users. Intel asserts that its product is at the forefront of the AI trend, positioning Meteor Lake as a key player.
At Computex, Intel disclosed that the VPU in Meteor Lake is derived from Movidius’s third-generation Vision Processing Unit (VPU) design. By leveraging this 2016 acquisition, Intel aims to establish itself as an AI market leader. Although specific performance figures and VPU specifications have not been revealed, it is anticipated that Intel’s VPU will surpass Movidius’s previous throughput rating of 1 TOPS (tera operations per second).
As the VPU is integrated into the SoC, AI capabilities will be a standard feature across all Meteor Lake SKUs, rather than a differentiating factor. Intel seeks to achieve similar energy efficiency levels as smartphone SoCs, enabling tasks like dynamic noise suppression and background blur.
Collaborating closely with Microsoft, Intel aims to scale Meteor Lake and Windows 11 across the ecosystem. Through partnerships and leveraging the ONNX Runtime—an open-source library for deploying machine learning models—Intel plans to optimize AI model execution on the Windows platform.
Intel envisions shifting server-based AI workloads to client devices, offering benefits such as reduced costs, lower latency, and enhanced privacy. By pursuing this vision, Intel aims to gain a competitive advantage in the market.
In the race to extend their AI-powered app ecosystems, Microsoft recently made an announcement at Build that highlighted their plans to expand Copilot applications and adopt a standardized approach for plugins. This standard, introduced by their partner OpenAI for ChatGPT, enables developers to create plugins that seamlessly interact with APIs from various software and services. The expansion encompasses ChatGPT, Bing Chat, Dynamics 365 Copilot, Microsoft 365 Copilot, and the new Windows Copilot.
However, experts caution that this endeavor poses significant challenges for Microsoft. Google, during its I/O event, revealed plans to make Bard compatible with additional apps and services, both from Google itself (such as Docs, Drive, Gmail, and Maps) and from third-party partners like Adobe Firefly.
“When it comes to APIs, as opposed to hardware-dependent applications or apps, establishing a dominant position becomes much more difficult,” noted Whit Andrews, Vice President and Distinguished Analyst at Gartner Research, in an interview with VentureBeat. He further explained that if other companies develop APIs that are equally capable, the switching cost for users becomes less significant.
The competition between Microsoft and Google in the AI app ecosystem is poised to intensify as they vie for developer adoption and user loyalty. The ability to seamlessly integrate with a wide range of apps and services will play a crucial role in shaping the success of these platforms. As the battle unfolds, it will be intriguing to witness how developers and users embrace these AI-powered ecosystems and the unique advantages they bring to the table.
Microsoft is enjoying a head start
Andrews emphasized that Microsoft certainly has a head start and three key advantages.
First, Microsoft has an “extraordinary” first-mover advantage as OpenAI’s partner. “So the more they can establish familiarity and appeal, the more they can generate a defensible value,” he said.
In addition, without a moat, brand strength will also be an important driver, he explained. “With the intense value of Microsoft’s brand, that’s why things have to move so fast for Microsoft to have the best possible outcome.”
Finally, Microsoft, with its tremendous developer community, has the opportunity to grab market share and familiarity. “Microsoft attracts developers better than anybody else,” said Andrews. “So if you’re Microsoft, you lean on that this week [at Build]. Can you present your developers, your faithful, with the opportunities to participate in this extraordinary AI world that they will find attractive and familiar?” Microsoft needs to be synonymous in the developer’s mind with access to easy artificial intelligence-powered functionality, he added: “That means growth needs to be explosive — every developer in the Microsoft family needs to say to themselves, ‘I’ll start by looking there.’”
‘An impressive, all-out assault’ has limits
According to Matt Turck, a VC at FirstMark, Microsoft’s AI app ecosystem and plugin framework is an “impressive, all-out assault by Microsoft to be top of mind for developers around the world who want to build with AI.”
Microsoft is certainly pushing hard to lead the space and reap ROI on its multi-billion dollar investment in OpenAI, Turck told VentureBeat. But he said it “remains to be seen whether the world is ready to live in a Microsoft-dominated AI world” and suspects there will be “stiff resistance,” particularly on the enterprise side — where many want to leverage open source and multi-agents for customization, and will also want to protect their data from going out to a cloud provider (in this case, Azure).
Andrews agreed that it’s too early to know whether Microsoft will prevail — or if the AI app and plugin ecosystem will even flourish. “For lots of consumer users, ChatGPT is pretty amazing for what it does right now, and there might be problems with plugins that conflict with each other, things might begin to get a little challenging. The value of a plugin demands education, explanation and usage.”
Harder to implement effective controls and safeguards
Other experts point out that the growth of the app ecosystem will make it even harder to develop effective controls and safeguards in an era when AI regulation is becoming a top priority.
“The main concern in my mind is a distribution of accountability between the third parties and the entity that provides the source LLM,” Suresh Venkatasubramanian, professor of computer science at Brown University and former White House policy advisor, told VentureBeat in a message.
While he said there is also an opportunity if the companies providing the LLM service are willing and able to establish more controls, “I don’t see that happening any time soon. To me, this continues to reinforce the importance of guardrails ‘at the point of impact’ where people are affected.”
During its Build 2023 conference, Microsoft announced a significant expansion of features for its Power Platform, aimed at revolutionizing app development and empowering both professional and citizen developers with low-code technologies.
Recognizing the growing adoption of low-code tools among professional developers, Microsoft acknowledges their ability to streamline and expedite application development while reducing the complexities associated with traditional software development.
To leverage this trend, Microsoft has introduced updates to its comprehensive low-code Power Platform, enabling developers to boost productivity and accelerate solution-building. The Power Platform equips developers with tools for creating custom applications, automating processes, and effectively managing information across their organizations.
Low-code meets cutting-edge AI
Microsoft’s latest updates introduce cutting-edge AI capabilities designed to improve development practices and reshape how businesses and individuals operate. The enhancements include intuitive visual interfaces, drag-and-drop components and pre-built templates. The aim is to enable developers to concentrate on higher-level tasks rather than the complexities of coding.
The company said developers will also gain increased agility so they can swiftly adapt to evolving market conditions and meet customers’ ever-changing needs.
“Our new features for the Power Platform will transform how developers work by increasing their speed, agility, and ability to deliver solutions in complex technical environments. The platform provides comprehensive low-code tools that increase [developers’] productivity through over 1,000 built-in data connectors, robust platform components, and easy-to-use development environments,” Charles Lamanna, CVP of business applications and platform at Microsoft, told VentureBeat. “The AI-powered Power Platform Copilot is further accelerating low-code development in a really interesting way — low code makes developers go faster, but AI plus low code makes devs go even faster.”
The introduction of Copilot in Power Apps stands out among the new features, as it streamlines and speeds website development via generative AI in Power Pages.
In addition, with Copilot acting as a digital copy editor, developers can rapidly generate text from natural language descriptions, facilitating efficient content creation.
Moreover, Copilot in Power Pages simplifies the creation of data-centric forms by automatically generating tables in Microsoft Dataverse based on natural language input. This innovative tool further assists in web page layout generation, image creation and theme customization, enabling development of visually appealing websites.
An additional advantage of Copilot in Power Pages is its seamless integration of a Power Virtual Agents chatbot into the Power Pages site. The chatbot can provide instantaneous responses to user queries via generative AI. This integration streamlines website creation and enhances the user experience with natural language and intelligent suggestions.
“Copilot in Power Apps can now build complex multi-screen apps and turn unstructured inputs into structured data,” said Microsoft’s Lamanna. “Developers can now move faster than ever by describing the user experience they want, and AI automatically generates it for them. Or, developers can provide sample data for an application and have AI automatically generate the data schema and app. This will help them spend more time writing code and less dealing with enterprise app development’s drudgery.”
With Gartner’s projection that 75% of new enterprise applications will embrace low-code or no-code technologies by 2026, Microsoft emphasized that the Power Platform, coupled with the integration of AI capabilities through Copilot, empowers developers to improve productivity and expedite the creation of innovative solutions.
Generative AI for fast-tracking application development
The company said that in the past three months, it has dedicated its efforts to introducing AI capabilities that revolutionize development practices and redefine how businesses and individuals operate. At Build 2023, the company emphasized its commitment to driving innovation and pushing boundaries in the tech industry.
Power Platform’s managed environments now support a catalog of enterprise-approved templates and components to further enhance developers’ productivity. Professional developers can use this feature to publish artifacts such as apps, flows and bots, streamlining the application development process.
Microsoft asserts that each new component added to the catalog brings about time and cost reductions for the entire organization, promoting efficiency and effectiveness.
The company said these advancements enable IT administrators to govern and maintain their app ecosystems effectively, minimizing duplication, fostering collaboration between makers and developers, and establishing a cohesive environment where components can be shared organization-wide. Consequently, the app-building process is expedited, while an audit trail ensures accountability and transparency.
Improvements to Copilot
Microsoft has also significantly improved Copilot, a generative AI tool within Power Apps that greatly accelerates application development. It enables developers to use natural language to easily incorporate screens and controls into their apps. It also facilitates the extraction of unstructured data from Excel, providing valuable insights.
Developers can now tap into their applications’ full potential through intuitive conversation-driven interactions, optimizing their functionality and user experience.
“Our new advancements include the ability for Copilot to add and edit any screen or mainline control to a maker’s app; the ability to understand unstructured Excel files as inputs and turn them into structured data; and the ability for developers to add an advanced Copilot control to model apps for end users that can reason over all the data in their application,” Lamanna told VentureBeat.
With the generative AI capabilities of Copilot in Power Pages, developers can generate web page layouts, create images and customize site themes for a seamless and visually appealing website setup, the company says. This integration empowers developers to expedite website building while maintaining control over visual elements and customization.
“Users can build data-centric forms by simply describing the type of form required. Then, Copilot will build it for you by auto-generating tables in Microsoft Dataverse and allowing you to edit, remove and add fields using natural language input,” Lamanna said. “Then, with a single click, you can add a Power Virtual Agents chatbot to your Power Pages site, which uses generative AI to instantly respond to user questions. All of this is possible in minutes using natural language and intelligent suggestions, streamlining the process of creating a Power Pages site.”
Microsoft is also enhancing Power BI, its business intelligence tool, by integrating Copilot. This integration harnesses the capabilities of large language models to turn data into insights. Users can describe the desired visuals and insights, and Copilot will generate them accordingly.
The integration encompasses functionalities including creating and customizing reports, generating and editing DAX calculations and producing text summaries of data insights.
The company says that Power BI’s ability to adapt the tone, scope and style of narratives aids in delivering easily comprehensible text summaries to improve data-driven decision-making.
“With Copilot in Power BI, we are infusing the power of large language models … to help everyone move faster from data to insights. You can simply describe the visuals and insights you’re looking for, and Copilot will do the rest,” explained Lamanna. ”Power BI can now also deliver data insights with greater impact through easy-to-understand text summaries.”
Enhancing chatbot development with agent development tools
Microsoft is also working towards transforming the process of building bots with Power Virtual Agents. With generative AI, developers can now create bots that engage in rich, multi-turn conversations with customers.
According to Microsoft’s Lamanna, these bots possess the intelligence to effectively handle customer queries by searching knowledge bases for answers and seamlessly connecting relevant actions to fulfill requests. The new Generative Actions engine empowers the bot to comprehend user requests, analyze its library of APIs, actions and tools, and autonomously assemble the necessary components to complete the request.
Lamanna believes this approach streamlines bot development by reducing the time required to build conversations node-by-node. Additionally, it allows developers to efficiently curate knowledge sources and tools, enhancing the bot-building process’s overall effectiveness.
“It is a way to create a ChatGPT-like experience using your data and your own plugins. Every organization can now use these chat experiences to better support their employees or customers,” added Lamanna. “Bot builders will spend less time building conversations node-by-node from scratch and instead focus on curating knowledge sources and tools, changing the traditional way bots are built.”
Virtual Agents unified authoring canvas
Microsoft has also announced the general availability of the unified authoring canvas in Power Virtual Agents. This feature combines the capabilities of Bot Framework Composer with Power Virtual Agents’ intuitive authoring experience.
The unified authoring canvas offers a single platform where low-code makers and professional developers can collaborate. Within this canvas, team members can use rich multi-author capabilities and a built-in code side-by-side editor, enabling everyone to contribute to the bot’s development.
Noteworthy additions to this tool include enhanced rich response authoring, the ability to integrate APIs and events, Power FX expressions, connections to Azure services for custom natural language requirements, and a range of powerful generative AI capabilities. These advancements further empower users to create highly interactive and dynamic bots.
Furthermore, Microsoft has introduced a preview of Copilot in Power Automate, showcasing a brand-new designer specifically designed for cloud flows. This integration aims to unify the user experience and use natural language to accelerate the creation of automation.
Microsoft has announced a major expansion of its artificial intelligence-based search tools, offering new features that allow visual and multimodal searches, as well as persistent chat tools. These updates significantly enhance the capabilities of Bing, the company’s search engine, and Edge, its web browser.
Over the last three months, a limited number of users have been able to test the new AI search features in a limited preview. However, the company has now announced that it is moving Bing and Edge into an open preview, allowing anyone to test the new tools by signing in with a Microsoft account. This move indicates that Microsoft believes the new features are ready for wider use and feedback.
“Today is exciting because it means the new Bing is now more quickly accessible to anyone who wants to try it, which also means that we can engage and get a greater volume of signals from anyone else who wants to try the experience. With a higher volume of data, we are able to iterate more quickly and bring newer experiences and even improve upon the experiences that we’re launching,” said Microsoft’s global head of marketing for search and AI, Divya Kumar, in an interview with VentureBeat.
“It’s exciting seeing this shift within a very short amount of time, 90 days, going from just a text-based experience to a visually rich, conversational experience,” Kumar added. “That’s the amount of data we’ve been getting. It’s been incredible to get the amount of feedback that we’ve been getting, and seeing that shift happen so quickly, at the pace of AI — it’s been fascinating.”
Bing Chat is now even more powerful
Bing search, long derided as an inferior rival to Google, has undergone a remarkable transformation after the company integrated GPT-4 into its core functionality three months ago. Microsoft says Bing has grown to exceed 100 million daily active users, and daily installs of the Bing mobile app have increased fourfold since launch.
Today’s update adds several visual search features, including the ability to search using images. It also allows users to generate charts, graphs and other visual answers within the search experience. Microsoft also says it is expanding its Image Creation Tool, which allows users to generate images through conversational prompts, to support more than 100 languages.
Bing Chat now saves your history
One of the most critical updates that rolled out today is the ability to revisit and resume previous conversations with Bing Chat. By integrating chat history and persistent chats within the Edge browser, Microsoft hopes to make search more relevant and convenient across multiple interactions. The company said that it plans to leverage users’ chat history and context to deliver more personalized and improved answers over time.
By keeping track of a person’s queries and responses, persistent chats can help users find relevant information faster, avoid repeating themselves, and follow up on topics of interest at a later date. Persistent chats can also create more natural and engaging interactions with AI assistants over long periods of time, as they can mimic the flow of a typical human conversation.
Bing Chat will soon offer third-party plugins
The company also announced plans to open up Bing’s capabilities to third-party developers, allowing them to build features and plug-ins on top of the search platform. For example, Microsoft said people could soon search for restaurants in Bing Chat and then book a reservation through OpenTable, or get answers to complex questions through Wolfram Alpha, without leaving the Bing experience.
The introduction of third-party plugins essentially turns Bing into a platform, allowing developers to create applications that run within the Bing Chat web and mobile interface. This is a similar strategy to one that’s being used by OpenAI with ChatGPT Plugins. The plugins will eventually be used in a similar way to apps on a mobile phone — each plugin will help users achieve a specific task, like booking a flight or watching a movie trailer.
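Under OpenAI's plugin standard, which Microsoft says Bing will adopt, each plugin is described by a small manifest pointing at an OpenAPI spec for the service's API. A simplified sketch of such a manifest, with illustrative (not real) values:

```json
{
  "schema_version": "v1",
  "name_for_human": "Table Booker",
  "name_for_model": "table_booker",
  "description_for_human": "Book restaurant tables from chat.",
  "description_for_model": "Use this to search restaurants and book tables for the user.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  }
}
```

The model reads the descriptions and the OpenAPI spec to decide when and how to call the service, which is what lets one plugin definition work across ChatGPT, Bing Chat and the various Copilots.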
Microsoft Edge browser gets a major upgrade
Microsoft is also releasing a redesigned version of its Edge browser, which integrates more deeply with Bing Chat. The most noticeable difference for users will be the new capabilities of Bing Chat via the Edge sidebar, which can now reference chat history, export and share conversations from Bing Chat, summarize long documents and perform actions based on user requests.
Export and share functionalities in the Edge sidebar enable users to easily share a conversation with others on social media or continue iterating on a newly discovered idea. Users can export a chat directly in the same format, making it easy to transition to collaborative tools like Microsoft Word.
Summarization capabilities help users consume dense online content more efficiently. Bing Chat can now summarize long documents, including PDFs and longer-form websites, and highlight the key points. Users can also ask Bing Chat to summarize a specific section or paragraph of a document.
Actions in Edge allow users to lean on AI to complete even more tasks with fewer steps. For example, if a user wants to watch a particular movie, actions in Edge will find and show them options in chat in the sidebar and then play the movie they want from where it’s available. Actions in Edge will also be available on Edge mobile soon.
The new Edge will begin to roll out in the coming weeks for Windows 10, Windows 11, macOS, iOS and Android devices.
Next-generation search engine and browser
The updates announced today showcase Microsoft’s expertise in two of its core domains: artificial intelligence and cloud computing. “There are a couple of things Bing does uniquely well. Bing is not only built on GPT-4, it is built combining GPT-4 with Microsoft search and using Azure AI supercomputing,” said Kumar.
Microsoft said that these updates are part of its vision to make Edge and Bing the best tools for productivity and creativity. The company also said that it is exploring ways to make chats more personalized by bringing context from a previous chat into new conversations.
“I think the opportunity is in how much AI can play a role in not only driving productivity and efficiency, but reducing barrier, and actually helping with human connection,” she said. “Honestly, I think we’ve just barely scratched the surface.”
Microsoft plans to launch a new version of ChatGPT on its dedicated Azure cloud computing servers to address concerns regarding data leaks and regulatory compliance. This version would allow users to safeguard sensitive information from being used to train ChatGPT’s language model.
While many companies use OpenAI’s conversational chatbot, ChatGPT, to showcase their adoption of AI and improve customer experiences, industries such as healthcare and finance have been hesitant due to the risk of data leaks associated with the service’s shared infrastructure. With Microsoft’s offering of a private ChatGPT service, these industries may feel more secure in utilizing AI technology to automate processes.
OpenAI itself had a mishap in March this year, when a bug exposed some users’ chat descriptions to others. Businesses would be distraught if trade secrets or customer information were leaked in such a scenario.
ChatGPT on Dedicated Servers
As part of its multi-year, multi-billion-dollar investment in OpenAI, Microsoft has begun to incorporate the AI model into its own products. The software giant has also gained the rights to sell OpenAI’s products to customers and is now looking to bundle its Azure cloud computing services by offering a niche product to some users.
Since OpenAI is still developing its AI models, it uses customer information for training its language models. Interesting Engineering reported last month how Samsung faced multiple incidents where confidential information was entered into ChatGPT by employees unknowingly looking for help from AI.
Microsoft’s offering is expected to be aimed at large organizations that are still on the fence about using ChatGPT over such fears of accidental leakage of confidential information. However, this special case consideration is expected to come at an extra cost, which could end up being as much as 10 times the cost of using ChatGPT in a shared space, according to The Information‘s report.
Microsoft’s announcement of the service is expected later this quarter, but it will compete with OpenAI’s own offering, which makes a similar promise not to use customer data for training its AI models.
It will be interesting to see OpenAI and Microsoft, who have been partners in promoting ChatGPT, now compete for the same set of customers with similar products and even similar backend infrastructure. This will also coincide with the launch of bilingual models such as Alibaba’s Tongyi Qianwen, which will seek customers in Western markets.
The other option for companies would be to choose the cloud computing infrastructure of their own choice and develop AI models based on their own data and needs, much like Bloomberg did.
Since ChatGPT’s debut in November, users have been turning to the popular chatbot created by OpenAI for help with everything from emailing coworkers and updating resumes to finding recipe ideas and overhauling dating profiles.
While some fear the chatbot is already eliminating jobs, it has also introduced ways to help make work more efficient, allowing users to shift their energy toward other tasks and projects.
One example is using generative AI for help with data processing programs workers often struggle with, like Microsoft Excel and Google Sheets. We asked ChatGPT how it can help alleviate spreadsheet woes — here’s what the chatbot had to say about how it can make your Excel experience easier.
Assisting with tricky formulas, scripts, and templates
ChatGPT can help suggest the best formulas to use within data sets to identify insights you’re seeking and more quickly find results. The technology can also help write Excel scripts or macros, an action or set of actions that can be run repeatedly, like changing the font size or color of a group of cells, which can help make your work more efficient.
According to ChatGPT, it can assist in designing or finding a spreadsheet that fits a specific template with headings and categories already implemented. If a user needs a function that isn’t already available in Excel or Sheets, ChatGPT says it can help walk you through the process of writing it in a program like the Google Apps Script.
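As a flavor of the kind of helper a chatbot can draft on request, here is a minimal Python sketch (Python rather than VBA or Apps Script, purely for brevity; this is a hand-written illustration, not output from ChatGPT) that mimics the behavior of Excel's SUMIF formula:

```python
def sumif(values, criterion):
    """Mimic Excel's SUMIF: add up only the values that meet a condition.

    `criterion` is a string like ">100" or "=50", as in the spreadsheet formula.
    """
    op, number = criterion[0], float(criterion[1:])
    checks = {
        ">": lambda v: v > number,
        "<": lambda v: v < number,
        "=": lambda v: v == number,
    }
    # Keep only the values that pass the criterion, then total them.
    return sum(v for v in values if checks[op](v))
```

Asking ChatGPT for "a SUMIF that only adds values over 100" would typically yield something similar in whichever language you request; here, `sumif([50, 120, 200, 80], ">100")` returns 320.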
Identifying data trends and flagging errors
According to ChatGPT, the technology can help analyze data by finding trends, summarizing information into a few key statistics, and even helping to create charts and different ways to visualize data. The technology can also help quickly identify errors or missing data points, offering remedial suggestions along the way.
ChatGPT said it can help users integrate data into other programs, or help with importing and exporting data to an Application Programming Interface, commonly referred to as an API.
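A minimal illustration of these checks, using only Python's standard library, could summarize a column of numbers and flag missing cells (again a hand-written sketch of the kind of script ChatGPT might generate, not code from the chatbot itself):

```python
import statistics

def summarize_column(values):
    """Summarize a spreadsheet-style column: key statistics plus the
    positions of missing cells (represented here as None)."""
    missing = [i for i, v in enumerate(values) if v is None]
    present = [v for v in values if v is not None]
    return {
        "count": len(present),
        "mean": statistics.mean(present),
        "median": statistics.median(present),
        "missing_rows": missing,
    }
```

For example, `summarize_column([10, None, 30, 20])` reports three present values with a mean and median of 20, and flags row 1 as missing.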
Helping beginners learn common tricks
The chatbot can walk beginners through common Excel tricks to make the program more efficient, like keyboard shortcuts or step-by-step directions on how to format data in a certain way.
ChatGPT said it can also help with general troubleshooting as issues arise with a spreadsheet, which could be faster than looking through the help menu of a specific program.
Finally, the chatbot said it can recommend other tutorials or guides available online based on your current Excel skill level or what specific task you are looking to complete.
US tech giants like Alphabet, Microsoft, Amazon, and Meta are increasing their large language model (LLM) investments as a show of their dedication to harnessing the power of artificial intelligence (AI) even as they cut costs and jobs.
Since the launch of OpenAI’s ChatGPT chatbot in late 2022, these businesses have put their AI models on steroids to compete in the market, CNBC reported on Friday.
All the recently released quarterly reports by these tech behemoths show their efforts to increase AI productivity in the face of growing economic worries.
A significant amount of data and processing power are needed for generative AI programs to replicate human-like outputs like text, code excerpts, and computer-generated graphics.
Tech titans and AI investments
During their respective earnings calls, the CEOs of Alphabet, Microsoft, Amazon, and Meta all discussed their plans and monetary investments for developing and deploying AI applications.
Sundar Pichai, CEO of Alphabet, acknowledged the demand for AI products and underlined the incorporation of generative AI developments to improve search capabilities.
Beyond search, Google uses AI to improve ad conversion rates and fend off “toxic text.” Pichai noted a partnership with Nvidia for high-performance processors, as well as cooperation between the company’s two main AI teams, Brain and DeepMind.
Microsoft’s Teams teleconferencing system, Office program, and Bing search engine all use OpenAI’s GPT technology.
Citing Bing’s doubled downloads following the integration of a chatbot, CEO Satya Nadella emphasized that AI will drive revenue growth and increase app penetration. Microsoft’s expenditure on sizable data centers for AI applications will demand a substantial sum of money.
Andy Jassy, the CEO of Amazon, showed interest in generative AI, highlighting the recent developments that provide game-changing possibilities.
Although Amazon primarily sells access to AI technology, it plans to use its resources as one of the few businesses capable of making the necessary infrastructure investments to develop its own LLMs and create data center chips for machine learning.
Jassy noted Amazon Web Services’ aspirations to create tools for developers and enhance user experiences, including Alexa.
Alongside Meta’s emphasis on the metaverse, CEO Mark Zuckerberg stressed the value of AI, highlighting the company’s shift toward generative foundation models and its use of machine learning for recommendations.
The AI initiatives from Meta will have an impact on a variety of products, including conversation features in Facebook Messenger and WhatsApp, as well as tools for creating images for Facebook and Instagram.
In addition, Zuckerberg discussed the company’s expenditures in enlarging data centers for AI infrastructure as well as the possibilities of AI agents, such as the automation of customer service.
AI booms as tech job cuts loom
All the major tech companies, including Alphabet, Microsoft, Amazon, and Meta, are making significant investments in large language models and artificial intelligence to improve their products and user experiences.
According to the CNBC report, these tech behemoths are investing enormous resources to stay on the cutting edge of this quickly developing industry because they see the revolutionary potential of AI.
While AI generated positive media coverage, the loss of tech jobs also caused heartbreak.
According to a Crunchbase News count, 136,569 employees at IT companies with US headquarters or with a sizable US workforce have been let go in a wave of layoffs as of 2023. In 2022, public and private tech enterprises in the US cut more than 93,000 jobs.
According to some people, your business may be way behind if you do not already use at least one Artificial Intelligence (AI) application.
Indeed, AI is used in a wide variety of ways these days. It has already begun to alter how we work and live by simplifying and accelerating complicated tasks. New AI language models can understand and generate human-like responses, opening up new possibilities across many fields. These AI language models, like ChatGPT, are likely to be game-changers in areas as diverse as customer service and language translation.
It’s only natural to ask, then, which is the best among the top three AI-driven chatbots: ChatGPT, Bing, and Google Bard. We have tested all three, read user reviews, and followed the news on each model. This article will compare their underlying technologies and applications and explore the much-asked question: ChatGPT vs. Bing vs. Google Bard – which is better?
What is an AI language model?
An AI language model is not a deterministic system like regular software. Instead, it is probabilistic: it generates replies by predicting the likelihood of the next word based on statistical regularities in its training data. This means that asking the same question twice will not necessarily give you the same answer twice. It also means that how you word a question will affect the reply.
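A toy bigram model makes this concrete (the words and probabilities below are invented for illustration; real LLMs operate over tens of thousands of tokens and billions of parameters):

```python
import random

# Toy "language model": for each word, the probability of each possible
# next word, as if estimated from training text.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "model": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def next_word(word: str) -> str:
    """Sample the next word from the model's probability distribution."""
    candidates = bigram_probs[word]
    words = list(candidates)
    weights = list(candidates.values())
    # random.choices samples according to the weights, so the same prompt
    # can yield different continuations on different runs.
    return random.choices(words, weights=weights, k=1)[0]
```

Because `next_word` samples rather than looks up an answer, calling it twice with the same word can return different results, which is exactly why identical chatbot prompts can produce different replies.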
ChatGPT, Bing, and Google Bard are chatbots that all use AI language models developed to generate more human-like language. These models have been trained on large text datasets, allowing them to generate contextually relevant responses to a wide range of queries and conversations. They are used in various applications, such as customer service, language translation, personal assistance, and more.
It is not really possible to directly compare the three AI chatbots, as some of them are still in development, and new features and capabilities are being added all the time. However, we have some experience with Bing and Bard, even though many are still on a waiting list for access. ChatGPT has been around for a while. We analyze the available information to understand the differences among these chatbots better.
Features and capabilities of ChatGPT, Bing, and Google Bard
Modern AI language models that have revolutionized the field of natural language processing (NLP) include ChatGPT, Bing, and Google Bard. Each model stands out thanks to its own attributes and abilities, although these are not the only NLP chatbots out there. And, as you will see, they each use somewhat different AI technology.
ChatGPT
ChatGPT (Chat Generative Pre-trained Transformer) is a large language model developed by OpenAI. It has billions of trained parameters (the weights and biases of its layers) and can generate human-like text in response to a given prompt.
ChatGPT is capable of understanding natural language queries and can provide relevant responses. It can perform a wide range of tasks, including language translation, question answering, summarization, and much more.
ChatGPT can also generate text in various styles and tones, making it useful for creative writing and other applications. It works by analyzing patterns in the text it was trained on and using them to produce natural, engaging responses. This combination of natural language processing and machine learning allows ChatGPT to adapt to many different types of conversations.
Bing
Bing AI is Microsoft’s AI-enhanced search engine. It is based on OpenAI’s latest model, GPT-4. However, Bing has some major differences from ChatGPT; perhaps the biggest is that Bing has access to the live internet, while ChatGPT only has access to the data it was trained on.
As with the other chatbots here, Bing uses AI-driven natural language processing to understand user queries and provide relevant search results.
Bing can also perform various other tasks, such as providing weather forecasts, news updates, and sports scores. It can also be used for image and video searches, and it offers a variety of filters and settings to refine search results.
Google Bard
Unlike the other chatbots here, which rely on GPT-based technology, Google Bard uses a completely different technology: an extension of LaMDA, the in-house model that the company previewed a couple of years ago at Google I/O. However, some users have reported that Google Bard is less advanced than its competitors.
For example, ChatGPT’s training datasets included materials like Wikipedia and Common Crawl, and LaMDA was trained using more human dialogues. The result is that ChatGPT tends to use longer and more well-structured sentences, while LaMDA has a more casual style.
Although Google is currently facing challenges with its chatbot’s propensity to make factual errors and spread misinformation, the company is expected to improve Bard to meet the growing competition from Microsoft and OpenAI.
Bard is capable of performing tasks such as answering questions, summarizing information, and creating content when given prompts. Bard has flexibility because it is connected to the internet as well as the Google search database.
Bard can also help users explore different topics by summarizing information from the internet and providing links to relevant websites for more in-depth reading. While the platform has been trained on human dialogues and conversations, Google also incorporates search data, which gives Bard real-time access to information from across the internet.
ChatGPT, Bing, and Google Bard are all powerful systems with unique features and capabilities. Depending on the task, one of these may be more suitable than the others.
User experience
As AI language models, ChatGPT, Bing, and Google Bard each offer a seamless but distinct user experience.
We tested these top 3 AI language models: ChatGPT, Bing, and Google Bard, asking over 200 questions in various categories. Each chatbot offered different user experiences and responses.
ChatGPT stood out with its helpful log of past activity in a sidebar, while Bing didn’t allow viewing past chats. Bard displayed three different drafts of the same response. All three chatbots had varying response times and limitations on prompts.
Google Bard seemed to have more human-like agency, purporting to have tried products and expressing human attributes like having black hair or being nonbinary. Bard also provided strong opinions on topics like book banning. In contrast, ChatGPT and Bing Chat responded more objectively.
Creativity varied across chatbots, with ChatGPT boasting in a tech review about its own prowess and Bing Chat crafting a LinkedIn post about a fictional app. When testing the models’ limits, Bing Chat attempted to self-censor, while ChatGPT refused to engage in offensive responses. Bard, however, provided both derogatory terms and irrelevant information.
In summary, our and many other users’ experiences demonstrated that each AI language model provided unique user experiences, responses, and creativity levels, with some chatbots leaning more toward human-like qualities.
Queries and AI-language models
When a user submits a query to an AI NLP system like ChatGPT, Bing, or Google Bard, the system uses various algorithms and machine learning models for query interpretation and then generates a response.
The first step in interpreting a user query is understanding its intent. This is done using natural language processing (NLP) techniques, which analyze the syntax, semantics, and context of the query to determine its meaning. The system may also use machine learning models to classify the query into specific categories, such as “informational,” “transactional,” or “navigational.”
Once the system has determined the query’s intent, it retrieves relevant information from its database or the internet. This process may involve crawling web pages, analyzing documents, or searching databases for the most relevant and accurate information.
Finally, the system generates a response to the user query. This may involve generating a summary, answering a specific question, or providing a list of relevant results. The AI system may use various techniques to generate the response, including natural language generation (NLG), summarization algorithms, or chatbot frameworks.
The response generated by the AI system is based on the data it has analyzed and the algorithms it has used to interpret the user query. The accuracy and relevance of the response depend on the quality of the data and algorithms used, as well as the complexity and specificity of the user query.
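Those three steps can be sketched in deliberately simplified form; the keyword rules and the tiny stand-in knowledge base below are illustrative placeholders for the machine learning classifiers and web-scale retrieval that real systems use:

```python
def classify_intent(query: str) -> str:
    """Step 1: crude intent classification (real systems use ML models)."""
    q = query.lower()
    if any(w in q for w in ("buy", "price", "order")):
        return "transactional"
    if any(w in q for w in ("go to", "open", "website")):
        return "navigational"
    return "informational"

# Step 2: a stand-in knowledge base (real systems crawl web pages,
# analyze documents, or query databases).
knowledge_base = {
    "capital of france": "Paris is the capital of France.",
}

def retrieve(query: str) -> str:
    key = query.lower().rstrip("?")
    return knowledge_base.get(key, "No matching information found.")

def answer(query: str) -> str:
    """Step 3: assemble a response from the intent and retrieved facts."""
    intent = classify_intent(query)
    return f"[{intent}] {retrieve(query)}"
```

Running `answer("Capital of France?")` classifies the query as informational, looks up the matching entry, and returns the assembled response; swapping in a learned classifier and a real retrieval layer turns the same skeleton into a production pipeline.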
Pricing and Accessibility
ChatGPT: The original version remains free to use, but a paid plan, ChatGPT Plus, is available for $20 per month.
Google Bard: Free for members of the public, although through a waitlist; accessible once accepted.
Bing: Accessible to users who are accepted after they join the waitlist.
Developer
ChatGPT: OpenAI
Google Bard: Google/Alphabet
Bing: OpenAI technology, fine-tuned by Microsoft
Technology
ChatGPT: GPT-4
Google Bard: LaMDA
Bing: GPT-4
Response to Queries
ChatGPT: Trained on a vast collection of text from sources such as books, scientific journals, news articles, and Wikipedia. The training data had a cutoff date in 2021, so the model does not have access to recent events.
Google Bard: Has real-time access to Google’s rich database gathered through search, and uses this information from the web to offer reliable and current responses.
Bing: Like Bard, has real-time access to Bing search and can provide current information.
Although Bard, Bing, and ChatGPT all aim to provide human-like answers to questions, each has a unique approach. Bing, which collaborates with OpenAI, employs the same GPT technology as ChatGPT and can go beyond text to generate images. Bard uses Google’s LaMDA (Language Model for Dialogue Applications) model and often provides less text-heavy responses.
Applications and use cases of ChatGPT, Bing, and Google Bard
Now that we’ve seen how ChatGPT, Bing, and Google Bard work, how they compare in real life, and how they differ, let’s look at the applications of these AI language models across different use cases.
ChatGPT
ChatGPT has some unique features that make it particularly useful for specific applications.
First, it is the most verbally flexible and can generate human-like text, making it difficult to tell whether a human or an AI is behind a piece of writing. Second, it uses Reinforcement Learning from Human Feedback (RLHF) to create interactive responses that evolve and adapt based on user feedback. Third, it can translate text from one language to another, making it easier for users who speak different languages to communicate.
Fourth, it can summarize long texts, saving time for people too busy to read lengthy reports. It can also provide personalized content using machine learning algorithms.
Bing
Bing is best for getting information from the web. It has expansive use cases and applications such as:
Calculation, units, and currency conversion: Type the value or equation and the units, and Bing will give you the result. You can also do currency conversions and mathematical equations.
Search for a specific file type: You can use the contains:<fileExtension> option to find sites containing a specific file type. For example, contains:pdf would return sites that have a PDF file.
Get weather forecasts: Type the name of the city followed by the weather or forecast. You can also add units of measurement such as Celsius.
Track flights: Type ‘flight status’ in the search box, and Bing will ask for the airline name and flight number. Enter the details and click on get status to get the flight status.
Add preference for a particular result type: Use the prefer:<keyword> option to give more weight to results containing that keyword. For example, to search for a content management system, enter prefer:php to get results for PHP-based CMSs.
Get live stock quotes: Enter the ticker symbol and the word stock to get the quotes.
Google Bard
Google Bard is currently more limited but has several potential uses that could make our lives easier and help us learn new things, such as:
Providing accurate answers to questions using advanced AI algorithms.
Using the familiar Google search engine to find information quickly and easily.
Improving task automation with Google AI technology.
Offering personal AI assistance, such as helping with time management and scheduling.
Serving as a social hub and facilitating conversations in various settings.
How businesses and individuals can use AI-Language models
There are multiple ways in which AI language models can benefit individuals and businesses:
One of the biggest advantages of using AI in business is that it can handle some tasks, especially routine ones, faster and more efficiently than humans. AI models can even help with some routine coding tasks.
This means that people can focus more effort on those critical tasks that AI can’t do, which leads to better use of human intelligence and empathy. By letting technology handle mundane and repetitive tasks, companies could save money and maximize the potential of their human workforce. Using AI can also speed up the development process and reduce the time it takes to move from the design phase to production and marketing. This means that AI could allow companies to see a quicker return on their investment.
Improved quality and fewer mistakes
By using AI in some of their processes, businesses can reduce errors and stick to established standards better.
Microsoft is reportedly working on its own AI chips to train complex language models. The move is thought to be intended to free the corporation from reliance on Nvidia chips, which are in high demand.
Select Microsoft and OpenAI staff members have been granted access to the chips to verify their functionality, The Information reported on Tuesday.
“Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language,” read The Information article.
Since 2019, Microsoft has been secretly developing the chips, and that same year the Redmond, Washington-based tech giant also made its first investment in OpenAI, the company behind the sensational ChatGPT chatbot.
Nvidia is presently the main provider of AI server chips, and businesses are scrambling to buy them in order to run AI software. It is predicted that OpenAI will need more than 30,000 of Nvidia’s A100 GPUs to commercialize ChatGPT.
While Nvidia tries to meet demand, Microsoft wants to develop its own AI chips. The corporation is apparently speeding up work on the project, code-named “Athena”.
Microsoft intends to make its AI chips widely available within Microsoft and OpenAI as early as next year, though it hasn’t yet said whether it will make them available to Azure cloud users, The Information noted.
Microsoft joins other tech titans making AI chips
The chips are not meant to replace Nvidia’s, but if Microsoft continues to roll out AI-powered capabilities in Bing, Office programs, GitHub, and other services, they could drastically reduce prices.
Microsoft has been working on its own ARM-based chips for some years; Bloomberg reported in late 2020 that the company was considering developing its own ARM-based processors for servers and possibly even a future Surface device.
Although these chips haven’t yet been made available, Microsoft has collaborated with AMD and Qualcomm to develop specialized CPUs for its Surface Laptop and Surface Pro X devices.
The news sees Microsoft join the list of tech behemoths with their own internal AI chips, which already includes the likes of Amazon, Google, and Meta. However, most companies still rely on Nvidia chips to power their most recent large language models.
The most cutting-edge graphics cards from Nvidia are going for more than $40,000 on eBay as demand for the chips used to develop and run artificial intelligence software increases, CNBC reported last week.
The A100, a nearly $10,000 processor that has been dubbed the “workhorse” for AI applications, was replaced by the H100, which Nvidia unveiled last year.