
OpenAI Announces General Availability of GPT-4

OpenAI today announced the general availability of GPT-4, its latest text-generation model, via the OpenAI API. Existing OpenAI API developers with a history of successful payments can access GPT-4 immediately. OpenAI plans to open access to new developers by the end of this month, after which it will adjust availability limits based on compute capacity.

Since March, millions of developers have eagerly requested access to the GPT-4 API, and OpenAI is witnessing a growing range of innovative products leveraging the power of GPT-4. OpenAI envisions a future where chat-based models can support any use case.

Compared to its predecessor, GPT-3.5, GPT-4 offers notable improvements. It can generate text, including code, and accepts both image and text inputs. GPT-4 performs at a “human level” on various professional and academic benchmarks. The training data for GPT-4 includes publicly available information from web pages, as well as licensed data obtained by OpenAI.

However, the image-understanding capability of GPT-4 is currently limited to a single partner, Be My Eyes, as OpenAI conducts testing. OpenAI has not disclosed when this functionality will be made available to a wider customer base.

It’s important to note that, like any generative AI model, GPT-4 is not perfect. It may occasionally “hallucinate” facts and make reasoning errors, sometimes with unwarranted confidence. It also does not learn from experience, and it can fail at complex tasks in harmful ways, such as introducing security vulnerabilities into the code it generates.

OpenAI plans to introduce the capability for developers to fine-tune GPT-4 and GPT-3.5 Turbo, another recent text-generation model, with their own data, as has been possible with previous OpenAI models. This feature is expected to be available later this year.

The competition in generative AI has been intensifying since the unveiling of GPT-4 in March. Anthropic, for instance, expanded the context window of its text-generating model Claude from 9,000 to 100,000 tokens. The context window is the amount of text the model considers before generating additional text, measured in tokens, the units of raw text. GPT-4 previously held the record with a context window of up to 32,000 tokens. Models with smaller context windows tend to forget the content of even recent conversations, leading to a loss of coherence.
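To make the context-window trade-off concrete, here is a minimal, illustrative sketch of how a chat application might trim its history to fit a model's token budget. The whitespace-based token estimate and the message format are simplifying assumptions, not OpenAI's actual tokenizer or API.

```python
# Illustrative sketch: keeping a conversation within a model's context window.
# The token estimate below is a crude word count, not a real tokenizer.

def estimate_tokens(message: dict) -> int:
    """Very rough token count: one token per whitespace-separated word."""
    return len(message["content"].split())

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Drop the oldest messages until the conversation fits the budget."""
    trimmed = list(messages)
    while trimmed and sum(estimate_tokens(m) for m in trimmed) > max_tokens:
        trimmed.pop(0)  # a small window "forgets" the oldest turns first
    return trimmed

history = [
    {"role": "user", "content": "Tell me about the history of Rome"},
    {"role": "assistant", "content": "According to legend Rome was founded in 753 BC"},
    {"role": "user", "content": "What happened to the empire later on?"},
]
# With a tight budget, only the most recent turn survives.
print(trim_history(history, max_tokens=12))
```

A model with a larger window, such as GPT-4's 32,000 tokens or Claude's 100,000, simply allows a much higher budget before anything has to be dropped.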

In a related announcement, OpenAI also made its DALL-E 2 image-generating model and Whisper speech-to-text model generally available through the API. To optimize compute capacity, OpenAI plans to deprecate older models in its API: starting January 4, 2024, GPT-3 and its derivatives will no longer be accessible. They will be replaced by new “base GPT-3” models, which are expected to be more computationally efficient.

Developers currently using the older models will need to manually upgrade their integrations by January 4. Those who want to keep using fine-tuned old models beyond that date will have to fine-tune replacements based on the new base GPT-3 models. OpenAI will provide support to ensure a smooth transition and will reach out to developers who have recently used the older models with further instructions and information about testing the new completion models.

OpenAI Expands Beyond the US, Chooses London for First International Office

San Francisco-based OpenAI, the company behind ChatGPT, recently announced plans to expand its operations outside the United States. London has been selected as the location for OpenAI’s inaugural international office, signaling a strategic move to tap into the UK’s thriving AI startup ecosystem. The decision comes amid anticipation of the UK’s first AI Summit, announced in June 2023 and aimed at exploring the practical applications of AI and bringing together industry leaders and professionals.

While the consumer-facing AI market experiences significant growth, concerns about privacy violations have surfaced. OpenAI faced a class action lawsuit in the US, with allegations of breaching privacy laws. However, OpenAI’s expansion to London is seen as an opportunity to attract world-class talent and reinforce efforts in developing safe Artificial General Intelligence (AGI), an advanced AI system envisioned to surpass human intelligence.

OpenAI CEO Sam Altman has also expressed reservations about proposed EU legislation that seeks to regulate AI practices, citing technical challenges in complying with certain requirements of the AI Act, particularly around safety and transparency. The UK’s departure from the EU following Brexit may have influenced OpenAI’s decision to establish a presence in London.

The UK government and industry leaders view OpenAI’s expansion as a vote of confidence in the country’s AI capabilities. The UK’s Science, Innovation, and Technology Secretary, Chloe Smith, emphasized the growth of the AI sector in the country, with over 50,000 people employed in AI-related roles. Prime Minister Rishi Sunak highlighted the transformative potential of AI in improving public services and pledged support for emerging opportunities in various domains.

To further boost the AI landscape, the UK government plans to invest £110 million in its AI Tech Missions Fund and allocate £900 million for an AI Research Resource and the development of an exascale supercomputer capable of running large AI models. Additionally, initiatives such as the £8 million AI Global Talent Network and funding for new PhDs aim to nurture a thriving pool of AI researchers.

Diane Yoon, who leads human resources at OpenAI, expressed excitement about expanding the company’s research and development footprint to London, emphasizing the city’s renowned culture and exceptional talent pool and reinforcing OpenAI’s commitment to building dynamic teams focused on research, engineering, and the promotion of safe AI practices.

OpenAI’s success with its popular chatbot, ChatGPT, has ignited a global race in AI-powered products, with significant investments, including Microsoft’s reported $13 billion investment in OpenAI.

OpenAI’s ChatGPT App Employs Bing for Web Searches

OpenAI has announced the addition of a new feature called Browsing to its premium chatbot, ChatGPT Plus. Subscribers can now utilize the Browsing feature on the ChatGPT app, enabling the chatbot to search Bing for answers to questions.

To activate Browsing, users can navigate to the New Features section in the app settings, select “GPT-4” in the model switcher, and choose “Browse with Bing” from the drop-down menu. The Browsing functionality is available on both the iOS and Android versions of the ChatGPT app.

OpenAI highlights that Browsing is particularly useful for inquiries about current events and other information that extends beyond ChatGPT’s original training data. With Browsing enabled, users can access more up-to-date information; without it, ChatGPT’s knowledge is limited to information available up to 2021.

While the introduction of Browsing enhances ChatGPT’s capabilities and makes it a more valuable research assistant, there are concerns about the restriction to Bing as the sole search engine. OpenAI’s close partnership with Microsoft, which has invested over $10 billion in the startup, likely plays a role in this choice. However, Bing is not regarded as the definitive search engine, and past analyses have raised questions about its fairness and the presence of disinformation in search results.

Users may view the restriction to Bing as a user-hostile move, since they have no alternatives to fall back on when Bing’s search results fall short. Although Microsoft continues to refine Bing’s algorithms, the lack of diversity in search options raises concerns about access to unbiased and comprehensive information.

In other news related to the ChatGPT app, OpenAI has also implemented a feature that allows users to directly access specific points in the conversation by tapping on search results. This improvement, alongside the introduction of Browsing, will be rolled out this week, according to OpenAI.

Workato Partners with OpenAI to Ease Business Automation

Workato, a leading enterprise automation platform, has recently announced a strategic partnership with OpenAI to enhance its low-code/no-code platform with the integration of various AI models and future releases from OpenAI. The objective of this collaboration is to simplify the process of building automation and integrations by leveraging generative AI capabilities.

As part of this partnership, Workato is set to introduce several new features and capabilities. One of these is Workato Copilots, which empowers users to create automations and application connectors using simple, plain-English descriptions. By incorporating AI connectivity, Workato users can now incorporate generative AI capabilities into their automations through the OpenAI connector provided by Workato.

Additionally, Workato will introduce WorkbotGPT, a feature that enables users to interact with enterprise applications and data through popular chat apps like Slack and Microsoft Teams in a conversational manner.

Gautham Viswanathan, founder and head of products and engineering at Workato, explained the value of Workato Copilot, stating, “Built using OpenAI models, the Copilot is like a Workato-expert coworker who generates workflow recipes and data connectors through a natural conversation. It has been trained on millions of data points from Workato’s public community. We believe Workato Copilot will further lower the barrier of who can build within an organization.” He further emphasized that the Copilot will provide assistance to users by offering onboarding support, helping them learn new capabilities, suggesting what to build next, providing recommendations, and offering instant troubleshooting help.

Workato’s enterprise automation tool already incorporates RecipeIQ, its own AI/ML models that provide data mapping, logic, and recommendations for the next steps. By integrating OpenAI’s models, Workato aims to enhance automation and integration development, making it even easier for businesses to adopt and implement their technology.

Furthermore, this collaboration ensures robust security and governance capabilities, facilitating confident collaboration between IT and business teams and enabling efficient operations at scale.

Through its collaboration with OpenAI, Workato is set to revolutionize the way businesses approach automation and integration development, paving the way for increased efficiency and streamlined operations in organizations of all sizes.

Streamlining Enterprise Automation Through OpenAI Models

According to Viswanathan, integrating OpenAI’s models into the Workato platform involved considering numerous use cases requested by its end users from various departments and industries. 

These use cases include functions such as generating highly personalized emails/sequences, summarizing meetings/recordings, and creating virtual assistants.

Since customers are currently building automations on the Workato platform, the company selected LLMs by evaluating these automations and envisioning how they can be enhanced through generative AI. The team then explored OpenAI models to determine which ones best suit each use case.

“This led us to select several LLMs and then train them with our proprietary models to best serve those specific use cases,” Viswanathan told VentureBeat. “We have seamlessly incorporated these models into our platform so that our customers can experience this as they build their automations, integrations, APIs, or application connectors.”

Workato introduced RecipeIQ in 2018, utilizing proprietary ML techniques to offer users recommendations for their workflow’s next steps. The company said that the Copilot will expand upon this feature, enabling it to construct complete recipes through conversational interactions with the builder.

Viswanathan said that the WorkbotGPT capability will facilitate real-time automation in business workflows, eliminating the need for pre-built components.

“WorkbotGPT is conversational automation for Slack and Teams. You can give it natural language prompts, and it will generate the summary of action items for you by looking up transcripts of recordings in Zoom, your email, and CRM — all in real time,” he said.

Ensuring Secure Automation Development 

Workato said its platform incorporates a robust governance framework, facilitating the management of federated workspaces for different lines of business through AutomationHQ. 

The company also gives its customers full control over their assets, data, and logs. The platform implements robust role-based access controls and provides fine-grained permissions, allowing customers to determine who is authorized to use AI services. 

Customers can also mask sensitive data, audit all user activity changes, stream logs for centralized monitoring, and customize the storage duration of logs.

“For our international and multinational customers, we have multi-region data center support for customers that need to meet strict data residency and sovereignty requirements. Our Copilots adhere to the strictest data privacy standards and do not use customer data from these interactions to train any model,” explained Viswanathan. “These capabilities are built on top of a strong foundation of security featuring multi-layer encryption, hourly key rotation, EKM/BYOK, and zero-trust policies.”

What’s next for Workato? 

Viswanathan revealed that the company is presently training its models using metadata from user automations, integrations and internal APIs. The company aims to develop other powerful tools similar to Copilot and WorkbotGPT through this training. 

He believes that as enterprises increasingly embrace the power of AI, their trust in sharing data with external LLMs will grow.

“That will open a set of exciting possibilities — some we can think of, some will remain unknown until we fully understand the breadth and depth of available data,” he said. “We aim to solve that challenge by bringing AI, automation and integration to a single platform and creating new products and solutions that our customers can use to harness the power of these technologies.”

Microsoft Announces OpenAI GPT-4 Access for US Government Agencies

In a significant development, Microsoft has announced that it will provide government agencies in the U.S. with access to OpenAI’s artificial intelligence (AI) models, including the highly regarded GPT-4 and its predecessor. This move aims to empower government offices by harnessing the capabilities of AI and leveraging the benefits of Azure OpenAI services.

At the forefront of Microsoft’s new Bing search engine, OpenAI’s GPT-4 has proven to be a formidable force, capturing the attention of companies seeking to optimize their data utilization through AI-driven insights. With over 4,500 customers benefiting from the Azure OpenAI Service since its launch in January, major corporations such as Mercedes, Volvo, Ikea, and Shell have embraced this technology to enhance employee productivity and data analysis.

While private companies have eagerly adopted AI to revolutionize their operations, government agencies have often lagged behind in adopting these transformative technologies. However, Microsoft’s latest offering breaks down those barriers, extending the opportunity for government offices to leverage powerful AI models effectively.

By integrating OpenAI’s AI models into their operations, government agencies can tap into the immense potential of AI, unlocking opportunities for improved decision-making, enhanced efficiency, and increased productivity. The availability of Azure OpenAI services to government entities signifies a pivotal step toward enabling data-driven insights and advanced analysis in the public sector.

Through this collaboration, government agencies will gain the means to harness the capabilities of OpenAI’s cutting-edge AI models, enabling them to derive valuable insights from vast amounts of data and make informed decisions. This technological leap promises to propel government operations into a new era of efficiency and effectiveness, ultimately benefiting the public at large.

With the convergence of Microsoft’s Azure OpenAI services and OpenAI’s powerful AI models, government agencies now have a transformative tool at their disposal. This development marks an important milestone in bridging the gap between private and public sectors in the adoption of AI, paving the way for innovation and progress in government operations.

What is Microsoft offering?

Microsoft will allow government agencies to access OpenAI’s GPT-4, GPT-3, and Embeddings models through the Azure OpenAI Service. According to OpenAI’s website, embeddings measure the relatedness of text strings and are helpful in operations such as search, clustering, anomaly detection, and classification.
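As an illustration of what “relatedness” means here: embeddings are numeric vectors, and a common relatedness measure is cosine similarity. The three-dimensional vectors below are toy stand-ins (real OpenAI embeddings have hundreds to thousands of dimensions); only the comparison pattern is the point.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: near 1.0 for related texts, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for the embeddings of three texts.
invoice_question = [0.9, 0.1, 0.2]
billing_question = [0.85, 0.15, 0.25]
weather_report   = [0.1, 0.9, 0.4]

# Related texts score higher -- this is what powers search and clustering.
print(cosine_similarity(invoice_question, billing_question) >
      cosine_similarity(invoice_question, weather_report))  # → True
```

Clustering and anomaly detection build on the same measure: group texts whose embeddings score high against each other, and flag texts that score low against everything.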

These services are aimed at helping government agencies “improve efficiency, enhance productivity and unlock new insights from their data,” Microsoft wrote in a blog post. Users of the service can work with the REST API, the Python SDK, or the web-based interface in Azure AI Studio to adapt AI models to specific tasks.
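As a sketch of what using the REST API looks like, the snippet below assembles a chat-completions request payload. The resource name, deployment name, and API version are illustrative placeholders, not values from Microsoft’s announcement, and no network call is made.

```python
import json

def build_chat_request(deployment: str, user_message: str) -> dict:
    """Assemble an Azure OpenAI chat-completions request (illustrative only)."""
    return {
        # <your-resource> and the api-version are placeholder assumptions.
        "url": ("https://<your-resource>.openai.azure.com/openai/deployments/"
                f"{deployment}/chat/completions?api-version=2023-05-15"),
        "headers": {"api-key": "<YOUR_AZURE_OPENAI_KEY>",
                    "Content-Type": "application/json"},
        "body": {
            "messages": [
                {"role": "system",
                 "content": "You summarize long agency reports into key points."},
                {"role": "user", "content": user_message},
            ],
            "max_tokens": 256,
        },
    }

request = build_chat_request("gpt-4", "Summarize the attached incident log.")
print(json.dumps(request["body"], indent=2))
```

The system message is where a task like report summarization or log analysis would be specified; the same payload shape works for any deployed chat model.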

Using the service is expected to help government agencies accelerate content generation, reduce the time and effort required for research and analysis, generate summaries of logs, and rapidly analyze long reports while also facilitating enhanced information discovery, a Microsoft blog post stated.

Users will also be able to build custom applications to query data models and generate code documentation, processes which have historically been very time-consuming.

Ensuring the security of government data

Since most government agencies tackle sensitive information that needs a high level of security, Microsoft will provide these services through Azure Government which uses stringent security and compliance standards.

Government agencies will use the AI services on the Azure Government network, which peers directly with the commercial Azure network over Microsoft’s own backbone. Through this architecture, Microsoft guarantees that government applications and data environments remain on Azure Government.

Additionally, Microsoft encrypts all Azure traffic using AES-128 block cipher and ensures that the traffic remains within Microsoft’s networks and is never made part of the public internet. Microsoft has also clarified in the blog post that government data will not be used to learn about the data or to train or improve the AI models.

Specifically, Azure Government users will not have access to ChatGPT, the conversational chatbot commonly accessed by users on the internet, a Microsoft spokesperson confirmed to Bloomberg.

This should help put to rest concerns about government or personal data being accidentally released to the public by a state or federal employee misusing the technology, as happened at Samsung.

OpenAI’s $1 Million Grants Empower Ethical AI Development and Combat Misinformation

ChatGPT creator OpenAI has announced ten $100,000 grants for anyone with good ideas on how artificial intelligence (AI) can be governed to help address bias and other concerns. The grants will be awarded to recipients who present the most compelling answers to some of the most pressing questions around AI, such as whether it should be allowed to offer opinions on public figures.

This comes in light of arguments around whether AI systems such as ChatGPT may have a built-in prejudice because of the data they are trained on (not to mention the opinions of human programmers behind the scenes). Reports have revealed instances of discriminatory or biased results generated by AI technology. There is a growing apprehension that AI, when working alongside search engines like Google and Bing, might generate misleading information with great conviction.

OpenAI, backed by a significant $10 billion investment from Microsoft, has long been a proponent of responsible AI regulation. However, the organization recently expressed apprehension regarding proposed rules in the European Union (EU) and even hinted at the possibility of withdrawing support. OpenAI’s CEO, Sam Altman, stated that the current draft of the EU AI Act appears to be overly restrictive, although there are indications that it might undergo revisions. “They are still discussing it,” Altman mentioned in an interview with Reuters.

Reuters noted that OpenAI’s grants, totaling $1 million, might not fully address the needs of emerging AI startups. In the current market, most AI engineers earn salaries exceeding $100,000, and exceptional talent can command compensation surpassing $300,000. Nevertheless, OpenAI emphasized the importance of ensuring that AI systems benefit humanity as a whole and are designed to be inclusive. “To take an initial step in this direction,” OpenAI stated in a blog post, “we are launching this grant program.”

Altman, a prominent advocate for AI regulation, has been updating ChatGPT and image-generator DALL-E. However, he recently expressed concerns about potential risks associated with AI technology during his appearance before a U.S. Senate subcommittee. Altman emphasized that if something were to go wrong, the consequences could be significant.

Recently, Microsoft joined the call for comprehensive regulation of AI. However, the company remains committed to integrating the technology into its products and competing with other major players like OpenAI, Google, and various startups to deliver AI solutions to consumers and businesses.

AI’s potential to enhance efficiency and reduce labor costs has piqued the interest of almost every sector. However, there are also concerns that AI might spread misinformation or factual inaccuracies, which industry experts call “hallucinations.”

There have been instances where AI has also been involved in creating popular hoaxes. For example, a recent fake image of an explosion near the Pentagon caused a momentary impact on the stock market. Although there have been numerous requests for stricter regulations, Congress has been unsuccessful in enacting new laws that significantly limit the power of Big Tech.

OpenAI Introduces ChatGPT App For iOS

OpenAI has taken the tech world by surprise with the sudden release of its ChatGPT app for Apple iOS, bringing the power of generative AI to iPhones worldwide less than six months after the highly acclaimed chatbot made its debut on November 30. The availability of ChatGPT on iOS opens up new possibilities for users to engage in human-like conversations and harness the capabilities of AI directly from their iPhones, and it underscores OpenAI’s commitment to making conversational AI accessible to a broader audience.

According to a blog post, the company says that the ChatGPT app in the App Store “syncs your conversations, supports voice input, and brings our latest model improvements to your fingertips.”

OpenAI added that the app is free to use and syncs a user’s history across devices. It also integrates Whisper, the company’s open-source speech-recognition system, enabling voice input. 

In addition, ChatGPT Plus subscribers get exclusive access to GPT-4’s capabilities, early access to features and faster response times.

In any case, as ChatGPT-like clones have flooded the App Store, and since open-source LLMs have been shown to work on smaller devices, it’s clear that this is a big move that OpenAI needed to make quickly. Apparently that didn’t leave much time to detail any efforts around safety issues — the only thing the blog post says is “As we gather user feedback, we’re committed to continuous feature and safety improvements for ChatGPT.”

But there is good news for non-Apple users, according to the blog post: “Android users, you’re next! ChatGPT will be coming to your devices soon.”

Users can download the ChatGPT app from the App Store.

OpenAI to Soon Release a New Open-Source AI Model

OpenAI plans to release a new open-source AI model in response to increasing competition in the open-source large language model (LLM) space, according to a report by The Information. While this development is exciting, the model may not be as advanced as OpenAI’s proprietary model, GPT, and may not directly compete with it.

The report also states that OpenAI’s $27 billion private valuation relies on a future where the most advanced AI for commercial purposes remains proprietary rather than open source. This shift towards accessible AI development is influenced by pressure from Meta, OpenAI’s rival, which released several open-source models in February. As more developers choose free models, OpenAI’s move signifies a significant shift towards more democratic AI development.

Since the launch of ChatGPT, an AI chatbot that enables human-like conversations, there has been a surge of interest in generative AI. Microsoft’s substantial investment in OpenAI, the developer of ChatGPT, has intensified competition with Google, which recently showcased numerous AI advancements at its I/O conference.

The long-term performance comparison between open-source and proprietary models will be intriguing. However, discussions among AI proponents and critics now focus on broader concerns, such as misinformation and security.

OpenAI CEO Sam Altman recently testified before a U.S. Senate panel, addressing the risks and limitations associated with AI. Altman emphasized that a “great threshold” would be a model capable of persuading or manipulating a person’s beliefs. He also stated that companies should have the freedom to choose whether their data is used for AI training, although public web material would be considered fair game.

OpenAI has previously released open-source models like Point-E, Whisper, Jukebox, and CLIP. The performance of its new open-source model compared to its competitors remains to be seen.

OpenAI CEO Warns Senate About AI Interfering with Elections

OpenAI CEO, Sam Altman, expressed his concerns regarding the potential interference of artificial intelligence (AI) in elections during his testimony before a Senate panel on Tuesday. Altman emphasized the need for rules and guidelines regarding disclosure from companies providing AI models, emphasizing his apprehension about the issue.

This marked Altman’s first appearance before Congress, where he advocated for stringent licensing and testing requirements for the development of AI models in the United States. When asked about the specific AI models that should require licensing, Altman suggested that any model capable of persuading or manipulating people’s beliefs should meet a high threshold for regulation.

Altman further asserted that companies should have the freedom to choose whether their data is used for AI training, a topic already under discussion in Congress, though he said material available on the public web should generally be considered fair game. On monetization, the executive did not rule out advertising, but leaned towards a subscription-based model.

The OpenAI CEO’s testimony highlighted the growing concerns surrounding the potential misuse of AI in electoral processes, emphasizing the need for proactive measures to address these challenges and ensure the integrity of democratic systems.

Top technology CEOs convened

Altman’s testimony was one of many before the Senate, as the White House has invited top technology CEOs to address AI concerns with U.S. lawmakers, who seek to further the technology’s advantages while limiting its misuse. 

“There’s no way to put this genie in the bottle. Globally, this is exploding,” said Senator Cory Booker, a lawmaker concerned with how best to regulate AI.

Altman’s warnings about AI and elections come at a time when companies large and small have been competing to bring AI to market, with billions of dollars at play. But experts everywhere have warned that the technology may worsen societal harms such as prejudice and misinformation.

Some have even gone so far as to speculate AI could end humanity itself.

The White House is taking these concerns seriously, convening the relevant authorities and executives to try to ensure that the worst-case scenarios do not come to pass.

Datadog Brings OpenAI Model Monitoring into the Fold, Launches New Integration

Datadog, a New York-based cloud observability platform provider for enterprise applications and infrastructure, has announced a new integration designed to monitor OpenAI models, including the popular GPT-4.

This integration aims to assist enterprise teams in gaining insights into user interactions with their applications powered by GPT models. By monitoring these interactions, organizations can optimize their models for improved performance and cost efficiency.

The announcement from Datadog arrives at a time when OpenAI’s large language models are being increasingly adopted across various enterprise use cases. Industries such as customer service and data querying are leveraging these powerful models to enhance their business-critical operations. With this integration, organizations can effectively harness the potential of OpenAI’s models and leverage the insights gained to drive better outcomes in these crucial areas.

How does the OpenAI integration help?

Once up and running, the Datadog-OpenAI integration automatically tracks GPT usage patterns, providing teams with actionable insights into model performance and costs via dashboards and alerts.

For performance, the plugin looks at OpenAI API error rates, rate limits and response times, allowing users to identify and isolate issues within their applications. It also offers the ability to view OpenAI request volumes — along with metrics, traces, and logs containing prompts and corresponding completions — to understand how end customers are interacting with the applications, and to gauge quality of the output generated by their OpenAI models. 

“Customers can install the integration by instrumenting the OpenAI Python library to emit metrics, traces and logs for requests made to the completions, chat completions and embeddings endpoints. Once instrumented, the metrics, traces and logs will be automatically available in the out-of-the-box dashboard provided by Datadog,” Yrieix Garnier, VP of product at Datadog, told VentureBeat. 

These dashboards can then be customized to drill down further into performance issues and optimize the models for improved user experience, the VP added.

On the costs front, Datadog says, the integration allows users to review token allocation by model or service and analyze the associated costs of OpenAI API calls. This can then be used to manage expenses more effectively and avoid unexpected bills for using the service.
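The kind of cost rollup described above can be sketched in a few lines. The per-1,000-token prices below are hypothetical placeholders for illustration, not OpenAI’s actual rates or Datadog’s implementation.

```python
from collections import defaultdict

# Hypothetical per-1K-token prices -- illustration only, not real pricing.
PRICE_PER_1K_TOKENS = {"gpt-4": 0.06, "gpt-3.5-turbo": 0.002}

def cost_by_model(usage_records: list[dict]) -> dict:
    """Roll up token-usage records into an estimated spend per model."""
    totals = defaultdict(float)
    for record in usage_records:
        price = PRICE_PER_1K_TOKENS[record["model"]]
        totals[record["model"]] += record["total_tokens"] / 1000 * price
    return dict(totals)

usage = [
    {"model": "gpt-4", "total_tokens": 1500},
    {"model": "gpt-3.5-turbo", "total_tokens": 4000},
    {"model": "gpt-4", "total_tokens": 500},
]
print(cost_by_model(usage))
```

Breaking totals down this way, by model (or by service and API key), is what lets a team spot which workload is driving token consumption before the bill arrives.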

While Garnier did confirm that customers of both companies are testing the integration, he did not share specific results they have witnessed so far. The connector currently works for multiple AI models from OpenAI, including the GPT family of LLMs, Ada, Babbage, Curie and Davinci.

New Relic offers something similar

New Relic, another player in the observability space, offers a similar OpenAI integration that tracks API response time, average tokens per request and the associated cost. However, Garnier claims Datadog’s offering covers additional elements, like response-time-to-prompt token ratio, as well as metrics providing contextual insights into individual user queries.

“Furthermore, for API response times, API requests and other metrics, we allow users to break this down by model, service and API keys. This is critical in order to understand the primary drivers of usage, token consumption and cost,” he noted.

Moving ahead, monitoring solutions like these, including those specifically tracking hallucinations, are expected to see an increase in demand, given the meteoric rise of large language models within enterprises. Companies are either using or planning to use LLMs (most prominently those from OpenAI) to accelerate key business functions, from querying their data stack to optimizing customer service.