
OpenAI Shifts AI Strategy, ChatGPT Training Excludes Customer Data: Sam Altman

OpenAI CEO Sam Altman has announced a significant shift in the company’s strategy regarding the training of its GPT large language models for artificial intelligence (AI). In response to customer feedback and concerns, OpenAI will no longer use client data for training purposes. The decision comes after customers expressed their desire to protect their data, prompting OpenAI to reconsider its approach. Altman confirmed the change, stating, “Customers clearly want us not to train on their data, so we’ve changed our plans: We will not do that.”

While the revised strategy applies primarily to client data used for training via OpenAI’s API services, it’s important to note that ChatGPT, the company’s chatbot, may still incorporate information from external sources. OpenAI’s focus on data privacy and protection aims to address customer concerns and align with evolving privacy standards. By respecting customer preferences, OpenAI aims to foster trust and transparency in its AI development process.

The decision holds significance for OpenAI’s corporate clients, including industry giants like Microsoft, Salesforce, and Snapchat, who frequently utilize OpenAI’s API capabilities. The modified approach reflects OpenAI’s commitment to prioritizing customer needs and respecting data privacy. While the use of AI models continues to raise broader questions and concerns within various industries, OpenAI’s shift in strategy demonstrates a willingness to adapt and respond to customer feedback.

The ongoing debate surrounding large language models extends beyond privacy concerns. The strike by the Writers Guild of America, driven in part by demands to restrict the use of tools like ChatGPT in script production and editing, highlights concerns about the impact of AI technologies on creative industries. Intellectual property considerations also emerge as a prominent issue. As businesses grapple with these challenges, OpenAI’s decision to discontinue training on client data represents a notable step towards addressing customer concerns and fostering responsible AI development practices.

ChatGPT causing an ‘existential crisis’?

Barry Diller, a businessman in the entertainment industry and the head of IAC, said that media corporations might litigate their claims and possibly sue AI firms for using their original content. 

This week, the Writers Guild of America (WGA), which represents over 10,000 writers in the American film industry, went on strike amid an “existential crisis” over the possibility of AI taking their jobs.

Amazon reportedly issued a recent warning to staff members not to divulge sensitive information to ChatGPT for fear that it could appear in chat responses for other users.

As of Monday, employees at Samsung Electronics Co. are no longer allowed to use generative AI tools such as ChatGPT, Google Bard, and Bing AI.

According to media sources with access to the company’s internal memo, the tech giant alerted staff at one of its main divisions about the new policy due to concerns regarding the security of critical code. 

“We ask that you diligently adhere to our security guideline, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo warned employees.

The issue of data privacy and protection is becoming more crucial as the use of large language models increases. AI companies are trying hard to preserve client privacy and to be transparent about how they use customer data, a CNBC report noted.

Open-Source AI Massive Threat to Google and OpenAI

According to a leaked internal document written by a senior Google engineer, neither OpenAI nor Google is likely to come out on top in the race for AI dominance. The document has been circulating in Silicon Valley for several months and was recently made public by consulting firm Semi-Analysis. While OpenAI gained fame last year with its ChatGPT conversational AI chatbot, Google has been working in the AI domain for over a decade and was previously thought to be the leader. However, an AI arms race has since ensued between the two companies, with both vying for supremacy.

In April, Google engineer Luke Sernau published a document internally that has since been widely circulated privately. Sernau does not believe that either company will ultimately emerge as the AI leader if they continue down this path. He notes that while these firms have been squabbling, open-source AI has surged ahead, citing examples such as large language models that can run on a smartphone and personal AI that can be fine-tuned on a laptop in one evening.

Sernau wrote that AI models developed by private organizations still held the edge, but not for long. Open-source models were closing in on the results achieved by corporations spending billions of dollars, at a fraction of the cost. Open-source tech was also iterating much faster, with new versions appearing in weeks, as opposed to months for corporations.

Sernau highlighted that the giant models used by Google and OpenAI were the main reason their progress was slowing, while the open-source community had discovered Meta’s LLaMA, which was much smaller and easier to work with. The engineer emphasized the need for Google to shift to smaller models and learn from the open-source community, which is more nimble and iterates more quickly.

Ultimately, if better AI models become available for free, clients will not pay to use inferior models from companies such as Google or OpenAI. Thus, it is essential for these companies to shift their focus and learn from the open-source community to stay competitive.

5 Ways GPT-4 Is Better Than Older Versions Of OpenAI’s ChatGPT

OpenAI recently released GPT-4, an updated version of their popular language model that boasts increased reliability and creativity over its predecessor. According to CEO Sam Altman, the technology is so advanced that it can even pass the bar exam and score a 5 on multiple AP exams.

Prior to this update, OpenAI’s ChatGPT was powered by GPT-3.5. The latest version of ChatGPT, ChatGPT Plus, now features the GPT-4 technology. However, access to the API is currently limited to a select group of developers on OpenAI’s waitlist.

Interestingly, Microsoft’s Bing search engine has already implemented a version of GPT-4 that has been customized for search. Users who have tried the AI-powered search engine have likely experienced some of the benefits of this cutting-edge technology.

1. GPT-4 can understand images

GPT-4 is “multimodal,” meaning it can see and process image prompts as well as text.

Users can ask the chatbot to describe images, but it can also contextualize and understand them. In one example given by OpenAI, the chatbot is shown describing what’s funny about a group of images.

The chatbot is still limited to text responses and cannot produce images itself.

2. The bot is said to be more accurate

According to OpenAI, the update will give more-accurate responses to users’ queries.

OpenAI said in a blog post that the system was “40% more likely to produce factual responses than GPT-3.5.” GPT-4 also has more “advanced reasoning capabilities” than its predecessor, according to the company. 

The updated chatbot is still not immune to “hallucinations,” a tendency for AI to generate false responses or reasoning errors. OpenAI said the chatbot was not perfect. Altman called it “still flawed, still limited.”

3. Users can have longer conversations 

GPT-4 can take in and generate about eight times more text than ChatGPT.

This means that the chatbot will have a longer “memory” and be able to keep up with lengthier conversations. OpenAI said the latest version could process up to 25,000 words, compared with the previous 3,000 words. 
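To picture what that limit means in practice, anyone feeding a long document to the model has to split it into pieces that fit the window. The sketch below is an illustration only, using word counts as a crude stand-in for the model’s actual token-based accounting:

```python
def chunk_by_words(text, max_words=25_000):
    """Split text into consecutive chunks of at most max_words words.

    Word count is only a rough proxy for the model's real token
    limit; production code would use the tokenizer itself.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# A 60,000-word document needs three requests at GPT-4's reported
# 25,000-word capacity, versus twenty at the previous 3,000 words.
doc = "word " * 60_000
print(len(chunk_by_words(doc)))          # 3
print(len(chunk_by_words(doc, 3_000)))   # 20
```

Fewer, larger chunks also mean less context is lost at the seams between requests, which is part of why the bigger window translates into longer coherent conversations.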

4. It’s harder to break the rules

GPT-4 may be bad news for fans of ChatGPT’s “evil” alter egos. 

OpenAI said the update included its best-ever results on “steerability, and refusing to go outside of guardrails.”  

The company said that the system was “82% less likely to respond to requests for disallowed content.” Many users have attempted to trick ChatGPT into answering inappropriately or overriding its content moderation, likely providing OpenAI with many examples of malicious prompts.

5. The chatbot is more creative

OpenAI said the update was the most creative and collaborative version of ChatGPT yet.

The company said the changes may be “subtle” in casual conversations but would become clear when the bot’s faced with complex situations. GPT-4 is also “able to handle much more nuanced instructions than GPT-3.5,” OpenAI said.

In collaboration with users, the chatbot can produce and edit creative-writing tasks such as drafting screenplays. The company added that the updated chatbot could learn a user’s writing style. 

ChatGPT is Available Again In Italy

San Francisco-based OpenAI, ChatGPT’s maker, announced on Friday that the artificial intelligence chatbot is available again in Italy after being blocked for nearly a month by regulators citing privacy concerns.

OpenAI said it now meets all the conditions that the Italian data protection authority wanted satisfied by an April 30 deadline.

“ChatGPT is available again to our users in Italy,” OpenAI told The Globe and Mail over email. “We are excited to welcome them back, and we remain dedicated to protecting their privacy.”

The rapid development of generative AI systems like ChatGPT has raised fears among officials and even tech leaders about potential ethical and societal risks.

In April 2023, the Italian watchdog, known as the Garante, ordered OpenAI to temporarily stop processing Italian users’ personal information while it examined a possible data breach that may have violated the EU’s data privacy rules.

OpenAI has now claimed that it has “addressed or clarified the issues” raised by the Garante.

As part of these measures, ChatGPT now provides information on its website about how it collects and uses data, offers a new form for EU residents to object to having their data used for training, and will add a tool to verify users’ ages.

The Garante said in a statement that it “welcomes the measures OpenAI implemented.”

Areas of concern

Last month, the watchdog found that some users’ messages and payment information had been exposed to others. Other areas of concern were whether OpenAI had a legal basis for collecting the massive amounts of data used to train ChatGPT’s algorithms, and the fact that the system could sometimes generate false information about individuals.

The return of the AI system was well received, with Infrastructure Minister Matteo Salvini writing on Instagram that his League party “is committed to help start-ups and development in Italy” through the use of the technology.

But the battle is not over yet: France’s data privacy regulator and Canada’s privacy commissioner are investigating ChatGPT after complaints about the chatbot surfaced.

Meanwhile, last month, the European Data Protection Board (EDPB), the body that unites Europe’s national privacy watchdogs, formed a task force on ChatGPT aimed at developing a common policy on setting privacy rules for artificial intelligence.

Will bans be applied elsewhere, or has OpenAI made enough changes to ensure its program is compatible with all countries’ regulations?

OpenAI Is Working on a Humanoid Robot

1X Technologies, a Norwegian start-up that specializes in the development of humanoid robots, has received a US$23.5 million investment from OpenAI, the developer of ChatGPT. The investment will allow the start-up to produce new androids on a commercial scale. Its latest project, a bipedal robot called NEO, is designed to work alongside human workers in warehouse settings and help alleviate labor shortages.

NEO is intended to be autonomous and able to adapt to different environments. While there is no indication that it will be equipped with a ChatGPT-inspired form of intelligence, 1X Technologies may be able to draw on OpenAI’s expertise to improve NEO’s knowledge and performance.

The development of artificial intelligence and humanoid robots is an area where many cutting-edge projects are pushing new boundaries. There are already many projects underway, including the start-up Figure’s humanoid robot that combines human-like dexterity with artificial intelligence, Boston Dynamics’ Atlas robot that can handle heavy loads, Xiaomi’s CyberOne robot that can perceive 3D space and human gestures and emotions, and Tesla’s Optimus humanoid robot project.

With OpenAI’s investment, 1X Technologies may be able to develop NEO into an advanced autonomous robot capable of adapting to different practical applications. While the competition is tough, the potential for innovation and advancement in this field is high.

OpenAI’s ChatGPT Banned in Italy Over Privacy Concerns

Italy has become the first Western country to block artificial intelligence chatbot ChatGPT over data privacy concerns.

The Italian data-protection authority temporarily banned the chatbot as it investigated a possible violation of privacy rules.

Italy’s privacy watchdog, Garante, said it was taking provisional action “until ChatGPT respects privacy”, including temporarily limiting the company from processing Italian users’ data.

It questioned whether OpenAI had legal justification for its “massive collection and processing of personal data” used to train the platform’s algorithms.

The Italian regulator also accused OpenAI of failing to check the age of ChatGPT’s users, who are supposed to be aged 13 or above.

ChatGPT, created by US start-up OpenAI and backed by Microsoft, is known for its ability to generate essays, songs, exams, and news articles from brief prompts.

OpenAI said it had disabled ChatGPT for users in Italy following the government’s request.

Concerns grow about AI boom

The ban came just days after a group of more than 1,000 artificial intelligence experts, including Tesla CEO Elon Musk, called for companies such as OpenAI to pause the development of AI models in an open letter that cited potential risks to society.

ChatGPT has set off a tech craze since its release in November last year, prompting rivals to launch similar products and companies to integrate it or similar technologies into their products.

Italy’s restriction affects the web version of ChatGPT, popularly used as a writing assistant.

Alp Toker, director of the advocacy group NetBlocks which monitors internet access worldwide, said Italy’s action was “the first nation-scale restriction of a mainstream AI platform by a democracy”.

The chatbot is also unavailable in mainland China, Hong Kong, Iran, Russia and parts of Africa where residents cannot create OpenAI accounts.

The Italian watchdog said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data or face a fine of up to either 20 million euros (about $22 million) or 4 per cent of annual global revenue.
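Under the EU’s GDPR, the applicable ceiling is generally whichever of those two figures is higher. A worked example, using made-up revenue figures purely for illustration:

```python
def gdpr_fine_cap(annual_global_revenue_eur):
    """Maximum GDPR fine: the greater of EUR 20 million or 4% of
    annual global revenue (per Article 83(5) GDPR)."""
    return max(20_000_000.0, 0.04 * annual_global_revenue_eur)

# Hypothetical company with EUR 1 billion in annual revenue:
print(gdpr_fine_cap(1_000_000_000))  # 40000000.0 -> the 4% figure applies
# Smaller company with EUR 100 million in revenue:
print(gdpr_fine_cap(100_000_000))    # 20000000.0 -> the flat cap applies
```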

The Italian regulator also said on March 20 that ChatGPT had experienced a data breach involving “users’ conversations” and subscriber payments.

AI regulation needed 

Experts said new regulations were needed to govern AI because of its potential impact on national security, jobs and education.

European consumer group BEUC last week called for EU authorities to investigate ChatGPT and similar AI chatbots. BEUC said it could be years before the EU’s AI legislation takes effect, so authorities need to act faster to protect consumers from possible risks.

“In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” Deputy Director General Ursula Pachl said.

She said waiting for the EU’s AI Act, “which will happen years from now, is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people”.

ChatGPT is estimated to have reached 100 million monthly active users in January, just two months after launch, making it the fastest-growing consumer application in history, according to a UBS study published last month.

Top 5 Ways GPT-4 Can Increase Workers’ Productivity

GPT-4, the latest version of the language model that powers ChatGPT, brings considerable improvements over GPT-3 and GPT-3.5, and workers can use it to enhance their productivity and the quality of their work.

The newest version of GPT can accept input in the form of both text and images, whereas GPT-3 and GPT-3.5 could only take text.

ChatGPT is an innovative artificial intelligence technology created by OpenAI, which aims to deliver effective results in less time.

Here are some potential ways in which a language model like GPT-4 could increase workers’ productivity, based on the advancements made by previous models like GPT-3:

  1. Automating Repetitive Tasks: GPT-4 could be used to automate repetitive tasks such as data entry, email responses, and social media posts, freeing up workers’ time to focus on more complex tasks.
  2. Enhancing Communication: GPT-4 could help workers communicate more efficiently by providing real-time language translation and automatic summarization of long texts or meetings, making it easier to understand and respond to information.
  3. Improving Research: GPT-4 could assist workers in conducting research by providing accurate and relevant information from a vast range of sources, making it easier to find the data needed to complete tasks.
  4. Streamlining Workflows: GPT-4 could help workers streamline their workflows by providing suggestions on the best way to complete a task, identifying potential roadblocks, and offering solutions to overcome them.
  5. Personalizing Work: GPT-4 could help workers personalize their work by analyzing their work habits, preferences, and productivity patterns and offering customized recommendations on how to improve their workflow and increase productivity.
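As a concrete sketch of item 1, the snippet below assembles the JSON payload for a chat-completion request that drafts an email reply. The endpoint and message format follow OpenAI’s publicly documented chat API; the helper function name and prompt wording are illustrative, and actually sending the request would require an OpenAI API key:

```python
import json

# OpenAI's chat-completions endpoint (requests must carry an API key).
API_URL = "https://api.openai.com/v1/chat/completions"

def build_email_reply_request(incoming_email, model="gpt-4"):
    """Assemble a chat-completion payload asking the model to draft
    a polite reply to an incoming email. Illustrative helper, not
    part of any official SDK."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You draft brief, polite business email replies."},
            {"role": "user",
             "content": f"Draft a reply to this email:\n\n{incoming_email}"},
        ],
    }

payload = build_email_reply_request("Hi, can we move our call to 3pm?")
body = json.dumps(payload)  # this JSON body would be POSTed to API_URL
print(payload["model"])                # gpt-4
print(payload["messages"][0]["role"])  # system
```

The same pattern generalizes to the other items on the list: only the system prompt and user content change, not the request structure.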

Bill Gates Says OpenAI’s GPT Is the Most Revolutionary Tech Development Since 1980

The development of artificial intelligence is as important as the invention of the microprocessor, the personal computer, the Internet, and the cell phone. It will alter how people work, study, travel, receive medical treatment, and interact with one another.

Bill Gates, co-founder of Microsoft, says that OpenAI’s GPT AI model is the most ground-breaking development in technology since he first encountered a modern graphical user interface (GUI) in 1980. Before then, computers were controlled through command lines.

Gates used GUI technology to create Windows, a potent piece of modern software. He now draws parallels to OpenAI’s GPT models, which can generate nearly usable computer code and language that closely resembles human output.

In a blog post, he wrote that he had earlier challenged the OpenAI team to develop an AI system that could pass the Advanced Placement Biology exam. GPT-4, which became publicly available last week, reportedly achieved the highest score. According to Gates, the entire experience “was wonderful. I was aware that I had just witnessed the biggest development in technology since the graphical user interface was created.”

OpenAI, the firm that developed the GPT model, has strong ties to Gates and Microsoft. Microsoft made a $10 billion investment in the firm and sells some of its AI technologies via its Azure cloud services. Gates suggests that in discussing AI, people should “balance fears” about biased, inaccurate, or unpleasant technology against its capacity to make life better. He also believes that governments and charitable groups ought to fund AI development to improve the health and education systems in developing countries, since corporations will not always choose to make such investments on their own.

OpenAI and Microsoft Will Collaborate With a New AI Startup Accelerator

Neo, the startup accelerator established by Silicon Valley investor Ali Partovi, is joining with OpenAI and Microsoft Corp to provide free software and guidance to businesses in a new track focused on artificial intelligence.

According to the firm’s announcement, businesses selected for Neo’s AI cohort will receive credits to use Microsoft’s Azure cloud as well as OpenAI’s GPT language models, the DALL-E image generation tool, and other products. Microsoft and OpenAI researchers and mentors will also be available to the startups.

Microsoft also reportedly increased its investment in OpenAI by a whopping $10 billion. According to CB Insights, funding for startups in generative artificial intelligence, so named because the technologies are used to generate new material, reached $2.65 billion in 2022, a 71% rise from the previous year. Meanwhile, Google, a subsidiary of Alphabet Inc., opened up access to Bard, a conversational AI service that competes with ChatGPT.

According to an OpenAI Study, GPT Could Affect the Jobs of 80% of U.S. Workers

It won’t take long for AI to become a common workplace tool as sophisticated large language models like OpenAI’s GPT-4 grow more adept at writing, coding, and performing math with greater precision and consistency. OpenAI is wagering that GPT models will automate at least a portion of the work of the vast majority of people.

In a paper, researchers from OpenAI and the University of Pennsylvania estimated that around 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of GPTs, a family of well-known large language models developed by OpenAI.

They also found that roughly 19 percent of workers could have at least 50 percent of their duties affected. GPT exposure is greater for higher-income jobs, they stated in the report, but spreads across practically all industries.

To choose the tasks to test for each occupation, the researchers drew on the main occupational database in the U.S., which covers 1,016 jobs with standardized descriptions. To assess whether access to GPT, directly or through a secondary GPT-powered system, would cut the time needed for a human to complete a given task by at least 50%, they gathered annotations from both humans and GPT-4 using a rubric. Higher exposure meant that GPT could produce work of comparable or higher quality while cutting the task’s completion time in half or more.
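The rubric-based exposure measure can be pictured with a toy calculation. Assuming, purely for illustration, that each of an occupation’s tasks has been labeled as exposed or not under the 50%-time-saving criterion, the occupation’s exposure score is simply the share of exposed tasks:

```python
def occupation_exposure(task_labels):
    """Share of an occupation's tasks judged 'exposed': tasks where
    GPT access cuts completion time by at least half at comparable
    quality, per the study's rubric (toy version of the measure)."""
    if not task_labels:
        return 0.0
    return sum(task_labels) / len(task_labels)

# A hypothetical occupation with 4 of its 8 tasks exposed:
print(occupation_exposure([True] * 4 + [False] * 4))  # 0.5
```

Under this framing, the study’s headline numbers describe the distribution of such scores: most workers sit above 0.1, and roughly a fifth sit above 0.5.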

According to the study’s findings, the importance of scientific and critical-thinking skills is strongly negatively correlated with exposure, suggesting that jobs requiring these skills are less likely to be affected by current language models. Programming and writing skills, by contrast, show a strong positive correlation with exposure, suggesting that those professions are more prone to being impacted.

Mathematicians, tax preparers, authors, web designers, accountants, journalists, and legal secretaries are among the professions with the most exposure. Graphic designers, search marketing strategists, and finance managers are among the professions with the greatest variance, or those least likely to be harmed by GPT.

The researchers also list the expected effects of GPT across industries: data processing services, information services, the publishing sector, and insurance carriers are expected to see the greatest effects, while food manufacturing, wood product manufacturing, and support activities for agriculture and forestry are expected to see the least.

The researchers concede that their study had limitations: the human annotators were familiar with the models’ capabilities, and they did not work in some of the occupations examined. Another drawback was that GPT-4’s outputs were not always reliable, since the model was sensitive to the phrasing and structure of the prompt and occasionally made up information.

Naturally, it should be noted that OpenAI itself produced the study. As a for-profit organization building AI models, OpenAI has a strong motive to present its tools as disrupting industries and automating tasks, which ultimately appeals to employers.

Still, the study shows how GPT models may soon become a widely used tool. Google and Microsoft have already announced that AI will be built into their search engines and office tools such as email and documents. Startups are already using GPT-4 and its coding capabilities to cut costs on hiring human workers. According to the researchers’ analysis, LLMs like GPT-4 are likely to have widespread effects: even if development of new capabilities stopped today, the expanding economic impact of LLMs would be expected to continue and grow.

Bill Gates Likes ChatGPT

Bill Gates and Steve Jobs, two of this century’s most well-known computer titans, were friends for a very long time. They also had a famously fierce rivalry that eventually grew into friendship.

When asked what one lesson he took away from Jobs, Gates responded that his inspiration came from Jobs’s understanding of design and marketing. “Steve taught me a lot. We had nothing in common. He didn’t even write a single line of code, but he had excellent design and marketing sense, and he also had a fantastic intuition for smart engineers. Steve was such a special person who could gain a lot from others.”

He did overwork people, though, so he wasn’t perfect.

China, meanwhile, has evolved from a technical backwater into one of the world’s top centers for innovation. Shenzhen, the country’s third largest city, is sometimes referred to as a second Silicon Valley.

When asked whether Russia or China is the more innovative authoritarian state, Gates said, “The amount of invention in China is fairly big, nothing like American levels.” Russia, despite its smaller population and excellent mathematical talent, has never fully mastered scaling, he added.

“They entirely abandoned medical innovation, despite the fact that they were excellent 50 years ago. So, yeah, I feel bad for the young generation there that might be making contributions to IT and health advancements. And because they will now be essentially cut off from it, some of them are fleeing the country.”

ChatGPT’s most recent iteration is built on OpenAI’s GPT-3.5 large language model (LLM).

Privacy Alert: ChatGPT Exposes Private Conversations

OpenAI CEO expresses regret, claims error has been fixed.

Artificial Intelligence (AI) is transforming our lives and work, but recent developments have raised concerns about the privacy and security of user data when using AI-powered tools.

One of these concerns is the ChatGPT glitch that allowed some users to see the titles of other users’ conversations.

ChatGPT glitch

ChatGPT is an AI chatbot developed by OpenAI that allows users to draft messages, write songs, and code. Each conversation is stored in the user’s chat history bar.

However, as early as Monday, users began seeing conversations in their chat history that they had never had with the chatbot. Users shared examples on social media sites, including Reddit and Twitter.

Company Response

OpenAI CEO Sam Altman expressed regret and confirmed that the “significant” error had been fixed. The company also briefly disabled the chatbot to address the issue. OpenAI claims that users couldn’t access the actual chats. Despite this, many users are still worried about their privacy on the platform.

Privacy Concerns

The glitch suggests that OpenAI has access to user chats, which raises questions about how the company uses this information.

The company’s privacy policy states that user data, such as prompts and responses, may be used to continue training the model.

However, that data is only used after personally identifiable information has been removed. Users fear that their private information could be released through the tool.
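As a toy illustration of that kind of de-identification step (OpenAI’s actual pipeline is not public, and real systems go far beyond pattern matching), a minimal pass might mask obvious identifiers such as email addresses and phone numbers before a prompt is ever considered for training:

```python
import re

# Simplified patterns for demonstration; real PII detection
# handles names, addresses, account numbers, and much more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text):
    """Mask email addresses and phone-number-like strings."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact_pii("Reach me at jane.doe@example.com or +1 415-555-0100."))
# Reach me at [EMAIL] or [PHONE].
```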

AI Tools and Privacy

The ChatGPT glitch comes as Google and Microsoft compete for control of the burgeoning market for AI tools. Concerns have been raised that missteps like these could be harmful or have unintended consequences.

There needs to be a greater focus on privacy and security concerns as AI becomes more prevalent in our lives. Companies must be transparent about how they collect, store, and use user data and must work quickly to address any issues.