Google’s CEO has said the company plans to bring conversational AI to its search product.
Conversational AI features will be added to Google’s search engine, Alphabet CEO Sundar Pichai confirmed to The Wall Street Journal, as the tech giant ramps up its artificial intelligence efforts. Pichai countered the idea that ChatGPT and other bots pose a threat to Google’s core search business, saying, “The opportunity space, if anything, is bigger than before.” The AI race is unfolding as Google faces pressure to cut costs while increasing productivity, with WSJ reporting that Pichai “wouldn’t directly address” the prospect of more staffing cuts following January’s layoff announcement affecting 12,000 people.
The Wall Street Journal sat down with Google CEO Sundar Pichai for an interview on Tuesday. A few takeaways:
– Sundar doesn’t think search and AI-powered chatbots are a zero-sum game. “The opportunity space, if anything, is bigger than before,” he said.
– Google users will be able to interact directly with conversational artificial-intelligence models (large language models, or LLMs) in search. That could take a few different forms, but one possibility is letting users ask follow-up questions to their original queries.
– There’s more juice to squeeze to reach the 20% productivity gains Sundar outlined last year. “We are pleased with the progress, but there is more work left to do,” he said.
– Sundar acknowledged Google sped up its long-running work on chatbots following the release of ChatGPT. “We were iterating to ship something, and maybe timelines changed, given the moment in the industry.”
Alphabet Inc’s Google on Tuesday released new details about the supercomputers it uses to train its artificial intelligence models, saying the systems are both faster and more power-efficient than comparable systems from Nvidia Corp.
Google has designed its own custom chip called the Tensor Processing Unit, or TPU. It uses those chips for more than 90 per cent of the company’s work on artificial intelligence training, the process of feeding data through models to make them useful at tasks like responding to queries with human-like text or generating images.
The Google TPU is now in its fourth generation. Google on Tuesday published a scientific paper detailing how it has strung more than 4,000 of the chips together into a supercomputer using its own custom-developed optical switches to help connect individual machines.
Improving these connections has become a key point of competition among companies that build AI supercomputers because so-called large language models that power technologies like Google’s Bard or OpenAI’s ChatGPT have exploded in size, meaning they are far too large to store on a single chip.
The models must instead be split across thousands of chips, which must then work together for weeks or more to train the model. Google’s PaLM model – its largest publicly disclosed language model to date – was trained by splitting it across two of the 4,000-chip supercomputers over 50 days.
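To get a feel for why a single chip cannot hold such a model, here is a minimal back-of-the-envelope sketch in Python; the per-chip memory figure is an illustrative assumption, not a published TPU specification.

```python
# Back-of-the-envelope sketch: why a large language model must be split
# ("sharded") across many chips. The per-chip memory figure below is an
# illustrative assumption, not an official spec.
params = 540e9            # PaLM is publicly described as a ~540B-parameter model
bytes_per_param = 2       # 16-bit floats
model_bytes = params * bytes_per_param        # ~1.08 TB for the weights alone

chip_memory_bytes = 32e9  # assume ~32 GB of high-bandwidth memory per chip

chips_to_hold_weights = model_bytes / chip_memory_bytes
print(f"Weights: {model_bytes / 1e12:.2f} TB")
print(f"Chips needed just to hold the weights: {chips_to_hold_weights:.0f}")
# Training also needs optimizer state, gradients, and activations, which
# multiplies the memory footprint several times over -- hence thousands of chips.
```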
Google said its supercomputers make it easy to reconfigure connections between chips on the fly, helping avoid problems and tweak for performance gains.
“Circuit switching makes it easy to route around failed components,” Google Fellow Norm Jouppi and Google Distinguished Engineer David Patterson wrote in a blog post about the system. “This flexibility even allows us to change the topology of the supercomputer interconnect to accelerate the performance of an ML (machine learning) model.”
While Google is only now releasing details about its supercomputer, it has been online inside the company since 2020 in a data centre in Mayes County, Oklahoma. Google said that startup Midjourney used the system to train its model, which generates fresh images after being fed a few words of text.
In the paper, Google said that for comparably sized systems, its supercomputer is up to 1.7 times faster and 1.9 times more power-efficient than a system based on Nvidia’s A100 chip that was on the market at the same time as the fourth-generation TPU.
Google said it did not compare its fourth-generation TPU to Nvidia’s current flagship H100 chip because the H100 came to market after Google’s chip and is made with newer technology.
Google hinted that it might be working on a new TPU that would compete with the Nvidia H100 but provided no details, with Jouppi telling Reuters that Google has “a healthy pipeline of future chips.”
Turnitin’s new AI-writing detection service, launched on Tuesday, can identify AI-generated text with 98% accuracy, the company claims. By contrast, OpenAI’s own detection tool correctly identifies AI-written text only 26% of the time.
According to Turnitin’s CEO, Chris Caren, the new tool was developed in response to educators’ requests to detect AI-written text accurately. “They need to be able to detect AI with very high certainty to assess the authenticity of a student’s work and determine how to best engage with them,” Caren explained.
However, the launch of the new tool has been met with mixed reactions. Some institutions, including Cambridge and other members of the Russell Group, representing leading UK universities, have announced their intention to opt out of the new service.
False positives
There are concerns that the tool may falsely accuse students of cheating. Critics also note that it involves handing student data to a private company and may discourage students from experimenting with new technologies such as generative AI.
As a result, the UCISA, a UK membership body supporting technology in education, has worked with Turnitin to ensure universities can opt out of the feature temporarily. The American Association of Colleges and Universities has also expressed “dubiousness” over the detection system, given rapid developments in AI.
Deborah Green, CEO of UCISA, expressed concern that Turnitin was launching its AI detection system with little warning to students as they prepared coursework and exams this summer. While universities have broadly welcomed the new tool, Green added that they need time to assess it.
Impact on lecturers and universities
Lecturers worry they won’t know why essays have been flagged as AI-written. At a single university, an error rate of just 1% could mean hundreds of students wrongly accused of cheating, with little recourse to appeal, warned Charles Knight, assistant director at consultancy Advance HE.
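A quick back-of-the-envelope calculation makes the stakes concrete; the enrolment figure here is hypothetical:

```python
# Hypothetical illustration of Knight's point: even a small false-positive
# rate scales to many wrongly accused students at a large institution.
students = 30_000             # hypothetical enrolment at one university
false_positive_rate = 0.01    # the 1% error rate cited above

wrongly_flagged = students * false_positive_rate
print(f"Students wrongly accused: {wrongly_flagged:.0f}")  # 300
```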
Turnitin did not immediately respond to a request for comment on the concerns raised about the AI detection tool, but the company has stated that the technology had been “in development for years” and that it provides resources to “help the education community navigate and manage [it]”.
The new tool’s launch has sparked a debate among academics, higher education consultants, and cognitive scientists worldwide over how universities might develop new modes of assessment in response to AI’s threat to academic integrity.
Clearview AI, a controversial facial recognition company, recently announced that it has scraped more than 30 billion photos from social media platforms. Despite facing multiple setbacks and bans from cities such as Portland, San Francisco, and Seattle, Clearview AI remains undeterred and has continued to grow its database significantly over the last year.
While Clearview AI is no longer able to provide its services to private businesses, it is still being used by more than 2,400 law enforcement agencies in the United States. The company’s CEO claims that police have used Clearview AI’s tools over a million times, and its database of scraped social media images now tops 30 billion.
The company’s AI-powered technology can recognize millions of faces thanks to images uploaded to social media. However, this technology has been met with widespread criticism due to concerns over privacy and the potential for misuse. Around 17 cities have banned the use of Clearview AI, but law enforcement agencies seem more than happy to use the platform.
The Miami Police Department recently confirmed that it uses Clearview AI regularly, which is a rare admission by law enforcement. The fact that law enforcement agencies are willing to use this technology despite the controversy surrounding it raises concerns about the impact it could have on civil liberties and human rights.
While facial recognition technology can be useful in certain situations, such as identifying criminals or finding missing persons, its potential for misuse is significant. Clearview AI’s massive database of scraped social media images is a prime example of how technology can be used to infringe on privacy rights. As this technology continues to evolve, it is important that we have a discussion about its use and potential impact on society.
During a recent interview with Eric Sheridan, the senior U.S. internet analyst at Goldman Sachs Research, Emad Mostaque, CEO of Stability AI, made some bold predictions about the future of AI and its impact on society. He compared AI to a highly gifted intern with poor recall, saying that if the memory problem is resolved, AI might be ready for promotion to analyst- or associate-level work.
According to Mostaque, programmers may be replaced by AI in just five years. He cited the fact that AI is now producing software code successfully, to the point that it generates 41% of all new software code on GitHub. This speed of disruption is “terrifying,” he said, describing AI as a “far bigger disruption than the pandemic.”
Mostaque also predicted that AI would soon have a significant impact on various sectors of society, including entertainment, education, medicine, and, of course, the IT industry.
Mostaque claimed that OpenAI’s ChatGPT, despite being a non-specialized model, could pass Google’s test for a high-level software engineer. He projected that there would be no programmers in five years; while this offers the potential to increase production and efficiency, it also raises concerns about job security in fields that previously seemed secure.
While there are undoubtedly challenges associated with the development of AI, Mostaque believes that the benefits it brings will ultimately outweigh the risks. As AI continues to evolve and become more sophisticated, we can expect to see it play an increasingly important role in shaping the future of society.
U.S. President Joe Biden has called on leading technology companies to ensure the safety of their AI products before releasing them to the public, amid growing concerns over the risks of artificial intelligence. Speaking at a meeting with science and technology advisers on April 4, Biden emphasized the need for appropriate safeguards to protect society, national security, and the economy, and cited social media as an example of the harm powerful technologies can do in the absence of such safeguards.
Biden cites social media as a cautionary example
During the meeting, Biden cited social media as an example of the negative impact that powerful technologies can have when appropriate measures to protect against them are not in place. He said: “Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people.”
He also emphasized the need for non-partisan privacy laws that limit the personal data gathered by technology firms, prohibit child-targeted advertising, and prioritize health and safety in product development.
Biden’s comments come amid growing concerns about the safety and ethical implications of AI, as the technology continues to develop rapidly. The ability to swiftly and effectively collect and analyze enormous amounts of data has been a significant contributing factor in the development of AI.
The demand for automated systems that can complete tasks too risky, challenging, or time-consuming for humans has also driven the development of AI.
Ethics and safety concerns drive AI research
However, societal and cultural issues have also influenced the development of AI. Discussions concerning the ethics and the ramifications of AI have arisen in response to worries about job losses and automation.
Concerns have also been raised about the possibility of AI being employed for malicious purposes, such as cyberattacks or disinformation campaigns. As a result, many researchers and decision-makers are attempting to ensure that AI is created and applied ethically and responsibly.
AI is being increasingly utilized in a variety of modern-day applications, from virtual assistants to self-driving cars, medical diagnostics, and financial analysis. Researchers are also exploring novel ideas like reinforcement learning, quantum computing, and neuromorphic computing.
One important trend in modern-day AI is the shift toward more human-like interactions, with voice assistants like Siri and Alexa leading the way. Natural language processing has also made significant progress, enabling machines to understand and respond to human speech with increasing accuracy.
The recently developed ChatGPT is an example of AI that can understand natural language and generate human-like responses to a wide range of queries and prompts.
President Biden’s call for tech companies to prioritize the safety and ethical implications of AI underscores the need for a comprehensive approach to regulating and implementing the technology. While AI presents numerous benefits, it also poses significant risks that must be addressed through responsible and ethical development and implementation.
Artificial intelligence is here, and it’s coming for your job. So promising are the technology’s capabilities that Microsoft — amid laying off 10,000 people — has announced a “multiyear, multibillion-dollar investment” in the revolutionary technology, which is growing smarter by the day. And the rise of the machines leaves many well-paid workers vulnerable, experts warn.
“AI is replacing the white-collar workers. I don’t think anyone can stop that,” said Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology. “This is not crying wolf,” Shi told The Post. “The wolf is at the door.”
From the financial sector to health care to publishing, a number of industries are vulnerable, Shi said. But as AI continues its mind-blowing advancements, he maintains that humans will learn how to harness the technology.
Already, AI is upending certain fields, particularly after the release of ChatGPT, a surprisingly intelligent chatbot released in November that’s free to the public.
Earlier this month, it emerged that consumer publication CNET had been using AI to generate stories since late last year — a practice put on pause after fierce backlash on social media. Academia was recently rocked by the news that ChatGPT had scored higher than many humans on an MBA exam administered at Penn’s elite Wharton School. After Darren Hick, a philosophy professor at South Carolina’s Furman University, caught a student cheating with the wildly popular tool, he told The Post that the discovery had left him feeling “abject terror” for what the future might entail.
Hick and many others are right to be worried, said Chinmay Hegde, a computer science and electrical engineering associate professor at New York University.
“Certain jobs in sectors such as journalism, higher education, graphic and software design — these are at risk of being supplemented by AI,” said Hegde, who calls ChatGPT in its current state “very, very good, but not perfect.”
For now, anyway.
Here’s a look at some of the jobs most vulnerable to the fast-learning, ever-evolving technology.
Education
As it stands now, ChatGPT — currently banned in NYC schools — “can easily teach classes already,” Shi said. The tool would likely be most effective at the middle or high school level, he added, as those classes reinforce skills already established in elementary school.
“Although it has bugs and inaccuracies in terms of knowledge, this can be easily improved. Basically, you just need to train the ChatGPT,” Shi continued.
As for higher education, both Shi and Hegde maintain that college courses will need a human leader for the foreseeable future, but the NYU professor did admit that, in theory, AI could teach without oversight.
In the meantime, educators are seeing their roles transformed nearly overnight. It’s already become a struggle to adapt teaching and testing methods in efforts to keep up with the increasingly talented ChatGPT, which, according to Shi, can successfully complete a corner-cutting student’s coursework at a master’s level.
Doctoral candidates hoping for a shortcut are likely out of luck: Creating an independent thesis on an area not often or thoroughly studied is beyond AI’s abilities for the time being, he said.
Finance
Wall Street could see many jobs axed in coming years, as bots like ChatGPT continue to better themselves, Shi told The Post.
“I definitely think [it will impact] the trading side, but even [at] an investment bank, people [are] hired out of college and spend two, three years to work like robots and do Excel modeling — you can get AI to do that,” he explained. “Much, much faster.”
Shi maintains, however, that crucial financial and economic decisions will always be left in human hands, even if the data sheets are not.
Software engineering
Website designers and engineers responsible for comparatively simple coding are at risk of being made obsolete, Hegde warns.
“I worry for such people. Now I can just ask ChatGPT to generate a website for me — any type of person whose routine job would be doing this for me is no longer needed.”
In essence, AI can draft the code — hand-tailored to a user’s requests and parameters — to build sites and other pieces of IT.
Relatively uncomplicated software design jobs will be a thing of the past by 2026 or sooner, Shi said.
“As time goes on, probably today or the next three, five, 10 years, those software engineers, if their job is to know how to code … I don’t think they will be broadly needed,” Shi said.
Journalism
The technology is off to a rocky start in the news-gathering business: CNET’s recent attempts (and subsequent corrections to its computer-generated stories) were preceded by the Guardian, which had GPT software write a piece in 2020 — with mixed results.
Still, there is one job the technology is already highly qualified for, according to Hegde.
“Copy editing is certainly something it does an extremely good job at. Summarizing, making an article concise and things of that nature, it certainly does a really good job,” he said, noting that ChatGPT is excellent at designing its own headlines.
One major shortcoming — salvation for reporters and copy editors, at least for now — is the tool’s inability to fact-check efficiently, he added.
“You can ask it to provide an essay, to produce a story with citations, but more often than not, the citations are just made up,” Hegde continued. “That’s a known failure of ChatGPT and honestly we do not know how to fix that.”
Graphic design
In 2021, ChatGPT developer OpenAI launched another tool, DALL-E, which can generate tailored images from user-generated prompts on command. Along with doppelgangers such as Craiyon, Stable Diffusion, and Midjourney, the tool poses a threat to many in the graphic and creative design industries, according to Hegde.
“Before, you would ask a photographer or you would ask a graphic designer to make an image [for websites]. That’s something very, very plausibly automated by using technology similar to ChatGPT,” he continued.
Shi recently commanded DALL-E to make a cubist portrait of rabbits for the Lunar New Year, which he said came out “just amazing.” But, although it captured the hard-lined, Picasso-derived painting style, Shi noticed that it was not successful with more nuanced techniques — exposing a current shortcoming in the tech.
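As a rough illustration, a request like Shi’s could be issued programmatically; this is a minimal sketch using the openai Python package’s v0.x images endpoint, with the API key and prompt as placeholder assumptions rather than Shi’s actual setup.

```python
# Minimal sketch of text-to-image generation with OpenAI's images endpoint
# (openai Python package, v0.x API). The key and prompt are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with a real key

result = openai.Image.create(
    prompt="a cubist portrait of rabbits for the Lunar New Year",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # URL of the generated image
```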
Meta CEO Mark Zuckerberg is usually seen in casual attire: T-shirts, jeans, and sneakers. So it’s a little different to see him in a designer Louis Vuitton outfit, walking the ramp. Artificial intelligence has made it possible, and the images look so real that they have caused plenty of confusion among internet users.
The fake images produced by AI are so uncannily realistic that it would be challenging to distinguish them from real photos. Zuckerberg can be seen maintaining the flawless expression that models frequently sport on the runway.
It’s not the first time artificial intelligence (AI) images have swept the internet. Many of the expert artists who have employed this technology have produced sometimes unimaginable images.
The furore around ChatGPT and other innovations in generative AI — be it Google’s Bard or other new AI image tools — can be equally exciting for some and overwhelming for many others, particularly in the field of freelancing. Freelancers, who operate independently and offer their services to various clients, have found AI technologies to be a game-changer in their field. From content creation to customer support, AI-powered tools like ChatGPT have provided freelancers with the ability to automate and streamline their work processes, allowing them to focus on delivering high-quality work to their clients.
The impact of generative AI technologies like ChatGPT on the freelance economy cannot be overstated. One of the most significant benefits is the increased efficiency and productivity of freelancers. ChatGPT, for example, can generate high-quality content within minutes, which would otherwise take hours for a human writer to complete. This enables freelancers to take on more work, deliver results faster, and ultimately earn more income.
Another significant benefit of generative AI technologies like ChatGPT is that they provide freelancers with the ability to offer more diverse services to their clients. For example, a freelance writer who previously only offered content creation services can now offer additional services like social media management, chatbot creation, and customer support through the use of generative AI technologies. This increased service offering enables freelancers to diversify their income streams and become more competitive in their market.
The impact of generative AI technologies like ChatGPT on the freelance economy is not limited to increased efficiency and productivity or expanded service offerings. AI-powered tools have also given freelancers the ability to offer more personalized services to their clients. Chatbots, for example, can be programmed to provide personalized responses to customer inquiries, which can lead to increased customer satisfaction and loyalty. This, in turn, can lead to repeat business for freelancers.
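As a sketch of what that programming might look like, the snippet below wires a personalized system prompt into a chat-completion call. It assumes the openai Python package (v0.x API) with a configured key; the business context, model choice, and customer fields are hypothetical.

```python
# Minimal sketch of a personalized support chatbot for a freelancer's client.
# Assumes the openai Python package (v0.x API); names and prompts are made up.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with a real key

def personalized_reply(customer_name: str, question: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a friendly support agent for a freelance web-design studio."},
            {"role": "user",
             "content": f"Customer {customer_name} asks: {question}"},
        ],
    )
    return response.choices[0].message["content"]

print(personalized_reply("Ana", "Can you update my site's logo by Friday?"))
```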
Another benefit of generative AI technologies like ChatGPT is that they have enabled freelancers to work remotely and communicate more effectively with clients from all over the world. AI-powered translation tools, for example, have made it easier for freelancers to communicate with clients who speak different languages. This has opened up new opportunities for freelancers to work with clients from all over the world, without the need for expensive travel or language learning.
Despite the many benefits of generative AI technologies like ChatGPT, there are also some potential downsides to consider. One concern is that the widespread adoption of AI-powered tools could lead to job displacement. As AI-powered tools become more sophisticated, they may be able to replace human workers in certain tasks, such as content creation or customer support. This could lead to a reduction in demand for certain types of freelancers, particularly those who offer services that can be automated.
Another concern is that the increasing use of AI-powered tools could lead to a homogenization of services. As more and more freelancers adopt AI-powered tools like ChatGPT, it could lead to a standardization of services, with clients choosing freelancers based on the quality of their AI tools rather than their unique skills and expertise. This could lead to a commoditization of freelancing, with freelancers competing solely on price rather than on their unique value propositions.
Despite these potential downsides, the overall impact of generative AI technologies like ChatGPT on the freelance economy is overwhelmingly positive. These tools have enabled freelancers to work more efficiently, offer more diverse services, and provide more personalized experiences to their clients. As AI-powered tools continue to evolve and become more sophisticated, it is likely that they will play an increasingly important role in the freelance economy. Freelancers who are able to embrace these technologies and incorporate them into their work processes are likely to be the most successful in the years to come.
Generative AI tools like ChatGPT have created a burgeoning market for “prompt engineers” who are responsible for improving the responses of AI chatbots. These high-paying jobs can offer salaries as high as $335,000 a year and often don’t require a degree in tech.
Anthropic, a leading AI safety and research company, is currently seeking a qualified “prompt engineer and librarian” to join their team. The position boasts an attractive salary range, spanning from $175,000 to $335,000. As a crucial member of the team, the selected individual will be responsible for curating an extensive library of top-tier prompts and prompt chains, while also developing interactive tools aimed at educating customers in the art of prompt engineering. Although some prior experience in programming and familiarity with large language models is preferred, Anthropic enthusiastically encourages all interested candidates to apply, even if they don’t meet every qualification.
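To make “prompt chains” concrete, here is a minimal two-step sketch in which one model call drafts and a second refines; the model, prompts, and task are illustrative assumptions, not Anthropic’s actual tooling.

```python
# Minimal sketch of a two-step prompt chain: draft, then refine.
# Assumes the openai Python package (v0.x API); prompts are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: replace with a real key

def call_llm(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message["content"]

def headline_chain(topic: str) -> str:
    draft = call_llm(f"Write three headline options about: {topic}")
    # The first model output becomes context for the refinement step.
    return call_llm(f"Pick the strongest of these headlines and tighten it:\n{draft}")

print(headline_chain("AI chips and power efficiency"))
```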
The realm of prompt engineering is experiencing rapid growth, as evidenced by PromptBase’s launch of a prompt marketplace just last June. However, some cautious recruiters warn that most high-paying positions within this field typically require a strong background in technology and formal education. Despite this, many successful prompt engineers have emerged from non-tech backgrounds, finding immense satisfaction in the creative and analytical aspects of their work. The opportunity to craft engaging, thought-provoking prompts that steer AI systems towards optimal outcomes can be incredibly fulfilling.
Yet, as with any burgeoning field, uncertainties linger. Some experts wonder if prompt engineering will maintain its status as a highly sought-after profession in the long term, considering the rapid evolution of AI technology. The continuous advancements in the field might lead to shifts in job demands and priorities, prompting professionals to adapt and expand their skill sets accordingly.
In conclusion, the role of a prompt engineer and librarian at Anthropic presents an exciting opportunity for individuals passionate about AI safety and research. With the potential for substantial compensation and the chance to shape the future of AI through prompt engineering, interested candidates from diverse backgrounds are encouraged to apply and contribute their unique perspectives to this ever-evolving domain. While the future remains uncertain, the growth and significance of prompt engineering in shaping AI’s trajectory should not be underestimated.