
Will AI-detection tools falsely accuse students of cheating? Universities are worried

Turnitin’s new service, launched on Tuesday, can reportedly identify AI-generated text with 98% accuracy. By contrast, OpenAI’s own AI-text classifier correctly identifies AI-written text only 26% of the time.

According to Turnitin’s CEO, Chris Caren, the new tool was developed in response to educators’ requests for a way to detect AI-written text accurately. “They need to be able to detect AI with very high certainty to assess the authenticity of a student’s work and determine how to best engage with them,” Caren explained.

However, the launch of the new tool has been met with mixed reactions. Some institutions, including Cambridge and other members of the Russell Group, representing leading UK universities, have announced their intention to opt out of the new service.

False positives

There are concerns that the tool may falsely accuse students of cheating, that it involves handing student data to a private company, and that it could discourage students from experimenting with new technologies such as generative AI.

As a result, the UCISA, a UK membership body supporting technology in education, has worked with Turnitin to ensure universities can opt out of the feature temporarily. The American Association of Colleges and Universities has also expressed “dubiousness” over the detection system, given rapid developments in AI.

Deborah Green, CEO of UCISA, expressed concern that Turnitin was launching its AI detection system with little warning to students as they prepared coursework and exams this summer. While universities have broadly welcomed the new tool, Green added that they need time to assess it.

Impact on lecturers and universities

Lecturers worry they won’t know why essays have been flagged as being written by AI. At a single university, an error rate of 1% would mean hundreds of students wrongly accused of cheating, with little recourse to appeal, warned Charles Knight, assistant director at consultancy Advance HE.
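To put that arithmetic in concrete terms (the enrolment and submission figures below are illustrative assumptions, not numbers Knight cited), a quick back-of-the-envelope calculation shows how a small error rate scales:

    # Illustrative back-of-the-envelope calculation; the numbers are assumptions,
    # not figures cited by Advance HE or Turnitin.
    students = 30_000           # a hypothetical large university
    submissions_each = 4        # assessed pieces run through the detector per year
    false_positive_rate = 0.01  # the 1% error rate mentioned above

    wrongly_flagged = students * submissions_each * false_positive_rate
    print(f"Expected wrongful flags per year: {wrongly_flagged:.0f}")  # 1200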

Turnitin did not immediately respond to a request for comment on the concerns raised about the AI detection tool, but the company has said the technology was “in development for years” and that it has provided resources to “help the education community navigate and manage [it]”.

The new tool’s launch has sparked a debate among academics, higher education consultants, and cognitive scientists worldwide over how universities might develop new modes of assessment in response to AI’s threat to academic integrity.

Clearview AI Has Scraped More Than 30 Billion Photos from Social Media and Given Them to Cops

Clearview AI, a controversial facial recognition company, recently announced that it has scraped more than 30 billion photos from social media platforms. Despite facing multiple setbacks and bans from cities such as Portland, San Francisco, and Seattle, Clearview AI remains undeterred and has continued to grow its database significantly over the last year.

While Clearview AI is no longer able to provide its services to private businesses, it is still being used by more than 2,400 law enforcement agencies in the United States. The company’s CEO claims that police have used Clearview AI’s tools over a million times, and its database of scraped social media images now tops 30 billion.

The company’s AI-powered technology can recognize millions of faces thanks to images uploaded to social media. However, this technology has been met with widespread criticism due to concerns over privacy and the potential for misuse. Around 17 cities have banned the use of Clearview AI, but law enforcement agencies seem more than happy to use the platform.

The Miami Police Department recently confirmed that it uses Clearview AI regularly, which is a rare admission by law enforcement. The fact that law enforcement agencies are willing to use this technology despite the controversy surrounding it raises concerns about the impact it could have on civil liberties and human rights.

While facial recognition technology can be useful in certain situations, such as identifying criminals or finding missing persons, its potential for misuse is significant. Clearview AI’s massive database of scraped social media images is a prime example of how technology can be used to infringe on privacy rights. As this technology continues to evolve, it is important that we have a discussion about its use and potential impact on society.

AI-Generated Code Will Replace Human Programmers Within the Next 5 Years

During a recent interview with Eric Sheridan, the senior U.S. internet analyst at Goldman Sachs Research, Emad Mostaque, CEO of Stability AI, made some bold predictions about the future of AI and its impact on society. He compared AI to a highly gifted intern with poor recall, saying that if the memory problem is resolved, AI could be ready for promotion to analyst- or associate-level roles.

According to Mostaque, programmers may be replaced by AI in just five years. He cited the fact that AI is now producing software code successfully, to the point that it generates 41% of all new software code on GitHub. This speed of disruption is “terrifying,” he said, describing AI as a “far bigger disruption than the pandemic.”

Mostaque also predicted that AI would soon have a significant impact on various sectors of society, including entertainment, education, medicine, and of course the IT industry, reminding the audience that large language models are already successfully producing software code.

Mostaque claimed that OpenAI’s ChatGPT, despite being a general-purpose model, could pass Google’s test for a high-level software engineer. He projected that there would be no programmers in five years, and while this offers the potential to increase production and efficiency, it also raises concerns about job security in fields that previously seemed secure.

While there are undoubtedly challenges associated with the development of AI, Mostaque believes that the benefits it brings will ultimately outweigh the risks. As AI continues to evolve and become more sophisticated, we can expect to see it play an increasingly important role in shaping the future of society.

U.S. President puts pressure on tech giants over AI

U.S. President Joe Biden has called on leading technology companies to ensure their AI products are safe before releasing them to the public, amid growing concerns over the safety of artificial intelligence. Speaking at a meeting with science and technology advisers on April 4, Biden emphasized the need for appropriate safeguards to protect society, national security, and the economy from the risks the technology poses, citing social media as an example of the harm powerful technologies can do in the absence of such safeguards.

U.S. President cites social media as a cautionary example

During the meeting, Biden cited social media as an example of the negative impact that powerful technologies can have when appropriate measures to protect against them are not in place. He said: “Absent safeguards, we see the impact on the mental health and self-images and feelings and hopelessness, especially among young people.”

He also emphasized the need for non-partisan privacy laws that limit the personal data gathered by technology firms, prohibit child-targeted advertising, and prioritize health and safety in product development.

Biden’s comments come amid growing concerns about the safety and ethical implications of AI as the technology continues to develop rapidly. The ability to collect and analyze enormous amounts of data swiftly and effectively has been a major driver of that development.

Demand for automated systems that can complete activities too risky, challenging, or time-consuming for humans has pushed the field forward as well.

Ethics and safety concerns drive AI research

However, societal and cultural issues have also influenced the development of AI. Discussions concerning the ethics and the ramifications of AI have arisen in response to worries about job losses and automation.

Concerns have also been raised about the possibility of AI being employed for malicious purposes, such as cyberattacks or disinformation campaigns. As a result, many researchers and decision-makers are attempting to ensure that AI is created and applied ethically and responsibly.

AI is being increasingly utilized in a variety of modern-day applications, from virtual assistants to self-driving cars, medical diagnostics, and financial analysis. Researchers are also exploring novel ideas like reinforcement learning, quantum computing, and neuromorphic computing.

One important trend in modern-day AI is the shift toward more human-like interactions, with voice assistants like Siri and Alexa leading the way. Natural language processing has also made significant progress, enabling machines to understand and respond to human speech with increasing accuracy.

The recently developed ChatGPT is an example of AI that can understand natural language and generate human-like responses to a wide range of queries and prompts.

President Biden’s call for tech companies to prioritize the safety and ethical implications of AI underscores the need for a comprehensive approach to regulating and implementing the technology. While AI presents numerous benefits, it also poses significant risks that must be addressed through responsible and ethical development and implementation.

How AI And ChatGPT Will Actually Create More Jobs For Humans

Artificial intelligence is here, and it’s coming for your job. So promising are its capabilities that Microsoft — amid laying off 10,000 people — has announced a “multiyear, multibillion-dollar investment” in the revolutionary technology, which is growing smarter by the day. And the rise of machines leaves many well-paid workers vulnerable, experts warn.

“AI is replacing the white-collar workers. I don’t think anyone can stop that,” said Pengcheng Shi, an associate dean in the department of computing and information sciences at Rochester Institute of Technology. “This is not crying wolf,” Shi told The Post. “The wolf is at the door.”

From the financial sector to health care to publishing, a number of industries are vulnerable, Shi said. But as AI continues its mind-blowing advancements, he maintains that humans will learn how to harness the technology.

Artificial intelligence has reached a point where it can do the jobs people are paid for, and it is already having an impact on multiple industries, professors warn.

Already, AI is upending certain fields, particularly after the release of ChatGPT, a surprisingly intelligent chatbot released in November that’s free to the public.

Earlier this month, it emerged that consumer publication CNET had been using AI to generate stories since late last year — a practice put on pause after fierce backlash on social media. Academia was recently rocked by the news that ChatGPT had scored higher than many humans on an MBA exam administered at Penn’s elite Wharton School. After Darren Hick, a philosophy professor at South Carolina’s Furman University, caught a student cheating with the wildly popular tool, he told The Post that the discovery had left him feeling “abject terror” for what the future might entail.

Hick and many others are right to be worried, said Chinmay Hegde, a computer science and electrical engineering associate professor at New York University.

“Certain jobs in sectors such as journalism, higher education, graphic and software design — these are at risk of being supplemented by AI,” said Hegde, who calls ChatGPT in its current state “very, very good, but not perfect.”

For now, anyway.

Here’s a look at some of the jobs most vulnerable to the fast-learning, ever-evolving technology.

Education

Professors and teachers, in theory, could be replaced by AI courses, according to experts.

As it stands now, ChatGPT — currently banned in NYC schools — “can easily teach classes already,” Shi said. The tool would likely be most effective at the middle or high school level, he added, as those classes reinforce skills already established in elementary school.

“Although it has bugs and inaccuracies in terms of knowledge, this can be easily improved. Basically, you just need to train the ChatGPT,” Shi continued.

As for higher education, both Shi and Hegde maintain that college courses will need a human leader for the foreseeable future, but the NYU professor did admit that, in theory, AI could teach without oversight.

In the meantime, educators are seeing their roles transformed nearly overnight. It’s already become a struggle to adapt teaching and testing methods in efforts to keep up with the increasingly talented ChatGPT, which, according to Shi, can successfully complete a corner-cutting student’s coursework at a master’s level.

Doctoral candidates hoping for a shortcut are likely out of luck: Creating an independent thesis on an area not often or thoroughly studied is beyond AI’s abilities for the time being, he said.

Finance

AI like ChatGPT could take over spreadsheet-style jobs in finance, experts warn.

Wall Street could see many jobs axed in coming years, as bots like ChatGPT continue to better themselves, Shi told The Post.

“I definitely think [it will impact] the trading side, but even [at] an investment bank, people [are] hired out of college and spend two, three years to work like robots and do Excel modeling — you can get AI to do that,” he explained. “Much, much faster.”

Shi is certain, however, that crucial financial and economic decisions will likely always be left in human hands, even if the data sheets are not.

Software engineering

Relatively simple software design jobs are at risk from ChatGPT and other AI.

Website designers and engineers responsible for comparatively simple coding are at risk of being made obsolete, Hegde warns.

“I worry for such people. Now I can just ask ChatGPT to generate a website for me — any type of person whose routine job would be doing this for me is no longer needed.”

In essence, AI can draft the code — hand-tailored to a user’s requests and parameters — to build sites and other pieces of IT.
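As a rough illustration of that workflow, the sketch below (a hypothetical example assuming the pre-1.0 openai Python client, not a tool either professor endorsed) asks a chat model to draft a complete single-file web page:

    # Hypothetical sketch: asking a chat model to draft a simple web page.
    # Assumes the pre-1.0 `openai` package and an OPENAI_API_KEY in the environment.
    import openai

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a front-end web developer."},
            {"role": "user", "content": "Generate a single-file HTML landing page "
                                        "for a small bakery, with inline CSS."},
        ],
    )

    # The generated markup can be written to disk and opened directly in a browser.
    with open("landing_page.html", "w") as f:
        f.write(response["choices"][0]["message"]["content"])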

Relatively uncomplicated software design jobs will be a thing of the past by 2026 or sooner, Shi said.

“As time goes on, probably today or the next three, five, 10 years, those software engineers, if their job is to know how to code … I don’t think they will be broadly needed,” Shi said.

Journalism

AI is already making its way into newsrooms.

The technology is off to a rocky start in the news-gathering business: CNET’s recent attempts (and subsequent corrections to its computer-generated stories) were preceded by the Guardian, which had GPT software write a piece in 2020 — with mixed results.

Still, there is one job the technology is already highly qualified for, according to Hegde.

“Copy editing is certainly something it does an extremely good job at. Summarizing, making an article concise and things of that nature, it certainly does a really good job,” he said, noting that ChatGPT is excellent at designing its own headlines.

One major shortcoming — salvation for reporters and copy editors, at least for now — is the tool’s inability to fact-check efficiently, he added.

“You can ask it to provide an essay, to produce a story with citations, but more often than not, the citations are just made up,” Hegde continued. “That’s a known failure of ChatGPT and honestly we do not know how to fix that.”

Graphic design

Graphic design jobs also risk being made obsolete by AI.

In 2021, ChatGPT developer OpenAI launched another tool, DALL-E, which can generate tailored images from user-generated prompts on command. Along with doppelgangers such as Craiyon, Stable Diffusion, and Midjourney, the tool poses a threat to many in the graphic and creative design industries, according to Hegde.

“Before, you would ask a photographer or you would ask a graphic designer to make an image [for websites]. That’s something very, very plausibly automated by using technology similar to ChatGPT,” he continued.

Shi recently commanded DALL-E to make a cubist portrait of rabbits for the Lunar New Year, which he said came out “just amazing.” But, although it captured the hard-lined, Picasso-derived painting style, Shi noticed that it was not successful with more nuanced techniques — exposing a current shortcoming in the tech.
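For readers curious what prompting such a tool looks like, here is a minimal sketch (assuming the pre-1.0 openai Python client) of an image request in the spirit of Shi’s Lunar New Year experiment:

    # Minimal sketch of an image-generation request (pre-1.0 `openai` Python client).
    # The prompt mirrors Shi's experiment described above; it is purely illustrative.
    import openai

    result = openai.Image.create(
        prompt="A cubist portrait of rabbits for the Lunar New Year",
        n=1,
        size="1024x1024",
    )
    print(result["data"][0]["url"])  # link to the generated image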

AI Images Showing Mark Zuckerberg Walking The Ramp Take Internet By Storm

Meta’s CEO, Mark Zuckerberg, is generally seen in his regular attire of t-shirts, jeans, and sneakers, so it’s a little different to see him in a designer Louis Vuitton outfit walking the ramp. But artificial intelligence has made it possible, causing plenty of confusion among internet users because the images appear very real.

It would be challenging to distinguish the fake images produced by AI from real ones since they are so uncannily realistic. Zuckerberg can be seen maintaining the flawless expression that models frequently sport during the rampwalk.

It’s not the first time artificial intelligence (AI) images have swept the internet. Many artists who have embraced this technology have produced images that would once have been unimaginable.

Nokia to Launch 4G Internet on the Moon

According to a Nokia executive, the company is preparing to introduce 4G internet on the moon later this year in support of NASA’s Artemis program, which aims to establish a human foothold on the lunar surface. The objective is to demonstrate that terrestrial networks can fulfill the communication requirements of upcoming space expeditions.

Nokia is preparing to launch a 4G mobile network on the moon later this year, in the hope of enhancing lunar exploration and eventually paving the way for a human presence on Earth’s natural satellite.

The Finnish telecommunications group plans to launch the network on a SpaceX rocket over the coming months, Luis Maestro Ruiz De Temino, Nokia’s principal engineer, told reporters earlier this month at the Mobile World Congress trade show in Barcelona.

The network will be powered by an antenna-equipped base station stored in a Nova-C lunar lander designed by U.S. space firm Intuitive Machines, as well as by an accompanying solar-powered rover.

An LTE connection will be established between the lander and the rover.

The infrastructure will land on the Shackleton crater, which lies along the southern limb of the moon.

Nokia says the technology is designed to withstand the extreme conditions of space.

The network will be used in support of NASA’s Artemis program, which aims to send the first astronauts to walk on the moon’s surface since 1972.

The aim is to show that terrestrial networks can meet the communications needs for future space missions, Nokia said, adding that its network will allow astronauts to communicate with each other and with mission control, as well as to control the rover remotely and stream real-time video and telemetry data back to Earth.

The lander will launch via a SpaceX rocket, according to Maestro Ruiz De Temino. He explained that the rocket won’t take the lander all the way to the moon’s surface — it has a propulsion system in place to complete the journey.

Anshel Sag, principal analyst at Moor Insights & Strategy, said that 2023 was an “optimistic target” for the launch of Nokia’s equipment.

“If the hardware is ready and validated as it seems to be, there is a good chance they could launch in 2023 as long as their launch partner of choice doesn’t have any setbacks or delays,” Sag told CNBC via email. 

Nokia previously said that its lunar network will “provide critical communication capabilities for many different data transmission applications, including vital command and control functions, remote control of lunar rovers, real-time navigation and streaming of high definition video.”

Lunar ice

One of the things Nokia is hoping to achieve with its lunar network is finding ice on the moon. The moon’s surface is largely dry, but recent uncrewed missions have yielded discoveries of ice remnants trapped in sheltered craters around the poles.

Such water could be treated and used for drinking, broken up into hydrogen and oxygen for use as rocket fuel, or separated to provide breathable oxygen to astronauts.

“I could see this being used by future expeditions to continue to explore the moon since this really seems like a major test of the capabilities before starting to use it commercially for additional exploration and potential future mining operations,” Sag told CNBC.

“Mining requires a lot of infrastructure to be in place and having the right data about where certain resources are located.”

We’ll need more than just internet connectivity if we’re ever to live on the moon. Engineering giant Rolls-Royce, for example, is working on a nuclear reactor to provide power to future lunar inhabitants and explorers.

Meta Launches Tools to Segregate Ads from Harmful Content

Meta Platforms Inc said on Thursday it is now rolling out a long-promised system for advertisers to determine where their ads are shown, responding to their demands to distance their marketing from controversial posts on Facebook and Instagram.

The system offers advertisers three risk levels they can select for their ad placements, with the most conservative option excluding placements above or below posts with sensitive content like weapons depictions, sexual innuendo and political debates.

Meta also will provide a report via advertising measurement firm Zefr showing Facebook advertisers the precise content that appeared near their ads and how it was categorized.

Marketers have long advocated for greater control over where their ads appear online, complaining that big social media companies do too little to prevent ads from showing alongside hate speech, fake news and other offensive content.

The issue came to a head in July 2020, when thousands of brands joined a boycott of Facebook amid anti-racism protests in the United States.

Under a deal brokered several months later, the company, now called Meta, agreed to develop tools to “better manage advertising adjacency,” among other concessions.

Samantha Stetson, Meta’s vice president for Client Council and Industry Trade Relations, said she expected Meta to introduce more granular controls over time so advertisers could specify their preferences around different social issues.

Stetson also said early tests showed no significant change in performance or price for ads placed using more restrictive settings, adding that those involved in the tests were “pleasantly surprised.”

However, she cautioned that the pricing dynamic could change, given the auction-based nature of Meta’s ads system and the reduction in inventory associated with any restrictions.

The controls will be available initially in English- and Spanish-speaking markets, with plans to expand them to other regions – and to the company’s Reels, Stories and video ad formats – later this year.

Impact of ChatGPT and other generative AI technologies on the freelance economy

The furore around ChatGPT and other innovations in generative AI — be it Google’s Bard or other new AI image tools — can be equally exciting for some and overwhelming for many others, particularly in the field of freelancing. Freelancers, who operate independently and offer their services to various clients, have found AI technologies to be a game-changer in their field. From content creation to customer support, AI-powered tools like ChatGPT have provided freelancers with the ability to automate and streamline their work processes, allowing them to focus on delivering high-quality work to their clients.

The impact of generative AI technologies like ChatGPT on the freelance economy cannot be overstated. One of the most significant benefits is the increased efficiency and productivity of freelancers. ChatGPT, for example, can generate high-quality content within minutes, which would otherwise take hours for a human writer to complete. This enables freelancers to take on more work, deliver results faster, and ultimately earn more income.

Another significant benefit of generative AI technologies like ChatGPT is that they provide freelancers with the ability to offer more diverse services to their clients. For example, a freelance writer who previously only offered content creation services can now offer additional services like social media management, chatbot creation, and customer support through the use of generative AI technologies. This increased service offering enables freelancers to diversify their income streams and become more competitive in their market.

The impact of generative AI technologies like ChatGPT on the freelance economy is not limited to increased efficiency and productivity or expanded service offerings. AI-powered tools like ChatGPT have also provided freelancers with the ability to offer more personalized services to their clients. Chatbots, for example, can be programmed to provide personalized responses to customer inquiries, which can lead to increased customer satisfaction and loyalty. This, in turn, can lead to repeat business for freelancers.
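To make the personalization point concrete, here is a minimal sketch (the customer record and wording are invented for illustration, and it assumes the pre-1.0 openai Python client) of how a freelancer-built chatbot might fold client data into its replies:

    # Hypothetical sketch of a personalized support chatbot.
    # The customer record is invented; assumes the pre-1.0 `openai` Python client.
    import openai

    customer = {"name": "Priya", "plan": "Pro", "last_order": "logo redesign"}

    def personalized_reply(question: str) -> str:
        # Fold the customer's details into the system prompt so replies feel tailored.
        system_prompt = (
            f"You are a friendly support agent. The customer is {customer['name']}, "
            f"on the {customer['plan']} plan; their last order was a {customer['last_order']}."
        )
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return response["choices"][0]["message"]["content"]

    print(personalized_reply("Can I get an update on my order?"))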

Another benefit of generative AI technologies like ChatGPT is that they have enabled freelancers to work remotely and communicate more effectively with clients from all over the world. AI-powered translation tools, for example, have made it easier for freelancers to communicate with clients who speak different languages. This has opened up new opportunities for freelancers to work with clients from all over the world, without the need for expensive travel or language learning.

Despite the many benefits of generative AI technologies like ChatGPT, there are also some potential downsides to consider. One concern is that the widespread adoption of AI-powered tools could lead to job displacement. As AI-powered tools become more sophisticated, they may be able to replace human workers in certain tasks, such as content creation or customer support. This could lead to a reduction in demand for certain types of freelancers, particularly those who offer services that can be automated.

Another concern is that the increasing use of AI-powered tools could lead to a homogenization of services. As more and more freelancers adopt AI-powered tools like ChatGPT, it could lead to a standardization of services, with clients choosing freelancers based on the quality of their AI tools rather than their unique skills and expertise. This could lead to a commoditization of freelancing, with freelancers competing solely on price rather than on their unique value propositions.

Despite these potential downsides, the overall impact of generative AI technologies like ChatGPT on the freelance economy is overwhelmingly positive. These tools have enabled freelancers to work more efficiently, offer more diverse services, and provide more personalized experiences to their clients. As AI-powered tools continue to evolve and become more sophisticated, it is likely that they will play an increasingly important role in the freelance economy. Freelancers who are able to embrace these technologies and incorporate them into their work processes are likely to be the most successful in the years to come.

No Tech Skills Required! AI Prompt Engineer Jobs Can Earn You Up To $335,000 A Year

Generative AI tools like ChatGPT have created a burgeoning market for “prompt engineers” who are responsible for improving the responses of AI chatbots. These high-paying jobs can offer salaries as high as $335,000 a year and often don’t require a degree in tech.

Anthropic, a leading AI safety and research company, is currently seeking a qualified “prompt engineer and librarian” to join their team. The position boasts an attractive salary range, spanning from $175,000 to $335,000. As a crucial member of the team, the selected individual will be responsible for curating an extensive library of top-tier prompts and prompt chains, while also developing interactive tools aimed at educating customers in the art of prompt engineering. Although some prior experience in programming and familiarity with large language models is preferred, Anthropic enthusiastically encourages all interested candidates to apply, even if they don’t meet every qualification.
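As a loose illustration of what a library of prompts and prompt chains can look like in practice (a hypothetical sketch, not Anthropic’s actual tooling), prompts can be stored as reusable templates and chained so that one model response feeds the next:

    # Hypothetical sketch of a prompt library with a two-step prompt chain.
    # `call_model` stands in for any chat-model API; here it returns a canned string.

    PROMPT_LIBRARY = {
        "summarize": "Summarize the following support ticket in one sentence:\n{ticket}",
        "draft_reply": "Using this summary, draft a polite reply to the customer:\n{summary}",
    }

    def call_model(prompt: str) -> str:
        """Placeholder for a real chat-model API call."""
        return f"[model response to: {prompt[:40]}...]"

    def run_chain(ticket: str) -> str:
        # Step 1: condense the raw ticket.
        summary = call_model(PROMPT_LIBRARY["summarize"].format(ticket=ticket))
        # Step 2: feed the condensed summary into the reply prompt.
        return call_model(PROMPT_LIBRARY["draft_reply"].format(summary=summary))

    print(run_chain("The logo files arrived corrupted; please re-send them."))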

The realm of prompt engineering is growing rapidly, as evidenced by PromptBase’s launch of a prompt marketplace just last June. However, some cautious recruiters warn that most high-paying positions within this field typically require a strong background in technology and formal education. Despite this, it’s worth noting that many successful prompt engineers have emerged from non-tech backgrounds, finding immense satisfaction in the creative and analytical aspects of their work. The opportunity to craft engaging, thought-provoking prompts that steer AI systems towards optimal outcomes can be incredibly fulfilling.

Yet, as with any burgeoning field, uncertainties linger. Some experts wonder if prompt engineering will maintain its status as a highly sought-after profession in the long term, considering the rapid evolution of AI technology. The continuous advancements in the field might lead to shifts in job demands and priorities, prompting professionals to adapt and expand their skill sets accordingly.

In conclusion, the role of a prompt engineer and librarian at Anthropic presents an exciting opportunity for individuals passionate about AI safety and research. With the potential for substantial compensation and the chance to shape the future of AI through prompt engineering, interested candidates from diverse backgrounds are encouraged to apply and contribute their unique perspectives to this ever-evolving domain. While the future remains uncertain, the growth and significance of prompt engineering in shaping AI’s trajectory should not be underestimated.