
EU Lawmakers Eye Tiered Approach To Regulating Generative AI

EU lawmakers in the European parliament are closing in on how to tackle generative AI as they work to fix their negotiating position so that the next stage of legislative talks can kick off in the coming months.

The hope then is that a final consensus on the bloc’s draft law for regulating AI can be reached by the end of the year.

“This is the last thing still standing in the negotiation,” says MEP Dragos Tudorache, the co-rapporteur for the EU’s AI Act, discussing MEPs’ talks around generative AI in an interview with TechCrunch. “As we speak, we are crossing the last ‘T’s and dotting the last ‘I’s. And sometime next week I’m hoping that we will actually close — which means that sometime in May we will vote.”

The Council adopted its position on the regulation back in December. But where Member States largely favored deferring what to do about generative AI — to additional, implementing legislation — MEPs look set to propose that hard requirements are added to the Act itself.

In recent months, tech giants’ lobbyists have been pushing in the opposite direction, of course, with companies such as Google and Microsoft arguing for generative AI to get a regulatory carve out of the incoming EU AI rules.

Where things will end up remains to be confirmed. But discussing what’s likely to be the parliament’s position in relation to generative AI tech in the Act, Tudorache suggests MEPs are gravitating towards a layered approach — three layers in fact — one to address responsibilities across the AI value chain; another to ensure foundational models get some guardrails; and a third to tackle specific content issues attached to generative models, such as OpenAI’s ChatGPT.

Under the MEPs’ current thinking, one of these three layers would apply to all general purpose AIs (GPAIs) — whether big or small; foundational or non-foundational models — and be focused on regulating relationships in the AI value chain.

“We think that there needs to be a level of rules that says when ‘entity A’ puts a general purpose [AI] on the market, it has an obligation towards ‘entity B’, downstream, that buys the general purpose [AI] and actually gives it a purpose,” he explains. “Because it gives it a purpose that might become high risk, it needs certain information. In order to comply [with the AI Act] it needs to explain how the model was trained. The accuracy of the data sets, [freedom] from biases [etc].”

A second proposed layer would address foundational models — by setting some specific obligations for makers of these base models.

“Given their power, given the way they are trained, given the versatility, we believe the providers of these foundational models need to do certain things — both ex ante… but also during the lifetime of the model,” he says. “And it has to do with transparency, it has to do, again, with how they train, how they test prior to going on the market. So basically, what is the level of diligence, the responsibility, that they have as developers of these models?”

The third layer MEPs are proposing would target generative AIs specifically — meaning a subset of GPAIs/foundational models, such as large language models or generative art and music AIs. Here lawmakers working to set the parliament’s mandate are taking the view these tools need even more specific responsibilities; both when it comes to the type of content they can produce (with early risks arising around disinformation and defamation); and in relation to the thorny (and increasingly litigated) issue of copyrighted material used to train AIs.

“We’re not inventing a new regime for copyright because there is already copyright law out there. What we are saying… is there has to be documentation and transparency about material that was used by the developer in the training of the model,” he emphasizes. “So that afterwards the holders of those rights… can say hey, hold on, you used my data, you used my songs, you used my scientific article — well, thank you very much, that was protected by law, therefore you owe me something — or not. For that we will use the existing copyright laws. We’re not replacing that or doing that in the AI Act. We’re just bringing that inside.”

The Commission proposed the draft AI legislation a full two years ago, laying out a risk-based approach for regulating applications of artificial intelligence and setting the bloc’s co-legislators, the parliament and the Council, the no small task of passing the world’s first horizontal regulation on AI.

Adoption of this planned EU AI rulebook is still a ways off. But progress is being made and agreement between MEPs and Member States on a final text could be hashed out by the end of the year, per Tudorache — who notes that Spain, which takes up the rotating six-month Council presidency in July, is eager to deliver on the file. Although he also concedes there are still likely to be plenty of points of disagreement between MEPs and Member States that will have to be worked through. So a final timeline remains uncertain. (And predicting how the EU’s closed-door trilogues will go is never an exact science.)

One thing is clear: The effort is timely — given how AI hype has rocketed in recent months, fuelled by developments in powerful generative AI tools, like DALL-E and ChatGPT.

Generative AI tools let anyone produce works such as written compositions or visual imagery just by inputting a few simple instructions. But the excitement around their booming usage has been tempered by growing concern over the potential for fast-scaling negative impacts to accompany the touted productivity benefits.

EU lawmakers have found themselves at the center of the debate — and perhaps garnering more global attention than usual — since they’re faced with the tricky task of figuring out how the bloc’s incoming AI rules should be adapted to apply to viral generative AI.

The Commission’s original draft proposed to regulate artificial intelligence by categorizing applications into different risk bands. Under this plan, the bulk of AI apps would be categorized as low risk — meaning they escape any legal requirements. On the flip side, a handful of unacceptable risk use-cases would be outright prohibited (such as China-style social credit scoring). Then, in the middle, the framework would apply rules to a third category of apps where there are clear potential safety risks (and/or risks to fundamental rights) which are nonetheless deemed manageable.

The AI Act contains a set list of “high risk” categories which covers AI being used in a number of areas that touch safety and human rights, such as law enforcement, justice, education, employment, healthcare and so on. Apps falling in this category would be subject to a regime of pre- and post-market compliance, with a series of obligations in areas like data quality and governance, and mitigations for discrimination — with the potential for enforcement (and penalties) if they breach requirements.

The proposal also contained another middle category applying to technologies such as chatbots and deepfakes — AI-powered tech that raises some concerns, but not, in the Commission’s view, as many as high risk scenarios. Such apps don’t attract the full sweep of compliance requirements in the draft text, but the law would apply transparency requirements that aren’t demanded of low risk apps.
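To make the tiering concrete, here is a minimal illustrative sketch in Python of how the draft’s risk bands and their attached obligations might be modeled. The tier names follow the description above, but the example use-case mapping and all code details are assumptions for illustration, not the Act’s actual legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk bands loosely mirroring the draft AI Act's approach."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "pre- and post-market compliance obligations"
    LIMITED = "transparency requirements"
    MINIMAL = "no specific legal requirements"

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the Act defines these categories in legal, not programmatic, terms.
EXAMPLE_TIERS = {
    "social credit scoring": RiskTier.UNACCEPTABLE,
    "ai-assisted hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Look up a use case's tier, defaulting to minimal risk."""
    tier = EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in EXAMPLE_TIERS:
    print(obligations_for(case))
```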

Being first to the punch drafting laws for such a fast-developing, cutting-edge tech field meant the EU was working on the AI Act long before the hype around generative AI went mainstream. And while the bloc’s lawmakers were moving rapidly in one sense, its co-legislative process can be pretty painstaking. So, as it turns out, two years on from the first draft the exact parameters of the AI legislation are still in the process of being hashed out.

The EU’s co-legislators, in the parliament and Council, hold the power to revise the draft by proposing and negotiating amendments. So there’s a clear opportunity for the bloc to address loopholes around generative AI without needing to wait for follow-on legislation to be proposed down the line, with the greater delay that would entail.

Even so, the EU AI Act probably won’t be in force before 2025 — or even later, depending on whether lawmakers decide to give app makers one or two years before enforcement kicks in. (That’s another point of debate for MEPs, per Tudorache.)

He stresses that it will be important to give companies enough time to prepare to comply with what he says will be “a comprehensive and far reaching regulation”. He also emphasizes the need to allow time for Member States to prepare to enforce the rules around such complex technologies, adding: “I don’t think that all Member States are prepared to play the regulator role. They need themselves time to ramp up expertise, find expertise, to convince expertise to work for the public sector.

“Otherwise, there’s going to be such a disconnect between the realities of the industry, the realities of implementation, and the regulator, and you won’t be able to force the two worlds into each other. And we don’t want that either. So I think everybody needs that lag.”

MEPs are also seeking to amend the draft AI Act in other ways — including by proposing a centralized enforcement element to act as a sort of backstop for Member State-level agencies, as well as some additional prohibited use-cases (such as predictive policing, an area where the Council may well seek to push back).

“We are changing fundamentally the governance from what was in the Commission text, and also what is in the Council text,” says Tudorache on the enforcement point. “We are proposing a much stronger role for what we call the AI Office. Including the possibility to have joint investigations. So we’re trying to put as sharp teeth as possible. And also avoid silos. We want to avoid the 27 different jurisdiction effect [i.e. of fragmented enforcements and forum shopping to evade enforcement].”

The EU’s approach to regulating AI draws on how it’s historically tackled product liability. This fit is obviously a stretch, given how malleable AI technologies are and the length/complexity of the ‘AI value chain’ — i.e. how many entities may be involved in the development, iteration, customization and deployment of AI models. So figuring out liability along that chain is absolutely a key challenge for lawmakers.

The risk-based approach also raises specific questions over how to handle the particularly viral flavor of generative AI that’s blasted into mainstream consciousness in recent months, since these tools don’t necessarily have a clear cut use-case. You can use ChatGPT to conduct research, generate fiction, write a best man’s speech, churn out marketing copy or pen lyrics to a cheesy pop song, for example — with the caveat that what it outputs may be neither accurate nor much good (and it certainly won’t be original).

Similarly, generative AI art tools could be used for different ends: As an inspirational aid to artistic production, say, to free up creatives to do their best work; or to replace the role of a qualified human illustrator with cheaper machine output.

(Some also argue that generative AI technologies are even more speculative; that they are not general purpose at all but rather inherently flawed and incapable; representing an amalgam of blunt-force investment that’s being imposed upon societies without permission or consent in a cripplingly-expensive and rights-trampling fishing expedition-style search for profit-making solutions.)

The core concern MEPs are seeking to tackle, therefore, is to ensure that underlying generative AI models like OpenAI’s GPT can’t just dodge risk-based regulation entirely by claiming they have no set purpose.

Deployers of generative AI models could also seek to argue they’re offering a tool that’s general purpose enough to escape any liability under the incoming law — unless there is clarity in the regulation about relative liabilities and obligations throughout the value chain.

One obviously unfair and dysfunctional scenario would be for all the regulated risk and liability to be pushed downstream, onto only the deployers of specific high risk apps. These entities would, almost certainly, be utilizing generative AI models developed by others upstream — so they wouldn’t have access to the data, weights etc. used to train the core model — which would make it impossible for them to comply with AI Act obligations, whether around data quality or mitigating bias.

There was already criticism about this aspect of the proposal prior to the generative AI hype kicking off in earnest. But the speed of adoption of technologies like ChatGPT appears to have convinced parliamentarians of the need to amend the text to make sure generative AI does not escape being regulated.

And while Tudorache isn’t in a position to know whether the Council will align with the parliamentarians’ sense of mission here, he says he has “a feeling” they will buy in — albeit, most likely seeking to add their own “tweaks and bells and whistles” to how exactly the text tackles general purpose AIs.

In terms of next steps, once MEPs close their discussions on the file there will be a few votes in the parliament to adopt the mandate. (First two committee votes and then a plenary vote.)

He predicts the latter will “very likely” end up taking place in the plenary session in early June — setting up for trilogue discussions to kick off with the Council and a sprint to get agreement on a text during the six months of the Spanish presidency. “I’m actually quite confident… we can finish with the Spanish presidency,” he adds. “They are very, very eager to make this the flagship of their presidency.”

Asked why he thinks the Commission avoided tackling generative AI in the original proposal, he suggests that even just a couple of years ago very few people realized how powerful — and potentially problematic — these technologies would become, nor indeed how quickly things could develop in the field. So it’s a testament to how difficult it’s getting for lawmakers to set rules around shapeshifting digital technologies which aren’t already out of date before they’ve even been through the democratic law-setting process.

Somewhat by chance, the timeline appears to be working out for the EU’s AI Act — or, at least, the region’s lawmakers have an opportunity to respond to recent developments. (Of course it remains to be seen what else might emerge over the next two years or so of generative AI which could freshly complicate these latest futureproofing efforts.)

Given the pace and disruptive potential of the latest wave of generative AI models, MEPs are sounding keen that others follow their lead — and Tudorache was one of a number of parliamentarians who put their names to an open letter earlier this week, calling for international efforts to cooperate on setting some shared principles for AI governance.

The letter also affirms MEPs’ commitment to setting “rules specifically tailored to foundational models” — with the stated goal of ensuring “human-centric, safe, and trustworthy” AI.

He says the letter was written in response to the open letter put out last month — signed by the likes of Elon Musk (who has since been reported to be trying to develop his own GPAI) — calling for a moratorium on development of any more powerful generative AI models so that shared safety protocols could be developed.

“I saw people asking, oh, where are the policymakers? Listen, the business environment is concerned, academia is concerned, and where are the policymakers — they’re not listening. And then I thought well that’s what we’re doing over here in Europe,” he tells TechCrunch. “So that’s why I then brought together my colleagues and I said let’s actually have an open reply to that.”

“We’re not saying that the response is to basically pause and run to the hills. But to actually, again, responsibly take on the challenge [of regulating AI] and do something about it — because we can. If we’re not doing it as regulators then who else would?” he adds.

Signing MEPs also believe the task of AI regulation is such a crucial one that they shouldn’t just be waiting around in the hope that adoption of the EU AI Act will lead to another ‘Brussels effect’ kicking in a few years down the line, as happened after the bloc updated its data protection regime in 2018 — influencing a number of similar legislative efforts in other jurisdictions. Rather, this AI regulation mission must involve direct encouragement — because the stakes are simply too high.

“We need to start actively reaching out towards other like minded democracies [and others] because there needs to be a global conversation and a global, very serious reflection as to the role of this powerful technology in our societies, and how to craft some basic rules for the future,” urges Tudorache.

South Korea is Testing Out an AI-based Gender Detector

The Seoul Metro announced its plans to pilot an AI-based gender detector program it developed, South Korean outlet KBS reported on April 20.

The plan is slated to begin at the end of June and last for about six months, starting with the women’s restroom in Sinseol-dong Station. Plans for expansion will only begin once the reliability of the program is confirmed, the Seoul Metro said, per KBS.

The AI-based gender detector is able to automatically detect a person’s gender, display CCTV images in pop-up form, and broadcast announcements, KBS reported, citing the Seoul Metro.

According to KBS, citing the Seoul Metro, the system is able to distinguish gender based on body shape, clothing, belongings, and behavioral patterns.

Taking into consideration that most subway station restroom cleaners are currently women, the corporation will be putting the installation of the program in men’s restrooms on hold, per KBS.

But some people are skeptical about the program.

“Do you think all women look exactly the same? Are you asking male-passing women to not use the restroom?” reads one tweet.

“Can installing this at the women’s restroom really stop men from coming?” another tweet reads. 

According to KBS, the program was built as a preventive measure in response to a murder that took place in a metro station bathroom.

On September 14, a Seoul Metro employee fatally stabbed a 28-year-old female coworker in the women’s restroom at Sindang Station. The man has been sentenced to 40 years in jail, per the BBC.

Members of the public paid their respects to the victim with handwritten Post-it notes at the entrance of the restroom where the incident took place. 

“I want to be alive at the end of my workday,” reads one. “Is it too much to ask, to be safe to reject people I don’t like?” reads another, per BBC.

Following the incident, the Seoul Metro has been implementing various safety measures, including self-defense training for its workers and separating men’s and women’s restrooms in renovated public buildings, per KBS.

Codename Athena: Microsoft Developing Secret AI Chips To Challenge Nvidia’s Dominance

Microsoft is reportedly working on its own AI chips to train complex language models. The move is thought to be intended to free the corporation from reliance on Nvidia chips, which are in high demand. 

Select Microsoft and OpenAI staff members have been granted access to the chips to verify their functionality, The Information reported on Tuesday. 

“Microsoft has another secret weapon in its arsenal: its own artificial intelligence chip for powering the large-language models responsible for understanding and generating humanlike language,” read The Information article. 

Since 2019, Microsoft has been secretly developing the chips, and that same year the Redmond, Washington-based tech giant also made its first investment in OpenAI, the company behind the sensational ChatGPT chatbot. 

Nvidia is presently the main provider of AI server chips, and businesses are scrambling to buy them in order to use AI software. For the commercialization of ChatGPT, it is predicted that OpenAI would need more than 30,000 of Nvidia’s A100 GPUs. 

While Nvidia tries to meet demand, Microsoft wants to develop its own AI chips. The corporation is apparently speeding up work on the project, code-named “Athena“.

Microsoft intends to make its AI chips more broadly available within Microsoft and OpenAI as early as next year, though it hasn’t yet said whether it will offer them to Azure cloud users, noted The Information.

Microsoft joins other tech titans making AI chips 

The chips are not meant to replace Nvidia’s, but as Microsoft continues to roll out AI-powered capabilities in Bing, Office programs, GitHub, and other services, they could drastically reduce its costs.

Microsoft has been working on its own ARM-based chips for some years; Bloomberg reported in late 2020 that the company was considering developing its own ARM-based processors for servers and possibly even a future Surface device.

Although these chips haven’t yet been made available, Microsoft has collaborated with AMD and Qualcomm to develop specialized CPUs for its Surface Laptop and Surface Pro X devices.

The news sees Microsoft join the list of tech behemoths with their own internal AI chips, which already includes the likes of Amazon, Google, and Meta. However, most companies still rely on the use of Nvidia chips to power their most recent large language models.

The most cutting-edge graphics cards from Nvidia are going for more than $40,000 on eBay as demand for the chips used to develop and use artificial intelligence software increases, CNBC reported last week. 

The A100, a nearly $10,000 processor that has been dubbed the “workhorse” for AI applications, was replaced by the H100, which Nvidia unveiled last year.

Rise Of Skynet? AI Takes Control Of A Chinese Satellite For 24 Hours

According to the South China Morning Post (SCMP), Chinese researchers have announced that they allowed artificial intelligence (AI) to take control of a satellite in near-Earth orbit. This was done to test how an AI would behave while operating an object in space. According to accounts of the “landmark experiment,” a ground-based AI controlled the small Earth observation satellite Qimingxing 1 for 24 hours, without any interference from humans.

According to the SCMP, the experiment’s results have been published in the Geomatics and Information Science journal of Wuhan University.

Allegedly, the AI selected a few locations on Earth and instructed the Qimingxing 1 to take a closer look.

No information was provided about why the technology may have chosen these places. One of the areas reportedly targeted was Patna, an old city in eastern India near the Ganges River and home to the Bihar Regiment, a branch of the Indian Army that, in 2020, fought a deadly clash with China’s military in the Galwan Valley along the disputed border.

The AI also prioritized Osaka, one of the busiest ports in Japan, which occasionally accommodates US Navy ships operating in the Pacific.

Before now, most satellites required specific directives or tasks to operate. Unexpected occurrences, like a war or an earthquake, may trigger an assignment, or a satellite may be scheduled to undertake ongoing observations of certain targets.

The team claims that while artificial intelligence technology is increasingly being used in space missions, such as for image recognition, mapping out flight paths, and collision avoidance, it had not previously been given full control of a satellite, which the researchers argue wastes time and resources.

SCMP states that China has more than 260 remote-sensing satellites in orbit, but they frequently operate “idly” in space, gathering low-value, time-sensitive data without any particular objective. The satellites have a short lifespan and are expensive. According to the researchers, it is crucial to make the most of their usefulness with new orbital applications.

The team proposed that if it discovered anomalous objects or activities, an AI-controlled satellite might warn certain users, such as the military, the national security administration, and other pertinent institutions.

However, for AI to be effective, it must have a thorough awareness of the world; as a result, it must learn not just how to recognize man-made and natural objects, but also how to understand the intricate and constantly changing connections between them and the many human communities.

“The AI’s decision-making process was extremely complex. The machine needs to consider many factors – such as real-time cloud conditions, camera angles, target value and the limits of a satellite’s mobility – when planning a day’s work,” explains the SCMP.
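As a rough illustration of the planning problem the SCMP describes, the sketch below scores candidate observation targets against a few of the quoted factors (cloud cover, slew angle, target value). Every field name, weight, and number here is an invented assumption; none of it reflects the actual Qimingxing 1 experiment.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    value: float        # assumed observation value, 0..1
    cloud_cover: float  # assumed real-time cloud fraction, 0..1
    slew_deg: float     # assumed camera slew angle required, degrees

MAX_SLEW_DEG = 45.0     # assumed limit on the satellite's camera mobility

def score(t: Target) -> float:
    """Higher is better: value discounted by clouds; infeasible slews score 0."""
    if t.slew_deg > MAX_SLEW_DEG:
        return 0.0
    return t.value * (1.0 - t.cloud_cover)

candidates = [
    Target("site A", value=0.9, cloud_cover=0.7, slew_deg=10.0),
    Target("site B", value=0.6, cloud_cover=0.1, slew_deg=30.0),
    Target("site C", value=0.8, cloud_cover=0.2, slew_deg=60.0),  # over the limit
]

# Plan the day's work: observe the highest-scoring feasible targets first.
for t in sorted(candidates, key=score, reverse=True):
    print(f"{t.name}: score={score(t):.2f}")
```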

Can AI Completely Replace Journalists’ and News Anchors’ Jobs?

The future of journalism could undergo massive changes if the progression of artificial intelligence (AI) goes as predicted and the technology takes center stage. Journalists and news anchors have a lot to worry about: their careers could come to a sad, technological ending. But to what extent is this claim real, and how fast could AI replace these jobs?

Professor Charlie Beckett, head of the Polis/LSE Journalism AI research project has advised caution and would discourage journalists from using new tools without human supervision: 

“AI is not about the total automation of content production from start to finish: it is about augmentation to give professionals and creatives the tools to work faster, freeing them up to spend more time on what humans do best. Human journalism is also full of flaws and we mitigate the risks through editing. The same applies to AI. Make sure you understand the tools you are using and the risks. Don’t expect too much of the tech.”

Proponents see numerous advantages in AI-powered journalism, arguing it can strip out individual bias and personal preference while promising faster, more accurate, and more in-depth coverage. With machine learning algorithms at their disposal, journalists can analyze vast amounts of data and information, uncovering patterns and insights that would otherwise remain hidden.

The result will be a new era of investigative journalism, one where reporters can delve deeper into complex stories and bring to light important issues that would otherwise go unnoticed.

However, AI also brings with it a darker side. The growing reliance on algorithms and automation threatens to undermine the credibility and trustworthiness of journalism. The rise of AI in journalism also raises concerns about job security and the potential for AI to perpetuate existing biases in the data it uses to generate news.

With machines taking over the tedious and time-consuming tasks of journalism, many worry that human reporters will become obsolete, replaced by cold, impartial algorithms. And as AI continues to evolve, it is becoming increasingly difficult to distinguish between news generated by humans and by machines, putting the very foundations of journalism at risk.

Slavica Ceperkovic, a visiting professor of interactive media at New York University Abu Dhabi, has a front-row seat to how media is changing. Her students – who are learning to build new worlds in augmented and virtual reality – are adapting fast to this changing technological landscape, using online tools such as Notion and Discord to organize their work and what they are learning, she told The National.

And they don’t discriminate regarding the medium their information comes in – including short-form video, as seen with the meteoric rise of TikTok.

Despite the expert predictions and guesses, the future of journalism is uncertain, but one thing is clear: AI will play a critical role in shaping its evolution. Whether it will be a force for good or a harbinger of doom remains to be seen. But as the field continues to evolve, journalists and news organizations must be vigilant, embracing new technologies while preserving the core principles of truth, accuracy, and impartiality that have always defined the profession.

The use of AI to support and produce pieces of journalism is something outlets have been experimenting with for some time. Francesco Marconi categorizes AI innovation in the past decade into three waves: automation, augmentation, and generation. 

“During the first phase the focus was on automating data-driven news stories, such as financial reports, sports results, and economic indicators, using natural language generation techniques,” he says. 

There are many examples of news publishers automating some content, including global agencies like Reuters, AFP, and AP, and smaller outlets. 
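First-wave “automation” of this kind is often little more than structured data poured into sentence templates. Here is a minimal sketch of the idea, with an invented company and invented figures:

```python
# Minimal template-based news generation, in the spirit of the first wave:
# structured data in, formulaic copy out. Company name and figures invented.
def earnings_story(company: str, quarter: str, revenue_m: float, prior_m: float) -> str:
    change = (revenue_m - prior_m) / prior_m * 100
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported {quarter} revenue of ${revenue_m:.1f} million, "
        f"which {direction} {abs(change):.1f}% from ${prior_m:.1f} million "
        f"a year earlier."
    )

print(earnings_story("Acme Corp", "first-quarter", revenue_m=120.4, prior_m=101.5))
```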

According to Marconi, the second wave arrived when “the emphasis shifted to augmenting reporting through machine learning and natural language processing to analyze large datasets and uncover trends.” 

An example of this can be found at the Argentinian newspaper La Nación, which began using AI to support its data team in 2019, and then went on to set up an AI lab in collaboration with data analysts and developers.

The third and current wave is generative AI. It’s powered by large language models capable of generating narrative text at scale. This new development offers applications to journalism that go beyond simple automated reports and data analysis. Now, we could ask a chatbot to write a longer, balanced article on a subject, or an opinion piece from a particular standpoint. We could even ask it to do so in the style of a well-known writer or publication.
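In practice, such a request can be a single API call. The sketch below assumes an OpenAI API key in the OPENAI_API_KEY environment variable and uses OpenAI’s public chat-completions REST endpoint; the model choice and prompt are illustrative assumptions, not a recommendation.

```python
import os
import requests

# Sketch: ask a hosted large language model to draft an opinion piece in a
# given style. Assumes a valid key in OPENAI_API_KEY; the prompt is invented.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You write measured newspaper op-eds."},
            {"role": "user", "content": "Write a 200-word op-ed on AI regulation "
                                        "in the voice of a skeptical columnist."},
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```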

Microsoft developing its own AI chip – The Information

Microsoft Corp is developing its own artificial intelligence chip, code-named “Athena”, that will power the technology behind AI chatbots like ChatGPT, The Information reported on Tuesday, citing two people familiar with the matter.

The company, which was an early backer of ChatGPT-owner OpenAI, has been working on the chip since 2019 and it is being tested by a small group of Microsoft and OpenAI employees, the report said.

Microsoft is hoping the chip will perform better than what it currently buys from other vendors, saving it time and money on its costly AI efforts, the report said. Other big tech companies including Amazon and Google also make their own in-house chips for AI.

So far, chip designer Nvidia dominates the market for such chips.

Microsoft and Nvidia did not immediately respond to a request for comment.

The rollout is being accelerated by Microsoft following the success of ChatGPT, the report said. The Windows maker earlier this year launched its own AI-powered search engine, Bing AI, capitalizing on its partnership with OpenAI and trying to grab market share from Google.

Elon Musk to start rival to Microsoft-backed ChatGPT

Elon Musk says he will launch an artificial intelligence (AI) platform that he calls “TruthGPT” to challenge the offerings from Microsoft and Google.

Musk criticised Microsoft-backed OpenAI, the firm behind chatbot sensation ChatGPT, accusing it of “training the AI to lie” and said OpenAI has now become a “closed source”, “for-profit” organisation “closely allied with Microsoft”.

He also accused Larry Page, co-founder of Google, of not taking AI safety seriously.

“I’m going to start something which I call ‘TruthGPT’, or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk said in an interview with Fox News Channel’s Tucker Carlson to be aired later on Monday.

“And I think this might be the best path to safety, in the sense that an AI that cares about understanding the universe – it is unlikely to annihilate humans because we are an interesting part of the universe,” he said.

Musk, OpenAI and Page did not immediately respond to Reuters’ requests for comment.

Musk has been poaching AI researchers from Alphabet Inc’s Google to launch a startup to rival OpenAI, people familiar with the matter told Reuters.

Musk last month registered a firm named X.AI Corp, incorporated in Nevada, according to a state filing.

The firm listed Musk as the sole director and Jared Birchall, the managing director of Musk’s family office, as secretary.

The move came even after Musk and a group of artificial intelligence experts and industry executives called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, citing potential risks to society.

Musk also reiterated his warnings about AI during the interview with Carlson, saying “AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production”, according to the excerpts.

“It has the potential of civilisational destruction,” he said.

He said, for example, that a super-intelligent AI could write incredibly well and potentially manipulate public opinion.

He tweeted at the weekend that he had met with Barack Obama while Obama was US president and told him Washington needed to “encourage AI regulation”.

Musk co-founded OpenAI in 2015 but stepped down from the company’s board in 2018.

In 2019, he tweeted that he left OpenAI because he had to focus on Tesla and SpaceX.

He also tweeted at the time that other reasons for his departure from OpenAI were, “Tesla was competing for some of the same people as OpenAI & I didn’t agree with some of what OpenAI team wanted to do”.

Musk, CEO of Tesla and SpaceX, has also become CEO of Twitter, a social media platform he bought for $US44 billion ($A66 billion) last year.

In the interview with Fox News, Musk said he recently valued Twitter at “less than half” of the acquisition price.

In January, Microsoft Corp announced a further multi-billion dollar investment in OpenAI, intensifying competition with rival Google and fuelling the race to attract AI funding in Silicon Valley.

AI-Generated Image Wins Photography Award

An artist who won an award at a world-renowned photography competition says the winning image was actually generated by AI.

German photographer Boris Eldagsen said that he wouldn’t be accepting the prize because his image “The Electrician” wasn’t a real photo. It had come top in the creative category in the open competition at the World Photography Organisation’s Sony World Photography Awards 2023.

“AI is not photography,” Eldagsen, who has been a photographer for around three decades, wrote on his website. “Therefore I will not accept the award.”

"The Electrician," an image of two women created with DALL-E 2
Boris Eldagsen said that the image “has all the flaws of AI.” 

The 1940s-style black-and-white image shows one woman standing behind another, with her hands on the other woman’s shoulders. Other hands appear to be adjusting the dress of the woman in the foreground. Both women’s gazes are averted.

Though the image looks photorealistic, there are some signs that it has been generated by AI, such as the position of some of the fingers, the appearance of some fingernails, and the shape of one of the women’s pupils. Her dress also appears to blend into her arm.

“It has all the flaws of AI, and it could have been spotted but it wasn’t,” Eldagsen told Insider, adding that he was surprised the image won. After hearing of his success in early March, he immediately told the competition’s organizers that the image was AI-generated, he said.

AI image-generation sites such as DALL-E, Midjourney, and Stable Diffusion have boomed in popularity over recent months. In their prompts, users can ask the sites to create artwork in the style of a particular artist or images of events that never happened — leading to deepfake images of former President Donald Trump being arrested going viral. Users can also ask the platforms to edit existing images.
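Programmatically, prompting one of these generators can be equally simple. This sketch assumes the pre-1.0 `openai` Python package and a key in the OPENAI_API_KEY environment variable; the prompt is an invented example of the style described, not Eldagsen’s own.

```python
import os
import openai  # assumes the pre-1.0 openai package (pip install "openai<1.0")

openai.api_key = os.environ["OPENAI_API_KEY"]

# Sketch: generate one image from a text prompt via DALL-E. The prompt is an
# invented illustration, not Eldagsen's actual workflow.
result = openai.Image.create(
    prompt="1940s-style black-and-white portrait of two women, film grain",
    n=1,
    size="1024x1024",
)
print(result["data"][0]["url"])  # URL of the generated image
```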

Eldagsen told Insider that he generated the image in September using DALL-E 2 in a process he referred to as “promptography.”

“For me, working with AI image generators is a co-creation, in which I am the director,” he wrote on his website. “It is not about pressing a button – and done it is. It is about exploring the complexity of this process, starting with refining text prompts, then developing a complex workflow, and mixing various platforms and techniques.”

Eldagsen told Insider that he wanted to start a conversation around the relationship between AI and photography. Competition organizers should create separate categories for AI-generated art, which is becoming increasingly realistic, he said.

[Image: three images, each of two women, created by Boris Eldagsen with DALL-E 2 — some of the images he generated in the process of creating “The Electrician”.]

“Midjourney 5 really looks like photography,” he said.

“The Electrician” has since been removed from the Sony World Photography Awards 2023 and no longer features on the World Photography Organisation’s website or at the physical exhibition in London.

A spokesperson for CREO, the company behind the awards, told Insider that the category “The Electrician” won “welcomes various experimental approaches.”

“As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation,” the spokesperson continued, adding that the image was removed after Eldagsen declined the award.

“The Electrician” is part of a series by Eldagsen called “pseudomnesia,” the Latin term for “fake memory.” The images are “fake memories of a past, that never existed, that no-one photographed,” created by putting them through AI image generators between 20 and 40 times, Eldagsen says on his website.

[Image: two images of a woman generated using DALL-E 2, from Boris Eldagsen’s “pseudomnesia” series.]

“The photographic language of photography has now separated itself from the medium,” Eldagsen told Insider.

Google To Give Its Search Engine an AI

Google is synonymous with the internet. Millions throng to the search engine every day. From catching up on what’s happening in the world to discovering how to dye your cat green, Google has the answer to everything.

So, it was only a matter of time before the tech giant integrated its widely used search engine with artificial intelligence. This was inevitable, with almost all major tech companies dashing to cash in on the expanding AI space. It has already launched an AI-powered chatbot called Bard, similar to ChatGPT.

Not only is Google tweaking and testing new features on the existing search engine, but it is also developing a brand new AI-powered search engine under the project name ‘Magi’, as reported by The New York Times.

A task force of some 160 researchers, designers, and executives is working on giving users a more personalized experience by anticipating their needs. According to anonymous Google employees cited by The Times, the company has been in panic mode since OpenAI launched ChatGPT in November last year. In response, two weeks later, Google created a task force to start building AI products.

So what exactly is Project Magi?

Under Project ‘Magi’, what we’re looking at is a departure from run-of-the-mill search engines. A suite of new AI features will include more conversational, chatbot-like engagement: answering questions related to software coding, writing code, offering a list of options for objects to purchase, surfacing information for research, and so on.

There are other new tools in development as well: GIFI, an AI tool to generate image results in Google Images, and another tool called Tivoli Tutor, which lets users engage with a chatbot to learn a new language. Searchalong is another product in the works, letting users ask questions of a chatbot while simultaneously surfing the internet in Google Chrome.

The new search engine is still in the early stages of development, and there has been no confirmed announcement of when it will launch. However, The Times reports that Google will initially release the new features to a limited audience of one million users in the United States, with plans to increase that number to 30 million by the end of the year.

In a statement, Google spokeswoman Lara Levin said, “Not every brainstorm deck or product idea leads to a launch, but as we’ve said before, we’re excited about bringing new A.I.-powered features to search and will share more details soon.”

A bid to stay relevant

The new developments are also in response to the recent speculative news of Samsung possibly replacing Google with Microsoft’s Bing as its key search engine. The new Bing, running on GPT-4, is an attractive option with AI-powered features. Remember that Microsoft is also working with OpenAI, the parent company of ChatGPT. 

Google is jittery as its agreement with Samsung is worth around $3 billion annually. Also, its deal with Apple, which stands at $20 billion, is up for renewal this year.

Top 10 AI-powered Applications For Daily Use

Artificial intelligence (AI) has swiftly transitioned from a futuristic concept to an integral part of our daily lives. 

From voice assistants that simplify our routines to advanced algorithms that streamline complex tasks, AI-powered apps are shaping how we live, work, and communicate. These intelligent tools enhance our productivity and cater to our interests, making our day-to-day experiences more efficient. 

In this article, we delve into the world of AI and present the top 10 AI-powered apps that you can seamlessly integrate into your daily routine. So, whether you’re looking to optimize your time, learn a new skill, or elevate your entertainment, we’ve got you covered with the best AI-driven solutions available today.

Siri

Siri doesn’t need much introduction as it is one of the most popular AI-powered apps. It is Apple’s intelligent voice assistant integrated into all iOS devices. 

Siri is that helpful friend in your pocket, always ready to assist you with tasks, answer your questions, or perhaps even crack a joke when you are feeling down. Need directions, restaurant recommendations, or simply want to send a text message? Just say, “Hey Siri,” and your wish is her command. 

Siri uses advanced natural language processing and machine learning algorithms to understand and respond to user queries. With continuous improvements, Siri has become more accurate and context-aware, offering personalized suggestions and experiences for users.

Siri can be used to streamline daily tasks and save time, providing hands-free assistance across various Apple devices. It also supports more than 20 languages, allowing users to interact with the voice assistant in their preferred language.

Siri comes pre-installed on Apple devices, including iPhones, iPads, Apple Watches, and Macs, at no additional cost.

Amazon Alexa

Amazon Alexa is a versatile voice-controlled virtual assistant developed by Amazon. Primarily integrated with Amazon Echo smart speakers, it can also be used on smartphones, tablets, and other smart home devices. 

Alexa turns your home into an automated space, enabling you to control smart devices, play music, and get information with just your voice. Want to turn off the lights, set a timer, or get a news update? Alexa is here to help. 

Alexa uses natural language processing, machine learning, and voice queries to interpret and respond to user commands. Its AI capabilities are regularly updated, providing an ever-improving user experience.

Amazon Alexa is available on Amazon Echo devices, Fire TV, and Fire tablets, as well as through the Alexa app on iOS and Android devices. The Alexa app is free, but some features or third-party skills may require a subscription or one-time purchase.

Google Assistant

Google Assistant is another popular AI-driven application developed by Google, its answer to Apple’s Siri and Amazon’s Alexa. It is a voice-controlled virtual assistant summoned with the phrase, “Ok, Google.”

Available on Android and iOS devices, as well as Google Home smart speakers, it can perform tasks such as answering questions, setting reminders, making restaurant reservations, sending messages, controlling smart home devices, and providing real-time information using voice commands.

Google Assistant harnesses the power of Google’s vast knowledge base and machine learning algorithms to deliver accurate and relevant responses.

It is convenient and easy to use. With support for multiple languages and seamless integration with Google services, it provides users with a comprehensive and personalized experience.

Google Assistant comes pre-installed on most Android devices and is available as a standalone app for iOS devices. It is also integrated into Google Home and Nest smart speakers.

Google Assistant is free to use.

ELSA (English Language Speech Assistant)

Imagine having a personal language coach on your phone, available 24/7 to guide you through mastering a new language. 

Meet ELSA Speak, an AI-powered app designed to help users improve their English pronunciation and fluency.

ELSA provides personalized lessons, real-time feedback, and a vast library of practice exercises to ensure you gain confidence in your speaking skills. 

The app’s friendly interface and intuitive design also make learning enjoyable. ELSA listens to the user’s speech through the device’s microphone and coaches them toward the correct English pronunciation.

There is a free trial for seven days. Afterward, you have to subscribe to ELSA Pro.

Cortana

Have you ever wished for a personal assistant to help you manage your busy schedule, set reminders, or answer your questions? Cortana, an AI-powered digital assistant by Microsoft, is here to make your life easier. 

Cortana is a versatile, AI-powered virtual assistant that can be accessed on numerous platforms and devices, including Android, iOS, Invoke smart speaker, Alexa, Microsoft Band, Windows 10, Windows Mobile, Windows Mixed Reality, and Xbox One. 

Additionally, it is compatible with popular headsets such as HyperX CloudX, Logitech G933, Sennheiser GSP350, etc.

Cortana’s intelligent features include voice recognition, allowing you to interact with the app using natural language.

Need a weather update or quick information on a topic? Just ask Cortana, and the app will provide the information you need.

You can also use it to sync your calendar, emails, and contacts, enabling Cortana to help you stay organized and on top of your tasks. With Cortana’s seamless integration across various devices, you’ll never miss a beat in your personal or professional life. 

Cortana itself comes free with Windows, and its companion apps are free to download on supported platforms.

Socratic

Homework and studying can be challenging, but what if you had an AI-powered tutor to help you tackle those tough questions? 

Socratic, developed by Google, is an innovative app designed to assist students in understanding complex concepts across various subjects. 

Simply snap a photo of your question or problem, and Socratic’s AI algorithms will analyze it and provide you with step-by-step explanations, relevant videos, and curated resources to aid your learning. 

The app covers various subjects, from mathematics and science to literature and history, and is available on Android and iOS for free.

Replika

Replika is an AI-powered chatbot designed to be your companion and conversation partner. It is designed to learn about you and provide an empathetic and engaging communication experience. 

As you interact with your Replika, it learns more about your thoughts, feelings, and experiences, using advanced natural language processing and machine learning algorithms to create a personalized and human-like connection.

Replika offers a judgment-free space for users to express their emotions, practice social skills, or enjoy a friendly chat. The app can help alleviate loneliness, boost self-awareness, and promote mental well-being.

Replika is available on both iOS and Android devices. While there is a free version with basic features, a subscription plan called Replika Pro is offered, currently for $7.99 a month, unlocking advanced features and customization options.

Youper

Youper is an AI-powered emotional health assistant that provides personalized mental health support through guided conversations, mindfulness exercises, and mood tracking.

Youper combines natural language processing, cognitive-behavioral therapy, and mood analysis to understand your emotional patterns and offer customized support tailored to your needs.

Youper users can benefit from improved emotional well-being, reduced stress, and enhanced self-awareness. The app offers a convenient and private platform for users to explore and address their mental health concerns.

Youper is available for download on iOS and Android devices. The basic version is free, while the Youper Premium subscription, which unlocks additional features, is priced at $36 per month.

Fyle

Have you ever found yourself drowning in a sea of paper receipts, trying to make sense of your monthly expenses? Or perhaps you struggle to stay organized when submitting work-related expenses for reimbursement. Say hello to Fyle, an AI-powered app that makes expense management feel like a breeze.

Fyle is an AI-powered expense management app designed to simplify and streamline the process of tracking, reporting, and reimbursing business expenses. Fyle allows you to automatically capture and categorize your expenses simply by snapping a photo of your receipt. The app uses machine learning and optical character recognition to extract expense details from receipts, invoices, and emails, eliminating manual data entry and minimizing human error.
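The extraction step described here (OCR followed by parsing) can be approximated with off-the-shelf tools. Below is a minimal sketch using the pytesseract OCR library; it is not Fyle’s actual pipeline, and the file name and regular expression are assumptions for illustration.

```python
import re
from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (plus the tesseract binary)

def extract_total(receipt_path: str) -> str | None:
    """OCR a receipt photo, then grab a likely total. Not Fyle's pipeline."""
    text = pytesseract.image_to_string(Image.open(receipt_path))
    # Assumed pattern: a line like "TOTAL  $42.17"; real receipts vary widely.
    match = re.search(r"total\D{0,10}(\d+[.,]\d{2})", text, flags=re.IGNORECASE)
    return match.group(1) if match else None

print(extract_total("receipt.jpg"))  # hypothetical local file
```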

Fyle saves users time and effort while ensuring accurate and organized expense tracking. The app also helps businesses comply with company policies and tax regulations, ultimately improving financial management.

Fyle is available on iOS and Android devices. Pricing plans currently start at $6.99 per user/month, with custom plans available for larger organizations.

DataBot

DataBot is another AI-powered virtual assistant that offers a wide range of services, including voice command recognition, news updates, weather forecasts, translations, and more.

DataBot can answer your questions, provide fun facts, and even converse with you. It employs natural language processing and machine learning to understand user queries and deliver accurate, contextually relevant responses.

DataBot users can enjoy a hands-free, personalized assistant that caters to their daily informational and organizational needs. The app helps save time, improves productivity, and provides quick access to essential information.

DataBot is free for download on iOS, Android, and Windows devices.

Conclusion

The rapid advancements in artificial intelligence have led to the development of an impressive array of AI-powered apps that can enhance our daily lives in numerous ways. These apps are revolutionizing how we interact with technology, making our routines smarter, more efficient, and less repetitive.