
Indian Channel Introduces AI-Based News Anchor “Sana” for Weather Reports

Indian media organization India Today has introduced a female AI news anchor named “Sana,” following in the footsteps of China’s artificial intelligence-powered news anchors. The move towards AI-driven bots reflects their proven effectiveness and ability to produce satisfactory output with less labor. At the India Today Conference 2023, Kalli Purie, vice chairperson of the India Today Group, introduced the Sana AI bot.

Purie described Sana as a “bright, gorgeous, ageless, tireless” robot who can “speak in multiple languages” and is “totally” under her control. The Sana feature of Aaj Tak AI will launch the following week, providing daily news updates in multiple languages several times per day. Additionally, Sana will host a new show in which she explains a specific topic each day, interacts with the audience, and takes questions.

11-year-old girl develops app that detects eye diseases

The 11-year-old’s mobile app can analyse parameters such as light and colour intensity to locate the eyes within the frame.

Meet Leena Rafeeq, an 11-year-old Dubai-based Malayali girl originally from Kerala, who has developed an AI application to detect eye diseases and other conditions through a unique scanning method using an iPhone. Rafeeq named the application “Ogler EyeScan” and began developing it when she was 10. In a video, she said that the application can analyse various parameters such as colour and light intensity, distance and look-up points to locate eyes within the frame, using advanced computer vision and machine learning.

“Exciting news! I am thrilled to announce the submission of my new Artificially Intelligent mobile app, named Ogler EyeScan,” Rafeeq said in a LinkedIn post on Saturday, adding that she created the AI mobile app when she was 10. The application also identifies any light-burst issues and whether the eyes are positioned exactly inside the scanner frame. She also said Ogler can identify conditions like Arcus, Melanoma, Pterygium and even Cataracts with the help of trained models.

Rafeeq said that her app is currently under review on Apple’s App Store and that she is hopeful it will be approved soon. Ogler EyeScan is only supported on iPhone 10 and above with iOS 16+. She also said, “This App was developed natively with SwiftUI without any third-party libraries or packages, and it took me six months of research and development to bring this innovative app to life.”

Replying to some of the comments on her viral LinkedIn post, she said the accuracy of her app is “nearly 70% at this moment.”

AI detected woman’s breast cancer 4 years before it developed

An AI program was successfully able to detect breast cancer in a woman four years before it developed.

While some developments in AI can sound pretty scary, there are times when advancements in technology can do a great deal of good.

Certain forms of technology are being used to help diagnose conditions that impair a person’s mobility and there have been advances in the way we’re using Artificial Intelligence too.

AI is being used in cancer screening technology to pick up potential issues long before they develop into something harmful.

This technology is currently being used to great success in Hungary, while the US, UK and the rest of Europe are also looking at testing it for themselves.

While there are still many hurdles to get through, this technology could be a valuable tool for radiologists and ultimately be a lifesaver.

The image on the left shows something the AI identified as cancer, on the right is four years later as it started to develop. Credit: Lauder Breast Center at the Memorial Sloan Kettering Cancer Center/CNN

Speaking to CNN, Dr Larry Norton of the Lauder Breast Center explained that while the technology has been around for decades AI is becoming a useful tool in refining the process and helping identify potential health issues.

He said: “AI is a tool that machines use for looking at images and comparing those images to ones that have already been recorded in the machine to identify abnormalities.

“This technology can look at mammograms and identify areas that a human radiologist may want to look at more carefully.

“It’s called computer assisted detection, it’s actually been around since the late 1990s but the technology is improving.”

Dr Norton went on to explain how the technology worked, saying: “There’s lots of abnormalities that you see, they’re changes that are not really cancer. You can’t call everything cancer because anyone going for a mammogram is gonna need a biopsy. That’s not very practical.

“What this work does is it identifies risk. It can tell a woman ‘you’re at high risk of developing breast cancer’ before you develop breast cancer.”

However, he stressed that while AI had made some impressive advancements, this technology was in place to help human decision-makers rather than outright replace medical professionals.

“One thing humans can do that machines can’t do is order special tests. Things like contrast enhanced mammograms and MRIs,” Dr Norton said.

“The other thing humans can do is look at previous mammograms and see if there’s any changes.”

“We’ve got to think of AI as a tool for helping radiologists look at the images better. It’s not a standalone test, it’s not gonna replace a radiologist.”

According to the New York Times, the use of this AI technology in breast cancer screening has reduced radiologists’ workload by around 30 percent while increasing cancer detection rates by 13 percent, which sounds like entirely positive news.

They also report that the AI was tested with some of the most challenging cancer cases where the early signs of breast cancer had not been spotted by radiologists, with the AI successfully managing to identify the cancer.

Scientists Want To Use Real Human Brain Cells For AI

A team of scientists, led by Johns Hopkins University, has proposed the development of a biological computer that could surpass silicon-based machines in performance and energy efficiency. 

The computer will be powered by millions of human brain cells, arranged in arrays of brain organoids, which are small three-dimensional neural structures grown from human stem cells. The organoids will be connected to sensors and output devices, and trained using techniques such as machine learning and big data. 

The researchers have published a detailed roadmap in the journal Frontiers in Science, outlining their vision for what they call “organoid intelligence”. This ultra-efficient system aims to solve problems that are beyond the capabilities of conventional digital computers, while also supporting the development of neuroscience and medical research. 

Although similar to quantum computing in ambition, the project raises ethical concerns regarding the “consciousness” of brain organoid assemblies.

Generative AI set to affect 300mn jobs across major economies

The latest breakthroughs in artificial intelligence could lead to the automation of a quarter of work in the US and the eurozone, according to a study by Goldman Sachs.

The investment bank said Monday that “generative” AI systems such as ChatGPT, which can create content indistinguishable from human output, could spark a productivity boom that would eventually lift annual global gross domestic product by 7 percent over a 10-year period.

But if the technology lived up to its promise, it would also bring “significant disruption” to the labor market and expose the equivalent of 300 million full-time workers in major economies to automation, according to Joseph Briggs and Devesh Kodnani, the paper’s authors. Lawyers and administrative staff would be at greatest risk of redundancy.

They calculate that about two-thirds of jobs in the US and Europe are exposed to some degree of AI automation, based on data on the tasks typically performed across thousands of occupations.

Most people would see less than half of their workload automated and would likely continue in their jobs, with some of their time freed up for more productive activities.

In the US, this would apply to 63 percent of the workforce, they calculated. A further 30 percent, who work in physical or outdoor jobs, would be unaffected, although their work could be vulnerable to other forms of automation.

But about 7 percent of US workers have jobs where at least half of their tasks could be done by generative AI and are vulnerable to replacement.

Goldman said its research pointed to a similar impact in Europe. With manual labor accounting for a larger share of employment in the developing world, around a fifth of work globally could be done by AI, or about 300 million full-time jobs in major economies.

The report will ignite debate on the potential of AI technologies both to revive the rich world’s flagging productivity growth and to create a new class of dispossessed white-collar workers who risk suffering a fate similar to that of manufacturing workers in the United States in the 1980s.

Goldman’s estimates of the impact are more conservative than some academic studies that considered the impact of a broader range of related technologies.

A paper published last week by OpenAI, the creators of GPT-4, found that 80 percent of the US workforce could see at least 10 percent of their jobs done by generative AI, based on analysis by human researchers and the company’s large language model (LLM).

Europol, the law enforcement agency, also warned this week that rapid advances in generative AI could help online scammers and cybercriminals, so “dark LLMs . . . could become an important criminal business model of the future”.

Goldman said that if corporate investment in AI continued to grow at a similar pace to software investment in the 1990s, US investment alone could reach 1 percent of US GDP by 2030.

The Goldman estimates are based on an analysis of US and European data on the tasks typically performed in thousands of different jobs. Researchers hypothesized that AI would be able to perform tasks such as filling out tax returns for a small business, evaluating a complex insurance claim, or documenting the results of a crime scene investigation.

They did not envisage AI being used for more sensitive tasks such as making a court decision, checking a patient’s status in intensive care, or studying international tax laws.

A Wharton Professor Gave AI Tools 30 Minutes To Work On A Business Project

Artificial intelligence is presenting new possibilities in terms of how to do work, and leaving many observers nervous about what will become of white-collar jobs.

Ethan Mollick, a management professor at the Wharton School of the University of Pennsylvania, has been closely following developments in generative A.I. tools, which can create essays, images, voices, code, and much else based on a user’s text prompts.


He recently decided to see how much such tools could accomplish in only 30 minutes, and described the results this weekend on his blog One Useful Thing. The results were, he writes, “superhuman.”

In that short amount of time, he writes, the tools managed to do market research, create a positioning document, write an email campaign, create a website, create a logo and “hero shot” graphic, make a social media campaign for multiple platforms, and script and create a video.

The project involved marketing the launch of a new educational game, and he wanted A.I. tools to do all the work while he only gave directions. He chose a game he himself authored so that he could gauge the quality of work. The game, Wharton Interactive’s Saturn Parable, is designed to teach leadership and team skills on a fictional mission to Saturn.

First, Mollick turned to the version of Bing powered by GPT-4. Bing, of course, is Microsoft’s search engine—long a distant second to Google—while GPT-4 is the successor to ChatGPT, the A.I. chatbot from OpenAI that took the world by storm after its release in late November. Microsoft has invested billions in OpenAI.

Mollick instructed Bing to teach itself about the game and the business simulation market of which it’s a part. He then instructed it to “pretend you are a marketing genius” and produce a document that “outlines an email marketing campaign and a single web page to promote the game.”

In under three minutes it generated four emails totaling 1,757 words.

He then asked Bing to outline the web page, including text and graphics, and then used GPT-4 to build the site.

He asked MidJourney, a generative A.I. tool that produces images from text prompts, to produce the “hero image” (the large image visitors encounter first when visiting a website).

Next, he asked Bing to start the social media campaign, and it produced posts for five platforms, including Facebook and Twitter.

Then he asked Bing to write a script for a video, an A.I. tool called ElevenLabs to create a realistic voice, and another called D-id to turn it into a video.

At that point, Mollick ran out of time. But, he notes, if he’d had the plugins that OpenAI announced this week, his A.I. chatbot, connected to email automation software, could have actually run the email campaign for him.

According to OpenAI, plugins for Slack, Expedia, and Instacart are among the first to be created, with many more to come. The problem with A.I. chatbots, the company notes, is that “the only information they can learn from is their training data.” Plugins can be their “eyes and ears,” giving them access to more recent or specific data.
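For context, a launch-era ChatGPT plugin was described by a small manifest file plus an OpenAPI spec that the model could call. The sketch below reconstructs roughly what such a manifest looked like, written here as a Python dict for consistency with the other examples; the field names are recalled from OpenAI’s early plugin documentation and every URL, name, and description is an illustrative placeholder, so treat the details as assumptions rather than an authoritative schema.

```python
# Hedged sketch: the approximate shape of an early ChatGPT plugin manifest
# (normally served as JSON at /.well-known/ai-plugin.json).
# Field names follow OpenAI's launch-era plugin docs as best recalled;
# all URLs, names, and descriptions are illustrative placeholders.
plugin_manifest = {
    "schema_version": "v1",
    "name_for_human": "Email Campaign Helper",       # shown to users
    "name_for_model": "email_campaign",              # shown to the model
    "description_for_human": "Creates and schedules marketing emails.",
    # The model reads this field to decide when to invoke the plugin:
    "description_for_model": "Use this to create, schedule, or send email campaigns.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        "url": "https://example.com/openapi.yaml",   # placeholder spec URL
    },
    "logo_url": "https://example.com/logo.png",      # placeholder
    "contact_email": "support@example.com",          # placeholder
    "legal_info_url": "https://example.com/legal",   # placeholder
}
```

The OpenAPI spec referenced by the manifest is what actually gives the chatbot its “eyes and ears”: each operation it lists becomes a tool the model can call.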

Mollick writes that he would have needed a team and “maybe days of work” to do all the work the A.I. tools did in 30 minutes.

Bill Gates wrote on his blog this week that ChatGPT and similar tools “will increasingly be like having a white-collar worker available to help you with various tasks.”

Actual white-collar workers might be forgiven for feeling some anxiety.

Top 5 Ways GPT-4 Can Increase Workers’ Productivity

GPT-4, the latest version of the language model that powers ChatGPT, offers considerable improvements over GPT-3 and GPT-3.5. Workers can use GPT-4 to enhance their productivity and the quality of their work.

The newest version of GPT can accept inputs in both text and image form, whereas GPT-3 and GPT-3.5 could only take text.

ChatGPT is an innovative artificial intelligence technology created by OpenAI, which aims to deliver effective results in less time.

Here are some potential ways in which a language model like GPT-4 could increase workers’ productivity, based on the advancements made by previous models like GPT-3:

  1. Automating Repetitive Tasks: GPT-4 could be used to automate repetitive tasks such as data entry, email responses, and social media posts, freeing up workers’ time to focus on more complex tasks.
  2. Enhancing Communication: GPT-4 could help workers communicate more efficiently by providing real-time language translation and automatic summarization of long texts or meetings, making it easier to understand and respond to information.
  3. Improving Research: GPT-4 could assist workers in conducting research by providing accurate and relevant information from a vast range of sources, making it easier to find the data needed to complete tasks.
  4. Streamlining Workflows: GPT-4 could help workers streamline their workflows by providing suggestions on the best way to complete a task, identifying potential roadblocks, and offering solutions to overcome them.
  5. Personalizing Work: GPT-4 could help workers personalize their work by analyzing their work habits, preferences, and productivity patterns and offering customized recommendations on how to improve their workflow and increase productivity.
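As an illustration of the “Automating Repetitive Tasks” point above, a minimal sketch of wiring a chat model into a repetitive email-reply workflow might look like the following. The model name, prompt wording, and helper-function names are assumptions for illustration only, and the request uses the `openai` Python library’s chat interface as it existed around GPT-4’s launch.

```python
# Minimal sketch: drafting routine email replies with a chat model.
# Prompt wording, model name, and function names are illustrative assumptions.
SYSTEM_PROMPT = (
    "You draft short, polite replies to routine customer emails. "
    "Keep every reply under 120 words."
)

def build_messages(customer_email: str) -> list:
    """Assemble the chat request for one incoming email."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": "Draft a reply to this email:\n\n" + customer_email},
    ]

def draft_reply(customer_email: str, model: str = "gpt-4") -> str:
    """Send the request via the openai library (launch-era chat API)."""
    import openai  # pip install openai; requires OPENAI_API_KEY to be set
    response = openai.ChatCompletion.create(
        model=model,
        messages=build_messages(customer_email),
    )
    return response["choices"][0]["message"]["content"]
```

Keeping the task instructions in a fixed system prompt and feeding only the incoming email as user content is what makes the workflow repeatable: the same two-message shape handles every routine email.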

Top 10 Productive Use Cases of ChatGPT for the Year 2023

ChatGPT is a system developed by OpenAI to improve on the conventional capabilities of AI chat systems, built specifically for use in digital assistants and chatbots. Compared to other chatbots, ChatGPT can pick up on what has already been said in the conversation and corrects itself if it makes a mistake. In this article, we explain the top ten productive use cases of ChatGPT for the year 2023.

  1. It Answers Questions

Unlike many other chatbots, ChatGPT can answer questions in an articulate way, and it is capable of explaining complex issues in different styles or tones.

  2. It Develops Apps

Some users on Twitter asked ChatGPT for help in creating an app, and guess what? It actually worked. The AI tool even gave example code for a particular scenario, and in addition it offers general tips for app development; nevertheless, its output should not be adopted without personal review and correction.

  3. It Acts as An Alternative to Google Search

ChatGPT is not just a competitor to other chatbots; it has the potential to replace Google Search, because it offers smart answers to the queries users search for. The main drawback is that it cannot provide source references.

  4. It Can Compose Emails

ChatGPT can compose emails. Some Twitter users asked it for ready-made emails and received complete drafts as a result. This may put an end to the blank-page era.

  5. It Creates Recipes

On request, ChatGPT can also suggest cooking recipes. At this point it is hard to judge how successful these are, but it would be interesting to try the resulting recipes yourself.

  6. Writing Funny Dialogue

ChatGPT also impresses users with its artistic skills. Some users had fun with the funny dialogues the AI chatbot generated. It also writes skits, and the results are impressive and great fun to read.

  7. Language Modeling

ChatGPT can be used to generate training data for other models, such as named-entity recognition and part-of-speech taggers. This can be beneficial for businesses that need to extract meaning from text data.

  8. Converts Text to Speech

ChatGPT’s output can be fed into text-to-speech systems, allowing it to be used in a wide range of applications such as voice assistants, automated customer service, and more.

  9. Text Classification

ChatGPT can be used to categorise text, such as spam or not spam, or positive or negative sentiment. Businesses that need to filter or organise large amounts of text data may find this useful. Apart from that, ChatGPT can also translate text from one language to another, which can be useful for businesses that need to communicate with customers or partners in multiple languages.

  10. Sentiment Analysis

ChatGPT can analyze text sentiment to determine whether it is positive, negative, or neutral. Businesses that need to monitor customer sentiment or social media mentions may find this useful. Apart from that, ChatGPT can also be used to summarize lengthy texts, making them easier to understand and digest. This applies to news articles, legal documents, and other types of content.
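To make the text-classification and sentiment-analysis points above concrete, here is a minimal, hedged sketch of prompt-based classification with a chat model. The label set, prompt wording, and helper names are illustrative assumptions, and the commented-out section shows where the actual model request (via the `openai` library) would go.

```python
# Hedged sketch of prompt-based text classification / sentiment analysis.
# Label set and prompt wording are illustrative assumptions.
LABELS = ("positive", "negative", "neutral")

def build_sentiment_prompt(text: str) -> str:
    """Ask the model to answer with exactly one known label."""
    return (
        "Classify the sentiment of the text below. "
        "Answer with exactly one word from: " + ", ".join(LABELS) + ".\n\n"
        "Text: " + text
    )

def parse_label(model_output: str) -> str:
    """Normalize the model's free-text answer; fall back to 'neutral'."""
    answer = model_output.strip().lower().rstrip(".")
    return answer if answer in LABELS else "neutral"

# Where the real request would go (assumes the openai library and an API key):
# import openai
# raw = openai.ChatCompletion.create(
#     model="gpt-4",
#     messages=[{"role": "user", "content": build_sentiment_prompt(review)}],
# )["choices"][0]["message"]["content"]
# label = parse_label(raw)
```

Constraining the answer to a single word keeps the parsing step trivial, and the same prompt-and-parse pattern covers spam/not-spam classification by swapping in a different label set.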

Privacy Alert: ChatGPT Exposes Private Conversations

OpenAI CEO expresses regret, claims error has been fixed.

Artificial Intelligence (AI) is transforming our lives and work, but recent developments have raised concerns about the privacy and security of user data when using AI-powered tools.

One of these concerns is the ChatGPT glitch that allowed some users to see the titles of other users’ conversations.

ChatGPT glitch

ChatGPT is an AI chatbot developed by OpenAI that allows users to draft messages, write songs, and code. Each conversation is stored in the user’s chat history bar.

However, users began seeing conversations they didn’t have with the chatbot in their chat history as early as Monday. Users shared these on social media sites, including Reddit and Twitter.

Company Response

OpenAI CEO Sam Altman expressed regret and confirmed that the “significant” error had been fixed. The company also briefly disabled the chatbot to address the issue. OpenAI claims that users couldn’t access the actual chats. Despite this, many users are still worried about their privacy on the platform.

Privacy Concerns

The glitch suggests that OpenAI has access to user chats, which raises questions about how the company uses this information.

The company’s privacy policy states that user data, such as prompts and responses, may be used to continue training the model.

However, that data is only used after personally identifiable information has been removed. Users fear that their private information could be released through the tool.

AI Tools and Privacy

The ChatGPT glitch comes as Google and Microsoft compete for control of the burgeoning market for AI tools. Concerns have been raised that missteps like these could be harmful or have unintended consequences.

There needs to be a greater focus on privacy and security concerns as AI becomes more prevalent in our lives. Companies must be transparent about how they collect, store, and use user data and must work quickly to address any issues.

Google Begins ChatGPT Rival “Bard” Testing to Limited Users


Users can join a waitlist to gain access to Bard, which promises to help users outline and write essay drafts, plan a friend’s baby shower, and get lunch ideas based on what’s in the fridge.

A company representative told CNN it will be a separate, complementary experience to Google Search, and users can also visit Search to check its responses or sources. Google said in a blog post it plans to “thoughtfully” add large language models to search “in a deeper way” at a later time.

Google said it will start rolling out the tool in the United States and United Kingdom, and plans to expand it to more countries and languages in the future.

The news comes as Google, Microsoft, Facebook and other tech companies race to develop and deploy AI-powered tools in the wake of the recent, viral success of ChatGPT. Last week, Google announced it is also bringing AI to its productivity tools, including Gmail, Sheets and Docs. Shortly after, Microsoft announced a similar AI upgrade to its productivity tools.

Google unveiled Bard last month in a demo that was later called out for providing an inaccurate response to a question about a telescope. Shares of Google’s parent company Alphabet fell 7.7% that day, wiping $100 billion off its market value.

Like ChatGPT, which was released publicly in late November by AI research company OpenAI, Bard is built on a large language model. These models are trained on vast troves of data online in order to generate compelling responses to user prompts. The immense attention on ChatGPT reportedly prompted Google’s management to declare a “code red” situation for its search business.

But Bard’s blunder highlighted the challenge Google and other companies face with integrating the technology into their core products. Large language models can present a handful of issues, such as perpetuating biases, being factually incorrect and responding in an aggressive manner.

Google acknowledged in the blog post Tuesday that AI tools are “not without their faults.” The company said it continues to use human feedback to improve its systems and add new “guardrails, like capping the number of exchanges in a dialogue, to try to keep interactions helpful and on topic.”

Last week, OpenAI released GPT-4, the next-generation version of the technology that powers ChatGPT and Microsoft’s new Bing search engine, with similar safeguards. In the first day after it was unveiled, GPT-4 stunned many users in early tests and a company demo with its ability to draft lawsuits, pass standardized exams and build a working website from a hand-drawn sketch.