
End of Fake Sick Leave Applications: AI Can Now Detect a Cold from a Person’s Voice

Many employees have, at some point, called in sick without actually being sick. It was all fun and games, until now. Researchers from the Sardar Vallabhbhai National Institute of Technology, Surat, and the Rhenish University of Applied Science, Germany, have conducted a study on developing speech-signal-based, non-invasive diagnosis techniques in the field of biomedical signal processing. The study aimed to develop a method that can identify a person with a common cold from their speech, with higher performance and fewer features than earlier approaches. The objective was to detect viral infections and illnesses with comparable symptoms, both to prevent their spread and to monitor patient health remotely.

The researchers found that the way people talk when they have a cold differs measurably from the way they talk when they don’t. They proposed three acoustic features to capture this: the Normalized Harmonic Peak with respect to the First Harmonic Peak (NHPF), the Normalized Harmonic Peak with respect to the Maximum value of Harmonic Peak (NHPM), and the Successive Harmonic Peak Ratio (SHPR). NHPF and NHPM measure how strong each harmonic is relative to the first and the strongest harmonic, respectively, while SHPR compares each harmonic to the one immediately next to it. Classifiers score these features for each speech segment and decide whether it sounds like a speaker with a cold; the per-segment scores are then combined to reach an overall decision.
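As a rough sketch of how such features could be computed: the study names the three features but does not spell out the formulas here, so the definitions below are assumptions inferred from the feature names, not the authors’ exact equations.

```python
import numpy as np

def harmonic_peak_features(peaks):
    """Compute three cold-detection features from the magnitudes of
    successive harmonic peaks in one voiced speech frame.

    The formulas are illustrative guesses based on the feature names
    (NHPF, NHPM, SHPR), not the paper's actual definitions."""
    peaks = np.asarray(peaks, dtype=float)
    nhpf = peaks / peaks[0]        # each peak relative to the first harmonic
    nhpm = peaks / peaks.max()     # each peak relative to the strongest harmonic
    shpr = peaks[1:] / peaks[:-1]  # each peak relative to its predecessor
    return nhpf, nhpm, shpr

# Toy magnitudes for four harmonic peaks of a single analysis frame
nhpf, nhpm, shpr = harmonic_peak_features([0.5, 1.0, 0.8, 0.4])
```

In a real pipeline these per-frame feature vectors would be fed to a classifier trained on recordings of healthy and cold-affected speakers.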

Elon Musk’s Twitter now working on a generative AI project

The mystery of what Elon Musk plans to do with Twitter may have just begun to unravel. Even as Musk tries to hide it all behind the veil of X, an Insider report has revealed that Twitter may be working on a generative artificial intelligence (AI) project, much like a ChatGPT, using its own treasure trove of data.

The news also comes at a time when Musk has been very vocal about generative AI products and recently called for a six-month moratorium on the release of new models. Interestingly, Musk’s tirade is directed at OpenAI, an organization that he co-founded and to which he donated money for AI research. Musk and OpenAI parted ways in 2018, with the latter later receiving financial backing from Microsoft.

How Twitter could deploy AI

According to Insider’s report, Twitter has purchased 10,000 graphics processing units (GPUs), which indicates plans to work on large language models (LLMs).

Interesting Engineering has previously reported how Microsoft stitched together tens of thousands of GPUs for OpenAI’s developmental work, and it now appears that the remaining engineers at Twitter will be tasked with doing the same.

To lead the task, Musk hired AI researchers from DeepMind, the AI research wing of Google’s parent company, Alphabet, and has personally approached people in the AI field, reports suggest.

Although this is speculation, Musk could task AI with improving Twitter’s search functionality, something he has publicly complained about in the past.

The other area where AI could help is in serving personalized advertisements, as the platform looks to make money in ways other than subscription fees. Generative AI could dish out targeted images and text to users on the platform.

While the project is an attempt to breathe fresh life into Twitter, Musk could also be using it to settle scores with OpenAI CEO Sam Altman. According to a Semafor report, back in 2018, Musk wanted to lead OpenAI’s research efforts and take up the CEO’s job, a move that was opposed by Altman and other co-founders.

Musk soon left OpenAI, also withdrawing his funding for the organization’s development work, a move that saw OpenAI strike up a partnership with Microsoft. The recent success of OpenAI’s ChatGPT has allegedly infuriated Musk, who also shut down the organization’s access to Twitter’s database.

With an in-house AI project, Musk could be looking to strike back at OpenAI, which has been a runaway success story so far.

Artist Uses AI To Imagine How Taj Mahal Was Built

Artist Jyo John Mulloor shared a bunch of AI-generated visuals to show what the Taj Mahal might have looked like during its construction.

The viral artificial intelligence trend has taken over social media, and artists are now using a range of AI tools to produce unique, even unimaginable results that instantly capture the internet’s attention. Mulloor’s pictures were created with the help of the AI image generator Midjourney.

“A glimpse into the past! Shah Jahan’s incredible legacy, the Taj Mahal, was captured during its construction. Grateful to have these rare photos and his permission letter to share with you all,” read the caption shared along with the photos. 

See the photos here:

The first seven pictures show several construction phases of the magnificent monument, with workers visible in the background. The initial images show the under-construction mausoleum without its signature minarets. The last picture shows the Taj Mahal as it stands today, in all its architectural brilliance.

Instagram users loved the pictures and posted a variety of comments. “Love it! And the letter.. What a touch! What an imagination. You are bringing it all alive. Love from India,” one user wrote. Another commented, “Lovely form to show your imagination.”

A third added, “Want to see Pyramid construction and It’s mystery tools used for building.” “That’s just incredible,” shared a fourth.

Recently, another artist also used Midjourney to reimagine the world’s wealthiest people as poor, and the results are stunning. Artist Gokul Pillai shared seven pictures on Instagram that show what billionaires would look like if they lived in slums. The post featured Donald Trump, Bill Gates, Mukesh Ambani, Mark Zuckerberg, Warren Buffett, Jeff Bezos, and Elon Musk. 

‘Fedha’, an AI-generated news presenter, debuts in Kuwait

Kuwaiti media outlet Kuwait News has debuted its first artificial intelligence (AI)-generated news presenter, ‘Fedha’. The outlet plans to have Fedha read online bulletins in the future. “Fedha” appeared on the Twitter account of the Kuwait News website on Saturday as an image of a woman, her light-coloured hair uncovered, wearing a black jacket and white T-shirt.

Kuwaiti News is affiliated with the Kuwait Times, founded in 1961 as the Gulf region’s first English-language daily.

“Fedha represents everyone,” Abdullah Boftain, deputy editor in chief for both outlets said.

“I’m Fedha, the first presenter in Kuwait who works with artificial intelligence at Kuwait News. What kind of news do you prefer? Let’s hear your opinions,” she said in classical Arabic.

Boftain said the move is a test of AI’s potential to offer “new and innovative content”.

In the future, Fedha could adopt the Kuwaiti accent and present news bulletins on the site’s Twitter account, which has 1.2 million followers, he said.

“Fedha is a popular, old Kuwaiti name that refers to silver, the metal. We always imagine robots to be silver and metallic in colour, so we combined the two,” Boftain said.

The presenter’s blonde hair and light-coloured eyes reflect the oil-rich country’s diverse population of Kuwaitis and expatriates, according to Boftain.

Her initial 13-second video generated a flood of reactions on social media, including from journalists.

The rapid rise of AI globally holds the promise of benefits, such as improved health care and the elimination of mundane tasks, but it has also raised fears, for example over its potential to spread disinformation and the threat it poses to certain jobs and to artistic integrity.

Kuwait ranked 158 out of 180 countries and territories in the Reporters Without Borders (RSF) 2022 Press Freedom Index.

Scientist says humans will be able to upload the dead to a computer by end of year

A computer scientist is urging the world to record their elderly parents and loved ones as he predicts consciousness could be uploaded onto a computer this year.

Dr Pratik Desai, who has founded multiple Silicon Valley AI startups, said that if people have enough video and voice recordings of their loved ones, there is a ‘100 percent chance’ of relatives ‘living with you forever.’

Desai, who has created his own ChatGPT-like system, wrote on Twitter: ‘This should be even possible by end of the year.’

Many scientists believe the rapid advancements in AI, which ChatGPT is spearheading, are poised to usher in a new golden era for technology.

However, the world’s greatest minds are split on the technology: Elon Musk and more than 1,000 tech leaders are calling for a pause, warning it could destroy humanity.

On the other side are other experts, like Bill Gates, who believe AI will improve our lives – and it seems other experts are on board with the idea it will help us live on forever.


A computer scientist believes technology to create digital humans after they die will be possible by the end of this year.

Desai is on the side of Gates, believing we can recreate our dead loved ones as avatars living in a computer. 

The process would involve digitizing videos, voice recordings, documents, and photos of the person, which are then fed to an AI system that learns everything it can about the individual.

Users can then design a specific avatar that looks and acts just like their living relative did.

The advancement of ChatGPT has also accelerated the work of one company building virtual humans.

The project called Live Forever creates a VR robot of a person with the same speech and mannerisms as the person it was tasked with replicating.

Artur Sychov, the founder of Live Forever, told Motherboard in 2022 that he expected the technology to be ready in five years, but due to recent advancements in AI, he now believes it will take only a short time.

‘We can take this data and apply AI to it and recreate you as an avatar on your land parcel or inside your NFT world, and people will be able to come and talk to you,’ Sychov told Motherboard. 

‘You will meet the person. And you would maybe for the first 10 minutes while talking to that person, you would not know that it’s actually AI. That’s the goal.’ 

Another AI company, DeepBrain AI, has created a memorial hall that lets people reunite with their dead loved ones in an immersive experience.

The service, called Rememory, uses photos, videos, and a seven-hour interview of the person while still living.

The AI-powered virtual person is designed with deep learning technologies to capture the individual’s look and voice, which is displayed on a 400-inch screen.

In 2020, a Korean television show used virtual reality to reunite a mother with her seven-year-old daughter, who died in 2016.

The show, ‘Meeting You,’ recounted the story of a family’s loss of their seven-year-old daughter Nayeon. 

The two could touch, play and hold conversations, and the little girl reassured her mother that she was no longer in pain. 

Jang Ji-sung, Nayeon’s mother, put on the Vive virtual reality (VR) headset and was transported into a garden where her daughter stood there smiling in a bright purple dress.

‘Oh my pretty, I have missed you,’ the mother can be heard saying as she strokes the digital replica of her daughter.

Desai did not provide many details about his idea of the technology, but Google engineer Ray Kurzweil is also working on a digital afterlife for humans, specifically to resurrect his father.

Kurzweil, 75, said his father passed away when Kurzweil was 22 years old, and he hopes to one day talk to him through the help of a computer.

‘I will be able to talk to this re-creation,’ he told BBC in 2012. ‘Ultimately, it will be so realistic it will be like talking to my father.’ 

Kurzweil explained he has hundreds of boxes containing documents, recordings, movies and photographs of his father, which he is digitizing.

‘A very good way to express all of this documentation would be to create an avatar that an AI would create that would be as much like my father as possible, given the information we have about him, including possibly his DNA,’ Kurzweil said.

The scientist continued to explain that his digital father would undergo a Turing Test, which is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. ‘If an entity passes the Turing test, let alone a specific person, that person is conscious,’ Kurzweil said. Along with uploading memories from the dead, Kurzweil also predicts that humans will reach immortality in just eight years.

He recently spoke with the YouTube channel Adagio, discussing the expansion in genetics, nanotechnology, and robotics, which he believes will lead to age-reversing ‘nanobots.’

These tiny robots will repair damaged cells and tissues that deteriorate as the body ages and make us immune to diseases like cancer.

Kurzweil was hired by Google in 2012 to ‘work on new projects involving machine learning and language processing,’ but he was making predictions in technological advances long before.

In 1990, he predicted the world’s best chess player would lose to a computer by 2000, and it happened in 1997 when Deep Blue beat Garry Kasparov.

Kurzweil made another startling prediction in 1999: he said that by 2023 a $1,000 laptop would have a human brain’s computing power and storage capacity.


What happens when we die is one of the world’s greatest mysteries, but scientists are working on technologies in which death is not the end. Kurzweil has said that machines are already making us more intelligent, and that connecting them to our neocortex will help people think more smartly. Contrary to the fears of some, he believes that implanting computers in our brains will improve us.

‘We’re going to get more neocortex, we’re going to be funnier, we’re going to be better at music. We’re going to be sexier’, he said.

‘We’re really going to exemplify all the things that we value in humans to a greater degree.’

Rather than a vision of the future where machines take over humanity, Kurzweil believes we will create a human-machine synthesis that will make us better. The concept of nanomachines being inserted into the human body has been a staple of science fiction for decades.

Levi’s to ‘supplement’ human models with AI models

Levi Strauss & Co. has teamed up with digital fashion studio Lalaland.ai to create customized artificial intelligence (AI)-generated models. With AI models, the brand hopes to reach consumers with a wider range of body types, ages, sizes, and skin tones. The move is also part of Levi’s sustainability efforts, as it requires fewer resources.

Amy Gershkoff Bolles, global head of digital and emerging technology strategy at Levi’s, said they are excited about the potential capabilities this may afford the brand for the consumer experience. “We see fashion and technology as both an art and a science, and we’re thrilled to be partnering with Lalaland.ai, a company with such high-quality technology that can help us continue on our journey for a more diverse and inclusive customer experience.”

Source: Lalaland.ai

Levi’s hopes to use AI-generated models to supplement human models and increase the number and diversity of models for the brand. With AI-generated images, Levi’s can enable customers to see their products on more models that look like themselves, creating a more personal and inclusive shopping experience.

But Levi’s also pointed out that artificial intelligence will never fully replace human models. To critics, however, it is the first step in a dystopian slow walk toward automating the industry. Similarly, L’Oreal’s Maybelline New York introduced its own digital avatar, named May, as part of a mascara release last month. May is now the face of the beauty brand’s new Falsies Surreal Extension Mascara.

Is Your Job Exposed to ChatGPT?

Accountants and writers are among the jobs that could be most affected by AI tools. Most jobs will be changed in some form by new AI tools, according to a study by researchers at the University of Pennsylvania and OpenAI, the company that makes the popular AI tool ChatGPT. But for nearly 20% of jobs, at least half of their tasks could be completed much faster with ChatGPT and similar tools. Here are some of those jobs.

The researchers found that at least half of accounting and auditing tasks could be completed much faster with the technology. Research has also found that state-of-the-art GPTs—generative pre-trained transformers—excel in tasks such as translation, classification, creative writing and generating computer code.

Several scenarios in the study found that 100% of the tasks performed by mathematicians could be done more quickly with the technology.

Information-processing roles—including public relations specialists and court reporters—are highly exposed, the study found.

Another highly exposed job is blockchain engineering, which involves working on the technology underpinning bitcoin and other cryptocurrencies.

The jobs that will be least affected by AI tools include short-order cooks, motorcycle mechanics and oil-and-gas laborers.

The researchers didn’t predict whether jobs will be lost or whose jobs will be lost, said Matt Beane, an assistant professor at the University of California, Santa Barbara, who studies the impact of technology on the labor market and wasn’t involved in the study. The real challenge, he said, is for companies, schools and policy makers to help people adapt. “That’s a multi-trillion dollar problem,” he said.

How Does OpenAI’s GPT-4 Differ from Its Predecessor, GPT-3.5?

With ChatGPT’s ongoing success and popularity, OpenAI has now created GPT-4, the highly anticipated successor to GPT-3.5. GPT-4 is a large multimodal model capable of accepting text and image inputs and generating text outputs.

With its unmatched language-generating capabilities, GPT-3.5 has raised the bar for natural language processing (NLP). OpenAI’s GPT-4 is expected to push the boundaries of NLP even further and enable the development of more advanced and sophisticated language-based applications.

OpenAI’s technical report states that GPT-4 has demonstrated human-level proficiency in academic and professional environments, including achieving scores in the top 10% of test takers on the bar exam. While GPT-4 continues to utilize the Transformer architecture, it surpasses its predecessors by exhibiting enhanced capabilities in comprehending the subtleties of language, such as its context, mood, and significance.

One of the most impressive feats of GPT-4 is its ability to understand and follow user intent. This may have significant implications in many sectors, including finance, healthcare, education, etc. Additionally, its advanced NLP capabilities could lead to the development of more efficient and accurate virtual assistants, such as chatbots.

In this article, we will focus on understanding the main differences between GPT-4 and GPT-3.5 in terms of performance and training. We will also take a look at GPT-4’s release and which industries are most likely to benefit from it.

GPT 3.5: What are its capabilities?

In 2022, OpenAI released ChatGPT, based on the GPT-3.5 series. This is a series of models trained on a blend of text and code, with training data dating to before September 2021.

GPT stands for generative pre-trained transformer, a language model that uses neural networks to generate human-like text. The largest model in the GPT-3.5 series has 175 billion parameters (the learned weights that are adjusted during training), which give the model its high accuracy compared to its predecessors. ChatGPT is capable of language translation, writing various types of creative content, and answering user questions in an informative way.
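To make the terminology concrete: a parameter is a learned weight inside the network, not a piece of training data. A minimal sketch (with illustrative layer sizes, not GPT’s actual dimensions) shows how quickly weights add up in even a single fully connected block:

```python
def linear_layer_params(n_in, n_out):
    # A fully connected layer has one weight per input-output pair,
    # plus one bias per output unit.
    return n_in * n_out + n_out

# A toy transformer-style feed-forward block: 768 -> 3072 -> 768
total = linear_layer_params(768, 3072) + linear_layer_params(3072, 768)
print(total)  # 4722432 learned parameters in this one block alone
```

Stacking dozens of such blocks, plus attention layers and embeddings, is how models reach counts in the billions.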

OpenAI’s ChatGPT is built on the GPT-3.5 series (OpenAI/Wikimedia Commons)

The output from ChatGPT is of very high quality, often making it difficult to distinguish between human and machine-generated text. Among other things, ChatGPT has been used to generate news articles, write poems, and create chatbots that can hold conversations with humans. Since its launch, ChatGPT has surprised users with its powerful technology with a variety of applications in many fields.

A brief overview of GPT-4 and its capabilities

Similar to its predecessors, GPT-4 is a language model capable of generating human-like responses. The exact architecture of GPT-4 and the amount of training data used with the model have not been revealed by OpenAI. According to their report, GPT-4 can accept input in the form of images and text and provide responses accordingly. 

The main aim behind their development of this version was to improve previous GPT models’ responses in complex scenarios and fine-tune responses based on human feedback. This is considered a significant improvement, allowing the model to align more closely with human intent. 

The model was used in various professional, academic, and social scenarios to test its capabilities. OpenAI found that GPT-4 performed excellently, at a level comparable to humans. In particular, the model scored in the top 10% of test takers on the Uniform Bar Examination and did well on other tests, such as the SAT. Scientists and developers from OpenAI believe that the model’s excellent performance relies heavily on the pre-training process.

GPT-4 performs as well as humans in exams and assessments (KF/Wikimedia Commons)

Undeniably, GPT-4’s most exciting feature is its ability to accept input in the form of images or visuals. These can be documents containing text and photos, diagrams, or screenshots. The model has also shown the ability to identify humor in visual inputs, meaning it can not only generate humorous text but also recognize and explain jokes in images.


Performance comparison between GPT-3.5 and GPT-4

GPT-4 has shown improved performance in many situations compared to GPT-3.5. According to early reports by users and comments by OpenAI’s co-founder, GPT-4 is better than GPT-3.5 at creative writing and is capable of generating poems and other creative text. Additionally, GPT-4 can correct itself when it makes a mistake and produce a revised response, an ability GPT-3.5 lacks.

Another area where GPT-4 outperforms GPT-3.5 and other state-of-the-art models is in exam-taking. GPT-4 is acing exams and tests, even the challenging ones like the bar exam. This is an exciting development and may be used as a teaching (or cheating) aid in schools.

GPT-4 is also showing promise on the massive multitask language understanding (MMLU) benchmark, which measures the knowledge a model acquires during pretraining. GPT-4 has demonstrated excellent performance in a total of 27 languages, including English.

GPT-4 outperforms GPT-3.5 in exams and tests (OpenAI, 2023)

GPT-4’s improved factual accuracy is a significant development, meaning users can be more confident that the information they receive from it is correct. This is especially important in areas such as learning, technology, writing, history, math, science, recommendations, code, and business.

GPT-4’s improved factual accuracy is likely due to several factors, including its larger dataset, more sophisticated training methods, and its ability to learn from feedback, although this cannot be said with certainty since the training methods have not been disclosed. It is likely that with continued development, GPT-4’s factual accuracy will improve even further.

Performance of GPT-4 on MMLU (OpenAI, 2023)

Potential applications of GPT-4

Due to its multimodal interface, GPT-4 has the potential to revolutionize many industries, including customer service, education, and entertainment. It can also enhance existing technologies, such as chatbots, and drive further advances in machine learning (ML) research.

Customer service

GPT-4 can be used to automate customer service tasks, such as answering questions, resolving issues, and providing support. This would allow human customer support representatives to concentrate on more complicated problems that would require more time and effort.

Education

The model can be used to create educational content, such as interactive lessons, practice exercises, and assessments. By allowing students to interact with the technology in real time, teachers can get real-time feedback on how well students are understanding the material.

Entertainment

It can also be used to create entertainment content, such as stories, poems, and music. For example, it can be used to generate realistic and nuanced dialogues for movies, TV shows, and video games. This can help to make these products more immersive and engaging for users, while also freeing up the creator’s time to focus on more technical aspects of their work.

GPT-4’s recognition of humor in visual input (OpenAI, 2023)

Improving chatbots

GPT-4 can be used to improve existing chatbots by making them more human-like and engaging. Chatbots powered by GPT-4 can hold conversations that are more natural and nuanced, and they can provide more helpful and informative answers to questions.

Further advancements in machine learning research 

Finally, GPT-4 can be used to further machine learning (ML) research. By studying how GPT-4 generates responses in different forms, researchers can develop new and innovative ML algorithms that address the shortcomings of existing models.

These are just a few examples of the potential applications of GPT-4. As GPT-4 continues to develop, we will likely see even more innovative and creative uses for this technology.

Limitations of GPT-4

Similar to other language models, GPT-4 also has certain limitations. Some of these limitations include bias, accuracy, and safety. Let’s take a look at each of them below.

Bias: Since GPT-4 is trained on a large dataset of text and code, the model will inherit any existing biases in the dataset. 

Accuracy: Just like its predecessors, GPT-4 is capable of making factual mistakes and providing inaccurate or misleading information. 

Safety: GPT-4-generated material has the potential to be harmful or degrading, so it is critical to be conscious of this danger when using the model. Being aware of these limitations helps users take appropriate actions to mitigate the risks and use GPT-4 to its full potential.

Chrome Extension Lets You Use ChatGPT On Any Web Page

Generative AI chatbots, such as OpenAI’s ChatGPT, are the biggest talking point for everyone remotely connected to the tech industry. With the update to the GPT-4 generative model, the interactive chatbot has become significantly more refined and can also process images as input cues to generate responses. While AI experts claim that tools like ChatGPT will automate the mundane aspects of work, one of the most tedious parts of using ChatGPT is having to enter prompts manually in a separate browser window.

Thankfully, numerous Chrome extensions solve this by allowing you to run the chatbot in any tab. Not just that, these extensions can automatically pick up probable cues and provide contextual responses based on the web page’s contents. One example is Merlin, a GPT-powered chatbot that operates as a Chrome extension and provides actions like generating summaries of search results, transcribing YouTube videos, drafting write-ups, and even generating responses using the GPT-4 model, which is currently only available to ChatGPT Plus and Bing AI chatbot users.

How to get Merlin Chrome extension for ChatGPT


To get started, you need to install the Merlin Chrome extension in your web browser. Because it’s a Chrome extension, it will only work with Chromium-based web browsers, including Google Chrome, Vivaldi, Brave, Opera, and similar. It does not currently work on Microsoft Edge despite it also being based on Chromium.

Head over to Merlin’s dedicated page on the Chrome Web Store from your browser and click “Add to Chrome.” A dialog box will appear, warning you that Merlin can read your data on all websites. While that may concern some users, Merlin’s Privacy Policy states that it does not collect personal information such as your IP address, the contents of the web pages you visit, or information about your device. It may, however, collect the prompts you enter and any text you select on a website so the chatbot can provide relevant information.

Things Merlin can do


The most evident benefit of Merlin is that you can prompt it to appear on any web page (Google Docs is not supported at the moment). You can invoke the extension by pressing Alt+M on a Windows computer or Command+M on a Mac. You can also change this shortcut by entering chrome://extensions/shortcuts in the browser’s address bar.

You can also select a portion of text, right-click, and choose the option that says “give Merlin context.” Alternatively, when you select any text on a web page, a tiny icon shows up at the bottom-right corner. Clicking it makes Merlin pop up, showing the selected text. In this dialog box, you can enter your prompt in the text field at the top of the window, while the results are generated below. Here you can also choose whether responses are generated using GPT-3 or GPT-4.

Apple Files Patent for Newer Version Of The iPod

Apple has recently filed a patent with the U.S. Patent Office for a device that is very reminiscent of the iPod. The device is meant to do everything a smartphone does, without annoying calls and texts interfering with the user experience.

It can hold music, videos, and books. The main notable difference from the old iPod seems to be that it can host wireless earbuds, which is definitely an improvement over the older model and a feature worth having.

However, there is thus far no news on whether the device will come to market. For now, it is just a patent, and many patents never see the light of day.

Apple could indeed be trying to stop the competition from ever inventing anything that could cut into the company’s market share.

In July of 2017, Apple officially killed off the iPod Shuffle and Nano. The company stated at the time: “Today, we are simplifying our iPod lineup with two models of iPod touch now with double the capacity starting at just $199 and we are discontinuing the iPod shuffle and iPod nano.”

A top-secret iPod mission

However, in August of 2020, news surfaced of the firm working with the U.S. government to build a top-secret iPod. A former Apple software engineer, who worked at the company for 18 years, shared the story of how Apple helped a U.S. Department of Energy contractor modify a 5th-generation iPod to secretly record and store data.

The events took place in 2005, when the engineer was approached by the director of iPod software and asked to “help two engineers from the U.S. Department of Energy build a special iPod.” The secret iPod was to be developed right under Steve Jobs’ nose, with only four individuals aware of the project at the time.