Samsung Bans ChatGPT-like AI After Security Breach, Warns of Employee Termination

Samsung Electronics Co. has prohibited its employees from using generative AI tools such as ChatGPT, Google Bard, and Bing AI. 

Citing concerns about the security of critical code, the tech giant notified staff at one of its largest divisions of the new policy on Monday, according to media outlets with access to the company’s internal memo. 

“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung wrote to its staff in the memo. 

“While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Because the data supplied to these AI platforms is stored on external servers, it is difficult to retrieve and delete.

The South Korean company found that employees had uploaded sensitive code to the platforms, raising fears that the data may be made available to other users.

Samsung engineers unintentionally uploaded internal source code to ChatGPT earlier in April. A company official confirmed that a message prohibiting the use of generative AI services had been sent out last week, according to a Bloomberg report.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” further read the memo. 

“However, until these measures are prepared, we are temporarily restricting the use of generative AI.” 

Samsung isn’t the only one banning generative AI 

In a survey on AI tools that Samsung conducted last month, 65% of respondents said that using such services could pose a security risk.

The new regulations forbid the use of generative AI systems on the company’s internal networks, as well as on the company’s laptops, tablets, and phones. 

The company’s consumer devices, such as its Windows laptops and Android smartphones, are unaffected. Samsung warned that violating the new rules could result in termination.

Samsung is far from the only organization wary of generative AI. Several Wall Street financial institutions, including JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., banned or restricted its use in February.

Meanwhile, Samsung is developing its own internal AI tools for software development, translation, and document summarization. The business is also attempting to prevent critical company data from being uploaded to outside services. 

OpenAI earlier added an “incognito” mode to ChatGPT that lets users prevent their chats from being used to train AI models, addressing privacy concerns.

Samsung warns employees of termination 

Samsung HQ is reviewing its security procedures in order to establish a safe environment in which generative AI can be used to increase employee productivity and efficiency. 

Until those measures are ready, however, the company is temporarily limiting the use of generative AI, the Bloomberg report noted. 

“We ask that you diligently adhere to our security guideline, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo warned employees.

Google Says Goodbye to Passwords With Passkeys Launch

In a major announcement, Google has revealed that it now supports passkeys across all its platforms. With this update, users can enjoy a passwordless sign-in experience on websites and apps using a fingerprint, facial recognition, or a local PIN, without entering a password or completing 2-step verification (2SV).

To set up a passkey, users can log in to a website or app using their existing username and password, and then opt to create a passkey that can be stored in a solution like Google Password Manager for future logins.
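For readers curious what that flow looks like under the hood, passkeys are built on the WebAuthn web standard. The sketch below shows a minimal browser-side registration call; the site name, user details, and challenge handling are illustrative placeholders, and a real deployment would generate the challenge on its server and verify the returned credential there.

```typescript
// Minimal sketch of passkey registration in the browser via WebAuthn.
// All names and values are illustrative placeholders; in practice the
// challenge comes from the site's server, which also verifies the result.

async function createPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialCreationOptions = {
    // Random challenge; real servers generate and later verify this.
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rp: { name: "Example Site", id: "example.com" },
    user: {
      id: new TextEncoder().encode("user-1234"), // opaque, stable user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    // COSE algorithm IDs: -7 = ES256, -257 = RS256.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    authenticatorSelection: {
      residentKey: "required",      // discoverable credential, i.e. a passkey
      userVerification: "required", // fingerprint, face, or device PIN
    },
  };
  // Prompts the platform authenticator (e.g. fingerprint or face unlock).
  return navigator.credentials.create({ publicKey });
}
```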

Compared to traditional passwords, passkeys are much more secure and resistant to credential theft, phishing, and social engineering scams. This makes them a safer and more convenient alternative, especially considering how even the most tech-savvy users can be fooled by phishing attempts and other scams.

In their official blog post, Google software engineers Arnar Birgisson and Diana K. Smetters noted that “passkeys are a more convenient and safer alternative to passwords.” With broader support for passwordless sign-in options, Google accounts are now more resistant to identity-based attacks, offering users greater peace of mind and protection online.

Password-based security insufficient for the modern enterprise

The release comes as the weaknesses of password-based security are becoming increasingly apparent, with hackers leaking more than 721 million passwords online last year. Vendors including Microsoft and Apple have committed to developing a common passwordless sign-in standard. 

While existing technologies like multi-factor authentication (MFA) have helped enhance online account security, they haven’t fully addressed the risk of credential theft: SMS-based verification is susceptible to SIM-swap attacks that hijack the verification process, and additional authentication steps add friction for end users. 

Passwordless login options like passkeys, which enable users to log in with biometric data, provide a user-friendly alternative that decreases the likelihood of a successful account takeover attempt. 
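To complete the picture, signing in with an existing passkey might look like the sketch below. Leaving out the allowCredentials list lets the browser offer any discoverable passkey it has stored for the site, for example one synced through Google Password Manager; the challenge is again a placeholder that a real server would issue and then verify against the signed assertion.

```typescript
// Minimal sketch of signing in with a previously created passkey.
// The challenge is a placeholder; a real server issues it and verifies
// the signed assertion that comes back.

async function signInWithPasskey(): Promise<Credential | null> {
  const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)),
    rpId: "example.com",
    userVerification: "required", // triggers the biometric or PIN check
    // No allowCredentials list: the browser offers any discoverable
    // passkey it holds for this site.
  };
  return navigator.credentials.get({ publicKey });
}
```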

Cybersecurity Researchers Gain Control of ESA Nanosatellite in an Ethical Hacking Exercise

A team of four cybersecurity researchers from the multinational technology company Thales successfully hacked into a nanosatellite belonging to the European Space Agency (ESA). The attempt was carried out as part of an ESA cybersecurity exercise held during its CYSAT conference.

With countries opening up space to private players, the number of satellites orbiting the planet is surging. A hacked satellite is a serious concern for governments around the world, since it could be used to transmit sensitive information or even be weaponized. The ESA introduced the Hack CYSAT challenge, the first of its kind in the world, to understand the potential impact of a real cyberattack.

How hackers gained control of a nanosatellite

The satellite made available for this challenge was the OPS-SAT demonstration nanosatellite that was launched in 2019. According to a press release from Thales, the team of cybersecurity researchers accessed the satellite’s onboard system and “used standard access rights to gain control of its application environment.”

The intrusion allowed the hackers to access the satellite’s global positioning system and attitude control system, as well as its onboard camera. The researchers also exploited several vulnerabilities in the satellite, which allowed them to introduce malicious code into its systems.


Doing so let the researchers compromise the data the satellite was sending back to Earth, notably by modifying the images captured by the onboard camera. The hackers could also mask selected geographical areas in the satellite imagery to simulate the hiding of activities there.

The ESA remained in control of the satellite during the test and also returned it to normal operation later on, so there isn’t a nanosatellite spiraling out of control in orbit as of now.

“This unprecedented exercise was a chance to raise awareness of potential flaws and vulnerabilities so that they can be remediated more effectively, and to adapt current and future solutions to improve the cyber resilience of satellites and space programs in general, including both ground segments and orbital systems,” said Pierre-Yves Jolivet, VP Cyber Solutions at Thales in a press release.

While the vulnerabilities of the ESA satellite are worrying, those in commercial satellites are a greater cause for concern. Last year, Interesting Engineering reported how a hacker built a $25 tool to hack into SpaceX’s Starlink system, which has a constellation of nearly 3,600 satellites in low-Earth orbit.

A Bloomberg report last month stated that Russia managed to hack into several mainstream satellite internet systems in February last year. Around the same time, the hacker group Anonymous claimed it had hacked into Russian spy satellites in response to the invasion of Ukraine, a charge Russia denied.

Google Accidentally Sends Users Free Money

Pay It Forward

If you use Google Pay, you might be in luck. Users of the payment service noticed that, for no discernible reason, they were receiving cash deposits of up to $1,000 in their app accounts. There may be no such thing as a free lunch, but, hey, here’s some free money.

The whole fiasco kicked off on Tuesday, when confused Google Payers took to social media wondering why they had received the influx of cash.

“Uhhh, Google Pay seems to just be randomly giving users free money right now,” tweeted Mishaal Rahman, a tech journalist, attaching a screenshot of the errant couple of bucks he received.

In a thread, befuddled Redditors also joined in on the fun, comparing the dinero they received or wondering how to cash in on the free goods.

“I just got almost $100 in six different cashback rewards for ‘dogfooding the Google Pay remittance experience,’” wrote one user. “What does this mean?”

Rewards? For What?

What it means is that Google Pay screwed up its “reward” program, according to Ars Technica. It’s a lot like any other rewards program, where you can earn a discount or the odd dollar or two for regularly using the service.

Except in this case, these cash rewards were meant to be doled out to employees for “dogfooding,” which, in IT slang, is when developers use their own product (like an app) for a while to test it, usually before it’s released.

For one reason or another, these compensatory payments got sent out to a bunch of random Google Pay users. Google, in an email it sent to the surprised recipients days later, blamed it on an “error.”

“You received this email because an unintended cash credit was deposited to your Google Pay account,” the company wrote, as quoted by Ars. “The issue has since been resolved and where possible, the credit has been reversed.”

Presumably, anyone who left the money sitting in their app accounts had the credits reversed. But if you were hasty or eager enough to cash out the credits right away, Google says, “well played” — you earned it, buddy (for some cosmic reason).

Apple Issues Urgent Warning to iPhone Users

Apple has warned millions of iPhone users about a pop-up notification that appears when liquid is detected in the device’s charging port. Ignoring the notification can cause the pins on the Lightning port or the cable to corrode, resulting in permanent damage or connectivity issues.

You’ll see one of two notifications, each marked with a yellow warning triangle and a blue water drop.

The first message reads “Charging Not Available,” while the second reads “Liquid Detected in Lightning Connector.” Except in an emergency, it is critical not to ignore either notification.

Apple has some suggestions for drying your iPhone, such as tapping it lightly with the Lightning connector facing down to remove excess liquid, placing it in a dry area with adequate airflow, and waiting about 30 minutes before charging it again.

If the notification appears again, liquid is still present; leave the iPhone in a dry area with some airflow for up to a day before charging it or connecting a Lightning accessory.

Apple warns against using external heat sources or compressed air to dry out the iPhone and discourages inserting foreign objects into the Lightning port, such as cotton swabs or paper towels.

It is also advised not to put the iPhone in a bag of rice because this can cause damage to the device.

Clearview AI Has Scraped More Than 30 Billion Photos from Social Media and Given Them to Cops

Clearview AI, a controversial facial recognition company, recently announced that it has scraped more than 30 billion photos from social media platforms. Despite facing multiple setbacks and bans from cities such as Portland, San Francisco, and Seattle, Clearview AI remains undeterred and has continued to grow its database significantly over the last year.

While Clearview AI is no longer able to provide its services to private businesses, it is still being used by more than 2,400 law enforcement agencies in the United States. The company’s CEO claims that police have used Clearview AI’s tools over a million times, and its database of scraped social media images now tops 30 billion.

The company’s AI-powered technology can recognize millions of faces thanks to images uploaded to social media. However, this technology has been met with widespread criticism due to concerns over privacy and the potential for misuse. Around 17 cities have banned the use of Clearview AI, but law enforcement agencies seem more than happy to use the platform.

The Miami Police Department recently confirmed that it uses Clearview AI regularly, which is a rare admission by law enforcement. The fact that law enforcement agencies are willing to use this technology despite the controversy surrounding it raises concerns about the impact it could have on civil liberties and human rights.

While facial recognition technology can be useful in certain situations, such as identifying criminals or finding missing persons, its potential for misuse is significant. Clearview AI’s massive database of scraped social media images is a prime example of how technology can be used to infringe on privacy rights. As this technology continues to evolve, it is important that we have a discussion about its use and potential impact on society.

Hacking Group Medusa Leaks Data From a Public University in Islamabad

Hackers who attacked the Institute of Space Technology and demanded a $500,000 ransom earlier this month have now leaked the public university’s data.

Located in Islamabad, the Institute of Space Technology was recently targeted by the hacking group Medusa, which got hold of the university’s data and demanded a $500,000 ransom for it. The data included passports, pay slips, analysis details, and other sensitive information about the university.

Medusa has now reportedly leaked all of the university’s data on a public Telegram channel, after several days of demanding that the university pay the ransom.

The Institute of Space Technology has been silent about the whole matter and has not given any public statements about the hacking and data ransoms. The university’s website is also currently facing issues.

Soon after leaking the data on the Telegram channel, Medusa updated its blog to announce that it had leaked the data stolen from the Institute of Space Technology. According to various sources, the stolen data has been uploaded in multiple downloadable files, each around 3.89 GB in size.

It’s still unclear whether Medusa has released all the information it stole from the university or is still holding some of it back in hopes of collecting a ransom.

IST has yet to issue a statement or respond to questions about the hack; the university has remained silent toward both Medusa and the media.

Parts of the Source Code that Runs Twitter Leaked Online

Some parts of Twitter’s source code — the fundamental computer code on which the social network runs — were leaked online, the social media company said in a legal filing on Sunday that was first reported by The New York Times.

According to the legal document, filed with the U.S. District Court of the Northern District of California, Twitter had asked GitHub, an internet hosting service for software development, to take down the code where it was posted. The platform complied and said the content had been disabled, according to the filing. Twitter also asked the court to identify the alleged infringer or infringers who posted Twitter’s source code on systems operated by GitHub without Twitter’s authorization.

Twitter, based in San Francisco, noted in the filing that the postings infringe copyrights held by Twitter.

The leak creates more challenges for billionaire Elon Musk, who bought Twitter last October for $44 billion and took the company private. Since then, it has been engulfed in chaos, with massive layoffs and advertisers fleeing.

Meanwhile, the Federal Trade Commission is probing Musk’s mass layoffs at Twitter and trying to obtain his internal communications as part of ongoing oversight into the social media company’s privacy and cybersecurity practices, according to documents described in a congressional report.

GPT-4 Can’t Stop Helping Hackers Make Cybercriminal Tools

OpenAI released the latest version of its machine learning software, GPT-4, to great fanfare this week. One of the features the company highlighted was that the new version was supposed to have rules protecting it from cybercriminal use. Within a matter of days, though, researchers say they have tricked it into making malware and helping them craft phishing emails, just as they had done with the previous iteration of OpenAI’s software, ChatGPT. On the bright side, they were also able to use the software to patch holes in cyber defenses.

Researchers from cybersecurity firm Check Point showed Forbes how they got around OpenAI’s blocks on malware development simply by removing the word “malware” from a request. GPT-4 then helped them create software that collected PDF files and sent them to a remote server. It went further, advising the researchers on how to make the software run on a Windows 10 PC and shrink its file size so it would run more quickly and stand a lower chance of being spotted by security software.

To have GPT-4 help craft phishing emails, the researchers took two approaches. In the first, they used GPT-3.5, which didn’t block requests to craft malicious messages, to write a phishing email impersonating a legitimate bank. They then requested GPT-4, which had initially refused to create an original phishing message, to improve the language. In the second, they asked for advice on creating a phishing awareness campaign for a business and requested a template for a fake phishing email, which the tool duly provided.

“GPT-4 can empower bad actors, even non-technical ones, with the tools to speed up and validate their activity,” the Check Point researchers noted in their report, handed to Forbes ahead of publication. “What we’re seeing is that GPT-4 can serve both good and bad actors. Good actors can use GPT-4 to craft and stitch code that is useful to society; but simultaneously, bad actors can use this AI technology for rapid execution of cybercrime.”
