
WhatsApp Enhances Android User Security with Device-Based Authentication

In its ongoing commitment to bolster user privacy and security, WhatsApp has introduced a new security feature for Android users. The update lets them unlock and access their accounts through a variety of device-based authentication methods. It follows an extensive testing phase in WhatsApp’s beta channel and is now rolling out to a wider Android audience, giving users the flexibility to opt for their preferred method of account protection.

With this latest update, Android users can choose between two distinct authentication methods: two-factor authentication (2FA) and device-based authentication. The latter offers a choice of facial recognition, fingerprint scanning, or a personal PIN, catering to users who want swifter access to their accounts without compromising the security of their conversations and data.

It is important to note that, as of the current release, there has been no confirmation regarding the availability of this feature for iOS users. This means that iPhone users may need to wait a bit longer to experience the advantages of this enhanced security feature. Nonetheless, WhatsApp’s commitment to user privacy and security is evident in their efforts to continually refine their platform, aligning with the evolving landscape of digital privacy and data protection.

This latest development underscores WhatsApp’s dedication to addressing growing concerns around digital privacy and an ever-increasing tide of cyber threats. With the introduction of device-based authentication, WhatsApp offers a customizable security solution: Android users can now select the authentication method that best suits their preferences and security needs, making the platform more secure for their communications.

Samsung Bans ChatGPT-like AI After Security Breach, Warns of Employee Termination

Samsung Electronics Co. has prohibited its employees from utilizing generative AI tools, including ChatGPT, Google Bard, and Bing AI, among others. 

Citing worries about the security of sensitive code, the tech giant informed personnel at one of its largest divisions of the new policy on Monday, according to media outlets with access to the company’s internal memo.

“Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung wrote to its staff in the memo. 

“While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”

Because data supplied to these AI platforms is stored on external servers, it is difficult to recover or delete.

The South Korean company found that employees had uploaded sensitive code to the platforms, raising fears that the data may be made available to other users.

Samsung engineers unintentionally uploaded internal source code to ChatGPT earlier in April. A company official confirmed that a message prohibiting the use of generative AI services had been sent last week, according to a Bloomberg report.

“HQ is reviewing security measures to create a secure environment for safely using generative AI to enhance employees’ productivity and efficiency,” further read the memo. 

“However, until these measures are prepared, we are temporarily restricting the use of generative AI.” 

Samsung isn’t the only one banning generative AI 

A poll on AI tools that Samsung conducted last month found that 65% of respondents considered such services a security risk.

The new regulations forbid the use of generative AI systems on the company’s internal networks, as well as on the company’s laptops, tablets, and phones. 

Devices the company sells to consumers, such as Android smartphones and Windows laptops, are unaffected. Samsung warned that violating the new rules could result in termination.

Samsung is not alone in scrutinizing generative AI. Some Wall Street financial institutions, including JPMorgan Chase & Co., Bank of America Corp., and Citigroup Inc., banned or restricted its use in February.

Meanwhile, Samsung is developing its own internal AI tools for software development, translation, and document summarization. The business is also attempting to prevent critical company data from being uploaded to outside services. 

Earlier, OpenAI added an “incognito” mode to ChatGPT that lets users prevent their chats from being used to train AI models, addressing privacy concerns.

Samsung warns employees of termination 

Samsung HQ is reviewing its security procedures to establish a safe environment in which generative AI can be used to increase employee productivity and efficiency. Until those measures are ready, the company is temporarily limiting the use of generative AI, the Bloomberg report noted.

“We ask that you diligently adhere to our security guideline, and failure to do so may result in a breach or compromise of company information resulting in disciplinary action up to and including termination of employment,” the memo warned employees.

WhatsApp will soon allow you to lock separate Chats

WhatsApp frequently works on new features, rolling out new tools or privacy updates every other week. So it comes as no surprise that the instant-messaging platform is now working on a privacy feature that will let users lock individual chats within the app itself.

As per a report by WaBetaInfo, WhatsApp will let users lock any specific chat on the platform using a fingerprint, passcode, or face lock.

Apart from text messages, the feature will also protect media files, audio, and documents shared between the two parties in a chat.

If someone tries to open a locked chat and fails the authentication process several times, they will need to clear the chat in order to open it, the report suggests.
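
The report describes a retry limit: after several failed unlock attempts, a locked chat can only be reopened by clearing it. A minimal sketch of that kind of lockout logic is below; every name and the attempt limit here are hypothetical, not WhatsApp’s actual implementation.

```python
# Hypothetical sketch of a retry-limited chat lock; not WhatsApp's real code.

class LockedChat:
    MAX_ATTEMPTS = 3  # assumed limit; the report only says "several times"

    def __init__(self, pin: str):
        self._pin = pin
        self._failed = 0
        self._messages = ["hello", "see you at 5"]

    def unlock(self, attempt: str) -> bool:
        """Return True on a correct PIN; count failures toward the lockout."""
        if self._failed >= self.MAX_ATTEMPTS:
            # Too many failures: the chat must be cleared before reopening.
            raise PermissionError("chat must be cleared before reopening")
        if attempt == self._pin:
            self._failed = 0
            return True
        self._failed += 1
        return False

    def clear(self) -> None:
        """Clearing the chat wipes its contents and resets the lockout."""
        self._messages = []
        self._failed = 0
```

In this sketch, clearing is the only path back in once the limit is hit, which mirrors the behavior the report attributes to the feature.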

For now, the Lock Chat feature is being tested in the Android beta. Once it starts rolling out to the public, it will also be launched for iOS users.

The Lock Chat feature will reportedly move locked chats into a separate folder altogether, making them easy to spot while keeping them out of plain sight.

Privacy Alert: ChatGPT Exposes Private Conversations

OpenAI CEO expresses regret, claims error has been fixed.

Artificial Intelligence (AI) is transforming our lives and work, but recent developments have raised concerns about the privacy and security of user data when using AI-powered tools.

One of these concerns is the ChatGPT glitch that allowed some users to see the titles of other users’ conversations.

ChatGPT glitch

ChatGPT is an AI chatbot developed by OpenAI that allows users to draft messages, write songs, and code. Each conversation is stored in the user’s chat history bar.

However, users began seeing conversations they didn’t have with the chatbot in their chat history as early as Monday. Users shared these on social media sites, including Reddit and Twitter.

Company Response

OpenAI CEO Sam Altman expressed regret and confirmed that the “significant” error had been fixed. The company also briefly disabled the chatbot to address the issue. OpenAI claims that users couldn’t access the actual chats. Despite this, many users are still worried about their privacy on the platform.

Privacy Concerns

The glitch suggests that OpenAI has access to user chats, which raises questions about how the company uses this information.

The company’s privacy policy states that user data, such as prompts and responses, may be used to continue training the model.

However, that data is only used after personally identifiable information has been removed. Even so, users fear that their private information could be exposed through the tool.

AI Tools and Privacy

The ChatGPT glitch comes as Google and Microsoft compete for control of the burgeoning market for AI tools. Concerns have been raised that missteps like these could be harmful or have unintended consequences.

There needs to be a greater focus on privacy and security concerns as AI becomes more prevalent in our lives. Companies must be transparent about how they collect, store, and use user data and must work quickly to address any issues.