Orca Security and Google Cloud Expand Partnership for Enterprise Cloud Security

Orca Security, a leading agentless cloud security company, has announced an expanded partnership with Google Cloud, with the aim of strengthening the security of cloud workloads, data, and users. By integrating the Orca Cloud Security platform with key Google security products like Google Chronicle, Security Command Center, and VirusTotal, the collaboration seeks to provide comprehensive security solutions for multi-cloud development and runtime environments.

Orca Security takes pride in being the first third-party security solution to integrate the VirusTotal API v3, which was released earlier this year. This partnership represents a significant advancement in cloud security, as it equips organizations with essential tools to enhance visibility and achieve robust security in their cloud environments.

The integration with Google Chronicle, Security Command Center, and VirusTotal offers numerous advantages to Orca Security’s customers. By leveraging Google Cloud’s security services through Chronicle and Security Command Center, customers can consolidate cloud security telemetry alongside their endpoint data, enhancing the security offering available to Google’s customers.

With regards to VirusTotal, Orca Security is strengthening its malware capabilities by incorporating the platform’s robust data. This integration ensures broader coverage and deeper telemetry for malware data, thereby enhancing overall enterprise security.

Orca Security utilizes the latest Google Cloud API updates to introduce advanced features and capabilities, going beyond the identification of security risks and prevention of attacks such as denial-of-service and ransomware. The platform can also uncover idle, paused, and stopped workloads, as well as orphaned applications and endpoints that require consolidation or decommissioning.

Avi Shua, the CIO of Orca Security, emphasized the significance of consolidating an organization’s cloud insights into a unified data model. This approach enables security teams to gain context and prioritize risks for their cloud-native applications.

The platform now offers an Attack Path analysis feature that consolidates multiple individual risks into an interactive dashboard. This lets security teams see how a workload vulnerability combines with other risks, such as overprivileged users or exposed storage buckets containing sensitive personally identifiable information (PII). By understanding this chain of weaknesses, organizations can better assess the risks they face.

Orca’s malware detection capabilities, utilizing hash-based and heuristic approaches, provide confidence in findings. The integration with VirusTotal enables analysts and incident response teams to quickly access additional intelligence on the identified malware. This helps them understand the nature of the suspected malware and its potential connection to a larger threat.
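The hash-based side of this workflow is straightforward to picture: compute a file’s digest, then ask VirusTotal whether that hash has already been analyzed. The sketch below is illustrative only (the helper names are ours, and a real Orca integration would run inside its own pipeline); it uses the public VirusTotal API v3 file-report endpoint.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of raw file bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def virustotal_lookup(file_hash: str, api_key: str) -> dict:
    """Fetch an existing VirusTotal analysis for a file hash (API v3).

    Requires the third-party `requests` package and a valid API key.
    """
    import requests
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{file_hash}",
        headers={"x-apikey": api_key},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

An analyst would hash a suspicious binary with `sha256_hex` and pass the result to `virustotal_lookup` to pull detection verdicts and related threat context in one call.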

Moving forward, Orca Security is committed to strengthening its team supporting the Google Cloud partnership, focusing on product development and go-to-market efforts. The collaboration aims to empower security leaders to address critical issues effectively. By integrating security throughout the application lifecycle, organizations can unify their development, DevOps, and security teams to deploy the most secure software and enhance the security of their cloud-native applications.

In addition to the core integrations, Orca Security is actively exploring the incorporation of the Mandiant Threat Intel feed to provide enhanced context for attack paths and findings. The company is also collaborating with Google Cloud partner SADA to expand the Orca Cloud Camp, which will showcase the unique combination of Orca, SADA, and Google. This partnership will be unveiled at the upcoming Google Next event.

CCP Targets AirDrop and Bluetooth for Content Control

In a move that will surprise nobody, the Chinese Communist Party (CCP) is moving to restrict wireless file-sharing services like AirDrop and Bluetooth in order to curb the spread of “illegal and undesirable” information, the BBC reports. Used to circumvent China’s so-called “Great Firewall,” these services are seen by CCP censors as a way to bypass that system and spread information critical of the Chinese premier and the People’s Republic of China’s ruling party.

Over the last few years, protestors who oppose the government have frequently utilized AirDrop to coordinate and distribute their political demands. For example, certain activists employed this tool on the Shanghai subway last October to share posters criticizing Xi Jinping, just as the Chinese president anticipated his third term as the country’s leader. AirDrop has also gained popularity among activists due to its reliance on Bluetooth connections between nearby devices. This allows individuals to share information with strangers without disclosing their details or relying on a centralized network that can be monitored.

To this end, China’s primary internet regulator, the Cyberspace Administration of China, has published proposed guidelines on “close-range mesh network services” and initiated a month-long period for public feedback starting on Tuesday. According to the proposed regulations, it would be mandatory for service providers to stop the spread of harmful and unlawful content, maintain necessary records, and promptly notify regulatory authorities of any discoveries.

“The new draft regulations would bring airdrop and similar services firmly into China’s online content control apparatus,” Tom Nunlist, a senior analyst at the consulting firm Trivium China, told the Guardian.

To ensure compliance, service providers must assist regulatory authorities, including internet regulators and law enforcement agencies, with necessary data and technical support during inspections. Users are also required to register with their authentic identities. Furthermore, any technology or feature capable of influencing public sentiment must undergo a security evaluation before implementation.

However, some international companies have already taken steps to restrict the spread of “dangerous” information in China. For example, following Mr. Xi’s re-election, Apple introduced a restricted version of its file-sharing feature in China: Chinese users of Apple products can now receive files from non-contacts only during a 10-minute window, after which file-sharing reverts to contacts only. Apple has not explained why the update was introduced in China, though the company has drawn criticism for its efforts to appease Beijing over the years.

According to activists cited by the BBC, China’s most recent action is an attempt to suppress the few file-sharing tools still available to them. While China justifies the regulations as necessary for national security and the public interest, activists believe they are designed to hinder their efforts.

“The authorities are desperate to plug loopholes on the Internet to silence opposing voices,” Netherlands-based human rights activist Lin Shengliang told the BBC, adding that more such regulations could follow. Although virtual private networks or VPNs may allow some users to circumvent these restrictions, activists are concerned that the number of people able to do so will be too limited to have a significant impact.

Despite this, Shengliang is optimistic that the recent surge of protests in China, sparked by its zero-Covid measures, represents a new era of political awareness that will not be quickly extinguished. “We will find new ways to speak up,” Shengliang said. “If we are bold and stand together, we will not be silenced,” he added.

OpenAI CTO’s Twitter Account Hacked

On Thursday evening, the chief technology officer of OpenAI, Mira Murati, fell victim to a Twitter account hack that aimed to promote a deceptive cryptocurrency scheme.

According to timestamps on the tweets, around 6:03 p.m. Pacific Time, Murati lost control of her Twitter account, which then started endorsing a new cryptocurrency called “$OPENAI.” The tweets claimed that the cryptocurrency was driven by AI-based language models, capitalizing on OpenAI’s reputation as an AI research organization.

The unauthorized tweets urged Murati’s followers to send money to an Ethereum digital wallet address in order to receive free coins as part of an alleged initial coin offering. Although the tweets were swiftly deleted, they reappeared minutes later with slightly altered wording. The fraudulent tweets remained visible on Murati’s account for over 45 minutes. Her account has since been restored to its original state, with the misleading tweets removed.

This incident underscores the dangers associated with high-profile Twitter accounts being targeted by scammers who exploit the credibility and extensive following of such accounts to deceive people and extract money under false pretenses.

The apparent breach of Murati’s account occurred merely four months after Twitter implemented changes to its two-factor authentication policies. These changes eliminated SMS text messaging as a security option for account protection, unless users subscribe to the premium Twitter Blue service. Security experts have expressed concerns that these modifications could increase the vulnerability of high-profile accounts to unauthorized takeovers.

Murati’s Twitter account features a blue checkmark, indicating her subscription to Twitter Blue and potential access to SMS-based two-factor authentication. VentureBeat has reached out to OpenAI for a comment on the matter and will update the story accordingly.

Russia’s FSB Alleges NSA Used Malware to Exploit Apple Phones

In a statement issued Thursday, the Russian Federal Security Service (FSB) claimed to have uncovered a covert operation by the U.S. National Security Agency (NSA) to infiltrate Apple phones using previously unknown malware. The FSB says the alleged plot, as reported by Reuters, was targeted at exploiting specially crafted “back door” vulnerabilities.

FSB Uncovers Plot

The FSB, the primary successor agency to the Soviet KGB, estimates several thousand iPhones, including those owned by Russian citizens, have been compromised. In addition, in a move that underscores the global implications of this alleged operation, the FSB reports that phones belonging to foreign diplomats stationed in Russia and former Soviet territories were also targeted. These reportedly include devices owned by representatives from NATO member countries, Israel, Syria, and China.

“The FSB has uncovered an intelligence action of the American special services using Apple mobile devices,” the agency stated. As of this report, Apple and the NSA have yet to respond to requests for comment.

Russia’s foreign ministry also chimed in, saying that the plot demonstrates the tight-knit relationship between the NSA and Apple. It claimed that the clandestine data collection was conducted “through software vulnerabilities in US-made mobile phones.”

The foreign ministry further accused U.S. intelligence services of using I.T. corporations for mass data collection, often without the knowledge of the targeted individuals. “The U.S. intelligence services have been using I.T. corporations for decades to collect large-scale data of internet users without their knowledge,” the ministry said.

This revelation comes at a time when the United States is regarded as the world’s top cyber power in terms of intent and capability, according to Harvard University’s Belfer Center 2022 Cyber Power Index.

Global Cybersecurity Implications

These allegations come amidst heightened tensions. For example, after Russian troops moved into Ukraine last year, U.S. and British intelligence went public with information suggesting President Vladimir Putin planned the invasion. The source of this intelligence, however, remains unclear.

As Western intelligence agencies have accused Russia of constructing an advanced domestic surveillance structure, Russian officials have continually questioned the security of U.S. technology. Putin has said he does not own a smartphone but uses the internet occasionally.

Earlier this year, the Kremlin reportedly directed officials involved in preparation for Russia’s 2024 presidential election to cease using Apple iPhones due to concerns about potential vulnerability to Western intelligence agencies.

The FSB’s recent findings bring to the surface a narrative of alleged cooperation between tech giants and intelligence agencies, which will undoubtedly stir debates regarding data privacy, surveillance ethics, and cyber warfare’s geopolitics.

Portugal Blocks Chinese Tech Giant Huawei from 5G Network

In a bold move, Portugal has decided to close its doors to companies from “high-risk” countries and jurisdictions regarding its fifth-generation (5G) phone network. Following in the footsteps of other Western nations, Portugal has effectively blocked Chinese tech giant Huawei Technologies Co. from its market, raising eyebrows and sparking conversations about the implications for national security.

The Portuguese government recently released a statement outlining its decision to prohibit the use of equipment from suppliers based outside the European Union, as well as those from non-member states of the North Atlantic Treaty Organization (NATO) or the Organization for Economic Co-operation and Development (OECD). This measure aims to safeguard national networks from potential security risks associated with equipment supplied by companies from these “high-risk” regions.

In their statement, the security assessment committee of the government’s Higher Council for Cyberspace Security highlighted that companies from outside these specific jurisdictions pose a significant risk to the security of national networks. This decision effectively excludes Chinese suppliers, including Huawei, which had previously collaborated with some Portuguese telecommunications firms to develop their 5G networks.

One such collaboration was between Altice Portugal and Huawei, announced in 2019, as they worked together to develop cutting-edge 5G technology. However, earlier this year, Altice Portugal made a significant shift, selecting Nokia Oyj as the equipment provider for its core 5G network. At the time of this reporting, Altice was not available for immediate comment on the recent ban.

More details on the decision

The government’s statement did not explicitly name any specific suppliers now banned from participation. Additionally, no timeline was provided for telecommunication companies in Portugal to remove equipment supplied by these now-banned suppliers from their networks. The Portuguese business newspaper Jornal Economico first reported the decision, stirring discussions and speculation within the industry.

This move by Portugal echoes the concerns expressed by several other Western countries regarding the potential risks associated with allowing companies from certain nations to participate in building critical infrastructure like 5G networks. These concerns primarily revolve around the possibility of foreign governments accessing or influencing sensitive data and communications, raising national security issues.

While the decision might raise eyebrows and elicit mixed reactions, it underscores Portugal’s commitment to ensuring the integrity and security of its telecommunication infrastructure. By effectively blocking Chinese companies, Portugal joins a growing list of countries that have taken similar steps to mitigate potential risks associated with foreign suppliers.

The ban, however, marks a significant development in the ongoing global debate over the involvement of Chinese tech companies in 5G network infrastructure. As countries navigate the complex landscape of emerging technologies and national security, it is clear that the issue of trust and risk assessment will continue to shape the future of telecommunications.

As the world races toward the 5G revolution, the implications of such decisions are far-reaching. While Portugal takes a definitive stance, the debate surrounding the balance between innovation, national security, and international cooperation intensifies. Time will tell how this move will impact the telecommunications landscape in Portugal and beyond.

Meet ‘DarkBERT:’ South Korea’s Dark Web AI Could Combat Cybercrime

A team of South Korean researchers has taken the unprecedented step of developing and training artificial intelligence (AI) on the so-called “Dark Web.” The Dark Web-trained AI, called DarkBERT, was unleashed to trawl and index what it could find, in the hope of shedding light on ways to combat cybercrime.

The “Dark Web” is a section of the internet that remains hidden and cannot be accessed through standard web browsers. This part of the web is notorious for anonymous websites and marketplaces that facilitate illegal activities, such as drug and weapon trading and the sale of stolen data, making it a haven for cybercriminals.

How Does DarkBERT Function?

DarkBERT is still a work in progress. The developers are adapting the model to the distinctive language used on the dark web, training it on text crawled through the Tor network.

The crawled corpus is reportedly filtered and deduplicated before pretraining, and data processing is applied so the model can identify threats or concerns in the sensitive material it is expected to encounter.
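The paper does not spell out the deduplication step, but one common approach for a crawled corpus is exact-match dedup after light normalization. A minimal sketch, assuming hash-based exact dedup (the function names are ours):

```python
import hashlib

def normalize(text: str) -> str:
    # Collapse whitespace and lowercase so trivial variants hash identically.
    return " ".join(text.lower().split())

def deduplicate(pages: list[str]) -> list[str]:
    """Drop exact duplicates (after normalization) from a crawled corpus,
    keeping the first occurrence of each page."""
    seen, unique = set(), []
    for page in pages:
        digest = hashlib.sha256(normalize(page).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(page)
    return unique
```

Real pipelines often go further with near-duplicate detection (e.g., MinHash), since mirrored dark-web pages rarely match byte-for-byte.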

According to the team, their LLM was far better at making sense of the dark web than other models that were trained to complete similar tasks, including RoBERTa, which Facebook researchers designed back in 2019 to “predict intentionally hidden sections of text within otherwise unannotated language examples,” according to an official description.
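The “predict intentionally hidden sections of text” objective that RoBERTa (and, by extension, DarkBERT) is trained on works by masking a fraction of input tokens and asking the model to recover them. A toy illustration of the masking step, with names of our own choosing (real pipelines operate on subword IDs, not whole words):

```python
import random

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """RoBERTa-style masking: replace ~15% of tokens with [MASK],
    recording the originals as prediction targets for the model."""
    rng = rng or random.Random()
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            targets[i] = tok  # the model must predict this token
        else:
            masked.append(tok)
    return masked, targets
```

During pretraining, the loss is computed only at the masked positions, which is what forces the model to learn the surrounding language, dark-web jargon included.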

“Our evaluation results show that DarkBERT-based classification model outperforms that of known pre-trained language models,” the researchers wrote in their paper.

According to the team, DarkBERT has the potential to be employed for diverse cybersecurity purposes, including identifying websites that vend ransomware or release confidential data. Additionally, it can scour through the numerous dark web forums updated daily and keep an eye on any illegal information exchange.

What’s next?

Development of DarkBERT continues. The researchers plan to incorporate multiple languages into the pre-trained model, and they expect performance to improve as more recent dark-web language is folded into pretraining, allowing additional data to be crawled.

UN AI Adviser Warns About the Destructive Use of Deepfakes

Neil Sahota, an artificial intelligence (AI) expert and adviser to the United Nations, recently raised concerns about the increasing threat posed by highly realistic deepfakes. In an interview with CTVNews.ca on Friday, Sahota highlighted the risks associated with these manipulated media creations.

Sahota described deepfakes as digital replicas or mirror images of real-world individuals, often created without their consent and for malicious purposes, primarily aimed at deceiving or tricking others. The emergence of deepfakes has resulted in various instances of fake content going viral, encompassing a wide range of topics, including political simulations and celebrity endorsements.

While famous individuals have often been the primary targets, Sahota emphasized that ordinary civilians are also vulnerable to this form of manipulation. He noted that deepfakes initially gained traction through the distribution of revenge porn, highlighting the importance of remaining vigilant.

To identify manipulated media, Sahota advised individuals to pay attention to subtle inconsistencies in video and audio content. Signs to watch out for include unusual body language, odd shadowing effects, and discrepancies in the spoken words. By maintaining a vigilant eye and questioning the authenticity of media, individuals can become better equipped to identify potential deepfake content.

As deepfake technology continues to advance, Sahota’s warnings serve as a reminder of the critical need to exercise caution and skepticism when consuming digital media, as well as the urgent need for proactive measures to address the risks associated with deepfakes.

Not enough

Sahota also argued that policymakers are not currently doing enough to educate the public about the dangers of deepfakes and how to spot them. He recommended implementing a content verification system that would use digital tokens to authenticate media and identify deepfakes.

“Even celebrities are trying to figure out a way to create a trusted stamp, some sort of token or authentication system so that if you’re having any kind of non-in-person engagement, you have a way to verify,” he told CTVNews.ca.

“That’s kind of what’s starting to happen at the UN-level. Like, how do we authenticate conversations, authenticate video?”

Google To Delete Inactive Accounts For Security Purpose

Google, owned by Alphabet Inc, has made an announcement regarding the deletion of inactive accounts that have been dormant for a period of two years, starting from December. The primary objective of this action is to mitigate potential security risks, such as hacking, by eliminating unused accounts from the Google system.

According to Google, if a Google account remains unused or has not been logged into for a minimum of two years, both the account itself and its associated content across various Google services may be deleted. This includes popular platforms such as Gmail, Docs, Drive, Meet, Calendar, YouTube, and Google Photos.

It is important to note that this policy change strictly applies to personal Google Accounts and does not extend to accounts owned by organizations such as schools or businesses. The measure primarily targets individual users who have not actively engaged with their accounts over an extended period.

In 2020, Google had previously announced plans to remove content stored in inactive accounts but refrained from deleting the accounts entirely. However, with this recent update, the company will now take the additional step of completely deleting the inactive accounts.

To ensure that users are well-informed about this impending action, Google will send multiple notifications to the account’s email address and the associated recovery email of the inactive accounts before initiating the deletion process. This proactive approach allows users to take appropriate action, such as logging into their accounts or retrieving important data, in order to prevent unintended deletion.

This move by Google aligns with a growing trend among major technology companies to address inactive accounts and streamline their systems. Elon Musk recently announced that Twitter would be removing accounts that have been inactive for several years and archiving them. Musk emphasized the significance of freeing up abandoned handles on the platform.

Google’s Decision to Clear Inactive Accounts for Improved Efficiency

The decision to delete inactive accounts reflects Google’s commitment to user privacy and data security. By removing unused accounts, the company can better protect against potential security breaches, unauthorized access, and data misuse. Additionally, it allows Google to optimize its infrastructure by clearing out dormant accounts that are no longer actively contributing to its services.

For users, this initiative serves as a reminder to regularly review and manage their online accounts. It is crucial to maintain active usage or regularly log into accounts to prevent unintentional loss of data or potential security risks. By staying engaged with their Google accounts, users can ensure that their information remains secure and accessible.

In conclusion, Google’s decision to delete inactive accounts starting in December reflects the company’s commitment to enhancing security measures and protecting user data. By removing unused accounts, Google aims to mitigate potential security threats and streamline its systems. Users are advised to stay vigilant, regularly log into their accounts, and respond to notifications to avoid unintended deletion of their accounts and associated content.

IBM Set to Revolutionize Data Security with Latest Quantum-Safe Technology

What exactly is quantum-safe technology, and why is it important? To understand this, we need to take a step back and look at what quantum computing is. Unlike classical computers, which store and process information using binary digits, or bits, quantum computers use quantum bits, or qubits, which can exist in multiple states simultaneously. This allows quantum computers to perform certain tasks, such as factoring large numbers, much faster than classical computers.

However, this also means that some of the cryptographic algorithms currently used to secure data, such as RSA and ECC, could be broken by quantum computers. This is where quantum-safe technology comes in: a set of cryptographic algorithms designed to resist attacks by quantum computers, ensuring that data remains secure in a post-quantum world.
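Hash-based cryptography, one of the quantum-resistant families, can be illustrated with a classic textbook construction: the Lamport one-time signature, whose security rests only on the one-wayness of a hash function rather than on factoring or discrete logs. This is a didactic sketch, not IBM Quantum Safe’s actual implementation:

```python
import hashlib, secrets

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen():
    # Private key: 256 pairs of random secrets, one pair per digest bit.
    # Public key: the hashes of those secrets.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    # The 256 bits of the message digest, most significant bit first.
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal exactly one secret from each pair, chosen by the digest bit.
    return [pair[bit] for pair, bit in zip(sk, msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    # Each revealed secret must hash to the matching public-key entry.
    return all(H(s) == pair[bit] for s, pair, bit in zip(sig, pk, msg_bits(msg)))
```

The scheme is one-time (reusing a key leaks secrets), which is why practical hash-based standards such as SPHINCS+ build many-time signatures on top of this idea.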

Recently, IBM unveiled its “End-to-End Quantum-Safe Technology” at the annual Think conference held in Orlando, Florida. IBM Quantum Safe is not just a single algorithm or tool. Rather, it is a comprehensive suite of tools and capabilities that organizations can use to secure their data. This includes quantum-safe cryptography, using approaches such as lattice-based and hash-based cryptography, as well as post-quantum key exchange protocols.

What sets IBM Quantum Safe apart?

What sets IBM Quantum Safe apart is not just the technology itself. It is also IBM’s deep expertise in security. IBM has been working on quantum-safe cryptography for over a decade and has contributed to the development of many of the algorithms now considered quantum-safe. This means that IBM Quantum Safe is not just a theoretical concept but a practical solution tested and validated in real-world scenarios.

This is especially important for governmental agencies and businesses, which handle some of the most valuable and sensitive data. In a post-quantum world, the security of this data could be compromised if it is not protected by quantum-safe technology. IBM Quantum Safe provides these organizations with a way to future-proof their security and ensure that their data remains secure, even in the face of advances in quantum computing.

The announcement of IBM Quantum Safe has generated a lot of excitement in the technology industry. As quantum computing advances, the need for quantum-safe technology will only grow. IBM Quantum Safe provides a practical solution to this problem and has the potential to become the industry standard for post-quantum cryptography.

In her keynote address at the Think conference, Rometty emphasized the importance of quantum-safe technology in ensuring data security. “We are at an inflection point in our industry,” she said. “We need to ensure that our data remains secure in a post-quantum world. That is why we have developed IBM Quantum Safe – to provide a practical, comprehensive solution that can be used by organizations of all sizes and across all industries.”

With IBM’s deep expertise in security and its commitment to developing practical solutions, IBM Quantum Safe has the potential to become the gold standard for quantum-safe technology.

Twitter Acknowledges Accidental Exposure of Private Circle Tweets in ‘Security Incident’

After weeks of silence, Twitter has finally acknowledged a bug that caused tweets shared within users’ private Twitter Circles to become public. Affected users received an email from the platform on Friday, notifying them of a “security incident” that resulted in their semi-private tweets being visible to a wider audience beyond their intended Circle of close friends.

The email, obtained by Fortune, stated, “In April 2023, a security incident may have allowed users outside of your Twitter Circle to see tweets that should have otherwise been limited to the Circle to which you were posting.” It further explained that the issue was promptly identified and resolved by the Twitter security team, ensuring that the affected tweets were no longer visible outside of the intended Circle.

Twitter Circle allows users to share tweets exclusively with a private group of followers, thereby limiting their visibility to a select audience rather than broadcasting them to all followers or the public. However, in April, users started encountering glitches with this feature. In a test tweet, one user discovered that a person outside their Twitter Circle could see a tweet they had specifically sent to their personal list of followers.

One month after this incident, impacted users received the email from Twitter, acknowledging the issue and reassuring them that the security team had addressed the problem. The exact number of users affected by this bug remains unknown.

“Twitter is committed to protecting the privacy of the people who use our service, and we understand the risks that an incident like this can introduce, and we deeply regret this happened,” expressed the company in the email to affected users, according to Fortune.