On December 3, the coordinated remediation of the critical vulnerability CVE-2025-55182 (CVSSv3 score: 10) was announced. The flaw was found in React Server Components (RSC), as well as in a number of derivative projects and frameworks: Next.js, the React Router RSC preview, Redwood SDK, Waku, and the RSC plugins for Vite and Parcel. The vulnerability allows an unauthenticated attacker to send a request to a vulnerable server and execute arbitrary code. Given that tens of millions of websites, including Airbnb and Netflix, are built on React and Next.js, and that vulnerable versions of the components were found in approximately 39% of cloud infrastructures, the scale of exploitation could be very serious. Measures to protect your online services must be taken immediately.

A separate identifier, CVE-2025-66478, was initially created for the Next.js vulnerability, but it was deemed a duplicate, so the Next.js defect also falls under CVE-2025-55182.

Where and how does the React4Shell vulnerability work?

React is a popular JavaScript library for building user interfaces for web applications. Thanks to RSC, first introduced in 2020, part of the work of assembling a web page is performed on the server rather than in the browser. The web page code can call React functions that run on the server, receive their result, and insert it into the page. This makes some websites faster, since the browser doesn't need to load unnecessary code. RSC divides an application into server and client components: the former can perform server-side operations (database queries, access to secrets, heavy computation), while the latter remain interactive on the user's machine. A lightweight HTTP-based protocol called Flight is used for fast streaming of serialized data between client and server.

CVE-2025-55182 lies in the processing of Flight requests, or, to be more precise, in the unsafe deserialization of the incoming data streams. React Server Components versions 19.0.0, 19.1.0, 19.1.1, and 19.2.0 (specifically the react-server-dom-parcel, react-server-dom-turbopack, and react-server-dom-webpack packages) are vulnerable. The vulnerable versions of Next.js are 15.0.4, 15.1.8, 15.2.5, 15.3.5, 15.4.7, 15.5.6, and 16.0.6.
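The article does not publish exploit details, and React's Flight format is not Python's pickle. As a generic illustration of why unsafe deserialization of an untrusted stream is in the remote-code-execution class of bugs, here is a minimal Python sketch; the Malicious class is invented purely for the demo:

```python
import pickle

# Demo class: __reduce__ lets a serialized object dictate which callable
# (and arguments) should reconstruct it, so deserializing untrusted bytes
# can invoke arbitrary functions chosen by whoever produced the bytes.
class Malicious:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # we eval harmless arithmetic to keep the demo safe.
        return (eval, ("6 * 7",))

attacker_bytes = pickle.dumps(Malicious())

# The "server" naively deserializes the incoming stream:
result = pickle.loads(attacker_bytes)  # the attacker's call runs here
print(result)  # 42: code chosen by the attacker ran during parsing
```

The point is that the code executes during parsing, before any application-level authentication or validation, which matches the pre-auth nature of CVE-2025-55182.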
To exploit the vulnerability, an attacker can send a simple HTTP request to the server, and even before authentication and any other checks, this request can launch a process on the server with the privileges of the React server process. There is no evidence of in-the-wild exploitation of CVE-2025-55182 yet, but experts agree that it is feasible and will most likely be large-scale. Wiz reports that its test RCE exploit works with almost 100% reliability, and a proof-of-concept exploit is already available on GitHub, so it will not be difficult for attackers to adopt it and launch mass attacks.

React was originally designed for client-side code that runs in the browser, and the server-side components containing the vulnerability are relatively new. Many projects built on older versions of React, or projects where React Server Components are disabled, are not affected. However, not using server-side functions does not by itself mean a project is protected: RSC may still be active. Websites and services built on recent versions of React with default settings (for example, a Next.js application scaffolded with create-next-app) are vulnerable.

Protective measures against exploitation of CVE-2025-55182

Updates. React users should update to versions 19.0.1, 19.1.2, or 19.2.1. Next.js users should update to versions 15.1.9, 15.2.6, 15.3.6, 15.4.8, 15.5.7, or 16.0.7. Detailed update instructions for the react-server component in React Router, Expo, Redwood SDK, Waku, and other projects are provided in the React blog.

Cloud provider protection. Major providers have released rules for their application-level web firewalls (WAF) to prevent exploitation of the vulnerability:
Akamai (rules for App & API Protector users);
AWS (AWS WAF rules are included in the standard set but require manual activation);
Cloudflare (protects all customers, including those on the free plan, provided traffic to the React application is proxied through the Cloudflare WAF; customers on professional and enterprise plans should verify that the rule is active);
Google Cloud (Cloud Armor rules for Firebase Hosting and Firebase App Hosting are applied automatically);
Vercel (rules are applied automatically).
However, all providers emphasize that WAF protection only buys time for scheduled patching, and RSC components still need to be updated in all projects.

Protecting web services on your own servers. The least invasive approach is to apply detection rules that block exploitation attempts to your WAF or firewall. Most vendors have already released the necessary rule sets, but you can also prepare them yourself, for example based on our list of dangerous POST requests. If fine-grained analysis and filtering of web traffic is not possible in your environment, identify all servers on which RSC server-function endpoints are exposed and significantly restrict access to them. For internal services, you can block requests from all untrusted IP ranges; for public services, you can strengthen IP reputation filtering and rate limiting. An EPP/EDR agent on servers running RSC provides an additional layer of protection: it can detect anomalies in react-server behavior after exploitation and stop the attack from developing.

In-depth investigation. Although exploitation in the wild has not been confirmed, it cannot be ruled out that it is already happening. It is recommended to study network traffic and cloud environment logs, and if suspicious requests are found, to carry out a full incident response, including rotation of keys and other secrets accessible from the server. Signs of post-exploitation activity to look for first: reconnaissance of the server environment, searches for secrets (.env files, CI/CD tokens, etc.), and installation of web shells.
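The advice to restrict access to RSC endpoints by IP range can be sketched in Python. This is a minimal illustration, not production code: the endpoint path prefixes and trusted networks below are invented placeholders standing in for whatever server-function routes and internal ranges exist in your environment.

```python
import ipaddress

# Assumed placeholders: your real RSC endpoint paths and trusted ranges differ.
RSC_PATH_PREFIXES = ("/rsc/", "/_server-actions/")
TRUSTED_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def allow_request(client_ip: str, method: str, path: str) -> bool:
    """Deny POSTs to RSC server-function endpoints from untrusted IP ranges."""
    targets_rsc = method == "POST" and path.startswith(RSC_PATH_PREFIXES)
    if not targets_rsc:
        return True  # not an RSC endpoint: out of scope for this filter
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in TRUSTED_NETS)
```

For public services, the same hook is a natural place to plug in rate limiting and IP-reputation checks instead of a hard allowlist.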
People entrust neural networks with their most important, even intimate, matters: verifying medical diagnoses, seeking love advice, or turning to AI instead of a psychotherapist. There are already known cases of suicide planning, real-world attacks, and other dangerous acts facilitated by LLMs. Consequently, private chats between humans and AI are drawing increasing attention from governments, corporations, and curious individuals. So there will be no shortage of people willing to implement the Whisper Leak attack in the wild. After all, it allows determining the general topic of a conversation with a neural network without interfering with the traffic in any way: simply by analyzing the timing patterns of encrypted data packets exchanged with the AI server. However, you can still keep your chats private; more on this below.

How the Whisper Leak attack works

All language models generate their output progressively. To the user, it looks as if a person on the other end is typing word by word. In reality, language models operate not on individual characters or words but on tokens, a kind of semantic unit for LLMs, and the AI response appears on screen as these tokens are generated. This output mode is known as streaming, and it turns out the topic of a conversation can be inferred by measuring the stream's characteristics. We've previously covered a research effort that managed to fairly accurately reconstruct the text of a chat with a bot by analyzing the length of each token it sent. Researchers at Microsoft took this further by analyzing the response characteristics of 30 different AI models to 11,800 prompts. One hundred of the prompts were variations on the question "Is money laundering legal?", while the rest were random and covered entirely different topics. By comparing server response delays, packet sizes, and total packet counts, the researchers were able to very accurately separate the "dangerous" queries from the "normal" ones. They used neural networks for the analysis as well, though not LLMs. Depending on the model under study, the accuracy of identifying "dangerous" topics ranged from 71% to 100%, exceeding 97% for 19 of the 30 models.
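The measurement idea can be illustrated with a toy simulation. Everything below is invented for the demo: the distributions, the single mean-size feature, and the threshold are assumptions, not the researchers' actual features or classifier. It only shows how a simple statistic over observable packet sizes can separate two traffic classes without decrypting anything.

```python
import random

def stream_sizes(topic, n_tokens=80, seed=0):
    """Simulate the observable ciphertext sizes of a streamed LLM reply.
    Toy assumption: 'sensitive' replies use longer tokens on average."""
    rng = random.Random(seed)
    mean = 9 if topic == "sensitive" else 5
    return [max(1, int(rng.gauss(mean, 2))) for _ in range(n_tokens)]

def classify(sizes, threshold=7.0):
    """Label a stream by its mean packet size, a crude stand-in for the
    real attack's trained classifier over size and timing features."""
    avg = sum(sizes) / len(sizes)
    return "sensitive" if avg > threshold else "benign"

# An eavesdropper seeing only sizes recovers the topic in this toy setup:
hits = sum(
    classify(stream_sizes("sensitive", seed=s)) == "sensitive"
    for s in range(50)
)
```

The real attack replaces the hand-picked threshold with a model trained on recorded traffic, and adds inter-packet timing as a second feature, but the observable signal is the same.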
The researchers then conducted a more complex and realistic experiment, testing a dataset of 10,000 random conversations in which only one focused on the chosen topic. The results were more varied, but the simulated attack still proved quite successful. For models such as Deepseek-r1, Groq-llama-4, gpt-4o-mini, xai-grok-2 and -3, as well as Mistral-small and Mistral-large, the researchers were able to detect the signal in the noise in 50% of their experiments with zero false positives. For Alibaba-Qwen2.5, Lambda-llama-3.1, gpt-4.1, gpt-o1-mini, Groq-llama-4, and Deepseek-v3-chat, the detection success rate dropped to 20%, though still without false positives. Meanwhile, for Gemini 2.5 pro, Anthropic-Claude-3-haiku, and gpt-4o-mini, the detection of "dangerous" chats on Microsoft's servers succeeded in only 5% of cases. The success rate for the other tested models was lower still. A key point is that the results depend not only on the specific AI model, but also on the configuration of the server it runs on. The same OpenAI model might therefore show different results on Microsoft's infrastructure versus OpenAI's own servers; the same holds for all open-source models.

Practical implications: what does it take for Whisper Leak to work?

If a well-resourced attacker has access to their victims' network traffic, for instance by controlling a router at an ISP or within an organization, they can detect a significant percentage of conversations on topics of interest simply by measuring traffic sent to the AI assistant's servers, all while maintaining a very low error rate. However, this does not equate to automatic detection of any possible conversation topic: the attacker must first train their detection systems on specific themes, and the model will identify only those. This threat cannot be dismissed as purely theoretical.
Law enforcement agencies could, for example, monitor queries related to weapons or drug manufacturing, while companies might track employees' job-search queries. However, using this technology to conduct mass surveillance across hundreds or thousands of topics isn't feasible: it's simply too resource-intensive. In response to the research, some popular AI services have altered their server algorithms to make this attack more difficult to execute.

How to protect yourself from Whisper Leak

The primary responsibility for defense against this attack lies with the providers of AI models. They need to deliver generated text in a way that prevents the topic from being discerned from the token generation patterns. Following Microsoft's research, companies including OpenAI, Mistral, Microsoft Azure, and xAI reported that they were addressing the threat. They now add a small amount of invisible padding to the packets sent by the neural network, which disrupts Whisper Leak algorithms. Notably, Anthropic's models were inherently less susceptible to this attack from the start. If you're using a model and servers for which Whisper Leak remains a concern, you can either switch to a less vulnerable provider or adopt additional precautions. These measures are also relevant for anyone looking to safeguard against future attacks of this type:
Use local AI models for highly sensitive topics; you can follow our guide.
Configure the model to use non-streaming output where possible, so the entire response is delivered at once rather than word by word.
Avoid discussing sensitive topics with chatbots when connected to untrusted networks.
Use a robust and trusted VPN provider for greater connection security.
Remember that the most likely point of leakage for any chat information is your own computer. Therefore, it's essential to protect it from spyware with a reliable security solution running on both your computer and all your smartphones.
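The padding countermeasure described above can be sketched as follows. This is a toy illustration: the bucket size and chunk handling are assumptions, not any provider's actual scheme.

```python
def pad_chunk(data: bytes, bucket: int = 64) -> bytes:
    """Pad a streamed chunk up to the next multiple of `bucket` bytes,
    so on-the-wire ciphertext sizes no longer track token lengths."""
    padded_len = -(-len(data) // bucket) * bucket  # ceiling to bucket size
    return data + b"\x00" * (padded_len - len(data))

# Short and long tokens now produce identically sized packets:
sizes = [len(pad_chunk(tok)) for tok in (b"a", b"hello", b"laundering")]
print(sizes)  # [64, 64, 64]
```

With every chunk rounded up to the same bucket, an eavesdropper measuring packet sizes sees a constant, which is exactly the signal the Whisper Leak classifier depends on.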
Here are some more articles explaining what other risks are associated with using AI, and how to configure AI tools properly:
AI sidebar spoofing: a new attack on AI browsers
The pros and cons of AI-powered browsers
How hackers can read your chats with ChatGPT or Microsoft Copilot
Privacy settings in ChatGPT
DeepSeek: configuring privacy and deploying a local version
Iran's top state-sponsored APT is usually rather crass. But in a recent spate of attacks, it tried out some interesting evasion tactics, including delving into Snake, an old-school mobile game.
The deal, believed to be valued at $1 billion, will bring non-human identity access control of agents and machines to ServiceNow’s offerings including its new AI Control Tower.
Artyom Khoroshilov, a researcher at the Moscow Institute of General Physics, will spend more than 20 years in Russian prison on accusations that include treason for aid sent to Ukraine and sabotage related to a DDoS attack on the postal system.
Britain sanctioned Russia's GRU in its entirety for the first time, as well as several individuals, after a public inquiry concluded it was responsible for a deadly nerve agent attack in 2018.
The Cybersecurity and Infrastructure Security Agency (CISA), NSA and Canadian Centre for Cyber Security published an advisory on Thursday outlining the BRICKSTORM malware based on an analysis of eight samples taken from victim organizations.
The journalism nonprofit Reporters Without Borders and another organization reported phishing attempts to cybersecurity researchers, who tied them to a Russia-linked group known as Callisto, ColdRiver or Star Blizzard.
Police have used facial recognition in Britain since 2017 and controversy has mounted as more aggressive deployments have been undertaken, including live facial recognition which involves processing real-time video footage of people passing a camera.
Twin brothers with a history of cybercrimes have been arrested on charges of abusing their roles as federal contractors to delete databases storing U.S. government information.
Cloudflare on Wednesday said it detected and mitigated the largest ever distributed denial-of-service (DDoS) attack that measured at 29.7 terabits per second (Tbps). The activity, the web infrastructure and security company said, originated from a DDoS botnet-for-hire known as AISURU, which has been linked to a number of hyper-volumetric DDoS attacks over the past year. The attack lasted for 69
Cybercriminals associated with a financially motivated group known as GoldFactory have been observed staging a fresh round of attacks targeting mobile users in Indonesia, Thailand, and Vietnam by impersonating government services. The activity, observed since October 2024, involves distributing modified banking applications that act as a conduit for Android malware, Group-IB said in a technical
Think your Wi-Fi is safe? Your coding tools? Or even your favorite financial apps? This week proves again how hackers, companies, and governments are all locked in a nonstop race to outsmart each other. Here’s a quick rundown of the latest cyber stories that show how fast the game keeps changing. DeFi exploit drains funds Critical yETH Exploit Used to Steal $9M
As 2025 draws to a close, security professionals face a sobering realization: the traditional playbook for web security has become dangerously obsolete. AI-powered attacks, evolving injection techniques, and supply chain compromises affecting hundreds of thousands of websites forced a fundamental rethink of defensive strategies. Here are the five threats that reshaped web security this year, and
The threat actor known as Silver Fox has been spotted orchestrating a false flag operation to mimic a Russian threat group in attacks targeting organizations in China. The search engine optimization (SEO) poisoning campaign leverages Microsoft Teams lures to trick unsuspecting users into downloading a malicious setup file that leads to the deployment of ValleyRAT (Winos 4.0), a known malware
A teenage cybercriminal posts a smug screenshot to mock a sextortion scammer... and accidentally hands over the keys to his real-world identity. Meanwhile, we look into the crystal ball for 2026 and consider how stolen data is now the jet fuel of cybercrime – and how next year could be even nastier than 2025. Plus, Graham rants about recipe sites that won't shut up, and there's even more love for Lily Allen's album "West End Girl". All this and more is discussed in episode 446 of the "Smashing Security" podcast with cybersecurity veteran Graham Cluley, and special guest Rik Ferguson.
A new warning about the threat posed by Distributed Denial of Service (DDoS) attacks should make you sit up and listen. Read more in my article on the Fortra blog.