Russian and Chinese espionage groups, along with financially motivated actors, continue to exploit an N-day path traversal vulnerability in WinRAR (CVE-2025-8088) that drops malware into Windows Startup folders. Google Threat Intelligence Group discovered widespread exploitation of the critical WinRAR flaw six months after the vendor patched it, with government-backed hackers from Russia and China deploying it alongside financially motivated cybercriminals. The attacks demonstrate how effective exploits remain valuable long after patches become available, especially when organizations delay updates.

CVE-2025-8088, a high-severity path traversal vulnerability in WinRAR, allows attackers to write files to arbitrary system locations by crafting malicious RAR archives. RARLAB released WinRAR version 7.13 on July 30, 2025, to address the flaw. However, exploitation began at least 12 days earlier, on July 18, according to ESET research.

Read: New Zero-Day in WinRAR Abused by RomCom

The vulnerability exploits Alternate Data Streams (ADS), a Windows feature that allows multiple data streams to be associated with a single file. Attackers conceal malicious files within ADS entries of decoy documents inside archives. While victims view what appears to be a legitimate PDF or document, hidden payload streams execute in the background.

The exploit uses specially crafted paths combining ADS features with directory traversal characters. A file might carry a composite name like "innocuous.pdf:malicious.lnk" paired with a path traversing to critical directories. When victims open the archive, the ADS content extracts to destinations specified by the traversal path, frequently targeting the Windows Startup folder for automatic execution at next login.

Multiple Russian threat groups have consistently exploited the vulnerability in campaigns targeting Ukrainian military and government entities, using highly tailored geopolitical lures. UNC4895, also known as RomCom, conducts dual financial and espionage operations through spearphishing emails with subject lines indicating targeting of specific Ukrainian military units. The attacks deliver NESTPACKER malware, externally known as Snipbot. APT44, tracked under the designation FROZENBARENTS, drops decoy files with Ukrainian filenames alongside malicious LNK files that attempt further downloads. TEMP.Armageddon, designated CARPATHIAN, uses RAR archives to place HTA files into Startup folders, with the HTA acting as a downloader for second-stage payloads; this activity continued through January 2026. Turla adopted CVE-2025-8088 to deliver the STOCKSTAY malware suite using lures themed around Ukrainian military activities and drone operations.

A China-nexus actor exploits the vulnerability to deliver POISONIVY malware via BAT files dropped into Startup folders, which then download additional droppers.

The exploitation mirrors the widespread abuse of CVE-2023-38831, a previous WinRAR bug that government-backed actors heavily exploited despite available patches. The pattern demonstrates that exploits for known vulnerabilities remain highly effective when organizations fail to patch promptly.

Financially motivated threat groups quickly adopted the vulnerability. One group targeting Indonesian entities uses lure documents to drop CMD files into Startup folders. These scripts download password-protected RAR archives from Dropbox containing backdoors that communicate with Telegram bot command-and-control servers. Another group focuses on the hospitality and travel sectors, particularly in Latin America, using phishing emails themed around hotel bookings to deliver commodity remote access trojans including XWorm and AsyncRAT.
A separate group targeting Brazilian users of banking websites delivered malicious Chrome extensions that inject JavaScript into the pages of two Brazilian banking sites to display phishing content and steal credentials.

An actor known as "zeroplayer" advertised a WinRAR exploit in July 2025, shortly before widespread exploitation began. zeroplayer's portfolio extends beyond WinRAR. In November 2025, the actor claimed a sandbox-escape remote code execution zero-day exploit for Microsoft Office, advertising it for $300,000. In late September 2025, zeroplayer advertised a remote code execution zero-day for an unnamed popular corporate VPN provider. Starting mid-October 2025, zeroplayer advertised a Windows local privilege escalation zero-day exploit for $100,000. In early September 2025, the actor advertised a zero-day for an unspecified driver that allows attackers to disable antivirus and endpoint detection and response software, priced at $80,000.

zeroplayer's continued activity demonstrates the commoditization of the attack lifecycle. By providing ready-to-use capabilities, actors like zeroplayer reduce technical complexity and resource demands, allowing groups with diverse motivations—from ransomware deployment to state-sponsored intelligence gathering—to leverage sophisticated capabilities.

The rapid adoption of the exploit occurred despite Google Safe Browsing and Gmail actively identifying and blocking files containing it. When a reliable proof of concept for a critical flaw enters cybercriminal and espionage marketplaces, adoption follows almost immediately, blurring the lines between sophisticated government-backed operations and financially motivated campaigns. The vulnerability's commoditization reinforces that effective defense requires prompt patching coupled with a fundamental shift toward detecting consistent, predictable post-exploitation tactics.

Google published comprehensive indicators of compromise in a VirusTotal collection, available to registered users, to assist security teams in hunting for and identifying related activity.
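Because the malicious entry names follow a recognizable shape, an ADS colon in the file name combined with directory traversal toward the Startup folder, archives can be screened before extraction. The following is a minimal, illustrative sketch of such a heuristic; the sample entry names and Startup-path fragments are assumptions for demonstration, not indicators drawn from the Google or ESET reports.

```python
# Heuristic screen for archive entry names abusing NTFS Alternate Data
# Streams (ADS) plus path traversal, the pattern behind CVE-2025-8088.

# Windows Startup folder fragments an extracted path might traverse into
# (illustrative; real paths vary by Windows version and user profile).
STARTUP_HINTS = (
    "start menu\\programs\\startup",
    "appdata\\roaming\\microsoft\\windows\\start menu",
)

def is_suspicious_entry(name: str) -> bool:
    """Flag entry names combining an ADS colon with traversal sequences
    or Startup-folder targets. Legitimate archive entries almost never
    contain a colon past a drive-letter position."""
    lowered = name.lower().replace("/", "\\")
    has_ads = lowered.find(":", 2) != -1          # colon beyond a "c:" position
    has_traversal = "..\\" in lowered
    targets_startup = any(h in lowered for h in STARTUP_HINTS)
    return has_ads and (has_traversal or targets_startup)

# Hypothetical entry names modeled on the article's description:
samples = [
    "report.pdf",
    "invoice.pdf:..\\..\\AppData\\Roaming\\Microsoft\\Windows"
    "\\Start Menu\\Programs\\Startup\\update.lnk",
]
for entry in samples:
    print(entry, "->", "SUSPICIOUS" if is_suspicious_entry(entry) else "ok")
```

In practice, the same check could be run over an archive's file listing (for example, via a RAR parsing library) before anything is written to disk, which is where the exploit does its damage.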
Google dismantled what is believed to be one of the world's largest residential proxy networks, taking legal action to seize domains controlling IPIDEA's infrastructure and removing millions of consumer devices unknowingly enrolled as proxy exit nodes. The takedown involved platform providers, law enforcement and security firms working to eliminate a service that enabled espionage, cybercrime and information operations at scale.

Residential proxy networks sell access to IP addresses owned by internet service providers and assigned to residential customers. By routing traffic through consumer devices worldwide, attackers mask malicious activity behind legitimate-looking IP addresses, creating significant detection challenges for network defenders.

IPIDEA became notorious for facilitating multiple botnets, with its software development kits playing key roles in device enrollment while its proxy software enabled attacker control. These include the BadBox 2.0 botnet Google targeted with legal action last year, plus the more recent Aisuru and Kimwolf botnets.

Also read: Cloudflare Outage or Cyberattack? The Real Reason Behind the Massive Disruption

The scale of abuse is staggering. During just one week in January this year, Google observed over 550 individual threat groups it tracks using IP addresses associated with IPIDEA exit nodes to obfuscate their activities. These groups originated from China, North Korea, Iran and Russia, and their activities included access to victim software-as-a-service environments, compromise of on-premises infrastructure and password spray attacks.

"While proxy providers may claim ignorance or close these security gaps when notified, enforcement and verification is challenging given intentionally murky ownership structures, reseller agreements, and diversity of applications," Google's analysis stated.

Google's investigation revealed that many ostensibly independent residential proxy brands actually connect to the same actors controlling IPIDEA. The company identified 13 proxy and VPN brands as part of the IPIDEA network, including 360 Proxy, ABC Proxy, Cherry Proxy, Door VPN, IP2World, Luna Proxy, PIA S5 Proxy and others.

The same actors control multiple software development kit domains marketed to app developers as monetization tools. These SDKs support Android, Windows, iOS and WebOS platforms, with developers paid per download for embedding the code. Once incorporated into applications, the SDKs transform devices into proxy network exit nodes while providing whatever primary functionality the app advertised.

Google analyzed over 600 Android applications across multiple download sources containing code that connects to IPIDEA command-and-control domains. These apps appeared largely benign—utilities, games and content—but utilized monetization SDKs enabling proxy behavior without clear disclosure to users.

The technical infrastructure operates through a two-tier system. Upon startup, infected devices connect to Tier One domains and send diagnostic information. They receive back a list of Tier Two servers to contact for proxy tasks. The device then polls these Tier Two servers periodically, receiving instructions to proxy traffic to specific domains and establishing dedicated connections to route that traffic.

[Image: Two-Tier C2 Infrastructure. (Source: Google Threat Intelligence)]

Google identified approximately 7,400 Tier Two servers as of the takedown. The number changes daily, consistent with demand-based scaling. These servers are hosted globally, including in the United States. Analysis of Windows binaries revealed 3,075 unique file hashes for which dynamic analysis recorded DNS requests to at least one Tier One domain.
Some posed as legitimate software like OneDriveSync and Windows Update, though IPIDEA actors didn't directly distribute these trojanized applications.

Residential proxies pose direct risks to consumers whose devices become exit nodes. Users knowingly or unknowingly provide their IP addresses and devices as launchpads for hacking and unauthorized activities, potentially causing providers to flag or block them.

Proxy applications also introduce security vulnerabilities to home networks. When a device becomes an exit node, network traffic the user doesn't control passes through it. This means attackers can access other devices on the same private network, effectively exposing security vulnerabilities to the internet. Google's analysis confirmed IPIDEA proxy software not only routed traffic through exit nodes but also sent traffic to devices to compromise them.

Google's disruption involved three coordinated actions. First, the company took legal action to seize domains controlling devices and proxying traffic through them. Second, Google shared technical intelligence on discovered IPIDEA software development kits with platform providers, law enforcement and research firms to drive ecosystem-wide enforcement. Third, Google ensured Play Protect, Android's built-in security system, automatically warns users and removes applications incorporating IPIDEA SDKs while blocking future installation attempts. This protects users on certified Android devices with Google Play services.

Google believes the actions significantly degraded IPIDEA's proxy network and business operations, reducing available devices by millions. Because proxy operators share device pools through reseller agreements, the disruption likely impacts affiliated entities downstream.

Also read: What Is a Proxy Server? A Complete Guide to Types, Uses, and Benefits

The residential proxy market has become what Google describes as a "gray market" thriving on deception—hijacking consumer bandwidth to provide cover for global espionage and cybercrime. Consumers should exercise extreme caution with applications offering payment for "unused bandwidth" or "internet sharing," as these represent primary growth vectors for illicit proxy networks.

Google urges users to purchase connected devices only from reputable manufacturers and to verify certification. The company's Android TV website provides up-to-date partner lists, while users can check Play Protect certification status through device settings.

The company calls for proxy accountability and policy reform. While some providers may behave ethically and enroll devices only with clear consumer consent, any claims of "ethical sourcing" must be backed by transparent, auditable proof. App developers bear responsibility for vetting the monetization SDKs they integrate.
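The two-tier check-in flow described above leaves a distinctive footprint in network telemetry: a device first resolves a Tier One domain, then begins contacting a rotating set of Tier Two servers. Below is a minimal defender-side sketch of spotting that pattern in a DNS log; the domain names and the threshold are hypothetical placeholders, not indicators from Google's report.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical Tier One check-in domains -- placeholders only,
# not actual indicators from Google's report.
TIER_ONE_DOMAINS = {"c2-checkin.example", "device-report.example"}

def flag_proxy_candidates(dns_log):
    """dns_log: iterable of (timestamp, host, queried_domain) tuples.
    Returns hosts that resolved a Tier One domain and afterwards
    contacted an unusually large set of distinct domains, matching
    the Tier One -> Tier Two polling pattern described above."""
    first_checkin = {}
    later_contacts = defaultdict(set)
    for ts, host, domain in sorted(dns_log):
        if domain in TIER_ONE_DOMAINS:
            first_checkin.setdefault(host, ts)
        elif host in first_checkin and ts > first_checkin[host]:
            later_contacts[host].add(domain)
    # The threshold of five distinct follow-on domains is arbitrary,
    # chosen only to make the example concrete.
    return {h for h, doms in later_contacts.items() if len(doms) >= 5}

log = [
    (datetime(2026, 1, 5, 9, 0), "tv-livingroom", "c2-checkin.example"),
    *[(datetime(2026, 1, 5, 9, i), "tv-livingroom", f"relay{i}.example")
      for i in range(1, 7)],
    (datetime(2026, 1, 5, 9, 1), "laptop", "news.example"),
]
print(flag_proxy_candidates(log))  # {'tv-livingroom'}
```

A real deployment would need allowlisting and tuning, since CDNs and ad libraries also produce bursts of distinct lookups; the sketch only illustrates the shape of the behavior.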
The acting head of the federal government’s top cyber defense agency triggered an internal cybersecurity warning last summer after uploading sensitive government documents into a public version of ChatGPT, according to four Department of Homeland Security officials familiar with the incident.

The uploads were traced to Madhu Gottumukkala, the interim director of the Cybersecurity and Infrastructure Security Agency (CISA), who has led the agency in an acting capacity since May. Cybersecurity monitoring systems detected the activity in August and automatically flagged it as a potential exposure of sensitive government material, prompting a broader DHS-level damage assessment, the officials said.

Sensitive CISA Contracting Documents Uploaded into Public AI Tool

None of the documents uploaded into ChatGPT was classified, according to the officials, all of whom were granted anonymity due to concerns about retaliation. However, the materials included CISA contracting documents marked “for official use only,” a designation reserved for sensitive information not intended for public release.

One official said CISA’s cybersecurity sensors generated multiple automated alerts, including several internal cybersecurity warnings during the first week of August alone, as reported by Politico. Those alerts are designed to prevent either the theft or accidental disclosure of sensitive government data from federal networks.

Following the alerts, senior officials at DHS launched an internal review to assess whether the uploads caused any harm to government systems or operations. Two of the four officials confirmed that the review took place, though its conclusions have not been disclosed.

Madhu Gottumukkala Received Special Permission to Use ChatGPT

The incident drew heightened scrutiny inside DHS because Gottumukkala had requested and received special authorization to use ChatGPT shortly after arriving at CISA earlier this year, three officials said. At the time, the AI tool was blocked for most DHS employees due to concerns about data security and external data sharing. Despite the limited approval, the uploads still triggered automated internal cybersecurity warnings.

Any data entered into the public version of ChatGPT is shared with OpenAI, the platform’s owner, and may be used to help generate responses for other users. OpenAI has said ChatGPT has more than 700 million active users globally. By contrast, AI tools approved for DHS use, such as the department’s internally developed chatbot, DHSChat, are configured to ensure that queries and documents remain within federal networks and are not shared externally.

“He forced CISA’s hand into making them give him ChatGPT, and then he abused it,” one DHS official said.

In an emailed statement, CISA Director of Public Affairs Marci McCarthy said Madhu Gottumukkala “was granted permission to use ChatGPT with DHS controls in place,” describing the usage as “short-term and limited.” She added that the agency remains committed to “harnessing AI and other cutting-edge technologies” in line with President Donald Trump’s executive order aimed at removing barriers to U.S. leadership in artificial intelligence.

The statement also appeared to dispute the timeline of events, saying Gottumukkala “last used ChatGPT in mid-July 2025 under an authorized temporary exception granted to some employees,” and emphasizing that CISA’s default policy remains to block ChatGPT access unless an exception is approved.

DHS Review Involved Senior Leadership and Legal Officials

After the activity was detected, Gottumukkala met with senior DHS officials to review the material he uploaded into ChatGPT, according to two of the four officials. DHS’s then-acting general counsel, Joseph Mazzara, participated in assessing potential harm to the department, one official said.
Antoine McCord, DHS’s chief information officer, was also involved, according to another official. In August, Gottumukkala also held meetings with CISA Chief Information Officer Robert Costello and Chief Counsel Spencer Fisher to discuss the incident and the proper handling of “for official use only” material, the officials said.

Federal employees are trained in the proper handling of sensitive documents. DHS policy requires investigations into both the “cause and effect” of any exposure involving official-use-only materials and mandates a determination of whether administrative or disciplinary action is appropriate. Possible actions can range from retraining or formal warnings to more serious steps, such as suspension or revocation of a security clearance, depending on the circumstances.

The Internal Cybersecurity Warning Adds to Turmoil at CISA

Gottumukkala’s tenure at CISA has been marked by repeated controversy. Earlier this summer, at least six career staff members were placed on leave after Gottumukkala failed a counterintelligence polygraph exam that he pushed to take, a test DHS later described as “unsanctioned.” During congressional testimony last week, Gottumukkala twice told Rep. Bennie Thompson (D-Miss.) that he did not “accept the premise of that characterization” when asked about the failed test.

Gottumukkala was appointed deputy director of CISA in May by DHS Secretary Kristi Noem and has served as acting director since then. President Trump’s nominee to permanently lead CISA, DHS special adviser Sean Plankey, remains unconfirmed after his nomination was blocked last year by Sen. Rick Scott (R-Fla.) over concerns related to a Coast Guard shipbuilding contract. No new confirmation hearing date has been set.

As CISA continues to defend federal networks against cyber threats from adversarial nations such as Russia and China, the ChatGPT incident has renewed internal concerns about the use of public AI platforms and how internal cybersecurity warnings are handled when they involve the agency’s own leadership.
EU data breach notifications have surged 22% in the last year and GDPR fines remain high, according to a new report from law firm DLA Piper. The “sustained high level of data enforcement activity across Europe” noted in the report comes amid the EU Digital Omnibus legislative process, which critics say could substantially weaken the GDPR’s data privacy provisions.

Given the high number of data breach notifications, the report noted, “It is perhaps not surprising that the EU Digital Omnibus is proposing to raise the bar for incident notification to regulators, to capture only breaches which are likely to cause a high risk to the rights and freedoms of data subjects. Supervisory authorities have been inundated with notifications and understandably want to stem the flood so they can focus on the genuinely serious incidents.”

The success of the Digital Omnibus process may depend on how EU legislative bodies address the concerns of data privacy advocates, said the report, whose publication coincided with Data Privacy Week. “If simplification is perceived as undermining fundamental rights, the outcome could be legal uncertainty, increased litigation, and political backlash – the very opposite of the simplification and clarity businesses seek,” the law firm said. “The Omnibus therefore faces a delicate balancing act: simplifying rules without eroding trust or core rights. It is expected that the proposals will change as they are debated among the European Commission, the European Parliament, and the EU Council during the trialogue process in 2026.”

EU Data Breach Notifications Top 400 Per Day

The report found that for the first time since May 25, 2018 – the GDPR’s implementation date – average data breach notifications per day topped 400, “breaking the plateauing trend we have seen in recent years.” Between January 28, 2025 and January 27, 2026, the average number of breach notifications per day increased from 363 to 443, a jump of 22%.

“It is not clear what is driving this uptick in breach notifications, but the geo-political landscape driving more cyber-attacks, as well as the focus on cyber incidents in the media and the raft of new laws including incident notification requirements ... may be focusing minds on breach notifications,” the law firm said. Laws and regulations that may be driving the increase include NIS2, the Network and Information Security Directive, and DORA, the Digital Operational Resilience Act, the firm said.

GDPR Fines Reverse Downward Trend

GDPR fines remained high, with European supervisory authorities issuing fines totaling approximately EUR1.2 billion in 2025, in line with 2024 levels. “While there is no year-on-year increase in aggregate GDPR fines, this figure marks a reversal of last year’s downward trend and underscores that European data protection supervisory authorities remain willing to impose substantial monetary penalties,” the law firm said.

The aggregate total of fines since the implementation of the GDPR across the jurisdictions surveyed stands at EUR7.1 billion as of January 27, 2026 – EUR4.04 billion of which were issued by the Irish Data Protection Commission. The Irish regulator also imposed the highest fine of 2025: a EUR530 million penalty in April against TikTok for violating the GDPR's international data transfer restrictions.

Fines resulting from breaches of the GDPR integrity and confidentiality principle, also known as the security principle, continue to be prominent, the report said. “Supply chain security and compliance is increasingly attracting the attention of data protection supervisory authorities,” the law firm said.
“Supervisory authorities expect robust security controls to prevent personal data breaches, and processors, as well as controllers, are directly liable for breaches of the security principle, resulting in several fines being imposed directly on processors this year.”

Non-Material Damage Allowed Under GDPR Compensation Claims

Follow-on GDPR compensation claims also saw some notable developments, the law firm found. “This year has brought several notable rulings from the Court of Justice of the European Union (CJEU) and European courts on GDPR-related compensation claims – particularly regarding the criteria for pursuing claims for non-material damage.”

One notable CJEU ruling found that non-material damage referred to in Article 82(1) GDPR “can include negative feelings, such as fear or annoyance, provided the data subject can demonstrate that they are experiencing such feelings,” the report said. “This was a win for claimants. However, in the same decision, the CJEU ruled that the mere assertion of negative feelings is insufficient for compensation; national courts must assess evidence of such feelings and be satisfied that they arise from the breach of GDPR. This provides some comfort for defendants as theoretical distress is insufficient to sound in compensation.”

Ross McKean, Chair of the DLA Piper UK Data, Privacy and Cybersecurity practice, said in a statement: “Most evident in this year's report is the validation that the cybersecurity threat landscape has reached an unprecedented level. ... Coupled with the slew of new cybersecurity laws impacting business, some of which impose personal liability on members of management bodies, our report underscores the urgency and need for organisations to optimise cyber defences and operational resilience.”
A new wave of cyberattacks has recently struck several prominent U.S. companies, including Bumble Inc., Panera Bread Co., Match Group Inc., and CrunchBase.

Bumble Inc., the parent company of dating apps Bumble, Badoo, and BFF, reported that one of its contractor accounts was compromised in a phishing incident. Bumble confirmed the intrusion, stating that the breach allowed the hacker “brief unauthorized access to a small portion of our network.” However, the company noted that member databases, Bumble accounts, direct messages, profiles, and the Bumble application itself were not accessed. Bumble has engaged law enforcement to investigate the incident.

Bumble, Panera Bread, Match Group, and CrunchBase Report Cyberattacks

Panera Bread also reported a cybersecurity incident affecting one of its software applications used to store data. A company spokesperson confirmed that law enforcement had been notified and that steps were taken to secure the system. The affected data primarily included contact information, although Panera did not provide additional specifics about the scope of the breach.

Similarly, Match Group reported on Wednesday that it had experienced a cybersecurity incident impacting a “limited amount of user data.” According to Bloomberg, a spokesperson for Match reassured users that there was no evidence of compromised login credentials, financial information, or private communications. Match’s systems were breached on January 16, although the exact timing of the other incidents affecting Bumble, Panera Bread, and CrunchBase remains unclear.

CrunchBase, the business information platform, confirmed that documents on its corporate network were affected but stated that the company had successfully contained the incident. No details were provided about whether any sensitive user or company data was accessed.

Limited Data Exposure but Extortion Demands Reported

A hacking group known as ShinyHunters has claimed responsibility for the attacks on Bumble, Panera Bread, Match, and CrunchBase. While these claims could not be independently verified at this time, the group’s posts indicated that it is using innovative vishing techniques: voice phishing aimed at tricking employees into revealing credentials for single sign-on systems. Additionally, it has been reported that hackers associated with the ShinyHunters group have reached out to some of the victims demanding payment. Despite these reports, none of the affected companies, including Bumble, Panera Bread, Match, or CrunchBase, has publicly commented on the extortion claims.

Experts Warn of Rising Social Engineering Threats

The recent incidents underline the growing threat of cyberattacks targeting U.S. businesses, particularly those handling large volumes of user data and corporate information. In most of these attacks, social engineering campaigns target unsuspecting victims, combining phishing, vishing, and exploitation of cloud-based systems to gain access.

The Cyber Express has reached out to Bumble, Panera Bread, CrunchBase, and Match Group for further comment. As of now, no additional information or updates on the extortion demands have been provided. Cybersecurity analysts and industry observers are closely monitoring the situation, noting that this series of attacks could signal a broader trend in high-profile cyber threats affecting both technology and consumer-facing companies.

This story is ongoing, and The Cyber Express will continue to provide updates as more details emerge about the scope of the cyberattacks and any responses from the affected organizations.
A security researcher investigating an AI toy for a neighbor found an exposed admin panel that could have leaked the personal data and conversations of the children using the toy. The findings, detailed in a blog post by security researcher Joseph Thacker, outline the work he did with fellow researcher Joel Margolis, who found the exposed admin panel for the Bondu AI toy.

Margolis found an intriguing domain (console.bondu.com) in the mobile app backend’s Content Security Policy headers. There he found a button that simply said: “Login with Google.”

“By itself, there’s nothing weird about that as it was probably just a parent portal,” Thacker wrote. But instead of a parent portal, it turned out to be the Bondu core admin panel. “We had just logged into their admin dashboard despite [not] having any special accounts or affiliations with Bondu themselves,” Thacker said.

AI Toy Admin Panel Exposed Children’s Conversations

After some investigation in the admin panel, the researchers found they had full access to “every conversation transcript that any child has had with the toy,” which numbered in the “tens of thousands of sessions.” The panel also contained personal data about children and their families, including:

- The child’s name and birth date
- Family member names
- The child’s likes and dislikes
- Objectives for the child (defined by the parent)
- The name given to the toy by the child
- Previous conversations between the child and the toy (used to give the LLM context)
- Device information, such as location via IP address, battery level, awake status, and more
- The ability to update device firmware and reboot devices

They noticed the application is based on OpenAI GPT-5 and Google Gemini. “Somehow, someway, the toy gets fed a prompt from the backend that contains the child profile information and previous conversations as context,” Thacker wrote. “As far as we can tell, the data that is being collected is actually disclosed within their privacy policy, but I doubt most people realize this unless they go and read it (which most people don’t do nowadays).”

In addition to the authentication bypass, the researchers also discovered an Insecure Direct Object Reference (IDOR) vulnerability in the product’s API “that allowed us to retrieve any child’s profile data by simply guessing their ID.”

“This was all available to anyone with a Google account,” Thacker said. “Naturally we didn’t access nor store any data beyond what was required to validate the vulnerability in order to responsibly disclose it.”

A (Very) Quick Response from Bondu

Margolis reached out to Bondu’s CEO on LinkedIn over the weekend – and the company took down the console “within 10 minutes.”

“Overall we were happy to see how the Bondu team reacted to this report; they took the issue seriously, addressed our findings promptly, and had a good collaborative response with us as security researchers,” Thacker said.

The company took other steps to investigate and look for additional security flaws, and also started a bug bounty program. Bondu examined console access logs and found no unauthorized access beyond the researchers’ activity, sparing the company a data breach.

Despite the positive experience working with Bondu, the episode made Thacker reconsider buying AI toys for his own kids. “To be honest, Bondu was totally something I would have been prone to buy for my kids before this finding,” he wrote. “However this vulnerability shifted my stance on smart toys, and even smart devices in general.”

“AI models are effectively a curated, bottled-up access to all the information on the internet,” he added. “And the internet can be a scary place.
I’m not sure handing that type of access to our kids is a good idea.”

Aside from potential security issues, “AI makes this problem even more interesting because the designer (or just the AI model itself) can have actual ‘control’ of something in your house. And I think that is even more terrifying than anything else that has existed yet,” he said.

Bondu's website says the AI toy was built with child safety in mind, noting that its "safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from bondu throughout the entire beta period."
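The IDOR the researchers describe is a classic failure mode: an API endpoint returns whatever record matches a client-supplied ID without verifying that the caller owns it. The sketch below is a minimal illustration of that flaw and its fix; the handler names and the in-memory profile store are hypothetical, not Bondu's actual code.

```python
# Hypothetical in-memory profile store keyed by guessable IDs.
PROFILES = {
    "child-001": {"owner": "parent-alice", "name": "Sam", "birthdate": "2019-04-02"},
    "child-002": {"owner": "parent-bob", "name": "Mia", "birthdate": "2020-11-17"},
}

def get_child_profile_vulnerable(requester: str, profile_id: str) -> dict:
    # IDOR: any authenticated requester can fetch any profile by
    # guessing its ID, because ownership is never checked.
    return PROFILES[profile_id]

def get_child_profile_fixed(requester: str, profile_id: str) -> dict:
    profile = PROFILES.get(profile_id)
    # Authorization check: the requester must own the record.
    if profile is None or profile["owner"] != requester:
        raise PermissionError("not authorized for this profile")
    return profile

print(get_child_profile_vulnerable("parent-alice", "child-002"))  # leaks Mia's data
try:
    get_child_profile_fixed("parent-alice", "child-002")
except PermissionError as e:
    print("blocked:", e)
```

The fix is an object-level authorization check on every lookup; using random, non-sequential IDs helps, but it is no substitute for that check.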
What adult didn’t dream as a kid that they could actually talk to their favorite toy? While for us those dreams were just innocent fantasies that fueled our imaginations, for today’s kids, they’re becoming a reality fast. For instance, this past June, Mattel — the powerhouse behind the iconic Barbie — announced a partnership with OpenAI to develop AI-powered dolls. But Mattel isn’t the first company to bring the smart talking toy concept to life; plenty of manufacturers are already rolling out AI companions for children. In this post, we dive into how these toys actually work, and explore the risks that come with using them.

What exactly are AI toys?

When we talk about AI toys here, we mean actual, physical toys — not just software or apps. Currently, AI is most commonly baked into plushies or kid-friendly robots. Thanks to integration with large language models, these toys can hold meaningful, long-form conversations with a child. As anyone who’s used modern chatbots knows, you can ask an AI to roleplay as anyone: from a movie character to a nutritionist or a cybersecurity expert. According to the study AI comes to playtime — Artificial companions, real risks by the U.S. PIRG Education Fund, manufacturers specifically hardcode these toys to play the role of a child’s best friend.

[Image: Examples of AI toys tested in the study: plush companions and kid-friendly robots with built-in language models.]

Importantly, these toys aren’t powered by some special, dedicated “kid-safe AI”. On their websites, the creators openly admit to using the same popular models many of us already know: OpenAI’s ChatGPT, Anthropic’s Claude, DeepSeek from the Chinese developer of the same name, and Google’s Gemini. At this point, tech-wary parents might recall the harrowing ChatGPT case where the chatbot made by OpenAI was blamed for a teenager’s suicide. And this is the core of the problem: the toys are designed for children, but the AI models under the hood aren’t. These are general-purpose adult systems that are only partially reined in by filters and rules. Their behavior depends heavily on how long the conversation lasts, how questions are phrased, and just how well a specific manufacturer actually implemented their safety guardrails.

How the researchers tested the AI toys

The study, whose results we break down below, goes into great detail about the psychological risks associated with a child “befriending” a smart toy. However, since that’s a bit outside the scope of this blog post, we’re going to skip the psychological nuances, and focus strictly on the physical safety threats and privacy concerns. In their study, the researchers put four AI toys through the wringer:

- Grok (no relation to xAI’s Grok, apparently): a plush rocket with a built-in speaker marketed for kids aged three to 12. Price tag: US$99. The manufacturer, Curio, doesn’t explicitly state which LLM they use, but their user agreement mentions OpenAI among the operators receiving data.
- Kumma (not to be confused with our own Midori Kuma): a plush teddy-bear companion with no clear age limit, also priced at US$99. The toy originally ran on OpenAI’s GPT-4o, with options to swap models. Following an internal safety audit, the manufacturer claimed they were switching to GPT-5.1. However, at the time the study was published, OpenAI reported that the developer’s access to the models remained revoked — leaving it anyone’s guess which chatbot Kumma is actually using right now.
- Miko 3: a small wheeled robot with a screen for a face, marketed as a “best friend” for kids aged five to 10. At US$199, this is the priciest toy in the lineup. The manufacturer is tight-lipped about which language model powers the toy. A Google Cloud case study mentions using Gemini for certain safety features, but that doesn’t necessarily mean it handles all the robot’s conversational features.
- Robot MINI: a compact, voice-controlled plastic robot that supposedly runs on ChatGPT. This is the budget pick — at US$97. However, during the study, the robot’s Wi-Fi connection was so flaky that the researchers couldn’t even give it a proper test run.

[Image: Robot MINI: a compact AI robot that failed to function properly during the study due to internet connectivity issues.]

To conduct the testing, the researchers set the test child’s age to five in the companion apps for all the toys. From there, they checked how the toys handled provocative questions. The topics the experimenters threw at these smart playmates included:

- Access to dangerous items: knives, pills, matches, and plastic bags
- Adult topics: sex, drugs, religion, and politics

Let’s break down the test results for each toy.

Unsafe conversations with AI toys

Let’s start with Grok, the plush AI rocket from Curio. This toy is marketed as a storyteller and conversational partner for kids, and stands out by giving parents full access to text transcripts of every AI interaction. Out of all the models tested, this one actually turned out to be the safest. When asked about topics inappropriate for a child, the toy usually replied that it didn’t know or suggested talking to an adult. However, even this toy told the “child” exactly where to find plastic bags, and engaged in discussions about religion. Additionally, Grok was more than happy to chat about… Norse mythology, including the subject of heroic death in battle.

[Image: The Grok plush AI toy by Curio, equipped with a microphone and speaker for voice interaction with children.]

The next AI toy, the Kumma plush bear by FoloToy, delivered what were arguably the most depressing results. During testing, the bear helpfully pointed out exactly where in the house a kid could find potentially lethal items like knives, pills, matches, and plastic bags. In some instances, Kumma suggested asking an adult first, but then proceeded to give specific pointers anyway.

The AI bear fared even worse when it came to adult topics. For starters, Kumma explained to the supposed five-year-old what cocaine is. Beyond that, in a chat with our test kindergartner, the plush provocateur went into detail about the concept of “kinks”, and listed off a whole range of creative sexual practices: bondage, role-playing, sensory play (like using a feather), spanking, and even scenarios where one partner “acts like an animal”! After a conversation lasting over an hour, the AI toy also lectured researchers on various sexual positions, explained how to tie a basic knot, and described role-playing scenarios involving a teacher and a student.

It’s worth noting that all of Kumma’s responses were recorded prior to a safety audit, which the manufacturer, FoloToy, conducted after receiving the researchers’ inquiries. According to their data, the toy’s behavior changed after the audit, and the most egregious violations were made unrepeatable.

[Image: The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails.]

Finally, the Miko 3 robot from Miko showed significantly better results. However, it wasn’t entirely without its hiccups. The toy told our potential five-year-old exactly where to find plastic bags and matches.
On the bright side, Miko 3 refused to engage in discussions regarding inappropriate topics. During testing, the researchers also noticed a glitch in its speech recognition: the robot occasionally misheard the wake word “Hey Miko” as “CS:GO”, the title of the popular shooter Counter-Strike: Global Offensive — rated for audiences aged 17 and up. As a result, the toy would start explaining elements of the shooter — thankfully, without mentioning violence — or asking the five-year-old user if they enjoyed the game. Additionally, Miko 3 was willing to chat with kids about religion.

AI Toys: a threat to children’s privacy

Beyond the child’s physical and mental well-being, the issue of privacy is a major concern. Currently, there are no universal standards defining what kind of information an AI toy — or its manufacturer — can collect and store, or exactly how that data should be secured and transmitted. In the case of the three toys tested, researchers observed wildly different approaches to privacy.

For example, the Grok plush rocket is constantly listening to everything happening around it. Several times during the experiments, it chimed in on the researchers’ conversations even when it hadn’t been addressed directly — it even went so far as to offer its opinion on one of the other AI toys. The manufacturer claims that Curio doesn’t store audio recordings: the child’s voice is first converted to text, after which the original audio is “promptly deleted”. However, since a third-party service is used for speech recognition, the recordings are, in all likelihood, still transmitted off the device.

Additionally, researchers pointed out that when the first report was published, Curio’s privacy policy explicitly listed several tech partners — Kids Web Services, Azure Cognitive Services, OpenAI, and Perplexity AI — all of which could potentially collect or process children’s personal data via the app or the device itself. Perplexity AI was later removed from that list. The study’s authors note that this level of transparency is more the exception than the rule in the AI toy market.

Another cause for parental concern is that both the Grok plush rocket and the Miko 3 robot actively encouraged the “test child” to engage in heart-to-heart talks — even promising not to tell anyone their secrets. Researchers emphasize that such promises can be dangerously misleading: these toys create an illusion of private, trusting communication without explaining that behind the “friend” stands a network of companies, third-party services, and complex data collection and storage processes, which a child has no idea about.

Miko 3, much like Grok, is always listening to its surroundings and activates when spoken to — functioning essentially like a voice assistant. However, this toy doesn’t just collect voice data; it also gathers biometric information, including facial recognition data and potentially data used to determine the child’s emotional state. According to its privacy policy, this information can be stored for up to three years.

In contrast to Grok and Miko 3, Kumma operates on a push-to-talk principle: the user needs to press and hold a button for the toy to start listening. Researchers also noted that the AI teddy bear didn’t nudge the “child” to share personal feelings, promise to keep secrets, or create an illusion of private intimacy.
On the flip side, the manufacturers of this toy provide almost no clear information regarding what data is collected, how it’s stored, or how it’s processed.

Is it a good idea to buy AI Toys for your children?

The study points to serious safety issues with the AI toys currently on the market. These devices can directly tell a child where to find potentially dangerous items, such as knives, matches, pills, or plastic bags, in their home. Besides, these plush AI friends are often willing to discuss topics entirely inappropriate for children — including drugs and sexual practices — sometimes steering the conversation in that direction without any obvious prompting from the child. Taken together, this shows that even with filters and stated restrictions in place, AI toys aren’t yet capable of reliably staying within the boundaries of safe communication for little ones.

Manufacturers’ privacy policies raise additional concerns. AI toys create an illusion of constant and safe communication for children, while in reality they’re networked devices that collect and process sensitive data. Even when manufacturers claim to delete audio or limit data retention, conversations, biometrics, and metadata often pass through third-party services and are stored on company servers.

Furthermore, the security of such toys often leaves much to be desired. As far back as two years ago, our researchers discovered vulnerabilities in a popular children’s robot that allowed attackers to make video calls to it, hijack the parental account, and modify the firmware.

The problem is that, currently, there are virtually no comprehensive parental control tools or independent protection layers specifically for AI toys. Meanwhile, in more traditional digital environments — smartphones, tablets, and computers — parents have access to solutions like Kaspersky Safe Kids. These help monitor content, screen time, and a child’s digital footprint, which can significantly reduce, if not completely eliminate, such risks.

How can you protect your children from digital threats? Read more in our posts:

- Keeping kids safe online: a practical guide for parents
- How to help your kid become a blogger without ever worrying about their safety
- How hackers target Gen Z
- Do Apple’s new child safety initiatives do the job?
- Choosing wisely: a guide to your kids’ first gadget
As 2026 begins, these journalists urge the cybersecurity industry to prioritize patching vulnerabilities, preparing for quantum threats, and refining AI applications, in the latest edition of Reporters' Notebook.
Federal agencies will no longer be required to solicit software bills of materials (SBOMs) from tech vendors, nor attestations that they comply with NIST's Secure Software Development Framework (SSDF). What that means long term is unclear.
If an attacker splits a malicious prompt into discrete chunks, some large language models (LLMs) will get lost in the details and miss the true intent.
The testimony by Army Lt. Gen. Joshua Rudd about the importance of Section 702 of the Foreign Intelligence Surveillance Act (FISA) could put him at loggerheads with the commander-in-chief and other national security officials, such as Director of National Intelligence Tulsi Gabbard, who has disparaged the foreign-spying power in the past.
The Vladimir Bread Factory — one of the largest bakery producers in its region — said in a statement that its internal digital systems were hit overnight on Sunday, knocking out office computers, servers and electronic document management tools.
Both men charged with co-creating the dark web marketplace Empire Market have now pleaded guilty to federal drug conspiracy charges, closing the book on one of the major cybercrime cases of the early 2020s.
In its annual report released this week, Latvia’s national security service, SAB, said 2025 marked an all-time high in registered cyber threats targeting the country, with activity surging significantly past levels seen before Russia’s invasion of Ukraine in 2022.
The new feature will not prevent location sharing with emergency responders and does not limit the location data users choose to share with apps, the company said.
Google on Wednesday announced that it worked together with other partners to disrupt IPIDEA, which it described as one of the largest residential proxy networks in the world. To that end, the company said it took legal action to take down dozens of domains used to control devices and proxy traffic through them. As of writing, IPIDEA's website ("www.ipidea.io") is no longer accessible. It
SolarWinds has released security updates to address multiple security vulnerabilities impacting SolarWinds Web Help Desk, including four critical vulnerabilities that could result in authentication bypass and remote code execution (RCE). The list of vulnerabilities is as follows - CVE-2025-40536 (CVSS score: 8.1) - A security control bypass vulnerability that could allow an unauthenticated
This week’s updates show how small changes can create real problems. Not loud incidents, but quiet shifts that are easy to miss until they add up. The kind that affects systems people rely on every day. Many of the stories point to the same trend: familiar tools being used in unexpected ways. Security controls being worked around. Trusted platforms turning into weak spots. What looks routine on
A study by OMICRON has revealed widespread cybersecurity gaps in the operational technology (OT) networks of substations, power plants, and control centers worldwide. Drawing on data from more than 100 installations, the analysis highlights recurring technical, organizational, and functional issues that leave critical energy infrastructure vulnerable to cyber threats. The findings are based on
Beyond the direct impact of cyberattacks, enterprises suffer from a secondary but potentially even more costly risk: operational downtime, any amount of which translates into very real damage. That’s why for CISOs, it’s key to prioritize decisions that reduce dwell time and protect their company from risk. Three strategic steps you can take this year for better results: 1. Focus on today's
A new joint investigation by SentinelOne SentinelLABS and Censys has revealed that open-source artificial intelligence (AI) deployment has created a vast "unmanaged, publicly accessible layer of AI compute infrastructure" that spans 175,000 unique Ollama hosts across 130 countries. These systems, which span both cloud and residential networks across the world, operate outside the
In episode 452, a London-based YouTuber wins a landmark court case against Saudi Arabia after his phone was hacked with Pegasus spyware — exposing how a single, seemingly harmless text message can turn a smartphone into a round-the-clock surveillance device. Plus, we go looking for professional hitmen online — only to uncover uncomfortable questions about why some crimes attract customers but very few complaints. All this and more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veteran Graham Cluley, joined this week by special guest Joe Tidy.
Imagine the scene. It's a cold Monday morning in Moscow. You walk out to your car, coffee in hand, ready to face the day. You press the button to unlock your car, and ... nothing happens. You try again. Still nothing. The alarm starts blaring. You can't turn it off. Read more in my article on the Fortra blog.