Former CISA Director Jen Easterly will become CEO of RSA Conference LLC and its flagship annual cybersecurity conference, RSAC announced today. Easterly will guide RSAC’s ambitious growth plans amid the growing convergence of AI and cybersecurity, the organization said. RSAC Conference became independent from
security vendor RSA in 2022 and rebranded as RSAC last year. Easterly left CISA, the U.S. Cybersecurity and Infrastructure Security Agency, amid the transition to the second Trump Administration a year ago. Since Easterly’s departure, the agency has faced staff cuts and departures as well as a polygraph controversy, while her would-be successor, Sean Plankey, has yet to be confirmed and was renominated for the role earlier this week.

Jen Easterly Takes Over RSAC as AI and Security Converge

In a press release today, RSAC said Easterly takes over at an important moment, “as AI and cybersecurity rapidly converge to reshape every aspect of the global technology ecosystem.”

As CEO, Easterly will provide direction for RSAC's portfolio, which includes its annual cybersecurity conference in San Francisco; international programming; the Innovation Sandbox contest that recognizes emerging cybersecurity startups; a growing professional membership program; education initiatives; and programs aimed at improving AI security, secure software development, and global collaboration.

"RSAC is not just a conference—it's the home of the global cybersecurity community," Easterly said in a statement. "We're at a pivotal moment where cybersecurity and AI have become inseparable, and the world needs a trusted platform to bring together the people, ideas, and technologies that will shape the next decade. I'm honored to lead RSAC into its next chapter—expanding our international reach, strengthening our innovation ecosystem, and working with partners around the world to help build a future where technology is truly secure by design."
RSAC Expands Beyond Annual Conference

Easterly expanded on her comments in a LinkedIn post, writing that “For 35 years, RSAC has been the place where defenders, practitioners, innovators, researchers, policymakers, founders, and engineers come together to understand what’s happening today...and to build what comes next.”

She referenced RSAC’s rebranding and expanded mission, noting that “as of last year, our borders are not confined to the flagship event in San Francisco. We are building RSAC to become a year-round hub for continuous learning and collaboration for the global cybersecurity community, revolving around our world-class content and unique insights.”

The West Point graduate and military veteran brings more than thirty years of experience to her new role, including senior positions at the National Security Agency (NSA)—where she helped build U.S. Cyber Command—and a senior technology leadership role at Morgan Stanley. “Easterly is one of the most influential global voices on secure-by-design technology, AI as a force for reducing cyber risk, and the transformation of digital infrastructure through resilience and innovation,” RSAC said.

Hugh Thompson, Executive Chairman of RSAC and longtime Program Committee Chairman of the RSAC Conference, stated that "there has never been a more important time for the cybersecurity and AI communities to come together. I am thrilled to partner with Jen, the team at RSAC, and our community, as we bring the world together for our 35th annual flagship event in March. Over the years some of the most important conversations in cybersecurity have happened at RSAC and I believe our 2026 conference will be the most impactful event we've ever had."

RSAC 2026 Conference will take place at the Moscone Center in San Francisco March 23-26 and is expected to attract more than 40,000 attendees from around the world.
In 2025, cybersecurity researchers discovered several open databases belonging to various AI image-generation tools. This fact alone makes you wonder just how much AI startups care about the privacy and security of their users’ data. But the nature of the content in these databases is far more alarming. A large
number of generated pictures in these databases were images of women in lingerie or fully nude. Some were clearly created from children’s photos, or intended to make adult women appear younger (and undressed). Finally, the most disturbing part: some pornographic images were generated from completely innocent photos of real people — likely taken from social media.

In this post, we’re talking about what sextortion is, and why AI tools mean anyone can become a victim. We detail the contents of these open databases, and give you advice on how to avoid becoming a victim of AI-era sextortion.

What is sextortion?

Online sexual extortion has become so common it’s earned its own global name: sextortion (a portmanteau of sex and extortion). We’ve already detailed its various types in our post, Fifty shades of sextortion. To recap, this form of blackmail involves threatening to publish intimate images or videos to coerce the victim into taking certain actions, or to extort money from them.

Previously, victims of sextortion were typically adult industry workers, or individuals who’d shared intimate content with an untrustworthy person. However, the rapid advancement of artificial intelligence, particularly text-to-image technology, has fundamentally changed the game. Now, literally anyone who’s posted even their most innocent photos publicly can become a victim of sextortion.

This is because generative AI makes it possible to quickly, easily, and convincingly undress people in any digital image, or add a generated nude body to someone’s head in a matter of seconds. Of course, this kind of fakery was possible before AI, but it required long hours of meticulous Photoshop work. Now, all you need to do is describe the desired result in words. To make matters worse, many generative AI services don’t bother much with protecting the content they’ve been used to create.
As mentioned earlier, last year saw researchers discover at least three publicly accessible databases belonging to these services. This means the generated nudes within them were available not just to the users who’d created them, but to anyone on the internet.

How the AI image database leak was discovered

In October 2025, cybersecurity researcher Jeremiah Fowler uncovered an open database containing over a million AI-generated images and videos. According to the researcher, the overwhelming majority of this content was pornographic in nature. The database wasn’t encrypted or password-protected — meaning any internet user could access it.

The database’s name and watermarks on some images led Fowler to believe its source was the U.S.-based company SocialBook, which offers influencer and digital-marketing services. The company’s website also provides access to tools for generating images and content using AI.

However, further analysis revealed that SocialBook itself wasn’t directly generating this content. Links within the service’s interface led to third-party products — the AI services MagicEdit and DreamPal — which were the tools used to create the images. These tools allowed users to generate pictures from text descriptions, edit uploaded photos, and perform various visual manipulations, including creating explicit content and face-swapping. The leak was linked to these specific tools, and the database contained the product of their work, including AI-generated and AI-edited images. A portion of the images led the researcher to suspect they’d been uploaded to the AI as references for creating provocative imagery.

Fowler states that roughly 10,000 photos were being added to the database every single day. SocialBook denies any connection to the database. After the researcher informed the company of the leak, several pages on the SocialBook website that had previously mentioned MagicEdit and DreamPal became inaccessible and began returning errors.
Which services were the source of the leak?

Both services — MagicEdit and DreamPal — were initially marketed as tools for interactive, user-driven visual experimentation with images and art characters. Unfortunately, a significant portion of these capabilities was directly linked to creating sexualized content.

For example, MagicEdit offered a tool for AI-powered virtual clothing changes, as well as a set of styles that made images of women more revealing after processing — such as replacing everyday clothes with swimwear or lingerie. Its promotional materials promised to turn an ordinary look into a sexy one in seconds.

DreamPal, for its part, was initially positioned as an AI-powered role-playing chat, and was even more explicit about its adult-oriented positioning. The site offered to create an ideal AI girlfriend, with certain pages directly referencing erotic content. The FAQ also noted that filters for explicit content in chats were disabled so as not to limit users’ most intimate fantasies.

Both services have since suspended operations. At the time of writing, the DreamPal website returned an error, while MagicEdit seemed to be available again. Their apps were removed from both the App Store and Google Play.

Jeremiah Fowler says that earlier in 2025 he discovered two more open databases containing AI-generated images. One belonged to the South Korean site GenNomis and contained 95,000 entries — a substantial portion of which were images of “undressed” people. Among other things, the database included images of child versions of celebrities: American singers Ariana Grande and Beyoncé, and reality TV star Kim Kardashian.

How to avoid becoming a victim

In light of incidents like these, it’s clear that the risks associated with sextortion are no longer confined to private messaging or the exchange of intimate content. In the era of generative AI, even ordinary photos, when posted publicly, can be used to create compromising content.
This problem is especially relevant for women, but men shouldn’t get too comfortable either: the popular blackmail scheme of “I hacked your computer and used the webcam to make videos of you browsing adult sites” could reach a whole new level of persuasiveness thanks to AI tools for generating photos and videos.

Therefore, protecting your privacy on social media and controlling what data about you is publicly available have become key measures for safeguarding both your reputation and your peace of mind. To prevent your photos from being used to create questionable AI-generated content, we recommend making all your social media profiles as private as possible — after all, they could be the source of images for AI-generated nudes. We’ve already published multiple detailed guides on how to reduce your digital footprint online or even remove your data from the internet, how to stop data brokers from compiling dossiers on you, and how to protect yourself from intimate image abuse.

Additionally, we have a dedicated service, Privacy Checker — perfect for anyone who wants a quick but systematic approach to privacy settings everywhere possible. It compiles step-by-step guides for securing accounts on social media and online services across all major platforms.

And to ensure the safety and privacy of your child’s data, Kaspersky Safe Kids can help: it allows parents to monitor which social networks their child spends time on. From there, you can help them adjust the privacy settings on their accounts so their posted photos aren’t used to create inappropriate content. Explore our guide to children’s online safety together, and if your child dreams of becoming a popular blogger, discuss our step-by-step cybersecurity guide for wannabe bloggers with them.
Researchers detailed how Intellexa, Predator's owner, uses failed deployments and thwarted infections to strengthen its commercial spyware and generate more effective attacks.
The upcoming Winter Games in the Italian Alps are attracting both hacktivists looking to reach billions of people and state-sponsored cyber-spies targeting the attending glitterati.
If confirmed, Rudd would take over two entities that have been without a permanent leader since a far-right provocateur initiated a push to force out their last chief; that campaign has seemingly sunk a number of other senior officials as well.
Google has agreed to pay $8.25 million to settle a class-action lawsuit centered on claims that it habitually and illegally collected data from devices belonging to children under age 13.
Germany and Israel have signed a cyber and security cooperation agreement — a deal that Berlin hopes will lead to its own version of Israel’s so-called “cyber dome.”
Elon Musk’s social media platform X announced it would be making changes to prevent its AI tool Grok from creating sexualized images of people without their consent, including images that critics say are effectively child sexual abuse material.
Chinese hackers successfully breached multiple critical infrastructure organizations in North America over the last year using a combination of compromised credentials and exploitable servers, researchers at Cisco Talos found.
The police department said there “is no evidence indicating that APD systems have been compromised or that any APD data has been acquired by the threat actor.”
Microsoft on Wednesday announced that it has taken a "coordinated legal action" in the U.S. and the U.K. to disrupt a cybercrime subscription service called RedVDS that has allegedly fueled millions in fraud losses. The effort, per the tech giant, is part of a broader operation in collaboration with law enforcement authorities that has allowed it to confiscate the malicious
Palo Alto Networks has released security updates for a high-severity flaw impacting GlobalProtect Gateway and Portal, for which it said a proof-of-concept (PoC) exploit exists. The vulnerability, tracked as CVE-2026-0227 (CVSS score: 7.7), has been described as a denial-of-service (DoS) condition impacting GlobalProtect PAN-OS software arising as a result of an improper check for
The internet never stays quiet. Every week, new hacks, scams, and security problems show up somewhere. This week’s stories show how fast attackers change their tricks, how small mistakes turn into big risks, and how the same old tools keep finding new ways to break in. Read on to catch up before the next wave hits.

Security Flaw in Redis: Unauthenticated RCE Risk
As AI copilots and assistants become embedded in daily work, security teams are still focused on protecting the models themselves. But recent incidents suggest the bigger risk lies elsewhere: in the workflows that surround those models. Two Chrome extensions posing as AI helpers were recently caught stealing ChatGPT and DeepSeek chat data from over 900,000 users. Separately, researchers
It’s 2026, yet many SOCs are still operating the way they did years ago, using tools and processes designed for a very different threat landscape. Given the growing volume and complexity of cyber threats, outdated practices no longer fully support analysts’ needs, slowing investigations and incident response. Below are four limiting habits that may be preventing your SOC from evolving at
A maximum-severity security flaw in a WordPress plugin called Modular DS has come under active exploitation in the wild, according to Patchstack. The vulnerability, tracked as CVE-2026-23550 (CVSS score: 10.0), has been described as a case of unauthenticated privilege escalation impacting all versions of the plugin prior to and including 2.5.1. It has been patched in version 2.5.2. The plugin
Cybersecurity researchers have disclosed details of a new attack method dubbed Reprompt that could allow bad actors to exfiltrate sensitive data from artificial intelligence (AI) chatbots like Microsoft Copilot in a single click, while bypassing enterprise security controls entirely. "Only a single click on a legitimate Microsoft link is required to compromise victims," Varonis security
A critical misconfiguration in Amazon Web Services (AWS) CodeBuild could have allowed complete takeover of the cloud service provider's own GitHub repositories, including its AWS JavaScript SDK, putting every AWS environment at risk. The vulnerability has been codenamed CodeBreach by cloud security company Wiz. The issue was fixed by AWS in September 2025 following responsible disclosure on
Confusion reigns after claims that data linked to 17.5 million Instagram accounts is up for sale - sparked by a vague post, contradictory statements, and a flood of password reset emails nobody asked for. And we dig into Grok, Elon Musk’s AI chatbot, after it started generating sexualised images of women and
children - raising uncomfortable questions about guardrails, accountability, and why playing the censorship card doesn’t make the problem go away. All this, and much more, in episode 450 of the "Smashing Security" podcast with Graham Cluley, and special guest Monica Verma.
We can no longer say that artificial intelligence is a "future risk", lurking somewhere on a speculative threat horizon. The truth is that it is a fast-growing cybersecurity risk that organizations are facing today. That's not just my opinion, that's also the message that comes loud and clear from the
World Economic Forum's newly-published "Global Cybersecurity Outlook 2026." Read more in my article on the Fortra blog.