Microsoft said it disrupted a high-volume campaign in October after discovering a coordinated effort by the ransomware group known as Vanilla Tempest to weaponize fraudulently signed installers that impersonated Microsoft Teams. The company revoked more than 200 code-signing certificates the group had used to make
malicious binaries look legitimate, and Defender products now detect the fake installers, the Oyster backdoor and the Rhysida ransomware the actor used to extort victims.

Microsoft’s telemetry first flagged the campaign by Vanilla Tempest (also tracked as VICE SPIDER and Vice Society) in late September 2025, after it saw months of misuse of trusted signing infrastructure. Investigators observed attackers hosting counterfeit Teams installers on look-alike domains — for example, teams-download[.]buzz, teams-install[.]run and teams-download[.]top — and using search-engine poisoning to surface those pages to unsuspecting users. Running a fake MSTeamsSetup.exe delivered a loader that staged the fraudulently signed Oyster backdoor; Oyster in turn enabled data collection, lateral movement and the final deployment of Rhysida ransomware. Security teams found the operational chain notable for its focus on trust infrastructure. The actors obtained signatures through a mix of compromised or abused signing services and third-party providers, Microsoft reported. Beginning in early September, the campaign used Trusted Signing and legitimate certificate authorities, including SSL[.]com, DigiCert and GlobalSign, to sign both the fake installers and post-compromise tools. Because the binaries carried legitimate signatures, the files bypassed some naïve allow-lists and lowered the bar for user execution. Microsoft said its antivirus detected the fake setup files, Oyster artifacts and Rhysida encryption activity, while its endpoint solution flagged the tactics, techniques and procedures (TTPs) Vanilla Tempest used during the attacks. The company revoked the misused certificates and pushed detection rules to customers, actions Microsoft called essential to blunting the operation quickly.

Ransomware Main Tool in Vanilla Tempest's Arsenal

Vanilla Tempest has a long catalog of ransomware activity and extortion operations.
Cybersecurity firm Cyble has tracked the group’s activity back to at least June 2021. Operators targeted education, healthcare and manufacturing — sectors where downtime and data theft generate urgent pressure to negotiate — and they have previously deployed families such as BlackCat, Quantum Locker and Zeppelin.

Also read: Vice Society: A Growing Threat to Schools, Warns the FBI

In recent months they pivoted toward a sustained Rhysida campaign; Microsoft’s findings show how the group layered social engineering, SEO poisoning and code-signing fraud to seed its intrusion vector. The attack chain Microsoft outlined matched a common pattern for modern ransomware operations: compromise or mimic a trusted application, establish a stealthy foothold with a signed loader, escalate privileges and spread via remote tools, then encrypt and exfiltrate. In previously observed incidents, the threat actor has pushed remote administration tooling — examples include SimpleHelp and MeshAgent — to support reconnaissance and hands-on-keyboard activity, then used living-off-the-land techniques and utilities such as PsExec and Impacket for lateral movement. Earlier campaigns also saw other tools used for reconnaissance (Advanced Port Scanner, PowerSploit scripts) and for exfiltration or staging (Rclone). Detection guidance Microsoft shared included hunting for anomalous installers that invoke unsigned or atypically signed libraries, unexpected network connections to uncommon Teams download domains, new service installs, and process trees that spawn PowerShell with encoded command lines or initiate Rclone transfers. Microsoft also recommended auditing for unusual certificate activity in the organization — for example, new code-signing certificates issued to unknown entities or sudden signer changes for frequently used installers. Cyble researchers noted the operation illustrated two broader trends.
First, attackers increasingly targeted the trust chain — certificates, legitimate installers and vendor branding — because breaking trust reduces the friction for initial compromise. Second, defenders must expand visibility beyond network and endpoint telemetry to include supply-chain signals like certificate transparency logs, content-delivery origin records and search-result poisoning indicators.
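The hunting guidance above (encoded PowerShell command lines, Rclone transfers, connections to look-alike Teams domains) can be expressed as a simple triage rule. The sketch below is illustrative only: the event field names ("process", "cmdline", "remote_host") assume a generic EDR export, not any real product's schema.

```python
# Illustrative hunting sketch, not Microsoft's official detection logic: flag
# process events matching the indicators described in the article -- encoded
# PowerShell command lines, Rclone activity, and fetches from look-alike
# Teams download domains.
import re

SUSPICIOUS_DOMAINS = {"teams-download.buzz", "teams-install.run", "teams-download.top"}

def flag_event(event: dict) -> list[str]:
    """Return a list of reasons an event looks suspicious (empty if clean)."""
    reasons = []
    process = event.get("process", "").lower()
    cmd = event.get("cmdline", "").lower()
    # PowerShell launched with an encoded command line (-e / -enc / -encodedcommand)
    if "powershell" in process and re.search(r"-e(nc(odedcommand)?)?\s", cmd):
        reasons.append("encoded-powershell")
    # Rclone spawned, a common exfiltration/staging tool in these campaigns
    if process.startswith("rclone"):
        reasons.append("rclone-transfer")
    # Network fetch from a known look-alike Teams domain
    if event.get("remote_host", "").lower() in SUSPICIOUS_DOMAINS:
        reasons.append("lookalike-teams-domain")
    return reasons
```

In practice the same conditions would be written as EDR hunting queries; the point is that each indicator Microsoft listed maps to a cheap, mechanical check.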
When Andrew Morton, Head of IT GRC & Assurance at CW Retail (Chemist Warehouse), walked into the office, third-party risk management (TPRM) was a bit all over the place—spreadsheets, generic questionnaires, and vendors assessed identically regardless of whether they handled customer credit cards or office
supplies. As an ISO 27001 Lead Auditor who reads the fine print on SOC 2 reports, Morton saw an opportunity to rebuild from the ground up. In this wide-ranging conversation, he reveals the three design choices that matter most, explains why executives glaze over at "questionnaires completed" metrics, and shares his biggest red flag when vetting new vendors. From fourth-party visibility to the most misunderstood clause in modern data processing agreements, Morton offers a masterclass in making TPRM both scalable and defensible. Edited excerpts of Andrew Morton's interview below:

From Spreadsheets to Scale

"Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value."

What was the inflection point that forced you to re-architect TPRM at Chemist Warehouse, and what did your “target operating model” look like on day 1 vs. today?

AM: Honestly, the inflection point was when I joined the company. It was clear from day one that our third-party risk management wasn’t fit for purpose - it was inconsistent, reactive, and lacked a defensible framework. Vendors were being asked the same set of questions regardless of their risk profile, and assurance was often taken at face value. I saw an opportunity to shift the program into something risk-based, scalable, and aligned with industry standards so that leadership could have real confidence in our vendor ecosystem.

Design Choices that Mattered Most

"Vendor tiering comes first because it’s the foundation - without knowing which vendors are critical, you can’t allocate resources intelligently."

If you could only keep three design decisions in your TPRM stack—continuous external scanning, adaptive questionnaires, or vendor tiering—what stays and why?

AM: Vendor tiering comes first because it’s the foundation - without knowing which vendors are critical, you can’t allocate resources intelligently.
It’s what ensures high-risk providers get deep scrutiny while low-risk vendors don’t bog down the team. Adaptive questionnaires come next. They let us dig deeper only when the risk indicators justify it, which makes the process scalable and keeps the business engaged instead of frustrated by generic questionnaires. Independent assurance reports (SOC 2, ISO 27001, PCI, etc.) are my third choice because they let us leverage established, externally validated audits. They give us confidence in a vendor’s baseline controls without reinventing the wheel, and they free up capacity to focus on real risk areas. I’d actually put continuous external scanning just behind those three. It’s valuable, but without tiering, adaptive assessments, and assurance reports, scanning can generate noise without context. The three I chose give me a defensible, risk-based foundation - everything else builds on top of that.

Fourth-Party Visibility that Actually Works

"When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors."

How deep do you go on your vendors’ vendors? What’s your minimum viable view (e.g., critical sub-processors list, region & data-type mapping, alerting on material changes), and how do you enforce it contractually?

AM: When it comes to vendors’ vendors, I go one layer deep and focus on critical sub-processors. My minimum viable view includes knowing who those sub-processors are, what regions they operate in, the types of data they handle, and being alerted to any material changes. Just as importantly, I look at whether the vendor has a mature third-party risk assessment process of their own, because I want assurance they’re applying the same standards downstream that we expect from them.

Pre-Production Gates

"Sometimes scanning surfaces outdated domains or low-value assets."

You’ve talked about passive scanning in your earlier conversations.
What’s your “go/no-go” policy for a new SaaS vendor if external posture looks weak but the business is pushing?

AM: Passive scanning is a useful early signal, but it’s not an automatic no-go. If a vendor’s external posture looks weak, my first step is to validate with them - sometimes scanning surfaces outdated domains or low-value assets. If it’s confirmed, we take a risk-based approach: for critical vendors, weak posture is a red flag that may pause or even stop onboarding until compensating controls or remediation commitments are in place. For lower-tier vendors, we may accept the risk with conditions - for example, requiring stronger internal controls on our side or limiting the data shared. The no-go line is when the vendor is both critical to operations and unwilling to address or evidence improvements. At that point, I’d escalate to leadership with a clear risk statement: ‘Here’s what the business wants, here’s the security posture, here are the potential consequences.’ That way, the decision is transparent and defensible, even if it means saying no.

Beyond Time-to-Assess

"When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure."

You have spoken about cutting assessment time dramatically—great. Which risk metrics resonated most with execs (e.g., % critical vendors with open highs >30 days, time-to-remediate by tier, control coverage drift), and which fell flat?

AM: When we cut assessment time, the metrics that really resonated with execs were the ones tied directly to business exposure. Things like the percentage of critical vendors with open high-severity findings older than 30 days, or the risk level by tier, gave them a clear view of where risk was lingering and whether vendors were responsive. What fell flat were the more operational or technical metrics - things like the number of questionnaires sent.
That’s important for us internally for running the program, but executives tune out because it doesn’t translate to risk or business impact. The key is to frame metrics around exposure and risk.

Assurance You Actually Trust

"When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I treat assurance reports as one input, not a guarantee."

You are an ISO 27001 Lead Auditor/Implementer, so, when a vendor presents an ISO cert or SOC 2, what do you verify beyond the badge—scope boundaries, carve-outs, sampling, last major NCs?

AM: When a vendor presents an ISO 27001 certificate or SOC 2 report, I never just take the badge at face value. I go deeper into the scope boundaries - does the certification actually cover the systems and services we’re relying on, or just a data center or narrow business unit? I also look closely at carve-outs and exclusions - for example, if key cloud services or sub-processors aren’t covered, that’s a material gap. With SOC 2, I review the sampling approach and the audit period to make sure the testing was meaningful, not just point-in-time or limited in coverage. Finally, I always check whether there were any major non-conformities or exceptions noted, and how they were closed out. In short, I treat assurance reports as one input, not a guarantee - the detail behind the badge tells me whether I can rely on it or whether I need to dig deeper.

Shifting Culture, Not Just Tools

"I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal."

What did you learn about stakeholder change—procurement, legal, store ops—when you rolled out the new TPRM model? If you had to repeat it post-merger, what would you do differently?

AM: Rolling out the new TPRM model reinforced that every stakeholder has different priorities and perspectives.
But the underlying purpose is the same: to protect the business from risk while enabling it to operate effectively. If I had to do it again, I’d engage stakeholders earlier, co-design parts of the process so they feel ownership, and communicate in a way that links their priorities back to the shared goal. That alignment makes adoption smoother and ensures that, despite different lenses, everyone’s working toward the same outcome.

Vendor Onboarding Efficiency

"We shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny."

What are the biggest challenges you see when onboarding new third parties at scale, and how have you streamlined that process without slowing down the business?

AM: The biggest challenges in onboarding third parties at scale are consistency, visibility, and speed. Every business unit wants to go live with their vendor yesterday, so security can sometimes be seen as slowing things down. You don’t want to treat all vendors the same, because that overwhelms the process and creates bottlenecks. To streamline, we shifted to a risk-tiered model with adaptive questionnaires and pre-vetted assurance reports. Low-risk vendors go through a lightweight process, while critical ones get deeper scrutiny. We also built in early checkpoints with procurement and legal, so security isn’t a last-minute hurdle. That’s allowed us to reduce onboarding friction, keep the business moving, and still be confident we’re focusing our effort where it matters most.

Building Risk Tiers that Make Sense

"A vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower."

How do you classify vendors into critical, high, medium, and low-risk tiers in practice, and what criteria have proven most reliable in your experience?
AM: We classify vendors into risk tiers using a structured model - for us it’s tiers 1 through 5. The criteria that have proven the most reliable are:

Data classification - what types of data the vendor stores or accesses, especially sensitive or regulated data like PI/SI.

System and infrastructure access - whether they interface with or have privileged access to our core/critical applications or infrastructure.

Regulatory and contractual obligations - if the vendor falls under specific regimes like PCI, GDPR, or local privacy laws, they’re automatically in a higher tier.

Business criticality - whether their failure could materially disrupt operations or customer experience.

These inputs together determine the tier. So, a vendor handling PI, for example, will always sit in a higher tier, while a vendor with no data access and no system integration will land much lower. This approach means we can defend our decisions, scale assessments, and ensure critical vendors get proportionate scrutiny without overwhelming the business.

Balancing Questionnaires with Evidence

"Self-attestation questionnaires are useful for coverage and efficiency - they give us a first view across the vendor landscape."

How do you strike the balance between using self-attestation questionnaires versus validating controls with independent evidence when assessing third parties?

AM: For me it’s about balance and proportionality. Self-attestation questionnaires are useful for coverage and efficiency - they give us a first view across the vendor landscape. But on their own they’re not reliable, especially for higher-tier vendors. That’s where independent evidence comes in - things like SOC 2 reports and/or ISO 27001 certificates. Lower-tier vendors may only need to self-attest, mid-tier vendors provide self-attestation plus some supporting documentation, and higher-tier vendors must back it up with independent evidence.
That way we scale the program, but still get defensible assurance where it matters most.

Collaboration with Procurement and Legal

"Procurement is on the front line. Legal ensures the right protections are baked into contracts."

What role do procurement and legal teams play in strengthening third-party risk management, and how do you foster alignment across these functions?

AM: Procurement and legal are key to making TPRM effective. Procurement is on the front line - they’re the ones who see new vendors first, so they help us embed risk assessments early instead of security being a last-minute hurdle. Legal ensures the right protections are baked into contracts - breach notification, sub-processor transparency, audit rights, data handling requirements. One of the things we’ve done to foster alignment is create a simple flow chart that maps who does what, and when. By framing it as a shared purpose rather than separate processes, we’ve been able to work as one team.

Communicating Risk to the Board

"My focus is always on clarity and consequence, so risks map directly to business impact."

When reporting to senior leadership or the board, how do you frame third-party and supply-chain risks in terms they find most actionable?

AM: I try to frame third-party risk for leadership in terms of business outcomes - like regulatory exposure, business disruption, or reputational harm - rather than technical details. My focus is always on clarity and consequence, so risks map directly to business impact – that’s what tends to land and where the conversation naturally wants to go.

Lessons Learned from Scaling

"You can’t assess everyone the same way - tiering and a risk-based approach are critical to avoid bottlenecks."

What were the biggest lessons you learned while scaling third-party risk management across hundreds of vendors, and what advice would you give to organizations just starting that journey?
AM: The biggest lesson I learned scaling TPRM across hundreds of vendors is that you can’t assess everyone the same way - tiering and a risk-based approach are critical to avoid bottlenecks. Another was that stakeholder alignment matters as much as tools or processes. Procurement, legal, and the business all need to see TPRM as an enabler, not a blocker. Finally, I learned that while automation and adaptive questionnaires save time, you still need independent assurance like SOC 2 reports or ISO 27001 certifications to validate. My advice to those starting out is to begin with a clear tiering model, early stakeholder buy-in, and simple, scalable processes - you can add sophistication later, but without those foundations, you’ll struggle at scale.

Looking Ahead in GRC

"Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation."

How do you see the discipline of GRC itself evolving over the next three to five years, especially with increasing automation and AI support?

AM: I see GRC evolving into a more automated, insight-driven discipline over the next three to five years. Routine tasks like evidence collection, monitoring, and control testing will increasingly be handled by AI and automation, freeing teams to focus on strategic risk decisions and exception management. I also expect GRC to become more integrated across the enterprise, connecting IT, compliance, privacy, and third-party risk so decisions are informed by real-time data. Ultimately, the value will shift from just checking boxes to providing actionable insights that help the business make informed, risk-aware decisions faster.

Rapid Fire

One vendor control you’d mandate tomorrow if you could.

AM: If I could mandate one vendor control tomorrow, it would be multi-factor authentication, especially for all administrative and privileged access.
It’s a simple but highly effective control that dramatically reduces the likelihood of account compromise, applies across all vendor types, and immediately strengthens our security posture without adding unnecessary complexity.

One metric you’d delete from TPRM dashboards.

AM: If I could remove one metric from TPRM dashboards, it would be the number of questionnaires sent or completed. It’s useful internally to show the volume of work and the team’s effort, but it doesn’t actually reflect risk or control effectiveness. Executives respond better to metrics tied to business impact - like open high-severity findings - because that’s what drives informed decisions.

Most misunderstood clause in modern DPAs.

AM: The most misunderstood clause in modern DPAs, in my opinion, is typically the sub-processor notification and approval section. Misalignment here can introduce downstream risks, especially for critical data or cross-border processing, so it’s important to clarify expectations up front and ensure the clause is actionable, not just boilerplate.

Your “Red Flag” in a vendor’s first 5 minutes.

AM: Beyond transparency, the other key red flag I watch for is reluctance to commit contractually to basic security obligations - like notifying us of sub-processor changes or breaches. If a vendor hesitates on these points, it can signal deeper gaps in controls or governance, and it prompts a much closer review before proceeding.
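Morton's four tiering criteria (data classification, system access, regulatory obligations, business criticality) lend themselves to a simple scoring model. The sketch below is purely illustrative: the weights, scale values, and tier cut-offs are invented for demonstration and are not CW Retail's actual model.

```python
# Illustrative vendor-tiering sketch based on the four criteria from the
# interview. All weights and thresholds are hypothetical.

def vendor_tier(data_class: int, system_access: int,
                regulated: bool, business_critical: bool) -> int:
    """Return a tier from 1 (most critical) to 5 (least critical).

    data_class:    0 = no data, 1 = internal data, 2 = sensitive/regulated (PI/SI)
    system_access: 0 = none, 1 = standard integration, 2 = privileged access
    """
    score = data_class * 2 + system_access * 2
    if regulated:          # e.g. PCI, GDPR -> automatically pushed up a tier
        score += 2
    if business_critical:  # failure would materially disrupt operations
        score += 2
    # Map the 0-12 score onto tiers 1-5 (lower tier number = higher risk)
    if score >= 8:
        return 1
    if score >= 6:
        return 2
    if score >= 4:
        return 3
    if score >= 2:
        return 4
    return 5
```

Note how the model preserves the property Morton describes: any vendor handling PI (data_class = 2) scores at least 4 and can never land in the lowest tiers.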
Satellite links contain a surprising amount of unencrypted traffic – and perhaps even more surprising is the fact that the researchers who discovered that unencrypted traffic did it using about $650 of consumer-grade equipment. In a paper published this week, researchers from the University of California San Diego
and the University of Maryland College Park detailed their efforts to scan the geosynchronous (GEO) satellite links that provide IP backhaul to remote critical infrastructure, telecom, government, military, and commercial users. "We perform the first broad scan of IP traffic on 39 GEO satellites across 25 distinct longitudes with 411 transponders using consumer-grade equipment," said the paper, authored by UCSD’s Wenyi Morty Zhang and other researchers. "We found 50% of GEO links contained cleartext IP traffic," they said, noting that "while link-layer encryption has been standard practice in satellite TV for decades, IP links typically lacked encryption at both the link and network layers." Unencrypted satellite traffic detected by the researchers included cellular backhaul traffic from major service providers, including cleartext call and text contents, job scheduling and industrial control system (ICS) data for utility infrastructure, military asset tracking, inventory management for global retail stores, and in-flight Wi-Fi. Google’s Vinoth Deivasigamani shared the researchers’ work in a LinkedIn post and noted, “While it is important to work on futuristic threats such as Quantum cryptanalysis, backdoors in standardized cryptographic protocols, etc. - the unfortunate reality is that the vast majority of real-world attacks happen because basic protection is not enabled. Lets not take our eyes off the basics.”

First Widespread Study of Satellite IP Traffic Security

GEO satellites have been the main means of delivering reliable high-speed communication to remote sites for decades, the researchers said. There are 590 GEO satellites orbiting the planet and thousands of GEO network links, they said. Each satellite may carry traffic for dozens of networks on its transponders, covering a diameter of “thousands of kilometers” or as much as a third of the Earth’s surface.
“Unfortunately, GEO satellites have been shown to be particularly susceptible to interception attacks,” they said. Enthusiasts readily share open databases of satellite coordinates and transponders, “and the popularity of satellite television has given rise to high-quality free software for finding and decoding GEO satellite signals.” The researchers’ goal was to “demonstrate the feasibility of an attacker whose goal is to observe satellite traffic visible from their position by passively scanning as many GEO transmissions from a single vantage point on Earth as possible. This form of widescale interception has previously been assumed to only be feasible with state actor-grade equipment and software.” They said their research demonstrates that “a low-resource attacker” using low-cost commercially off-the-shelf (COTS) equipment “can reliably intercept and decode hundreds of links from a single vantage point.” “[W]hile content scrambling is standard for satellite TV, it is surprisingly unlikely to be used for private networks using GEO satellite to backhaul IP network traffic from remote areas,” they said. “Our study provides concrete evidence that network-layer encryption protocols like IPSec are far from standard on internal networks, unlike on the Internet where TLS is default, a finding that has been until now essentially impossible for external researchers to legally measure.”

Satellite Data Study Raises Security, Privacy Concerns

The researchers detailed a range of findings, from the exposure of consumer data to military communications. In cellular networks, satellite backhaul is commonly used to connect remote cell towers to the core network, transmitting control plane and user data like voice calls, SMS, and Internet traffic, they said.
They found unencrypted cellular backhaul traffic “from multiple telecommunications providers with multiple tower connections per provider.” They observed unencrypted (DNS, ICMP, SIP, SNMP) and encrypted (IPSec and TLSv1.2) traffic from “sea vessels owned by the US military.” They detailed a 10-month disclosure process alerting organizations ranging from major cellular carriers and the U.S. Military to financial companies - and more revelations will follow. “Pending ongoing disclosure, a future version of this document will contain further details on other unencrypted infrastructure and industrial data we observed, including utilities, maritime vessels, and offshore oil and gas platforms,” they said. “There is a clear mismatch between how satellite customers expect data to be secured and how it is secured in practice; the severity of the vulnerabilities we discovered has certainly revised our own threat models for communications,” the researchers said. “Cell phone traffic is carefully encrypted at the radio layer between phone and tower to protect it against local eavesdroppers; it is shocking to discover that these private conversations were then broadcast to large portions of the continent, and that these security issues were not limited to isolated mistakes. “Similarly, there has been a concerted effort over the past decade or two to encrypt web traffic because of widespread concern about government eavesdropping through tapping fiber-optic cables or placing equipment in Internet exchange points; it is also shocking to discover that this traffic may simply be broadcast to a continent-sized satellite footprint.”
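The kind of triage the researchers performed, separating cleartext from encrypted flows, can be approximated with a simple classifier. This sketch is illustrative only: real analysis requires inspecting packet payloads, and the port-based heuristic and (protocol, port) flow format here are simplifying assumptions, not the paper's methodology.

```python
# Illustrative triage sketch, not the researchers' actual tooling: bucket
# observed IP flows into "cleartext" vs "encrypted" by protocol and port,
# covering the protocol families mentioned in the article (DNS, SIP, SNMP
# vs TLS and IPsec).

CLEARTEXT_PORTS = {53: "DNS", 80: "HTTP", 161: "SNMP", 5060: "SIP"}
ENCRYPTED_PORTS = {443: "TLS", 500: "IKE/IPsec", 4500: "IPsec NAT-T"}

def classify_flow(proto: str, dst_port: int) -> str:
    """Label a flow as cleartext, encrypted, or unknown."""
    if proto == "ESP":  # IPsec encrypted payload protocol
        return "encrypted"
    if dst_port in ENCRYPTED_PORTS:
        return "encrypted"
    if proto == "ICMP" or dst_port in CLEARTEXT_PORTS:
        return "cleartext"
    return "unknown"

def cleartext_share(flows) -> float:
    """Fraction of classifiable flows that carry cleartext traffic."""
    labels = [classify_flow(proto, port) for proto, port in flows]
    known = [label for label in labels if label != "unknown"]
    return sum(label == "cleartext" for label in known) / len(known) if known else 0.0
```

Even a crude metric like this makes the study's headline figure concrete: over a large capture, the share of links carrying any cleartext traffic is what the researchers report at roughly 50%.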
Spanish fashion retailer Mango has confirmed a data breach after one of its external marketing service providers suffered unauthorized access to limited customer information. The company emphasized that its corporate systems were not compromised and that financial or login details remain secure. The Mango data breach
adds to a growing list of cybersecurity incidents hitting major global retailers in 2025. In its official statement, Mango said the exposed data included customers’ first names, countries, postal codes, email addresses, and phone numbers. The company clarified that last names, banking information, credit card details, and passwords were not affected in the breach. “Mango’s infrastructure and corporate systems have not been compromised,” the company said, assuring customers that normal operations continue. Upon discovering the breach, Mango immediately activated its security protocols and notified the Spanish Data Protection Agency (AEPD) and other relevant authorities as required under data protection laws. The retailer also urged customers to remain cautious of suspicious emails or phone calls and avoid sharing personal details with unknown sources. For assistance, Mango has made its customer service email and helpline available to address any concerns.

Mango Responds Swiftly to Contain the Data Breach

According to the company, the Mango data breach was limited to marketing-related data held by an external provider. The incident did not involve Mango’s main network or systems handling sensitive information. The fashion retailer said it took “immediate action” to contain the issue and ensure no further exposure. Mango reiterated its commitment to privacy, stating, “We regret any inconvenience this specific incident may have caused. The protection of our customers’ data remains a top priority.” (Image source: X)

The Spanish Data Protection Agency (AEPD) has been informed, and Mango continues to cooperate fully with authorities as investigations continue.

Retail Cybersecurity Under Pressure Amid Global Attacks

The Mango data breach comes amid a series of high-profile retail cyberattacks across Europe and the United States this year.
Just weeks earlier, luxury fashion house Louis Vuitton disclosed a cyberattack — the third within 90 days — that exposed customer data from its global and Korean operations. The LVMH cyberattack, confirmed on July 2, 2025, affected personal information but not payment data. In May, Victoria’s Secret also reported a security incident that forced the company to temporarily take down its U.S. website while investigations were ongoing. Meanwhile, UK logistics firm Peter Green Chilled, a supplier to supermarkets like Tesco and Sainsbury’s, experienced a cyberattack that disrupted operations. Luxury retailer Harrods was another recent victim, confirming a cyberattack in April 2025 that prompted precautionary restrictions on internet access at its sites. Although customer services remained active, the incident highlighted the increasing pressure on retail cybersecurity worldwide.

Mango Maintains Strong Business Performance Despite the Data Breach

Despite the recent data breach, the company’s business continues to show strong growth. The company reported a turnover of €1.728 billion in the first half of 2025, marking a 12% increase year-over-year and 14% growth at constant exchange rates. The retailer invested around €110 million in strategic projects during this period, with 70% allocated to new store openings and refurbishments. With a presence in 120 countries and 2,925 points of sale worldwide, Mango’s international business now represents 78% of total turnover. Its top-performing markets include Spain, France, Turkey, Germany, and the United States.

Ongoing Focus on Customer Trust and Cyber Resilience

As the Mango data breach investigation continues, the retailer is reinforcing its cybersecurity measures and reviewing third-party security policies to prevent similar incidents in the future. The company said it remains committed to transparency and the protection of customer data.
“MANGO makes our Customer Service email address (personaldata@mango.com) and telephone number (900 150 543) available for any additional questions, and we regret any inconvenience this specific incident may have caused you,” reads the company’s statement. “As always, we want to thank you for your trust and commitment to our brand,” the statement concluded.
Capita has been handed a record ransomware fine of £14 million by the Information Commissioner’s Office (ICO) after a 2023 cyberattack exposed the personal data of 6.6 million people. The Capita ransomware fine marks the largest penalty ever issued by the ICO for a ransomware-related breach and highlights serious
shortcomings in the company’s cybersecurity defences. The ICO investigation revealed that Capita’s 2023 data breach resulted from inadequate security measures that left the systems of the UK’s largest outsourcing firm open to attack. Hackers stole nearly one terabyte of information, including pension data, employee details, and sensitive financial records. The regulator fined Capita plc £8 million and its pensions arm, Capita Pension Solutions Limited, £6 million, bringing the total penalty to £14 million. Although this is less than the initial £45 million fine proposed by the ICO, it remains a landmark decision in the UK’s approach to ransomware and data protection enforcement.

How the Ransomware Attack Unfolded

The UK ransomware attack on Capita began in March 2023 when an employee accidentally downloaded a malicious file. Although a high-priority security alert was triggered within minutes, Capita failed to quarantine the infected device for more than two days. This delay allowed attackers to move across Capita’s network, gain administrator access, and steal massive amounts of data between March 29 and 30, 2023. The next day, ransomware was deployed, locking Capita out of its own systems. The ICO’s fine follows an extensive investigation that found several failures in Capita’s incident response. Despite repeated internal warnings about system vulnerabilities, the company failed to implement stronger administrative controls, allowing hackers to escalate privileges and access critical systems.

ICO’s Findings and Regulatory Response

According to the Information Commissioner’s Office, Capita lacked adequate technical and organisational safeguards to protect personal data. Key failings included:

- No proper tiering of administrative accounts, which enabled lateral movement by attackers.
- Delayed response to critical alerts — the compromised device was isolated 58 hours after detection.
- Infrequent penetration testing, with no regular reassessment of high-risk systems.
- Poor sharing of risk findings across departments, leaving vulnerabilities unaddressed.

John Edwards, the UK Information Commissioner, said the Capita cybersecurity failures represented a major breach of trust. “Capita failed in its duty to protect the data entrusted to it by millions of people. The scale of this breach and its impact could have been prevented had sufficient security measures been in place,” he said. Edwards warned that businesses cannot afford to be complacent. “With so many cyberattacks in the headlines, our message is clear: every organisation, no matter how large, must take proactive steps to keep people’s data secure. Cyber criminals don’t wait, and neither should businesses.”

Response and Settlement After the Capita Ransomware Fine

Following the 2023 Capita data breach, the company offered affected individuals 12 months of free credit monitoring through Experian and set up a dedicated call centre. Over 260,000 people activated the monitoring service. The ICO acknowledged that Capita cooperated fully during the investigation and made improvements to its cybersecurity posture after the attack. These actions contributed to reducing the total penalty from £45 million to £14 million. Capita accepted responsibility for the breach and agreed not to appeal the decision, finalising the ransomware fine in a voluntary settlement with the ICO.

Lessons for Businesses

The ICO fine on Capita serves as a strong reminder that even established firms are not immune to cyber threats. The regulator urged all organisations to follow the National Cyber Security Centre’s (NCSC) guidance, apply the principle of least privilege, and ensure timely response to alerts. The Capita case reinforces that cybersecurity failures can lead not only to reputational damage but also to record-breaking financial penalties.
With ransomware attacks continuing to rise, the message from regulators is clear — investing in security today can prevent severe consequences tomorrow.
The deal, which builds on LevelBlue’s recent acquisition of Trustwave and Aon, aims to provide customers with a broad portfolio of extended detection and response (XDR), managed detection and response (MDR), and forensic services.
Hackers are also increasingly turning to other methods to obtain credentials. Microsoft tracked surges in the use of infostealer malware by criminals and an increase in IT scams where cybercriminals call a company’s help desk and simply ask for password resets.
A German member of the European Parliament has filed a complaint urging authorities to investigate Hungarian Prime Minister Viktor Orbán for allegedly ordering the country’s secret service to break into his phone with spyware.
Google security researchers said they observed a Pyongyang-backed hacking group, tracked as UNC5342, deploying a method known as EtherHiding — a way of embedding malicious code inside smart contracts on decentralized networks such as Ethereum and BNB Smart Chain.
A former adviser to Boris Johnson said China had breached sensitive British government systems in 2020. Current and former officials firmly rebutted those claims.
Under the new partnership, law enforcement agencies which use Flock Safety products can ask Ring owners to provide images for “evidence collection and investigative work,” according to a blog post on the Ring website.
The Dairy Farmers of America said cybercriminals breached company systems in June, gaining access to the information of employees and members of the cooperative.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Wednesday added a critical security flaw impacting Adobe Experience Manager to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation. The vulnerability in question is CVE-2025-54253 (CVSS score: 10.0), a maximum-severity misconfiguration bug that could result in arbitrary code execution.
The online world is changing fast. Every week, new scams, hacks, and tricks show how easy it’s become to turn everyday technology into a weapon. Tools made to help us work, connect, and stay safe are now being used to steal, spy, and deceive. Hackers don’t always break systems anymore — they use them. They hide inside trusted apps, copy real websites, and trick people into giving up control.
Scaling the SOC with AI - Why now? Security Operations Centers (SOCs) are under unprecedented pressure. According to SACR’s AI-SOC Market Landscape 2025, the average organization now faces around 960 alerts per day, while large enterprises manage more than 3,000 alerts daily from an average of 28 different tools. Nearly 40% of those alerts go uninvestigated, and 61% of security teams admit
Cybersecurity researchers have disclosed details of a new campaign that exploited a recently disclosed security flaw impacting Cisco IOS Software and IOS XE Software to deploy Linux rootkits on older, unprotected systems. The activity, codenamed Operation Zero Disco by Trend Micro, involves the weaponization of CVE-2025-20352 (CVSS score: 7.7), a stack overflow vulnerability in the Simple Network Management Protocol (SNMP) subsystem.
Penetration testing helps organizations ensure IT systems are secure, but it should never be treated in a one-size-fits-all approach. Traditional approaches can be rigid and cost your organization time and money – while producing inferior results. The benefits of pen testing are clear. By empowering “white hat” hackers to attempt to breach your system using similar tools and techniques to
A threat actor with ties to the Democratic People's Republic of Korea (aka North Korea) has been observed leveraging the EtherHiding technique to distribute malware and enable cryptocurrency theft, marking the first time a state-sponsored hacking group has embraced the method. The activity has been attributed by Google Threat Intelligence Group (GTIG) to a threat cluster it tracks as UNC5342,
A financially motivated threat actor codenamed UNC5142 has been observed abusing blockchain smart contracts as a way to facilitate the distribution of information stealers such as Atomic (AMOS), Lumma, Rhadamanthys (aka RADTHIEF), and Vidar, targeting both Windows and Apple macOS systems. "UNC5142 is characterized by its use of compromised WordPress websites and 'EtherHiding,' a technique used
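What makes EtherHiding attractive to attackers is that payloads stashed in a smart contract can be retrieved with a read-only `eth_call` — no transaction, no gas fee, and no on-chain record of the read — and the contract's contents can be updated at any time to swap payloads. The minimal sketch below, which assumes nothing from the reported campaigns (the function name and the simulated response are purely illustrative), shows the victim-side decoding step: a JSON-RPC `eth_call` response carries the contract's return data as a `0x`-prefixed hex string, which the loader decodes back into bytes.

```python
import json

def decode_eth_call_result(rpc_response: str) -> bytes:
    """Extract the raw bytes returned by an eth_call (hex string under 'result')."""
    result = json.loads(rpc_response)["result"]
    return bytes.fromhex(result.removeprefix("0x"))

# Simulated JSON-RPC response: in a real EtherHiding chain the "result" field
# would hold attacker-controlled second-stage script bytes read from a smart
# contract; here a harmless placeholder string stands in for that payload.
fake_response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": "0x" + b"stage2-script-bytes".hex(),
})

payload = decode_eth_call_result(fake_response)
print(payload.decode())
```

For defenders, the takeaway is that the fetch looks like ordinary traffic to a public blockchain RPC endpoint, which is why detection efforts focus on the compromised websites and the loader scripts rather than on the on-chain storage itself.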
An investigation into the compromise of an Amazon Web Services (AWS)-hosted infrastructure has led to the discovery of a new GNU/Linux rootkit dubbed LinkPro, according to findings from Synacktiv. "This backdoor features functionalities relying on the installation of two eBPF [extended Berkeley Packet Filter] modules, on the one hand to conceal itself, and on the other hand to be remotely
A critical infrastructure hack hits the headlines - involving default passwords, boasts on Telegram, and a finale that will make a few cyber-crooks wish the ground would swallow them whole. Meanwhile we dig into the bit we don't talk about enough: the human cost of defending companies from hackers - stress, burnout, and how better leadership culture can help make security teams safer and saner. All this and more is discussed in episode 439 of the "Smashing Security" podcast with cybersecurity veteran Graham Cluley and his special guest Annabel Berry.
In a significant crackdown against online cybercriminals, German authorities have successfully dismantled a network of fraudulent cryptocurrency investment sites that has targeted millions of unsuspecting people across Europe. Read more in my article on the Hot for Security blog.