The modern authentication ecosystem runs on a fragile assumption: that requests for one-time passwords are genuine. That assumption is now under sustained pressure. What began in the early 2020s as loosely shared scripts for harassing phone numbers has evolved into a coordinated ecosystem of SMS and OTP bombing tools
engineered for scale, speed, and persistence. Recent research from Cyble Research and Intelligence Labs (CRIL), which examined approximately 20 of the most actively maintained repositories, reveals a sharp technical evolution continuing through late 2025 and into 2026. These are no longer simple terminal-based scripts. They are cross-platform desktop applications, Telegram-integrated automation tools, and high-performance frameworks capable of orchestrating large-scale SMS, OTP, and voice-bombing campaigns across multiple regions.

Importantly, the findings reflect patterns observed within a defined research sample and should be interpreted as indicative trends rather than a complete census of the broader ecosystem. Even within that limited scope, the scale is striking.

From Isolated Scripts to Organized API Exploitation

SMS and OTP bombing campaigns operate by abusing legitimate authentication endpoints. Attackers repeatedly trigger password reset flows, registration verifications, or login challenges to flood a victim’s device with legitimate SMS messages or automated calls. The result is harassment, disruption, and in some cases, MFA fatigue.

Across the 20 repositories analyzed, approximately 843 vulnerable API endpoints were catalogued. These endpoints belonged to organizations spanning telecommunications, financial services, e-commerce, ride-hailing platforms, and government portals. Each shared a common weakness: inadequate rate limiting, insufficient CAPTCHA enforcement, or both.

The regional targeting pattern was highly uneven. Roughly 61.68% of observed endpoints, about 520, were associated with infrastructure in Iran. India accounted for 16.96%, or approximately 143 endpoints. Additional activity focused on Turkey, Ukraine, and other parts of Eastern Europe and South Asia.

[Figure: Distribution of Observed Endpoints (Source: Cyble)]

The abuse lifecycle typically begins with API discovery.
Attackers manually test login and signup flows, scan common paths such as /api/send-otp or /auth/send-code, reverse-engineer mobile apps to extract hardcoded API references, or rely on community-maintained endpoint lists shared through public repositories and forums.

[Figure: SMS/OTP Bombing Abuse Lifecycle (Source: Cyble)]

Once identified, these endpoints are integrated into multi-threaded attack tools capable of issuing simultaneous requests at scale.

The Rise of Automation and SSL Bypass Techniques

The technical stack behind SMS and OTP bombing tools has matured considerably.

[Figure: Technology Stack Distribution (Source: Cyble)]

Maintainers now provide implementations across seven programming languages and frameworks, lowering the barrier to entry for attackers with minimal coding knowledge. Modern tools incorporate:

- Multi-threading for parallel API abuse
- Proxy rotation to evade IP-based controls
- Request randomization to simulate human behavior
- Automated retries and failure handling
- Real-time reporting dashboards

More concerning is the widespread use of SSL bypass mechanisms. Approximately 75% of analyzed repositories disable SSL certificate validation to circumvent basic security controls. Instead of establishing properly validated SSL connections, these tools intentionally ignore certificate errors, allowing interception or manipulation of traffic without interruption. SSL bypass has become one of the most prevalent evasion techniques observed.

Additionally, 58.3% of repositories randomize User-Agent headers to evade signature-based detection. Around 33% exploit static or hardcoded reCAPTCHA tokens, defeating poorly implemented bot protections.

The ecosystem is no longer confined to SMS alone. Voice-bombing campaigns, automated calls triggered through telephony APIs, have been integrated into several tools, expanding the harassment vector beyond text messages.
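The common weakness called out above, inadequate rate limiting on OTP endpoints, is straightforward to mitigate in principle. As a defensive illustration only, here is a minimal sliding-window throttle keyed by phone number; the class name, request limit, and window length are arbitrary assumptions for the sketch, not values taken from the research:

```python
import time
from collections import defaultdict, deque

class OtpRateLimiter:
    """Illustrative sliding-window limiter for an OTP-send endpoint."""

    def __init__(self, max_requests=3, window_seconds=300):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._history = defaultdict(deque)  # phone number -> recent send timestamps

    def allow(self, phone, now=None):
        """Return True if another OTP may be sent to this number right now."""
        now = time.monotonic() if now is None else now
        window = self._history[phone]
        # Drop timestamps that have aged out of the window.
        while window and now - window[0] > self.window_seconds:
            window.popleft()
        if len(window) >= self.max_requests:
            return False  # refuse: this number is being flooded
        window.append(now)
        return True
```

A real deployment would also throttle per source IP and enforce CAPTCHA after repeated denials, but even this per-number check defeats the naive multi-threaded flooding the tools rely on.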
Commercial Web Services and Data Harvesting

Parallel to open-source development, a commercial layer has emerged. Web-based SMS and OTP bombing platforms offer point-and-click interfaces accessible from any browser. Marketed deceptively as “prank tools” or “SMS testing services,” these platforms remove all technical barriers.

These services represent an escalation in accessibility. Unlike repository-based tools requiring local execution, web platforms abstract away configuration, proxy management, and API integration. However, they operate on a dual-threat model. Phone numbers entered into these platforms are frequently harvested. Collected data may be reused for spam campaigns, sold as lead lists, or integrated into fraud operations. In effect, users expose both their targets and themselves to long-term exploitation.

Financial and Operational Impact

For individuals, SMS and OTP bombing can degrade device performance, bury legitimate communications, exhaust SMS storage limits, drain battery life, and create MFA fatigue that increases the risk of accidental approval of malicious login attempts. The addition of voice-bombing campaigns further intensifies disruption.

For organizations, the impact extends beyond inconvenience. Financially, each OTP message costs between $0.05 and $0.20. A single attack generating 10,000 messages can cost $500 to $2,000. Unprotected API endpoints subjected to sustained abuse can push monthly SMS bills into five-figure territory.

Operationally, legitimate users may be unable to receive verification codes. Customer support teams become overwhelmed. Delivery delays affect all customers. In regulated sectors, failure to ensure secure and reliable authentication flows may create compliance exposure.

Reputational damage compounds the issue. Public perception quickly associates spam-like behavior with poor security controls.
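The cost figures above are simple arithmetic; a short sketch, using the quoted $0.05 to $0.20 per-message range, shows how message volume maps to spend:

```python
def estimated_attack_cost(messages, cents_low=5, cents_high=20):
    """Dollar cost range for an SMS flood at the quoted $0.05-$0.20 per message.

    Working in integer cents avoids floating-point drift in the per-message rate.
    """
    return messages * cents_low / 100, messages * cents_high / 100

# A 10,000-message attack at the quoted rates:
low, high = estimated_attack_cost(10_000)  # (500.0, 2000.0)
```

At the upper rate, a sustained campaign of 100,000 messages in a billing cycle already reaches the five-figure territory described above.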
India’s technology ambitions are no longer limited to policy announcements; they are now translating into capital flows, institutional reforms, and global positioning. At the center of this transformation is the IndiaAI Mission, a flagship initiative that is reshaping AI in India while influencing private sector
investment and deep tech growth across multiple domains. Information submitted in the Lok Sabha on February 11, 2026, by Minister of Electronics and IT Ashwini Vaishnaw outlines how government-backed reforms and funding mechanisms are strengthening India’s AI and space technology ecosystem. For global observers, the scale and coordination of these efforts signal a strategic push to position India as a long-term technology powerhouse.

IndiaAI Mission Lays Foundation for AI in India

Launched in March 2024 with an outlay of ₹10,372 crore, the IndiaAI Mission aims to build a comprehensive AI ecosystem. In less than two years, the initiative has delivered measurable progress. More than 38,000 GPUs have been onboarded to create a common compute facility accessible to startups and academic institutions at affordable rates. Twelve teams have been shortlisted to develop indigenous foundational models or Large Language Models (LLMs), while 30 applications have been approved to build India-specific AI solutions.

Talent development remains central to the IndiaAI Mission. Over 8,000 undergraduate students, 5,000 postgraduate students, and 500 PhD scholars are currently being supported. Additionally, 27 India Data and AI Labs have been established, with 543 more identified for development.

India’s AI ecosystem is also earning global recognition. The Stanford Global AI Vibrancy 2025 report ranks India third worldwide in AI competitiveness and ecosystem vibrancy. The country is also the second-largest contributor to GitHub AI projects—evidence of a strong developer community driving AI in India from the ground up.

Private Sector Investment in AI Gains Speed

Encouraged by the IndiaAI Mission and broader reforms, private sector investment in AI is rising steadily. According to the Stanford AI Index Report 2025, India’s cumulative private investment in AI between 2013 and 2024 reached approximately $11.1 billion. Recent announcements underscore this momentum.
Google revealed plans to establish a major AI Hub in Visakhapatnam with an investment of around $15 billion—its largest commitment in India so far. Tata Group has also announced an $11 billion AI innovation city in Maharashtra. These developments suggest that AI in India is moving beyond research output toward large-scale commercial infrastructure.

The upcoming India AI Impact Summit 2026, to be held in New Delhi, will further position India within the global AI debate. Notably, it will be the first time the global AI summit series takes place in the Global South, signaling a shift toward more inclusive technology governance.

Deep Tech Push Backed by RDI Fund and Policy Reforms

Beyond AI, the government is reinforcing the broader deep tech sector through funding and policy clarity. A ₹1 lakh crore Research, Development and Innovation (RDI) Fund under the Anusandhan National Research Foundation (ANRF) has been announced to support high-risk, high-impact projects. The National Deep Tech Startup Policy addresses long-standing challenges in funding access, intellectual property, infrastructure, and commercialization. Under Startup India, deep tech firms now enjoy extended eligibility periods and higher turnover thresholds for tax benefits and government support.

These structural changes aim to strengthen India’s Gross Expenditure on Research and Development (GERD), currently at 0.64% of GDP. Encouragingly, India’s position in the Global Innovation Index has climbed from 81st in 2015 to 38th in 2025—an indicator that reforms are yielding measurable outcomes.

Space Sector Reforms Expand India’s Global Footprint

Parallel to AI in India, the government is also expanding its ambitions in space technology. The Indian Space Policy 2023 clearly defines the roles of ISRO, IN-SPACe, and private industry, opening the entire space value chain to commercial participation.
IN-SPACe now operates as a single-window agency authorizing non-government space activities and facilitating access to ISRO’s infrastructure. A ₹1,000 crore venture capital fund and a ₹500 crore Technology Adoption Fund are supporting early-stage and scaling space startups. Foreign Direct Investment norms have been liberalized, permitting up to 100% FDI in satellite manufacturing and components.

Through NewSpace India Limited (NSIL), the country is expanding its presence in the global commercial launch market, particularly for small and medium satellites. The collaboration between ISRO and the Department of Biotechnology in space biotechnology—including microgravity research and space bio-manufacturing—signals how interdisciplinary innovation is becoming a national priority.

A Strategic Inflection Point for AI in India

Taken together, the IndiaAI Mission, private sector investment in AI, deep tech reforms, and space sector liberalization form a coordinated architecture. This is not merely about technology adoption—it is about long-term capability building. For global readers, India’s approach offers an interesting case study: sustained public investment paired with regulatory clarity and private capital participation.

While challenges such as research intensity and commercialization gaps remain, the trajectory is clear. The IndiaAI Mission has become more than a policy initiative; it is emerging as a structural driver of AI in India and a signal of the country’s broader technological ambitions in the decade ahead.
In the past six months, Taiwan’s government agencies have reported 637 cybersecurity incidents, according to the latest data released by the Cybersecurity Academy (CSAA). The findings, published in its Cybersecurity Weekly Report, reveal not just the scale of digital threats facing Taiwan’s public sector, but also
four recurring attack patterns that reflect broader global trends targeting government agencies. For international observers, the numbers are significant. Out of a total of 723 cybersecurity incidents reported by government bodies and select non-government organizations during this period, 637 cases involved government agencies alone. The majority of these—410 cases—were classified as illegal intrusion, making it the most prevalent threat category. These cybersecurity incidents provide insight into how threat actors continue to exploit both technical vulnerabilities and human behaviour within public institutions.

Illegal Intrusion Leads the Wave of Cybersecurity Incidents

Illegal intrusion remains the leading category among reported cybersecurity incidents affecting government agencies. While the term may sound broad, it reflects deliberate attempts by attackers to gain unauthorized access to systems, often paving the way for espionage, data theft, or operational disruption. The CSAA identified four recurring attack patterns behind these incidents.

The first involves the distribution of malicious programs disguised as legitimate software. Attackers impersonate commonly used applications, luring employees into downloading infected files. Once installed, these malicious programs establish abnormal external connections, creating backdoors for future control or data exfiltration. This tactic is particularly concerning for government agencies, where employees frequently rely on specialized or internal tools. A single compromised endpoint can provide attackers with a foothold into wider networks, increasing the scale of cybersecurity incidents.

USB Worm Infections and Endpoint Vulnerabilities

The second major pattern behind these cybersecurity incidents involves worm infections spread through portable media devices such as USB drives.
Though often considered an old-school technique, USB-based attacks remain effective—especially in environments where portable media is routinely used for operational tasks. When infected devices are plugged into systems, malicious code can automatically execute, triggering endpoint intrusion and abnormal system behavior. Such breaches can lead to lateral movement within networks and unauthorized external communications. This pattern underscores a key reality: technical sophistication is not always necessary. In many cybersecurity incidents, attackers succeed by exploiting routine workplace habits rather than zero-day vulnerabilities.

Social Engineering and Watering Hole Attacks Target Trust

The third pattern involves social engineering email attacks, frequently disguised as administrative litigation or official document exchanges. These phishing emails are crafted around business topics highly relevant to government agencies, increasing the likelihood that recipients will open attachments or click malicious links. Such cybersecurity incidents rely heavily on human psychology. The urgency and authority embedded in administrative-themed emails make them particularly effective. Despite years of awareness campaigns, phishing remains one of the most successful entry points for attackers globally.

The fourth pattern, known as watering hole attacks, adds another layer of complexity. In these cases, attackers compromise legitimate websites commonly visited by government officials. During normal browsing, malicious commands are silently executed, resulting in endpoint compromise and abnormal network behavior. Watering hole attacks demonstrate how cybersecurity incidents can originate from seemingly trusted digital environments. Even cautious users can fall victim when legitimate platforms are weaponized.
Critical Infrastructure Faces Operational Risks

Beyond government agencies, cybersecurity incidents reported by non-government organizations primarily affected critical infrastructure providers, particularly in emergency response, healthcare, and communications sectors. Interestingly, many of these cases involved equipment malfunctions or damage rather than direct cyberattacks. System operational anomalies led to service interruptions, while environmental factors such as typhoons disrupted critical services.

These incidents highlight an important distinction: not all disruptions stem from malicious activity. However, the operational impact can be equally severe. The Cybersecurity Research Institute (CRI) emphasized that equipment resilience, operational continuity, and environmental risk preparedness are just as crucial as cybersecurity protection. In an interconnected world, digital security and physical resilience must go hand in hand.

Strengthening Endpoint Protection and Cyber Governance

In response to the rise in cybersecurity incidents, experts recommend a dual approach—technical reinforcement and management reform. From a technical perspective, endpoint protection and abnormal behavior monitoring must be strengthened. Systems should be capable of detecting malicious programs, suspicious command execution, abnormal connections, and risky portable media usage. Enhanced browsing and attachment access protection can further reduce the risk of malware downloads during routine operations.

From a governance standpoint, ongoing education is essential. Personnel must remain alert to risks associated with fake software, social engineering email attacks, and watering hole attacks. Clear management policies regarding portable media usage, software sourcing, and external website access should be embedded into cybersecurity governance frameworks.
The volume of cybersecurity incidents reported in just six months sends a clear message: digital threats targeting public institutions are persistent, adaptive, and increasingly strategic. Governments and critical infrastructure providers must move beyond reactive responses and build layered defenses that address both technology and human behavior.
The Central Government has formally brought AI-generated content within India’s regulatory framework for the first time. Through notification G.S.R. 120(E), issued by the Ministry of Electronics and Information Technology (MeitY) and signed by Joint Secretary Ajit Kumar, amendments were introduced to the Information
Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised rules take effect from February 20, 2026. The move represents a notable shift in Indian cybersecurity and digital governance policy. While the Information Technology Act, 2000, has long addressed unlawful online conduct, these amendments explicitly define and regulate “synthetically generated information” (SGI), placing AI-generated content under structured compliance obligations.

What the Law Now Defines as “Synthetically Generated Information”

The notification inserts new clauses into Rule 2 of the 2021 Rules. It defines “audio, visual or audio-visual information” broadly to include any audio, image, photograph, video, sound recording, or similar content created, generated, modified, or altered through a computer resource. More critically, clause (wa) defines “synthetically generated information” as content that is artificially or algorithmically created or altered in a manner that appears real, authentic, or true and depicts or portrays an individual or event in a way that is likely to be perceived as indistinguishable from a natural person or real-world occurrence.

This definition clearly encompasses deepfake videos, AI-generated voiceovers, face-swapped images, and other forms of AI-generated content designed to simulate authenticity. The framing is deliberate: the concern is not merely digital alteration, but deception: content that could reasonably be mistaken for reality.

At the same time, the amendment carves out exceptions. Routine or good-faith editing, such as color correction, formatting, transcription, compression, accessibility improvements, translation, or technical enhancement, does not qualify as synthetically generated information, provided the underlying substance or meaning is not materially altered.
Educational materials, draft templates, or conceptual illustrations also fall outside the SGI category unless they create a false document or false electronic record. This distinction attempts to balance innovation in Information Technology with protection against misuse.

New Duties for Intermediaries

The amendments substantially revise Rule 3, expanding intermediary obligations. Platforms must inform users, at least once every three months and in English or any Eighth Schedule language, that non-compliance with platform rules or applicable laws may lead to suspension, termination, removal of content, or legal liability. Where violations relate to criminal offences, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023, or the Protection of Children from Sexual Offences Act, 2012, mandatory reporting requirements apply.

A new clause (ca) introduces additional obligations for intermediaries that enable or facilitate the creation or dissemination of synthetically generated information. These platforms must inform users that directing their services to create unlawful AI-generated content may attract penalties under laws including the Information Technology Act, the Bharatiya Nyaya Sanhita, 2023, the Representation of the People Act, 1951, the Indecent Representation of Women (Prohibition) Act, 1986, the Sexual Harassment of Women at Workplace Act, 2013, and the Immoral Traffic (Prevention) Act, 1956. Consequences for violations may include immediate content removal, suspension or termination of accounts, disclosure of the violator’s identity to victims, and reporting to authorities where offences require mandatory reporting.

The compliance timelines have also been tightened. Content removal in response to valid orders must now occur within three hours instead of thirty-six hours. Certain grievance response windows have been reduced from fifteen days to seven days, and some urgent compliance requirements now demand action within two hours.
Due Diligence and Labelling Requirements for AI-generated Content

A new Rule 3(3) imposes explicit due diligence obligations for AI-generated content. Intermediaries must deploy reasonable and appropriate technical measures, including automated tools, to prevent users from creating or disseminating synthetically generated information that violates the law. This includes content containing child sexual abuse material, non-consensual intimate imagery, obscene or sexually explicit material, false electronic records, or content related to explosive materials or arms procurement. It also includes deceptive portrayals of real individuals or events intended to mislead.

For lawful AI-generated content that does not violate these prohibitions, the rules mandate prominent labelling. Visual content must carry clearly visible notices. Audio content must include a prefixed disclosure. Additionally, such content must be embedded with permanent metadata or other provenance mechanisms, including a unique identifier linking the content to the intermediary computer resource, where technically feasible. Platforms are expressly prohibited from enabling the suppression or removal of these labels or metadata.

Enhanced Obligations for Social Media Intermediaries

Rule 4 introduces an additional compliance layer for significant social media intermediaries. Before allowing publication, these platforms must require users to declare whether content is synthetically generated. They must deploy technical measures to verify the accuracy of that declaration. If content is confirmed as AI-generated, it must be clearly labelled before publication. If a platform knowingly permits or fails to act on unlawful synthetically generated information, it may be deemed to have failed its due diligence obligations.

The amendments also align terminology with India’s evolving criminal code, replacing references to the Indian Penal Code with the Bharatiya Nyaya Sanhita, 2023.
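The metadata and unique-identifier requirement is technology-neutral: the notification does not prescribe a format. As a purely illustrative sketch, a platform might derive a stable identifier by hashing the generated content together with its own platform ID. Every name and field below is an assumption for illustration, not something specified in the rules:

```python
import hashlib

def provenance_record(content: bytes, platform_id: str, generated_at: float) -> dict:
    """Build an illustrative provenance record for synthetically generated media.

    The identifier is derived from a SHA-256 digest over the platform ID and the
    raw content bytes, so the same content from the same platform always maps to
    the same identifier (a hypothetical scheme, not mandated by the rules).
    """
    digest = hashlib.sha256(platform_id.encode("utf-8") + content).hexdigest()
    return {
        "identifier": f"{platform_id}:{digest[:32]}",
        "platform": platform_id,
        "label": "synthetically-generated",
        "generated_at": generated_at,
    }
```

A record like this could be embedded as image or audio metadata at generation time; tamper-resistance would additionally require signing, which standards such as C2PA address.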
Implications for Indian Cybersecurity and Digital Platforms

The February 2026 amendment reflects a decisive step in Indian cybersecurity policy. Rather than banning AI-generated content outright, the government has opted for traceability, transparency, and technical accountability. The focus is on preventing deception, protecting individuals from reputational harm, and ensuring rapid response to unlawful synthetic media.

For platforms operating within India’s Information Technology ecosystem, compliance will require investment in automated detection systems, content labelling infrastructure, metadata embedding, and accelerated grievance redressal workflows. For users, the regulatory signal is clear: generating deceptive synthetic media is no longer merely unethical; it may trigger direct legal consequences.

As AI tools continue to scale, the regulatory framework introduced through G.S.R. 120(E) marks India’s formal recognition that AI-generated content is not a fringe concern but a central governance challenge in the digital age.
The Olympic Games are more than just a massive celebration of sports; they’re a high-stakes business. Officially, the projected economic impact of the Winter Games — which kicked off on February 6 in Italy — is estimated at 5.3 billion euros. A lion’s share of that revenue is expected to come from fans
flocking in from around the globe — with over 2.5 million tourists predicted to visit Italy. Meanwhile, those staying home are tuning in via TV and streaming. According to the platforms, viewership ratings are already hitting their highest peaks since 2014.

But while athletes are grinding for medals and the world is glued to every triumph and heartbreak, a different set of “competitors” has entered the arena to capitalize on the hype and the trust of eager fans. Cyberscammers of all stripes have joined an illegal race for the gold, knowing full well that a frenzy is a fraudster’s best friend. Kaspersky experts have tracked numerous fraudulent schemes targeting fans during these Winter Games. Here’s how to avoid frustration in the form of fake tickets, non-existent merch, and shady streams, so you can keep your money and personal data safe.

Tickets to nowhere

The most popular scam on this year’s circuit is the sale of non-existent tickets. Usually, there are far fewer seats at the rinks and slopes than there are fans dying to see the main events. In a supply-and-demand crunch, folks scramble for any chance to snag those coveted passes, and that’s when phishing sites — clones of official vendors — come to the “rescue”. Using these, bad actors fish for fans’ payment details to either resell them on the dark web or drain their accounts immediately.

[Figure: This is what a fraudulent site selling fake Olympic tickets looks like]

Remember: tickets for any Olympic event are sold only through the authorized Olympic platform or its listed partners. Any third-party site or seller outside the official channel is a scammer. We’re putting that play in the penalty box!

A fake goalie mitt, a counterfeit stick…

Dreaming of a Sydney Sweeney — sorry, Sidney Crosby — jersey? Or maybe you want a tracksuit with the official Games logo? Scammers have already set up dozens of fake online stores just for you!
To pull off the heist, they use official logos, convincing photos, and padded rave reviews. You pay, and in return, you get… well, nothing but a transaction alert and your card info stolen.

[Figure: A fake online store for Olympic merchandise]

[Figure: Naive shoppers are being lured with gifts: "free" mugs and keychains featuring the Olympic mascot]

[Figure: And a hefty "discount" on pins]

I want my Olympic TV!

What if you prefer watching the action from the comfort of your couch rather than trekking from stadium to stadium, but you’re not exactly thrilled about paying for a pricey streaming subscription? Maybe there’s a free stream out there?

[Figure: The bogus streaming service warns you right away that you can't watch just like that — you have to register. But hey, it's free!]

[Figure: Another "media provider" fishes for emails to build spam lists or for future phishing...]

[Figure: ...But to watch the "free" broadcast, you have to provide your personal data and credit card info]

Sure thing! Five seconds of searching and your screen is flooded with dozens of “cheap”, “exclusive”, or even “free” live streams. They’ve got everything from figure skating to curling. But there’s a catch: for some reason — even though it’s supposedly free — a pop-up appears asking for your credit card details. You type them in and hit “Play”, but instead of the long-awaited free skate program, you end up on a webcam ad site or somewhere even sketchier. The result: no show for you. At best, you were just used for traffic arbitrage; at worst, they now have access to your bank account. Either way, it’s a major bummer.

Defensive tactics

Scammers have been ripping off sports fans for years, and their payday depends entirely on how well they can mimic official portals. To stay safe, fans should mount a tiered defense:

- Install reliable security software to block phishing, and keep a sharp eye on every URL you visit. If something feels even slightly off, never, ever enter your personal or payment info.
- Stick to authorized channels for tickets.
Steer clear of third-party resellers and always double-check info on the official Olympic website.
- Use legitimate streaming services. Read the reviews and don’t hand over your credit card details to unverified sites.
- Be wary of Olympic merch and gift vendors. Don’t get baited by “exclusive” offers or massive discounts from unknown stores. Only buy from official retail partners.
- Avoid links in emails, direct messages, texts, or ads offering free tickets, streams, promo codes, or prize giveaways.
- Deploy a robust security solution. For instance, Kaspersky Premium automatically shuts down phishing attempts and blocks dangerous websites, malicious ads, and credit card skimmers in real time.

Want to see how sports fans were targeted in the past? Check out our previous posts:
- Summer scams in Paris
- How to watch soccer safely
- Soccer Cyberthreats
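The “keep a sharp eye on every URL” advice can be partially automated. This illustrative sketch checks whether a link’s host belongs to an allowlist of official domains; the domains listed are placeholders, and real verification should rely on the vendor list published on the official Olympic website:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist -- the real set of authorized vendors is published
# on the official Olympic website, not hardcoded here.
OFFICIAL_DOMAINS = {"olympics.com", "tickets.olympics.com"}

def is_official(url: str) -> bool:
    """True only if the URL's host is an allowlisted domain or a subdomain of one.

    Substring checks are not enough: scammers register lookalikes such as
    'olympics.com.ticket-sale.example', so we compare whole host suffixes.
    """
    host = urlsplit(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in OFFICIAL_DOMAINS)
```

Note how a naive `"olympics.com" in url` test would wave the lookalike domain straight through, which is exactly the trick phishing sites count on.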
Drawing on years of adversary tradecraft, SpecterOps experts work alongside customers to analyze and eliminate attack paths, protect critical assets, and stay ahead of emerging threats.
Google on Thursday said it observed the North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on its targets, as various hacking groups continue to weaponize the tool for accelerating various phases of the cyber attack life cycle, enabling information operations, and even conducting model extraction attacks. "The
Cybersecurity researchers have discovered a fresh set of malicious packages across npm and the Python Package Index (PyPI) repository linked to a fake recruitment-themed campaign orchestrated by the North Korea-linked Lazarus Group. The coordinated campaign has been codenamed graphalgo in reference to the first package published in the npm registry. It's assessed to be active since May 2025. "
Threat activity this week shows one consistent signal — attackers are leaning harder on what already works. Instead of flashy new exploits, many operations are built around quiet misuse of trusted tools, familiar workflows, and overlooked exposures that sit in plain sight. Another shift is how access is gained versus how it’s used. Initial entry points are getting simpler, while post-compromise
A new 2026 market intelligence study of 128 enterprise security decision-makers (available here) reveals a stark divide forming between organizations – one that has nothing to do with budget size or industry and everything to do with a single framework decision. Organizations implementing Continuous Threat Exposure Management (CTEM) demonstrate 50% better attack surface visibility, 23-point
A significant chunk of the exploitation attempts targeting a newly disclosed security flaw in Ivanti Endpoint Manager Mobile (EPMM) can be traced back to a single IP address on bulletproof hosting infrastructure offered by PROSPERO. Threat intelligence firm GreyNoise said it recorded 417 exploitation sessions from 8 unique source IP addresses between February 1 and 9, 2026. An estimated 346
Apple on Wednesday released iOS, iPadOS, macOS Tahoe, tvOS, watchOS, and visionOS updates to address a zero-day flaw that it said has been exploited in sophisticated cyber attacks. The vulnerability, tracked as CVE-2026-20700 (CVSS score: 7.8), has been described as a memory corruption issue in dyld, Apple's Dynamic Link Editor. Successful exploitation of the vulnerability could allow an
A coordinated cyberattack that targeted Poland's energy infrastructure in late December 2025 has prompted cybersecurity agencies to issue urgent warnings to critical national infrastructure operators on both sides of the Atlantic. Read more in my article on the Fortra blog.
A 29-year-old Polish man has been charged in connection with a data breach that exposed the personal details of around 2.5 million customers of the popular Polish e-commerce website Morele.net. Read more in my article on the Hot for Security blog.
AI bots are having existential crises, inventing religions, and allegedly plotting against humanity... or so the internet would have you believe. We dig into Moltbook, the “AI-only” social network that sent Twitter into a meltdown, attracted breathless talk of the singularity, and turned out to be far less
Terminator and far more humans role-playing as bots. Plus we discuss why "vibe coding" your app might be a catastrophically bad idea, when security researchers can easily peek inside and rifle through your private messages, API keys, and databases. Also this week we learn that pro-Russian hackers are circling the Winter Olympics - or is it the Jamaican Bobsleigh team? All this and more is discussed in episode 454 of the "Smashing Security" podcast with cybersecurity veteran Graham Cluley, and special guest Iain Thomson.