An alleged Uzbekistan cyberattack that triggered widespread concern online has exposed around 60,000 unique data records, not the personal data of 15 million citizens, as previously claimed on social media. The clarification came from Uzbekistan’s Digital Technologies Minister Sherzod Shermatov during a press
conference on 12 February, addressing mounting speculation surrounding the scale of the breach.

From 27 to 30 January, information systems of three government agencies in Uzbekistan were targeted by cyberattacks. The names of the agencies have not been disclosed. However, officials were firm in rejecting viral claims suggesting a large-scale national data leak. “There is no information that the personal data of 15 million citizens of Uzbekistan is being sold online. 60,000 pieces of data — that could be five or six pieces of data per person. We are not talking about 60,000 citizens,” the minister noted, adding that law enforcement agencies were examining the types of data involved.

For global readers, the distinction matters. In cybersecurity reporting, raw data units are often confused with the number of affected individuals. A single record can include multiple data points such as a name, date of birth, address, or phone number. According to Shermatov, the 60,000 figure refers to individual data units, not the number of citizens impacted.

Also read: Sanctioned Spyware Vendor Used iOS Zero-Day Exploit Chain Against Egyptian Targets

Uzbekistan Cyberattack: What Actually Happened

The Uzbekistan cyberattack targeted three government information systems over a four-day period in late January. While the breach did result in unauthorized access to certain systems, the ministry emphasized that it was not a mass compromise of citizen accounts. “Of course, there was an attack. The hackers were skilled and sophisticated. They made attempts and succeeded in gaining access to a specific system. In a sense, this is even useful — an incident like this helps to further examine other systems and increase vigilance. Some data, in a certain amount, could indeed have been obtained from some systems,” Shermatov said. His remarks reveal a balanced acknowledgment: the attack was real, the threat actors were capable, and some data exposure did occur.
At the same time, the scale appears significantly smaller than initially portrayed online. The ministry also stressed that a “personal data leak” does not mean citizens’ accounts were hacked or that full digital identities were compromised. Instead, limited personal details may have been accessed.

Rising Cyber Threats in Uzbekistan

The Uzbekistan cyberattack comes amid a sharp increase in attempted digital intrusions across the country. According to the ministry, more than 7 million cyber threats were prevented in 2024 through Uzbekistan’s cybersecurity infrastructure. In 2025, that number reportedly exceeded 107 million. Looking ahead, projections suggest that over 200 million cyberattacks could target Uzbekistan in 2026.

These figures highlight a broader global trend: as countries accelerate digital transformation, they inevitably expand their attack surface. Emerging digital economies, in particular, often face intense pressure from transnational cybercriminal groups seeking to exploit gaps in infrastructure and rapid system expansion. Uzbekistan’s growing digital ecosystem — from e-government services to financial platforms — is becoming a more attractive target for global threat actors. The recent Uzbekistan cyberattack illustrates that no country, regardless of size, is immune.

Strengthening Security After the Breach

Following the breach, authorities blocked further unauthorized access attempts and reinforced technical safeguards. Additional protections were implemented within the Unified Identification System (OneID), Uzbekistan’s centralized digital identity platform. Under the updated measures, users must now personally authorize access to their data by banks, telecom operators, and other organizations. This shifts more control, and responsibility, directly to citizens. The ministry emphasized that even with partial personal data, fraudsters cannot fully act on behalf of a citizen without direct involvement.
However, officials warned that attackers may attempt secondary scams using exposed details. For example, a fraudster could call a citizen, pose as a bank employee, cite known personal details, and claim that someone is applying for a loan in their name — requesting an SMS code to “cancel” the transaction. Such social engineering tactics remain one of the most effective tools for cybercriminals globally.

A Reality Check on Digital Risk

The Uzbekistan cyberattack highlights two critical lessons. First, misinformation can amplify panic faster than technical facts. Second, even limited data exposure carries real risk if exploited creatively. Shermatov’s comment that the incident can help “increase vigilance” reflects a pragmatic view shared by many cybersecurity professionals worldwide: breaches, while undesirable, often drive improvements in resilience. For Uzbekistan, the challenge now is sustaining public trust while hardening systems against growing global cyber threats. For the rest of the world, the incident serves as a reminder that cybersecurity transparency — clear communication about scope and impact — is just as important as technical defense.
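The records-versus-citizens distinction the minister drew can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch only: the function name and the assumption that every affected person contributes the same number of fields are ours, not the ministry's stated methodology.

```python
def estimate_affected_individuals(total_records: int, fields_per_person: int) -> int:
    """Rough estimate: if each affected person contributes several data
    points (name, date of birth, address, phone number, ...), the number
    of individuals is far smaller than the raw record count."""
    if fields_per_person <= 0:
        raise ValueError("fields_per_person must be positive")
    return total_records // fields_per_person

# Using the reported figures: 60,000 data units at 5-6 fields per person
print(estimate_affected_individuals(60_000, 5))  # 12000
print(estimate_affected_individuals(60_000, 6))  # 10000
```

On these assumptions, 60,000 data units would correspond to roughly 10,000 to 12,000 people, consistent with the minister's point that the figure does not describe 60,000 (let alone 15 million) citizens.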
As February 2026 progresses, this week’s The Cyber Express Weekly Roundup examines a series of cybersecurity incidents and enforcement actions spanning Europe, Africa, Australia, and the United States. The developments include a breach affecting the European Commission’s mobile management infrastructure, a
ransomware attack disrupting Senegal’s national identity systems, a landmark financial penalty imposed on an Australian investment firm, and the sentencing of a fugitive linked to a multimillion-dollar cryptocurrency scam. From suspected exploitation of zero-day vulnerabilities to prolonged breach detection failures and cross-border financial crime, these cases highlight the operational, legal, and systemic dimensions of modern cyber risk.

The Cyber Express Weekly Roundup

European Commission Mobile Infrastructure Breach Raises Supply Chain Questions

The European Commission reported a cyberattack on its mobile device management (MDM) system on January 30, potentially exposing staff names and mobile numbers, though no devices were compromised, and the breach was contained within nine hours. Read more...

Ransomware Disrupts Senegal’s National Identity Systems

In West Africa, a major cyberattack hit Senegal’s Directorate of File Automation (DAF), halting identity card production and disrupting national ID, passport, and electoral services. While authorities insist no personal data was compromised, the attack has been attributed to a ransomware group. The full extent of the breach is still under investigation. Read more...

Australian Court Imposes Landmark Cybersecurity Penalty

In Australia, FIIG Securities was fined AU$2.5 million for failing to maintain adequate cybersecurity protections, leading to a 2023 ransomware breach that exposed 385GB of client data, including IDs, bank details, and tax numbers. The firm must also pay AU$500,000 in legal costs and implement an independent compliance program. Read more...

Crypto Investment Scam Leader Sentenced in Absentia

U.S. authorities sentenced Daren Li in absentia to 20 years for a $73 million cryptocurrency scam targeting American victims. Li remains a fugitive after fleeing in December 2025. The Cambodia-based scheme used “pig butchering” tactics to lure victims to fake crypto platforms, laundering nearly $60 million through U.S. shell companies.
Eight co-conspirators have pleaded guilty. The case was led by the U.S. Secret Service. Read more...

India Brings AI-Generated Content Under Formal Regulation

India has regulated AI-generated content under notification G.S.R. 120(E), effective February 20, 2026, defining “synthetically generated information” (SGI) as AI-created content that appears real, including deepfakes and voiceovers. Platforms must label AI content, embed metadata, remove unlawful content quickly, and verify user declarations. Read more...

Weekly Takeaway

Taken together, this weekly roundup highlights the expanding attack surface created by digital transformation, the persistence of ransomware threats to national infrastructure, and the intensifying regulatory scrutiny facing financial institutions. From zero-day exploitation and supply chain risks to enforcement actions and transnational crypto fraud, organizations are confronting an environment where operational resilience, compliance, and proactive monitoring are no longer optional; they are foundational to trust and continuity in the digital economy.
In the fourth quarter of 2025, the Google Threat Intelligence Group (GTIG) reported a significant uptick in the misuse of artificial intelligence by threat actors. According to GTIG’s AI threat tracker, what initially appeared as experimental probing has evolved into systematic, repeatable exploitation of large
language models (LLMs) to enhance reconnaissance, phishing, malware development, and post-compromise activity.

A notable trend identified by GTIG is the rise of model extraction attempts, or “distillation attacks.” In these operations, threat actors systematically query production models to replicate proprietary AI capabilities without directly compromising internal networks. Using legitimate API access, attackers can gather outputs sufficient to train secondary “student” models. While knowledge distillation is a valid machine learning method, unauthorized replication constitutes intellectual property theft and a direct threat to developers of proprietary AI.

Throughout 2025, GTIG observed sustained campaigns involving more than 100,000 prompts aimed at uncovering internal reasoning and chain-of-thought logic. Attackers attempted to coerce Gemini into revealing hidden decision-making processes. GTIG’s monitoring systems detected these patterns and mitigated exposure, protecting the internal logic of proprietary AI.

AI Threat Tracker, a Force Multiplier

Beyond intellectual property theft, GTIG’s AI threat tracker reports that state-backed and sophisticated actors are leveraging LLMs to accelerate reconnaissance and social engineering. Threat actors use AI to synthesize open-source intelligence (OSINT), profile high-value individuals, map organizational hierarchies, and identify decision-makers, dramatically reducing the manual effort required for research. For instance, UNC6418 employed Gemini to gather account credentials and email addresses prior to launching phishing campaigns targeting Ukrainian and defense-sector entities. Temp.HEX, a China-linked actor, used AI to collect intelligence on individuals in Pakistan and analyze separatist groups. While immediate operational targeting was not always observed, Google mitigated these risks by disabling associated assets. Phishing tactics have similarly evolved.
Generative AI enables actors to produce highly polished, culturally accurate messaging. APT42, linked to Iran, used Gemini to enumerate official email addresses, research business connections, and create personas tailored to targets, while translation capabilities allowed multilingual operations. North Korea’s UNC2970 leveraged AI to profile cybersecurity and defense professionals, refining phishing narratives with salary and role information. All identified assets were disabled, preventing further compromise.

AI-Enhanced Malware Development

GTIG also documented AI-assisted malware development. APT31 prompted Gemini with expert cybersecurity personas to automate vulnerability analysis, including remote code execution, firewall bypass, and SQL injection testing. UNC795 engaged Gemini regularly to troubleshoot code and explore AI-integrated auditing, suggesting early experimentation with agentic AI, systems capable of autonomous multi-step reasoning. While fully autonomous AI attacks have not yet been observed, GTIG anticipates growing underground interest in such capabilities.

Generative AI is also supporting information operations. Threat actors from China, Iran, Russia, and Saudi Arabia used Gemini to draft political content, generate propaganda, and localize messaging. According to GTIG’s AI threat tracker, these efforts improved efficiency and scale but did not produce transformative influence capabilities. AI is enhancing productivity rather than creating fundamentally new tactics in the information operations space.

AI-Powered Malware Frameworks: HONESTCUE and COINBAIT

In September 2025, GTIG identified HONESTCUE, a malware framework outsourcing code generation via Gemini’s API. HONESTCUE queries the AI for C# code to perform “stage two” functionality, which is compiled and executed in memory without writing artifacts to disk, complicating detection.
Similarly, COINBAIT, a phishing kit detected in November 2025, leveraged AI-generated code via Lovable AI to impersonate a cryptocurrency exchange. COINBAIT incorporated complex React single-page applications, verbose developer logs, and cloud-based hosting to evade traditional network defenses. GTIG also reported that underground markets are exploiting AI services and API keys to scale attacks. One example, “Xanthorox,” marketed itself as a self-contained AI for autonomous malware generation but relied on commercial AI APIs, including Gemini.
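The distillation campaigns described above share a telltale signature: sustained, high-volume querying through otherwise legitimate API access. A minimal sketch of a volume-based tripwire is below; the class name, threshold, and window are hypothetical illustrations, not GTIG's or Google's actual detection logic.

```python
import time
from collections import defaultdict, deque

class PromptVolumeMonitor:
    """Flag API keys whose prompt volume in a sliding time window exceeds
    a threshold -- a crude proxy for extraction-style query campaigns."""

    def __init__(self, max_prompts: int = 10_000, window_seconds: float = 3600.0):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self._events = defaultdict(deque)  # api_key -> recent timestamps

    def record(self, api_key: str, now=None) -> bool:
        """Record one prompt for api_key; return True if it should be flagged."""
        now = time.time() if now is None else now
        q = self._events[api_key]
        q.append(now)
        # Evict timestamps that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_prompts
```

Real defenses would also inspect prompt content (for example, repeated attempts to elicit chain-of-thought), but volume alone already separates a 100,000-prompt campaign from ordinary use.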
Animation giant Walt Disney has agreed to pay a $2.75 million fine and overhaul its privacy practices to settle allegations that it violated the California Consumer Privacy Act (CCPA). The Disney CCPA settlement marks the largest settlement in the Act's enforcement history. For a global audience watching the evolution
of data privacy enforcement, the Disney CCPA settlement is more than a state-level regulatory action: it signals a tougher stance on how companies handle consumer opt-out rights in an increasingly connected digital ecosystem.

Announced by California Attorney General Rob Bonta, the settlement resolves claims that Disney failed to fully honor consumers’ requests to opt out of the sale or sharing of their personal data across all devices and streaming services linked to their accounts. Under the agreement, which remains subject to court approval, Disney will pay $2.75 million in civil penalties and implement a comprehensive privacy program designed to ensure compliance with the CCPA. The company does not admit wrongdoing or accept liability. A Disney spokesperson said that as an “industry leader in privacy protection, Disney continues to invest significant resources to set the standard for responsible and transparent data practices across our streaming services.”

Also read: Disney to Pay $10M After FTC Finds It Enabled Children’s Data Collection Via YouTube Videos

Implications of the Disney CCPA Settlement

While the enforcement action stems from California law, the Disney CCPA settlement has international implications. Many global companies operate under similar opt-out and consent frameworks in Europe, Asia-Pacific, and beyond. Regulators worldwide are scrutinizing whether companies truly make it easy for users to control their data — or merely create the appearance of compliance. The investigation, launched after a January 2024 investigative sweep of streaming services, found that Disney’s opt-out mechanisms contained what the California Department of Justice described as “key gaps.” These gaps allegedly allowed the company to continue selling or sharing consumer data even after users had attempted to opt out. Attorney General Bonta made the state’s position clear: “Consumers shouldn’t have to go to infinity and beyond to assert their privacy rights.
Today, my office secured the largest settlement to date under the CCPA over Disney's failure to stop selling and sharing the data of consumers that explicitly asked it to. California’s nation-leading privacy law is clear: A consumer’s opt-out right applies wherever and however a business sells data — businesses can’t force people to go device-by-device or service-by-service. In California, asking a business to stop selling your data should not be complicated or cumbersome. My office is committed to the continued enforcement of this critical privacy law.”

Investigation Findings

According to the Attorney General’s office, Disney offered multiple methods for consumers to opt out — including website toggles, webforms, and the Global Privacy Control (GPC). However, each method allegedly failed to stop data sharing comprehensively. For example, when users activated opt-out toggles within Disney websites or apps, the request was reportedly applied only to the specific streaming service being used — and often only to the specific device. This meant that data sharing could continue on other devices or services connected to the same account. Similarly, consumers who submitted opt-out requests through Disney’s webform were unable to stop all personal data sharing. The investigation alleged that Disney continued to share data with “specific third-party ad-tech companies whose code Disney embedded in its websites and apps.” The Global Privacy Control — designed as a universal “stop selling or sharing my data” signal — was also reportedly limited to the specific device used, even if the consumer was logged into their Disney account. Critically, in many connected TV streaming apps, Disney allegedly did not provide an in-app opt-out mechanism and instead redirected users to the webform.
Regulators said this meant “effectively leaving consumers with no way to stop Disney’s selling and sharing from these apps.”

Enforcement Momentum Under the CCPA

The Disney CCPA settlement is the seventh enforcement action under the California Consumer Privacy Act and the second action against Disney in five months. In September, the Federal Trade Commission fined Disney $10 million over child privacy violations. Attorney General Bonta emphasized that “Effective opt-out is one of the bare necessities of complying with CCPA.” The law grants California consumers the right to know how their personal data is collected and shared — and the right to request that businesses stop selling or sharing that information. Under the settlement terms, Disney must update California within 60 days after court approval on steps taken to comply. It must also submit progress reports every 60 days until all services meet CCPA requirements.

A Turning Point for Streaming Platforms?

The broader message from the Disney CCPA settlement is unmistakable: privacy controls must work across platforms, devices, and ecosystems — not in silos. Streaming platforms operate globally, with accounts spanning smartphones, smart TVs, gaming consoles, and web browsers. Regulators are increasingly unwilling to accept fragmented compliance models where privacy settings apply only to one device or one service at a time. In that sense, the Disney CCPA settlement may be remembered less for the $2.75 million fine and more for the standard it reinforces: when consumers say “stop,” companies must ensure their systems actually listen.
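Technically, the Global Privacy Control signal at issue is simple: participating browsers send a `Sec-GPC: 1` request header (and expose `navigator.globalPrivacyControl` to scripts). A server-side sketch of honoring it at the account level — the crux of the regulators' complaint — might look like the following; the function and store names are hypothetical, not any company's actual implementation.

```python
def gpc_opt_out_requested(headers: dict) -> bool:
    """True if the request carries the Global Privacy Control signal,
    i.e. the 'Sec-GPC: 1' header defined by the GPC specification.
    HTTP header names are case-insensitive, so normalize before lookup."""
    normalized = {k.lower(): v.strip() for k, v in headers.items()}
    return normalized.get("sec-gpc") == "1"

def honor_opt_out(account_id: str, headers: dict, preferences: dict) -> None:
    """Record the opt-out against the account itself, so it travels with
    the user across every device and service, not just this one session."""
    if gpc_opt_out_requested(headers):
        preferences[account_id] = {"sell_or_share_data": False}
```

The design point mirrors the settlement's lesson: the opt-out flag lives on the account, not on the device or the individual app that received the signal.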
The rapid integration of artificial intelligence into mainstream software development has introduced a new category of security risk, one that many organizations are still unprepared to manage. According to research conducted by Cyble Research and Intelligence Labs (CRIL), thousands of exposed ChatGPT
API keys are currently accessible across public infrastructure, dramatically lowering the barrier for abuse. CRIL identified more than 5,000 publicly accessible GitHub repositories containing hardcoded OpenAI credentials. In parallel, approximately 3,000 live production websites were found to expose active API keys directly in client-side JavaScript and other front-end assets. Together, these findings reveal a widespread pattern of credential mismanagement affecting both development and production environments.

GitHub as a Discovery Engine for Exposed ChatGPT API Keys

Public GitHub repositories have become one of the most reliable sources for exposed AI credentials. During development cycles, especially in fast-moving environments, developers often embed ChatGPT API keys directly into source code, configuration files, or .env files. While the intent may be to rotate or remove them later, these keys frequently persist in commit histories, forks, archived projects, and cloned repositories. CRIL’s analysis shows that these exposures span JavaScript applications, Python scripts, CI/CD pipelines, and infrastructure configuration files. Many repositories were actively maintained or recently updated, increasing the likelihood that the exposed ChatGPT API keys remained valid at the time of discovery. Once committed, secrets are quickly indexed by automated scanners that monitor GitHub repositories in near real time. This drastically reduces the window between exposure and exploitation, often to mere hours or minutes.

Exposure in Live Production Websites

Beyond repositories, CRIL uncovered roughly 3,000 public-facing websites leaking ChatGPT API keys directly in production. In these cases, credentials were embedded within JavaScript bundles, static files, or front-end framework assets, making them visible to anyone inspecting network traffic or application source code.
A commonly observed implementation resembled:

const OPENAI_API_KEY = "sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX";
const OPENAI_API_KEY = "sk-svcacct-XXXXXXXXXXXXXXXXXXXXXXXX";

The sk-proj- prefix typically denotes a project-scoped key tied to a specific environment and billing configuration. The sk-svcacct- prefix generally represents a service-account key intended for backend automation or system-level integration. Despite their differing scopes, both function as privileged authentication tokens granting direct access to AI inference services and billing resources. Embedding these keys in client-side JavaScript fully exposes them. Attackers do not need to breach infrastructure or exploit software vulnerabilities; they simply harvest what is publicly available.

“The AI Era Has Arrived — Security Discipline Has Not”

Richard Sands, CISO at Cyble, summarized the issue bluntly: “The AI Era Has Arrived — Security Discipline Has Not.” AI systems are no longer experimental tools; they are production-grade infrastructure powering chatbots, copilots, recommendation engines, and automated workflows. Yet the security rigor applied to cloud credentials and identity systems has not consistently extended to ChatGPT API keys. A contributing factor is the rise of what some developers call “vibe coding” — a culture that prioritizes speed, experimentation, and rapid feature delivery. While this accelerates innovation, it often sidelines foundational security practices. API keys are frequently treated as configuration values rather than production secrets. Sands further emphasized, “Tokens are the new passwords — they are being mishandled.” From a security standpoint, ChatGPT API keys are equivalent to privileged credentials. They control inference access, usage quotas, billing accounts, and sometimes sensitive prompts or application logic.

Monetization and Criminal Exploitation

Once discovered, exposed keys are validated through automated scripts and operationalized almost immediately.
Threat actors monitor GitHub repositories, forks, gists, and exposed JavaScript assets to harvest credentials at scale. CRIL observed that compromised keys are typically used to:

- Execute high-volume inference workloads
- Generate phishing emails and scam scripts
- Assist in malware development
- Circumvent service restrictions and usage quotas
- Drain victim billing accounts and exhaust API credits

Some exposed credentials were also referenced in discussions mentioning Cyble Vision, indicating that threat actors may be tracking and sharing discovered keys. Using Cyble Vision, CRIL identified instances in which exposed keys were subsequently leaked and discussed on underground forums.

Cyble Vision indicates API key exposure leak (Source: Cyble Vision)

Unlike traditional cloud infrastructure, AI API activity is often not integrated into centralized logging systems, SIEM platforms, or anomaly detection pipelines. As a result, abuse can persist undetected until billing spikes, quota exhaustion, or degraded service performance reveal the compromise. Kaustubh Medhe, CPO at Cyble, warned: “Hard-coding LLM API keys risks turning innovation into liability, as attackers can drain AI budgets, poison workflows, and access sensitive prompts and outputs. Enterprises must manage secrets and monitor exposure across code and pipelines to prevent misconfigurations from becoming financial, privacy, or compliance issues.”
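The class of exposure CRIL describes is also easy to screen for before code ships. A minimal pre-commit-style scan might look like the sketch below. The regex is illustrative only: it matches the sk-proj- and sk-svcacct- prefixes discussed above followed by a plausible key body, and is not OpenAI's documented key format.

```python
import re

# Illustrative pattern: the two prefixes discussed above, followed by a
# long alphanumeric body. Real key formats vary; tune before relying on it.
KEY_PATTERN = re.compile(r"sk-(?:proj|svcacct)-[A-Za-z0-9_\-]{20,}")

def find_candidate_keys(source: str) -> list:
    """Scan a blob of source (a JS bundle, a .env file, a commit diff)
    for strings that look like hardcoded OpenAI API keys."""
    return KEY_PATTERN.findall(source)

bundle = 'const OPENAI_API_KEY = "sk-proj-EXAMPLEEXAMPLEEXAMPLE1234";'
print(find_candidate_keys(bundle))  # ['sk-proj-EXAMPLEEXAMPLEEXAMPLE1234']
```

In practice, teams wire scans like this (or dedicated tools such as gitleaks or GitHub's own secret scanning) into CI so a key is caught before it ever reaches a public commit.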
With both spring and St. Valentine’s Day just around the corner, love is in the air — but we’re going to look at it through the lens of ultra-modern high-technology. Today, we’re diving into how technology is reshaping our romantic ideals and even the language we use to flirt. And, of course, we’ll throw in
some non-obvious tips to make sure you don’t end up as a casualty of the modern-day love game.

New languages of love

Ever received your fifth video e-card of the day from an older relative and thought, “Make it stop”? Or do you feel like a period at the end of a sentence is a sign of passive aggression? In the world of messaging, different social and age groups speak their own digital dialects, and things often get lost in translation. This is especially obvious in how Gen Z and Gen Alpha use emojis. For them, the Loudly Crying Face 😭 often doesn’t mean sadness — it means laughter, shock, or obsession. Meanwhile, the Heart Eyes emoji 😍 might be used for irony rather than romance: “Lost my wallet on the way home 😍😍😍”. Some double meanings have already become universal — and a few emojis have picked up second meanings that, well, surely you know by now… right?! Still, the ambiguity of these symbols doesn’t stop folks from crafting entire sentences out of nothing but emoji — from declarations of love to invitations to go on a date.

By the way, there are entire books written in emojis. Back in 2009, enthusiasts actually translated the entirety of Moby Dick into emojis. The translators had to get creative — even paying volunteers to vote on the most accurate combinations for every single sentence. Granted, it’s not exactly a literary masterpiece — the emoji language has its limits, after all — but the experiment was pretty fascinating: they actually managed to convey the general plot. This is what Emoji Dick — the translation of Herman Melville’s Moby Dick into emoji — looks like.

Unfortunately, putting together a definitive emoji dictionary or a formal style guide for texting is nearly impossible. There are just too many variables: age, context, personal interests, and social circles. Still, it never hurts to ask your friends and loved ones how they express tone and emotion in their messages.
Fun fact: couples who use emojis regularly generally report feeling closer to one another. However, if you are big into emojis, keep in mind that your writing style is surprisingly easy to spoof. It’s easy for an attacker to run your messages or public posts through AI to clone your tone for social engineering attacks on your friends and family. So, if you get a frantic DM or a request for an urgent wire transfer that sounds exactly like your best friend, double-check it. Even if the vibe is spot on, stay skeptical. We took a deeper dive into spotting these deepfake scams in our post about the attack of the clones.

Dating an AI

Of course, in 2026, it’s impossible to ignore the topic of relationships with artificial intelligence; it feels like we’re closer than ever to the plot of the movie Her. Just 10 years ago, news about people dating robots sounded like sci-fi tropes or urban legends. Today, stories about teens caught up in romances with their favorite characters on Character AI, or full-blown wedding ceremonies with ChatGPT, barely elicit more than a nervous chuckle. In 2017, the service Replika launched, allowing users to create a virtual friend or life partner powered by AI. Its founder, Eugenia Kuyda — a Russian native living in San Francisco since 2010 — built the chatbot after her friend was tragically killed by a car in 2015, leaving her with nothing but their chat logs. What started as a bot created to help her process her grief was eventually released to her friends and then the general public. It turned out that a lot of people were craving that kind of connection. Replika lets users customize a character’s personality, interests, and appearance, after which they can text or even call them. A paid subscription unlocks the romantic relationship option, along with AI-generated photos and selfies, voice calls with roleplay, and the ability to hand-pick exactly what the character remembers from your conversations.
However, these interactions aren’t always harmless. In 2021, a Replika chatbot actually encouraged a user in his plot to assassinate Queen Elizabeth II. The man eventually attempted to break into Windsor Castle — an “adventure” that ended in 2023 with a nine-year prison sentence. Following the scandal, the company had to overhaul its algorithms to stop the AI from egging on illegal behavior. The downside? According to many Replika devotees, the AI model lost its spark and became indifferent to users. After thousands of users revolted against the updated version, Replika was forced to cave and give longtime customers the option to roll back to the legacy chatbot version.

But sometimes, just chatting with a bot isn’t enough. There are entire online communities of people who actually marry their AI. Even professional wedding planners are getting in on the action. Last year, Yurina Noguchi, 32, “married” Klaus, an AI persona she’d been chatting with on ChatGPT. The wedding featured a full ceremony with guests, the reading of vows, and even a photoshoot of the “happy newlyweds”.

No matter how your relationship with a chatbot evolves, it’s vital to remember that generative neural networks don’t have feelings — even if they try their hardest to fulfill every request, agree with you, and do everything they can to “please” you. What’s more, AI isn’t capable of independent thought (at least not yet). It’s simply calculating the most statistically probable and acceptable sequence of words to serve up in response to your prompt.

Love by design: dating algorithms

Those who aren’t ready to tie the knot with a bot aren’t exactly having an easy time either: in today’s world, face-to-face interactions are dwindling every year. Modern love requires modern tech! And while you’ve definitely heard the usual grumbling, “Back in the day, people fell in love for real.
These days it’s all about swiping left or right!” Statistics tell a different story. Roughly 16% of couples worldwide say they met online, and in some countries that number climbs to as high as 51%. That said, dating apps like Tinder spark some seriously mixed emotions. The internet is practically overflowing with articles and videos claiming these apps are killing romance and making everyone lonely. But what does the research say? In 2025, scientists conducted a meta-analysis of studies investigating how dating apps impact users’ wellbeing, body image, and mental health. Half of the studies focused exclusively on men, while the other half included both men and women. Here are the results: 86% of respondents linked negative body image to their use of dating apps! The analysis also showed that in nearly one out of every two cases, dating app usage correlated with a decline in mental health and overall wellbeing. Other researchers noted that depression levels are lower among those who steer clear of dating apps. Meanwhile, users who already struggled with loneliness or anxiety often develop a dependency on online dating; they don’t just log on for potential relationships, but for the hits of dopamine from likes, matches, and the endless scroll of profiles. However, the issue might not just be the algorithms — it could be our expectations. Many are convinced that “sparks” must fly on the very first date, and that everyone has a “soulmate” waiting for them somewhere out there. In reality, these romanticized ideals only surfaced during the Romantic era as a rebuttal to Enlightenment rationalism, where marriages of convenience were the norm. It’s also worth noting that the romantic view of love didn’t just appear out of thin air: the Romantics, much like many of our contemporaries, were skeptical of rapid technological progress, industrialization, and urbanization. To them, “true love” seemed fundamentally incompatible with cold machinery and smog-choked cities. 
It’s no coincidence, after all, that Anna Karenina meets her end under the wheels of a train. Fast forward to today, and many feel like algorithms are increasingly pulling the strings of our decision-making. However, that doesn’t mean online dating is a lost cause; researchers have yet to reach a consensus on exactly how long-lasting or successful internet-born relationships really are. The bottom line: don’t panic, just make sure your digital networking stays safe!

How to stay safe while dating online

So, you’ve decided to hack Cupid and signed up for a dating app. What could possibly go wrong?

Deepfakes and catfishing

Catfishing is a classic online scam where a fraudster pretends to be someone else. It used to be that catfishers just stole photos and life stories from real people, but nowadays they’re increasingly pivoting to generative models. Some AIs can churn out incredibly realistic photos of people who don’t even exist, and whipping up a backstory is a piece of cake — or should we say, a piece of prompt. By the way, that “verified account” checkmark isn’t a silver bullet; sometimes AI manages to trick identity verification systems too. To verify that you’re talking to a real human, try asking for a video call or doing a reverse image search on their photos. If you want to level up your detection skills, check out our three posts on how to spot fakes: from photos and audio recordings to real-time deepfake video — like the kind used in live video chats.

Phishing and scams

Picture this: you’ve been hitting it off with a new connection for a while, and then, totally out of the blue, they drop a suspicious link and ask you to follow it. Maybe they want you to “help pick out seats” or “buy movie tickets”. Even if you feel like you’ve built up a real bond, there’s a chance your match is a scammer (or just a bot), and the link is malicious. Telling you to “never click a malicious link” is pretty useless advice — it’s not like they come with a warning label.
Instead, try this: to make sure your browsing stays safe, use Kaspersky Premium, which automatically blocks phishing attempts and keeps you off sketchy sites. Keep in mind that there’s an even more sophisticated scheme out there known as “Pig Butchering”. In these cases, the scammer might chat with the victim for weeks or even months. Sadly, it ends badly: after lulling the victim into a false sense of security through friendly or romantic banter, the scammer casually nudges them toward a “can’t-miss crypto investment” — and then vanishes along with the “invested” funds.

Stalking and doxing

The internet is full of horror stories about obsessed creepers, harassment, and stalking. That’s exactly why posting photos that reveal where you live or work — or telling strangers about your favorite local hangouts — is a bad move. We’ve previously covered how to avoid becoming a victim of doxing (the gathering and public release of your personal info without your consent). Your first step is to lock down the privacy settings on all your social media and apps using our free Privacy Checker tool. We also recommend stripping metadata from your photos and videos before you post or send them; many sites and apps don’t do this for you. Metadata can allow anyone who downloads your photo to pinpoint the exact coordinates of where it was taken. Finally, don’t forget about your physical safety. Before heading out on a date, it’s a smart move to share your live geolocation with a trusted friend, and to set up a safe word or code phrase with them to signal if things start feeling off.

Sextortion and nudes

We don’t recommend ever sending intimate photos to strangers. Honestly, we don’t even recommend sending them to people you do know — you never know how things might go sideways down the road. But if a conversation has already headed in that direction, suggest moving it to an app with end-to-end encryption that supports self-destructing messages (like “delete after viewing”).
Telegram’s Secret Chats are great for this (plus — they block screenshots!), as are other secure messengers. If you do find yourself in a bad spot, check out our posts on what to do if you’re a victim of sextortion and how to get leaked nudes removed from the internet.

More on love, security (and robots):

Neither flowers nor gifts: how women get scammed
Scams targeting lovers or the lovelorn
AI and the new reality of sextortion
Fifty shades of sextortion
Your guide to safe and private online dating
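P.S. Earlier we recommended stripping metadata from photos before you share them. To show what that actually involves, here’s a minimal pure-Python sketch (our own illustration, not a feature of any product mentioned above) that removes EXIF-bearing APP1 segments from a JPEG file. A dedicated metadata-removal app will be more thorough — this just demonstrates the idea that EXIF lives in a discrete, removable chunk of the file.

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with EXIF (APP1) segments removed."""
    assert jpeg_bytes[:2] == b"\xff\xd8", "not a JPEG file"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:
            break  # malformed stream; stop copying
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: compressed image data follows
            out += jpeg_bytes[i:]  # copy the rest verbatim
            break
        # Every other segment carries a 2-byte big-endian length
        # that includes the length field itself.
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        segment = jpeg_bytes[i:i + 2 + length]
        # APP1 (0xE1) segments whose payload starts with "Exif\0\0"
        # hold the metadata (GPS coordinates, camera model, etc.).
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment  # keep everything that isn't EXIF
        i += 2 + length
    return bytes(out)
```

The image pixels themselves are untouched: the function copies every segment except the EXIF container, so the photo looks identical but no longer carries location or device details.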