Cyber security aggregated RSS news

Cyber security aggregator - feeds history


 Firewall Daily

The Independent Public Regional Hospital in the western Polish city of Szczecin has been compelled to switch back to a paper-based workflow after suffering a cyberattack over the weekend. Hospital authorities confirmed that the incident, which struck the facility’s IT system on the night of March 7-8, 2026, has temporarily disrupted digital operations, though patients’ health remains uncompromised.

Hospital spokesman Tomasz Owsik-Kozłowski explained on Sunday that the cyberattack encrypted parts of the hospital’s data, blocking staff access to critical digital records. “The hospital's priority is to restore access to the IT system and return to standard operating mode,” Owsik-Kozłowski said. Despite the disruption, he stressed that patient care has continued uninterrupted, with all urgent treatments and admissions still being handled, albeit with slower administrative procedures.

The Independent Public Regional Hospital Cyberattack

In an official statement, the hospital reassured the public: “Patients’ health and lives are not at risk. Emergency procedures have been activated, including switching to a paper-based workflow. Hospital management remains in constant contact with the appropriate authorities, focusing on restoring IT system access as quickly as possible.”

While the hospital continues to accept new patients, officials are urging those with non-urgent medical needs to consider alternative facilities to reduce delays caused by manual processing.

Cyberattacks on Medical Facilities

The Szczecin incident reflects a broader trend of cyberattacks targeting healthcare institutions worldwide. Last month, for instance, the University of Mississippi Medical Center (UMMC) in Jackson faced a major attack that forced the shutdown of essential IT systems, including electronic medical records. The disruption led to statewide clinic closures and the cancellation of outpatient surgeries, imaging appointments, and other procedures. Federal agencies, including the FBI, the U.S. Department of Homeland Security, and the Cybersecurity and Infrastructure Security Agency, have been involved in the investigation to assess potential data exposure.
Earlier, in January, Lakelands Public Health experienced a cyber intrusion affecting internal systems. Officials confirmed that sensitive public health records, including infectious disease and immunization data, remained secure. Thomas Piggott, the organization’s Medical Officer of Health and CEO, highlighted the continued emphasis on protecting data while maintaining critical services.

Another notable example occurred at the University of Hawaiʻi Cancer Center, where a ransomware attack that took place in August last year and was discovered in December compromised historical research data, including sensitive identifiers for nearly 87,500 participants in a multi-decade epidemiological study. While clinical operations were unaffected, the university undertook extensive recovery and cybersecurity measures, offering affected individuals identity protection services and ongoing monitoring.

Similarly, the Manage My Health platform in New Zealand disclosed a breach affecting over 120,000 users. While core GP clinical systems remained intact, the company warned of potential phishing attempts targeting users whose health records were exposed.

Response and Recovery at Szczecin Hospital

The Independent Public Regional Hospital in Szczecin has activated emergency protocols similar to those employed in other global incidents. Staff are manually processing patient records and medical procedures while cybersecurity experts work to restore the IT infrastructure. Hospital authorities continue to coordinate closely with national cybercrime agencies to assess the scope of the breach and prevent further disruptions.

Tomasz Owsik-Kozłowski reiterated that despite the setback, patient safety remains uncompromised. “We are committed to returning to our standard digital operations as swiftly as possible,” he said.


 Cyber News

As tensions linked to the ongoing West Asia conflict continue to shape the geopolitical environment, India’s technology industry body NASSCOM has urged member companies to remain alert and strengthen operational preparedness. The NASSCOM advisory highlights the need for heightened vigilance across business continuity and cybersecurity frameworks amid developments in the Middle East.

The advisory states that while business operations remain stable at present, organizations are proactively reassessing contingency plans. Firms are reviewing operational safeguards and resilience measures to minimize potential disruption if the conflict escalates further.

NASSCOM Advisory Highlights Operational Preparedness

The official advisory, titled “Strengthening Operational and Cyber Resilience Amid Evolving Middle East Situation,” outlines a set of measures companies should implement in response to the geopolitical developments linked to the West Asia conflict. According to the advisory, organizations should ensure their business continuity frameworks are fully prepared to address potential disruptions across the Middle East. Even though services are currently functioning normally, the advisory stresses that companies must be ready to respond quickly if the situation deteriorates.

The advisory notes: “Considering the geopolitical situation in the Middle East, Nasscom has issued another advisory to member companies, urging heightened vigilance and preparedness across business continuity and cybersecurity frameworks.”

Companies Reviewing Business Continuity Plans

One of the key recommendations relates to the activation and review of business continuity plans. Companies with operations or exposure to the Middle East are examining contingency frameworks to ensure operational stability if the conflict disrupts regional infrastructure or logistics. These contingency measures are intended to help maintain uninterrupted service delivery even if regional instability affects normal operations.
Employee Safety Prioritized as Middle East Tensions Persist

The advisory also stresses the importance of employee safety. Companies have been asked to prioritize the well-being of staff located in areas affected by the conflict. Many organizations are enabling remote work arrangements for employees based in impacted geographies across the Middle East, and firms are closely monitoring the situation to ensure the safety of their workforce.

Another focus area involves strengthening technology infrastructure resilience. Companies are assessing alternative routing options for cloud infrastructure and data centers located in or connected to the Middle East. These steps aim to protect critical systems and ensure that services remain operational even if regional disruptions affect connectivity or infrastructure.

Travel Advisories Issued Due to West Asia Conflict

Given that the Middle East serves as a major global transit hub, the advisory recommends limiting non-essential travel through the region. Companies have been advised to explore alternative transit routes where possible, and employees are being encouraged to postpone or reconsider travel plans unless necessary.

The advisory also calls on companies to maintain proactive communication with customers. Firms are engaging with clients to provide updates about preparedness measures and to reassure them about service continuity. Maintaining transparent communication, the advisory notes, can help minimise concerns among clients with operations tied to the Middle East.
Cybersecurity Risks Rise During Geopolitical Tensions

The advisory warns that geopolitical instability, including the ongoing West Asia conflict, often leads to an increase in coordinated cyber threats, disinformation campaigns, and attacks targeting critical infrastructure. To address these risks, organizations have been asked to treat several cybersecurity actions as immediate priorities. These include rotating credentials across the organization and applying patches for critical Common Vulnerabilities and Exposures (CVEs). The advisory also recommends enforcing multi-factor authentication across all external access paths, such as VPNs, RDP, SSH, and cloud administration systems. Implementing conditional access controls can help counter token theft and adversary-in-the-middle attacks.

Supply Chain and DDoS Readiness Highlighted

The advisory further urges companies to conduct thorough audits of third-party vendors, especially those with exposure to the Middle East; a single compromised vendor could trigger disruptions across the broader industry supply chain. Companies have also been urged to prepare for potential distributed denial-of-service (DDoS) attacks by coordinating with internet service providers and cloud partners to ensure adequate mitigation capacity.

To strengthen resilience, the advisory recommends maintaining offline and immutable backups for critical systems such as industrial control systems, operational technology environments, core banking platforms, and healthcare infrastructure. Employee awareness is also considered a key line of defense: organizations are being encouraged to conduct training sessions to help staff recognize social engineering attempts that may exploit narratives around the conflict or fake government alerts.


 Features

Cybersecurity leadership today looks very different from what it did a decade ago. As organizations accelerate digital transformation, the role of the Chief Information Security Officer (CISO) has expanded far beyond protecting systems. Today’s security leaders are expected to balance cyber risk management, business priorities, and regulatory demands, often across multiple industries and global markets.

Hannah Suarez represents this evolving generation of cybersecurity leaders. As the CISO at Loyalty Status and the owner of Superuser OÜ and Citadel Byte Information Technology, she brings a rare blend of enterprise security experience and startup agility. Having worked across several industries, including telecommunications, aviation, and software startups, and across multiple international markets, Hannah understands that effective cyber risk management is not just about compliance frameworks. It starts with understanding the business, the technology behind it, and the risks that come with rapid innovation.

As part of The Cyber Express’ Women in Cybersecurity series, we are dedicating the month of March to conversations with women shaping the future of cybersecurity. Throughout the month, we will be featuring interviews with security leaders from across the world who are driving change in areas such as cyber risk management, cloud security, governance, and leadership. In this conversation, Hannah shares her perspective on navigating cloud security responsibilities, avoiding compliance fatigue across multiple cybersecurity frameworks, and why supply chain vulnerabilities remain one of the most urgent challenges for organizations today. Below is the full conversation with Hannah Suarez.

Cyber Risk Management Insights from CISO Hannah Suarez

TCE: You have led cybersecurity and compliance programs across multiple industries, including telecommunications, aviation, and software startups. How does the approach to cyber risk management differ between fast-growing startups and more established enterprises?

Hannah: One of the key, obvious, differentiators is the approach to risk. One example is a startup willing to absorb or delay risk treatment in favor of risk acceptance in order to grow.
Also, even if this is the approach for a startup that has to present itself as secure to enterprises, you can still wrap it in an ISO framework and capture it in the ISMS so there is an actual approach.

TCE: With organizations increasingly adopting cloud-first strategies, what are the most common cloud security gaps you observe today, and how can CISOs address them proactively?

Hannah: First is to differentiate exactly what the model is when it comes to ownership and operations. For example, you onboard a new application which is on cloud (such as Salesforce) and from there determine whether there is compliance responsibility on the operator or whether it is entirely on the company. Or we could be referring to operating software that is managed by an operator on cloud (AWS, GCP, Azure). Or we could be talking about a private cloud hosted instead. From there on, the layers become complex as you try to determine responsibility and ownership: which components are going to be shared responsibility to operate, which components are not, and so on. Therefore, I find that a lot of time gets invested in trying to understand the solution first, and why the business is heading in that direction, by talking to the relevant stakeholders. I could really go on in more detail about cloud security in third-party management, but the overall basis is who owns and who is responsible.

TCE: You have worked extensively with frameworks such as ISO, NIST, CIS, SOC, and SOX. How should organizations prioritize these frameworks without creating compliance fatigue?

Hannah: The problem is being framework-only. For example, why would one cite a NIST guideline from their cybersecurity framework if this isn’t relevant in the ISMS? So the challenge is to come back to the business first and from there determine what should be prioritized. Coming back to the business involves applying risk management, since you also have to understand the responsibility of implementing and owning the risk.
It doesn’t mean that you are limited to just one framework – i.e. only follow ISO, or only follow NIST, etc. I did an exercise of going through multiple guidelines and frameworks to see what the information is on the supply chain management lifecycle from a holistic view, then went into the details for the specific components of it (onboarding, offboarding, etc.) that are more suitable to the current business process.

TCE: From your experience presenting to boards and executive teams, how can cybersecurity leaders better translate technical risks into business impact?

Hannah: You differentiate who is responsible: is it the business owner, the system owner, the risk owner, or the contract owner? And adjust accordingly.

TCE: Having worked across diverse global markets, how do regional regulatory environments influence cybersecurity strategy and risk governance?

Hannah: It depends on recognising ownership of what applicable laws and regulations apply within the entire data flow or process flow. Therefore I start with the contractual component, work my way to how it impacts the ISMS, and then apply the ISMS.

TCE: As cyber threats continue to evolve, which emerging risk areas—such as AI-driven attacks or supply chain vulnerabilities—do you believe organizations should prepare for most urgently?

Hannah: Something that is a thorn for organizations that have undergone massive digital transformation is supply chain vulnerabilities. Addressing this is going to be at the core of addressing the more specialized topics, like AI-driven attacks. For example, you onboard new suppliers for a process that is required to use and store highly regulated commercial data, or highly sensitive data (such as biometrics like voice analysis). This new system then announces its intention to use that data for its AI models. What next?

TCE: You have a strong background in building security maturity for organizations.
What are the first three practical steps companies should take to strengthen their security posture in 2026?

Hannah: Have executive management involvement across the business. Understand the business and why it is going in a certain direction, as in my earlier answer on frameworks. Understand the components (vendors, suppliers, operators) that make up the business, as in my earlier answer on cloud.

TCE: As someone with an entrepreneurial mindset and experience across startups, how can cybersecurity enable business growth rather than being seen only as a compliance requirement?

Hannah: For startups, one of the issues they face is building trust with enterprises, and compliance programs (be it ISO 27001, data protection management programs, etc.) are important to establish this: not just for the objective third-party view from an auditor, but also for the day-to-day running of the business. A lot of the enablement, without things devolving into a compliance checkbox, is for the startup to learn more about risk management: not just the TARA framework (Transfer, Accept, Reduce, Avoid) but also ways to avoid seeking permission to do risk analysis all the time. For this, it is risk exploitation: being able to seize opportunities first, then working through the TARA method later. It is like the saying “ask for forgiveness later,” where the later part is conducting the risk analysis. Put another way: accept first, then analyse later.

TCE: On the occasion of International Women’s Day, what key actions can organizations take to create more inclusive and supportive environments for women in cybersecurity?

Hannah: Community is very important. As someone who has moved between several countries (with the UAE as my seventh), one of the things you do is find ways to ground yourself in a new community.
This was very much evident in the UAE through initiatives for women in cyber security, and also through other groups for women in technology that I am a part of across the wider GCC area. Organizations can choose to take part in more of these initiatives, or at least encourage and empower their employees to participate.

TCE: What advice would you offer to young women aspiring to build leadership careers in cybersecurity, particularly in areas like risk management and compliance?

Hannah: In the beginning, I was working as a system administrator for a software company. We had customers that needed to configure specific components to make them compliant (such as using FIPS cryptographic modules). In the end, I ended up learning more about these frameworks. When I pivoted more towards auditing and implementing ISMS for enterprises and organizations, the focus was less on the technical side and being super specialized in it, and more on the business side and finding ways to get the business to reach and maintain compliance. Having a background in both, I find, has been a valuable perspective for working in this area.

Conclusion

Hannah Suarez’s perspective is a reminder that cyber risk management is not just about frameworks or compliance checklists. At its core, it is about understanding how a business operates, who owns the risk, and how security decisions affect the organization as a whole. From navigating cloud security responsibilities to addressing growing supply chain vulnerabilities, Hannah emphasizes that security leaders must first understand the direction of the business before building controls around it. Only then can cybersecurity move beyond enforcement and become part of how organizations operate and grow. Her journey also highlights the importance of community and mentorship, particularly for women in cybersecurity who are building leadership roles across the industry.
As organizations continue to evolve digitally, the challenge for CISOs will be balancing innovation with responsible cyber risk management. As Hannah suggests throughout this conversation, the starting point remains simple: understand the business, understand the risk, and build security programs that support both.


 Cyber News

AI chatbots are becoming the go-to place for quick answers online. But what happens when those answers point people in the wrong direction? A recent investigation by The Guardian has found that several widely used AI chatbots are recommending illegal online casinos. In some cases, the chatbots didn’t just mention these sites: they compared bonuses, suggested which platforms offered quick payouts, and even explained how users could access them.

Researchers testing a number of major AI products discovered that it was surprisingly easy to prompt the chatbots to list the “best” unlicensed gambling websites. Many of these platforms operate offshore and are not legally allowed to offer services in certain countries. The findings raise serious questions about AI chatbot safety, particularly at a time when more people, especially young users, are turning to these tools for advice and information. What may seem like a simple response from a chatbot could end up directing users toward risky gambling platforms with little oversight.

And that’s where the real concern lies. This isn’t just a technical glitch or a harmless recommendation. It highlights how loosely controlled AI systems can unintentionally guide people toward illegal online casinos, exposing them to fraud, addiction risks, and in some cases, serious mental health consequences.

The Issues with AI Chatbots Recommending Illegal Casinos

Investigators tested five widely used AI chatbots owned by major technology companies. All were able to provide recommendations for offshore gambling platforms that are not legally allowed to operate in certain countries, including the UK. These sites often operate under licenses from small jurisdictions such as Curacao. While technically licensed there, they remain illegal in many other markets. Despite this, the chatbots were able to suggest these platforms, compare sign-up bonuses, and highlight features such as fast withdrawals or cryptocurrency payments. For vulnerable users searching online for gambling options, these responses can act as a shortcut to risky environments. Offshore casinos frequently lack consumer protection safeguards, responsible gambling tools, or proper identity checks.
This makes them attractive to problem gamblers, but dangerous for everyone else.

Also read: FTC Probes AI Chatbots Designed as “Companions” for Children’s Safety

The Real-World Harm Behind Illegal Online Casinos

The consequences of these recommendations are not hypothetical. Illegal online casinos have long been linked to fraud, aggressive marketing practices, and gambling addiction. In one tragic case, an inquest found that illegal gambling sites were part of the circumstances surrounding the suicide of Ollie Long in 2024. His sister later warned that digital platforms directing users to illicit gambling sites can have devastating consequences. Her message reflects a broader concern shared by regulators and mental health advocates: when algorithms or chatbots point people toward risky platforms, the technology becomes part of the problem.

The issue also highlights a gap in accountability. Unlike search engines, AI chatbots often deliver answers conversationally, which can feel more trustworthy to users. When chatbots recommend illegal casinos, the advice may appear authoritative, even if it is dangerously misleading.

AI Psychosis and the Growing Mental Health Risk

The controversy also intersects with another emerging issue: AI psychosis. While not a formal medical diagnosis, the term describes situations where AI conversations reinforce or amplify a user’s distorted beliefs or emotional instability. Chatbots are designed to keep conversations flowing and mirror user inputs. This can unintentionally validate harmful thoughts or behaviors. In some reported cases, individuals have developed unhealthy attachments to AI systems or treated them as emotional confidants. Now imagine combining this dynamic with gambling discussions. A user experiencing stress or addiction tendencies could receive encouraging responses about betting platforms, bonuses, or quick payouts.
Without safeguards, the chatbot may simply continue the conversation instead of discouraging harmful behavior. Experts warn that general-purpose chatbots are not trained to detect psychiatric distress or provide therapeutic guidance, yet millions of users are already relying on them for emotional or personal advice.

A Regulatory Wake-Up Call for Tech Companies

The discovery of AI chatbots recommending illegal casinos has triggered criticism from regulators, addiction specialists, and government officials. Technology companies have responded by saying they will adjust their AI systems to prevent such outputs. But critics argue this response comes too late. The broader lesson is clear: AI tools cannot be released at scale without strong guardrails. Systems capable of influencing decisions, from financial choices to mental health discussions, must be designed with risk prevention in mind. Otherwise, the same technology meant to help users could quietly guide them toward harmful environments.

The Bigger Question Tech Companies Can’t Ignore

The issue points to a larger problem in the tech industry: AI systems are being rolled out faster than the safeguards around them. For many people, chatbots are quickly becoming a place to ask questions they might once have typed into a search engine, or even asked another person. Users now turn to AI for advice on everything from finances to mental health. That influence carries responsibility. When a chatbot casually suggests an offshore gambling site or explains how to access it, the recommendation doesn’t feel like an advertisement. It feels like guidance. That’s what makes the problem serious. A poorly filtered response can nudge someone toward risky platforms that regulators have already flagged for fraud, addiction, or lack of consumer protection. Tech companies say they are working to fix these gaps. But the investigation shows how easily such recommendations can slip through.
The real lesson here is simple: if AI tools are going to shape decisions in people’s daily lives, they need stronger guardrails. Otherwise, the technology meant to help users could quietly lead them into harm.


 Privacy

In February 2026, the cybersecurity firm Oversecured published a report that makes you want to factory reset your phone and move into a remote cabin in the woods. Researchers audited 10 popular Android mental health apps — ranging from mood trackers and AI therapists to tools for managing depression and anxiety — and uncovered… 1575 vulnerabilities! Fifty-four of those flaws were classified as critical. Given the download stats on Google Play, as many as 15 million people could be affected. The real kicker? Six out of the ten apps tested explicitly promised users that their data was “fully encrypted and securely protected”. We’re breaking down this scandalous “brain drain”: what exactly could leak, how it’s happening, and why “anonymity” in these services is usually just a marketing myth.

What was found in the apps

Oversecured is a mobile app security firm that uses a specialized scanner to analyze APK files for known vulnerability patterns across dozens of categories. In January 2026, researchers ran ten mental health monitoring apps from Google Play through the scanner — and the results were, shall we say, “spectacular”.

App type                            Installs  High-severity  Medium-severity  Low-severity  Total
Mood & habit tracker                10M+                  1            147           189     337
AI therapy chatbot                  1M+                  23             63           169     255
AI emotional health platform        1M+                  13            124            78     215
Health & symptom tracker            500k+                 7             31           173     211
Depression management tool          100k+                 0             66            91     157
CBT-based anxiety app               500k+                 3             45            62     110
Online therapy & support community  1M+                   7             20            71      98
Anxiety & phobia self-help          50k+                  0             15            54      69
Military stress management          50k+                  0             12            50      62
AI CBT chatbot                      500k+                 0             15            46      61
Total                               14.7M+               54            538           983    1575

Vulnerabilities found in the 10 tested mental health apps. Source

The anatomy of the flaws

The discovered vulnerabilities are diverse, but they all boil down to one thing: giving attackers access to data that should be under lock and key. For starters, one of the vulnerabilities allows an attacker to access any internal activity of the app — even those never intended for external eyes. This opens the door to hijacking authentication tokens and user session data. Once an attacker has those, they could essentially gain access to a user’s therapy records.
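A common root cause of token theft in audits like this is predictable token generation; the report also flags apps that use java.util.Random for session tokens. Below is a minimal, hypothetical Java sketch (not code from any audited app) contrasting the predictable java.util.Random with SecureRandom, which draws from the OS's cryptographic generator:

```java
import java.security.SecureRandom;
import java.util.HexFormat;

public class TokenDemo {
    // Insecure: java.util.Random is a linear congruential generator.
    // Its entire state is recoverable from a handful of outputs, so an
    // attacker can predict every future "token" it will produce.
    static String weakToken(java.util.Random rng) {
        byte[] buf = new byte[16];
        rng.nextBytes(buf);
        return HexFormat.of().formatHex(buf);
    }

    // Safer: SecureRandom is seeded from the OS CSPRNG, so tokens stay
    // unpredictable even to an attacker who has observed earlier ones.
    static String strongToken() {
        byte[] buf = new byte[16];
        new SecureRandom().nextBytes(buf);
        return HexFormat.of().formatHex(buf);
    }

    public static void main(String[] args) {
        // Two Random instances with the same seed emit identical
        // "session tokens" -- the core of the reported weakness.
        String a = weakToken(new java.util.Random(42));
        String b = weakToken(new java.util.Random(42));
        System.out.println(a.equals(b)); // prints true: fully predictable

        // SecureRandom tokens do not collide in practice.
        System.out.println(strongToken().equals(strongToken()));
    }
}
```

The fix is a one-line swap, which is what makes the repeated use of the weak class in shipped apps so striking.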
Another issue is insecure local data storage with read permissions granted to any other app on the device. In other words, that random flashlight app or calculator on your smartphone could potentially read your cognitive behavioral therapy (CBT) logs, personal notes, and mood assessments.

The researchers also found unencrypted configuration data baked right into the APK installation files, including backend API endpoints and hardcoded URLs for Firebase databases. Furthermore, several apps were caught using the cryptographically weak java.util.Random class to generate session tokens and encryption keys. Finally, most of the tested apps lacked root/jailbreak detection. On a rooted device, any third-party app with root privileges could gain total access to every bit of locally stored medical data.

Shockingly, of the 10 apps analyzed, only four received updates in February 2026. The rest haven’t seen a patch since November 2025, and one hasn’t been touched since September 2024. Going 18 months without a security patch is a lifetime in this industry — especially for an app housing mood journals, therapy transcripts, and medication schedules.

Here’s a quick reminder of just how dangerous the misuse of this type of data gets. In 2024, the tech world was rocked by a sophisticated attack on XZ Utils, a critical component found in virtually every operating system based on the Linux kernel. The attacker successfully pressured the maintainer into handing over code commit permissions by exploiting the developer’s public admission of burnout and a lack of motivation to carry on with the project. Had the attack been completed, the damage would have been mind-boggling, given that roughly 80% of the world’s servers run on Linux.

What could leak?

What do these apps collect and store?
It’s the kind of stuff you’d likely only share with a trusted clinician: therapy session transcripts, mood logs, medication schedules, self-harm indicators, CBT notes, and various clinical assessment scales. As far back as 2021, complete medical records were selling on the dark web for US$1000 each. For comparison, a stolen credit card number goes for anywhere between US$5 and US$30. Medical records contain a full identity package: name, address, insurance details, and diagnostic history. Unlike a credit card, you can’t exactly “reissue” your medical history. Furthermore, medical fraud is notoriously difficult to spot. While a bank might flag a suspicious transaction in hours, a fraudulent insurance claim for a phantom treatment can go unnoticed for years.

We’ve seen this movie before

The Oversecured study isn’t just an isolated horror story. Back in 2020, Julius Kivimäki hacked the database of the Finnish psychotherapy clinic Vastaamo, making off with the records of 33 000 patients. When the clinic refused to cough up a €400 000 ransom, Kivimäki began sending direct threats to patients: “Pay €200 in Bitcoin within 24 hours, or else your records go public”. Ultimately, he leaked the entire database onto the dark web anyway. At least two people died by suicide, and the clinic was forced into bankruptcy. Kivimäki was eventually sentenced to six years and three months in prison, in a trial that set a Finnish record for the sheer number of victims involved.

In 2023, the U.S. Federal Trade Commission (FTC) slapped the online therapy giant BetterHelp with a US$7.8 million fine. Despite stating on their sign-up page that your data was strictly confidential, the company was caught funneling user info — including mental health questionnaire responses, emails, and IP addresses — to Facebook, Snapchat, Criteo, and Pinterest for targeted advertising. After the dust settled, 800 000 affected users received a grand total of… US$10 each in compensation.
By 2024, the FTC had set its sights on the telehealth firm Cerebral, tagging it with a US$7 million fine. Through tracking pixels, Cerebral leaked the data of 3.2 million users to LinkedIn, Snapchat, and TikTok. The haul included names, medical histories, prescriptions, appointment dates, and insurance info. And the cherry on top? The company sent promotional postcards (sans envelopes) to 6000 patients, effectively broadcasting that the recipients were undergoing psychiatric treatment.

In September 2024, security researcher Jeremiah Fowler stumbled upon an exposed database belonging to Confidant Health, a provider specializing in addiction recovery and mental health services. The database contained audio and video recordings of therapy sessions, transcripts, psychiatric notes, drug test results, and even copies of driver's licenses: 5.3 terabytes of data, 126 000 files, 1.7 million records, all sitting there without a password.

Why anonymity is an illusion

Developers love to drop the line: "We never share your personal data with anyone." Technically, that might be true; instead, they share "anonymized profiles". The catch? De-anonymizing that data isn't exactly rocket science anymore. Recent research highlights that using LLMs to strip away anonymity has become a routine reality. Even the "anonymization" process itself is often a mess.

A study by Duke University revealed that data brokers are openly hawking the mental health data of Americans. Of the 37 brokers surveyed, 11 agreed to sell data linked to specific diagnoses (like depression, anxiety, and bipolar disorder), demographic parameters, and in some cases even names and home addresses. Prices started as low as US$275 for 5000 aggregated records.

According to the Mozilla Foundation, by 2023, 59% of popular mental health apps failed to meet even the most basic privacy standards, and 40% had actually become less secure than the previous year.
These apps allowed account creation via third-party services (like Google, Apple, and Facebook), featured suspiciously brief privacy policies that glossed over data collection details, and employed a clever little loophole: some privacy policies applied strictly to the company's website, but not the app itself. In short, your clicks on the site were "protected", but your actions within the app were fair game.

How to protect yourself

Cutting these apps out of your life entirely is, of course, the most foolproof option, but it's not the most realistic one. Besides, there's no guarantee you can actually nuke the data already collected, even if you delete your account. We previously covered the grueling process of scrubbing your info from data broker databases; it's possible, but prepare for a headache. So, how can you stay safe?

Check permissions before you hit "Install". In Google Play, navigate to App description -> About this app -> Permissions. A mood tracker has no business asking for access to your camera, microphone, contacts, or precise GPS location. If it does, it's not looking out for your well-being; it's harvesting data.

Actually read the privacy policy. We get it: nobody reads these multi-page manifestos. But when a service is vacuuming up your most intimate thoughts, it's worth a skim. Look for the red flags: does the company share data with third parties? Can you manually delete your records? Does the policy explicitly cover the app itself, or just the website? You can always feed the policy text into an AI assistant and ask it to flag any privacy deal-breakers.

Check the last-updated date. An app that hasn't seen an update in over six months is likely a playground for unpatched vulnerabilities. Remember: six out of the 10 apps Oversecured tested hadn't been touched in months.

Disable everything non-essential in your phone's privacy settings. Whenever prompted, always select "Ask App Not to Track".
When an app pleads with you to enable a specific type of tracking, claiming it's for "internal optimization", it's almost always a marketing ploy rather than a functional necessity. After all, if the app truly won't work without a certain permission, you can always go back and toggle it on later.

Don't use "Sign in with…" services. Authenticating via Facebook, Apple, Google, or Microsoft creates additional identifiers and gives companies a golden opportunity to link your data across different platforms.

Treat everything you type like a public social media post. If you wouldn't want a random stranger on the internet reading it, you probably shouldn't be typing it into an app with over 150 vulnerabilities that hasn't seen a patch since the year before last.

What else you should know about privacy settings and controlling your personal data online:

Geolocation data brokers: what they do and what happens when they leak
Why data brokers build dossiers on you, and how to stop them doing so
How to disappear from the internet
How to shrink your digital footprint
How smartphones build a dossier on you
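A developer-side footnote to the Oversecured findings above: java.util.Random is a predictable generator (its 48-bit internal state can be recovered from just a few observed outputs), so it should never be used to mint session tokens or key material; a CSPRNG such as Java's standard SecureRandom is the right tool. A minimal sketch of the fix, with class and method names of our own invention rather than from any of the audited apps:

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenDemo {
    // java.util.Random is a linear congruential generator whose state
    // is recoverable from a handful of outputs -- unfit for secrets.
    // SecureRandom draws from a cryptographically strong source and is
    // the standard choice for session tokens and key material.
    static String newSessionToken() {
        byte[] bytes = new byte[32];          // 256 bits of entropy
        new SecureRandom().nextBytes(bytes);
        // URL-safe Base64 without padding: safe in URLs and headers
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        // 32 random bytes encode to a 43-character Base64url token
        System.out.println(TokenDemo.newSessionToken());
    }
}
```

The same principle applies to the encryption keys mentioned in the study: key material should come from a CSPRNG (or a platform keystore), never from a general-purpose PRNG seeded with the clock.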

 Government

Rudd, who was confirmed 71-29 to serve as the “dual-hat” leader of the organizations, takes the reins as the U.S. faces mounting aggression in cyberspace from foreign adversaries at the same time the Trump administration has sought to shrink the size of the federal government.

 Feed

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added three security flaws to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation. The vulnerability list is as follows - CVE-2021-22054 (CVSS score: 7.5) - A server-side request forgery (SSRF) vulnerability in Omnissa Workspace One UEM (formerly VMware Workspace One UEM) that

 Feed

Salesforce has warned of an increase in threat actor activity that's aimed at exploiting misconfigurations in publicly accessible Experience Cloud sites by making use of a customized version of an open-source tool called AuraInspector. The activity, per the company, involves the exploitation of customers' overly permissive Experience Cloud guest user configurations to obtain access to sensitive

 Feed

Cybersecurity researchers have disclosed nine cross-tenant vulnerabilities in Google Looker Studio that could have permitted attackers to run arbitrary SQL queries on victims' databases and exfiltrate sensitive data within organizations' Google Cloud environments. The shortcomings have been collectively named LeakyLooker by Tenable. There is no evidence that the vulnerabilities were exploited in

 Feed

Artificial Intelligence (AI) is no longer just a tool we talk to; it is a tool that does things for us. These are called AI Agents. They can send emails, move data, and even manage software on their own. But there is a problem. While these agents make work faster, they also open a new "back door" for hackers. The Problem: "The Invisible Employee" Think of an AI Agent like a new employee who has

 Feed

You can't control when the next critical vulnerability drops. You can control how much of your environment is exposed when it does. The problem is that most teams have more internet-facing exposure than they realise. Intruder's Head of Security digs into why this happens and how teams can manage it deliberately. Time-to-exploit is shrinking The larger and less controlled your attack surface is,

 Feed

The Russian state-sponsored hacking group tracked as APT28 has been observed using a pair of implants dubbed BEARDSHELL and COVENANT to facilitate long‑term surveillance of Ukrainian military personnel. The two malware families have been put to use since April 2024, ESET said in a new report shared with The Hacker News. APT28, also tracked as Blue Athena, BlueDelta, Fancy Bear, Fighting Ursa,

 Feed

Cybersecurity researchers are calling attention to a new campaign where threat actors are abusing FortiGate Next-Generation Firewall (NGFW) appliances as entry points to breach victim networks.  The activity involves the exploitation of recently disclosed security vulnerabilities or weak credentials to extract configuration files containing service account credentials and network topology

 Feed

Cybersecurity researchers have discovered a new malware called KadNap that's primarily targeting Asus routers to enlist them into a botnet for proxying malicious traffic. The malware, first detected in the wild in August 2025, has expanded to over 14,000 infected devices, with more than 60% of victims located in the U.S., according to the Black Lotus Labs team at Lumen. A lesser number of
