Cyber security aggregated RSS news

Cyber security aggregator - feeds history

[Image: Ransomware Payments ...]

 Cyber News

Fewer organizations are paying the ransom when confronted with a ransomware attack – but those that do make ransomware payments are paying much more. That’s one of the takeaways from ExtraHop’s new 2025 Global Threat Landscape Report, which also looked at the riskiest attack surfaces, dwell times, initial attack vectors, and more.

The report, which the NDR vendor conducted with Censuswide, is based on a July 2025 survey of 1,800 security and IT decision-makers in midsize and large organizations in seven countries.

Average Ransom Payment Tops $3.6 Million

The survey found that while organizations are experiencing fewer ransomware incidents – and fewer are paying ransoms – those organizations that do pay are paying $1.1 million more than they did last year, up from $2.5 million to more than $3.6 million, an increase of more than 40%.

While 70% of respondents said their organization paid a ransom, there was an overall decline in the number of ransomware payments for the first time, and the number of organizations that say they didn’t pay a ransom tripled from 9% last year to 30% this year.

Also on the plus side, organizations overall reported fewer ransomware incidents, experiencing between five and six each within the previous 12 months, down roughly 25% from nearly eight incidents in 2024. However, the percentage of organizations hit with 20 or more ransomware incidents tripled year-over-year, rising to 3%. Healthcare and government organizations were among those facing a greater number of attacks.

Cyble’s ransomware data, which is based on ransomware group claims on their dark web data leak sites, shows that ransomware attacks are up 50% so far this year from the same period of 2024.

The average ransom amount varied by country. UAE organizations, for example, faced an average of seven ransomware incidents, with paid ransoms averaging $5.4 million. Australian organizations, on the other hand, experienced the fewest ransomware incidents in the report, averaging just four per year, with ransomware payments averaging $2.5 million. The healthcare sector had the highest payouts at $7.5 million, followed by the government sector (just under $7.5 million) and the finance sector ($3.8 million).

Respondents also struggled with ransomware detection: more than 30% didn’t detect that they were being targeted by ransomware until data exfiltration had begun.

Riskiest Attack Surfaces and Entry Points

The report found that the public cloud, third-party risks, and GenAI were the riskiest attack surfaces.

[Image: Riskiest attack surfaces (ExtraHop)]

“As organizations rapidly adopt emerging technologies, navigate complex device interdependencies, and manage sprawling supply chains, their IT infrastructures become inherently more complex,” the report said. “This escalating complexity inevitably leads to a larger attack surface.”

Phishing and social engineering were the most common initial point of entry for attackers at 33.7%, followed by software vulnerabilities (19.4%), third-party/supply chain compromise (13.4%), and compromised credentials (12.2%).

[Image: Initial attack vectors (ExtraHop)]

[Image: Lumma Stealer Slowed ...]

 Cyber News

The prolific threat actors behind the Lumma Stealer malware have been slowed by an underground doxxing campaign in recent months. Coordinated law enforcement action earlier this year didn’t do much to slow down the infostealer’s spread, but a recent doxxing campaign appears to have had an impact, according to researchers at Trend Micro.

“In September 2025, we noted a striking decline in new command and control infrastructure activity associated with Lummastealer ... as well as a significant reduction in the number of endpoints targeted by this notorious malware,” threat analyst Junestherry Dela Cruz wrote in a recent post.

Fueling the drop has been an underground exposure campaign targeting a key administrator, developer and other members of the group, which Trend tracks as “Water Kurita.”

Lumma Stealer Doxxing Campaign Began in August

The Lumma Stealer doxxing campaign began in late August and continued into October, and on September 17, Lumma Stealer’s Telegram accounts were also compromised.

“Allegedly driven by competitors, this campaign has unveiled personal and operational details of several supposed core members, leading to significant changes in Lummastealer’s infrastructure and communications,” Dela Cruz wrote. “This development is pivotal, marking a substantial shake-up in one of the most prominent information stealer malware operations of the year. ... The exposure of operator identities and infrastructure details, regardless of their accuracy, could have lasting repercussions on Lummastealer’s viability, customer trust, and the broader underground ecosystem.”

The disclosures included highly sensitive details of five alleged Lumma Stealer operators, such as passport numbers, bank account information, email addresses, and links to online and social media profiles, and were leaked on a website called "Lumma Rats."

While the campaign may have come from a rival, Dela Cruz said “the campaign’s consistency and depth suggest insider knowledge or access to compromised accounts and databases.”

“The exposure campaign was accompanied by threats, accusations of betrayal within the cybercriminal community, and claims that the Lumma Stealer team had prioritized profit over the operational security of their clients,” Dela Cruz wrote.

While the researcher noted that the accuracy of the doxed information hasn’t been verified, the accompanying decline in Lumma Stealer activity suggests that the group “has been severely affected—whether through loss of key personnel, erosion of trust, or fear of further exposure.”

Vidar, StealC Gain from Lumma Stealer’s Decline

Lumma Stealer’s decline has been a boon for rival infostealers like Vidar and StealC, Dela Cruz noted, “with many users reporting migrations to these platforms due to Lumma Stealer’s instability and loss of support.”

Lumma’s decline has also hit pay-per-install (PPI) services like Amadey that are widely used to deliver infostealer payloads, and rival malware developers have stepped up their marketing efforts, “fueling rapid innovation and intensifying competition among MaaS [Malware as a Service] providers, raising the likelihood of new, stealthier infostealer variants entering the market,” Dela Cruz said.

According to Cyble dark web data, Vidar and RedLine are the infostealers most rivaling Lumma in volume on dark web marketplaces selling stolen credentials, with StealC, Acreed, RisePro, Rhadamanthys and MetaStealer among other stealer logs commonly seen on the dark web.

As for Lumma Stealer, Dela Cruz noted that being a top cybercrime group isn’t exactly a secure - pardon the pun - position to be in, as RansomHub found out earlier this year. “[B]eing number one means facing scrutiny and attacks from both defenders and competitors alike,” the researcher noted.

[Image: Over 120,000 Bitcoin ...]

 Firewall Daily

A severe vulnerability in the random number generation method of the widely used open-source Bitcoin library Libbitcoin Explorer has led to the exposure of more than 120,000 Bitcoin private keys, putting many digital assets at risk. The flaw, rooted in a predictable pseudo-random number generator, impacted multiple wallet platforms and may explain several historical, unexplained fund losses.

The issue was publicly analyzed by crypto wallet provider OneKey, which confirmed that the vulnerability did not affect its own systems. The company also conducted a detailed assessment of how widespread the problem may be across the ecosystem.

A Flawed Random Number Generator

At the heart of the breach was the Libbitcoin Explorer (bx) 3.x series. This tool, popular among developers for generating wallet seeds and keys, relies on the Mersenne Twister-32 algorithm for random number generation, a method that is not cryptographically secure.

Crucially, the Mersenne Twister-32 implementation was seeded only with system time. As a result, the seed space was limited to just 2³² possible values. This made it feasible for attackers to brute-force potential seeds by estimating when a wallet was created. Once the seed was reconstructed, it became possible to reproduce the same pseudo-random number sequence and derive the corresponding private keys.

According to OneKey’s published report on the incident, a high-performance personal computer could enumerate all possible seeds in a matter of days, making large-scale theft not only plausible but likely already in progress by the time the vulnerability came to light. (A short illustrative sketch of this brute-force model appears at the end of this article.)

Affected Wallets and Software Versions

The security risk is not confined to a single platform. Several software implementations that used Libbitcoin Explorer 3.x or components built on it were vulnerable. These include:

- Trust Wallet Extension versions 0.0.172 to 0.0.183
- Trust Wallet Core versions up to (but not including) 3.1.1

Any wallet, hardware or software, that integrated Libbitcoin Explorer or older versions of Trust Wallet Core could be affected. OneKey’s investigation also links this vulnerability to previous incidents such as the “Milk Sad” case, where users saw their wallets emptied despite relying on seemingly secure, air-gapped setups.

OneKey Confirms Its Wallets Are Secure

OneKey confirmed that its wallet products, both hardware and software, are not impacted by the flaw. The company uses certified True Random Number Generators (TRNGs), ensuring entropy sources are both unpredictable and secure.

All current OneKey hardware wallets are equipped with a Secure Element (SE) chip that includes a built-in TRNG. This system is entirely hardware-based and does not rely on system time or software-based entropy. According to OneKey, its SE chip has received EAL6+ certification, aligning with international cryptographic standards.

Even legacy OneKey hardware wallets meet stringent security benchmarks. They use internal TRNGs that comply with NIST SP800-22 and FIPS 140-2 guidelines, two well-established standards for randomness quality and cryptographic strength.

Software Wallets

OneKey’s desktop and browser extension wallets use a Chromium-based WebAssembly PRNG interface, which taps into the host operating system’s Cryptographically Secure Pseudo-Random Number Generator (CSPRNG). These CSPRNGs meet current cryptographic standards and are considered secure.

On mobile platforms, the OneKey wallet directly uses the system-level CSPRNG APIs provided by Android and iOS, ensuring the entropy is derived from secure, certified sources.

However, the company notes that the overall randomness quality in software wallets is still dependent on the security of the user’s device and operating system. “If the operating system, browser kernel, or device hardware is compromised, the entropy source could be weakened,” the team stated.

As a precaution, OneKey advises users to favor hardware wallets for long-term storage of digital assets. It strongly discourages importing mnemonic phrases generated in software environments into hardware wallets, as this practice could carry over compromised entropy.

The OneKey security team has performed rigorous evaluations of entropy across its products using NIST and FIPS methodologies, with all results meeting cryptographic randomness standards. The company has made detailed test reports and certifications available via its Help Center.
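The following minimal Python sketch models the weakness described above: a 32-bit, time-seeded Mersenne Twister yields key material that an attacker can reproduce by enumerating candidate creation timestamps. It is purely illustrative and assumes nothing about the actual Libbitcoin Explorer code path; the derive_key_material helper and the 24-hour search window are hypothetical.

    import random
    import time

    def derive_key_material(seed: int) -> bytes:
        # Hypothetical stand-in for a wallet tool that feeds a 32-bit,
        # time-based seed into a Mersenne Twister and derives a private key.
        rng = random.Random(seed & 0xFFFFFFFF)  # seed space capped at 2**32 values
        return bytes(rng.getrandbits(8) for _ in range(32))

    # A "wallet" created at some unknown moment in the recent past.
    creation_time = int(time.time()) - 12_345
    victim_key = derive_key_material(creation_time)

    # The attacker only needs a rough guess of the creation window; here we
    # brute-force the last 24 hours of timestamps until the derived key matches.
    now = int(time.time())
    for candidate in range(now - 86_400, now + 1):
        if derive_key_material(candidate) == victim_key:
            print(f"Seed recovered: {candidate}; private key reproduced")
            break

Scaling this loop to the full 2³² seed space is what OneKey estimates a high-end PC can do in days, which is why time-seeded generators are unsuitable for key generation.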

[Image: CISA Warns of Active ...]

 Firewall Daily

The Cybersecurity and Infrastructure Security Agency (CISA) has issued an urgent alert regarding the active exploitation of a high-severity Windows vulnerability, tracked as CVE-2025-33073. This flaw, rooted in the Server Message Block (SMB) protocol, enables attackers to escalate privileges to SYSTEM level on vulnerable Windows devices, potentially granting full control over affected systems.

The Technicalities of the CVE-2025-33073 SMB Flaw

CVE-2025-33073 is a privilege escalation vulnerability found in the Windows SMB client, affecting a wide range of Microsoft operating systems, including all Windows Server versions, Windows 10, and Windows 11 up to the 24H2 update. Microsoft disclosed the flaw on June 10, 2025, as part of its Patch Tuesday updates, alongside a security bulletin describing the issue as an improper access control weakness (classified under CWE-284).

The vulnerability allows an authorized attacker to elevate privileges remotely without requiring user interaction, making it especially dangerous. Once exploited, attackers can gain SYSTEM-level privileges, effectively allowing them to take over the targeted device.

How the Exploit Works

The exploitation method involves tricking a victim’s Windows machine into connecting to a malicious SMB server controlled by the attacker. According to Microsoft, "an attacker could execute a specially crafted malicious script to coerce the victim's machine to connect back to the attack system using SMB and authenticate." This connection enables the attacker to exploit improper access controls within the SMB protocol, leading to elevated privileges.

In practice, this means that an attacker does not necessarily need direct access to the system but can trigger the vulnerability over the network by luring users to connect to malicious SMB servers. This amplifies the risk of remote attacks, especially within corporate networks where SMB is widely used for file sharing and communications.

Severity and Impact

The Common Vulnerability Scoring System (CVSS) rates CVE-2025-33073 at 8.8 (base) with a 7.9 environmental score, indicating a high level of severity. The flaw has the following characteristics:

- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Confidentiality, Integrity, Availability Impact: High

Given these factors, the vulnerability poses a direct risk to affected Windows systems.

CISA’s Response and Federal Directive

In response to reports of active exploitation, CISA has added CVE-2025-33073 to its Known Exploited Vulnerabilities Catalog. This inclusion triggers a compliance requirement for Federal Civilian Executive Branch (FCEB) agencies, mandating them to patch affected systems by November 10, 2025, as per Binding Operational Directive (BOD) 22-01. The directive aims to reduce the attack surface and protect government infrastructure from escalating cyber threats.

While Microsoft’s original advisory did not confirm active exploitation at the time of patch release, CISA’s statement indicates that threat actors have since begun leveraging this SMB flaw in real-world attacks, highlighting the urgency for organizations to apply security updates promptly.

Researchers Behind the Discovery

Microsoft credited multiple security researchers and firms with uncovering the CVE-2025-33073 vulnerability, underscoring the collaborative nature of cybersecurity discovery. Notable contributors include Keisuke Hirata, Wilfried Bécard, Stefan Walter, Daniel Isern, James Forshaw, RedTeam Pentesting GmbH, Cameron Stish, and Ahamada M'Bamb. Their combined efforts led to the timely identification and remediation of this critical Windows SMB flaw.

CVE-2025-33073 represents a serious risk for Windows users, given its ability to elevate privileges remotely via the SMB protocol. With confirmed active exploitation by threat actors, organizations, especially those running Windows Server, Windows 10, and Windows 11 systems, are strongly urged to apply Microsoft’s June 2025 patches immediately. Failure to do so could lead to unauthorized SYSTEM-level access and potentially devastating network breaches.
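As a practical follow-up, here is a minimal, hedged Python sketch (Windows-only) that an administrator might use to list recently installed hotfixes and confirm that the SMB client requires signing. The exact KB numbers for the June 2025 fix vary by OS build, and requiring SMB signing is a general hardening measure rather than a substitute for the patch, so treat this as a sanity check only.

    import subprocess
    import winreg

    def recent_hotfixes(count: int = 5) -> str:
        # List the most recently installed hotfixes via PowerShell's Get-HotFix.
        command = (
            f"Get-HotFix | Sort-Object InstalledOn -Descending | "
            f"Select-Object -First {count} HotFixID, InstalledOn | Format-Table -AutoSize"
        )
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True,
        )
        return result.stdout

    def smb_client_signing_required() -> bool:
        # RequireSecuritySignature = 1 means the SMB client insists on signed sessions.
        key_path = r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            value, _ = winreg.QueryValueEx(key, "RequireSecuritySignature")
            return value == 1

    if __name__ == "__main__":
        print(recent_hotfixes())
        print("SMB client signing required:", smb_client_signing_required())

The output is informational: confirming the June 2025 cumulative update for your specific build remains the only reliable fix for CVE-2025-33073.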

[Image: Russian State-Sponso ...]

 Firewall Daily

Following the public disclosure of its LOSTKEYS malware in May 2025, the Russian state-sponsored threat group known as COLDRIVER, also tracked under aliases such as UNC4057, Star Blizzard, and Callisto, has rapidly evolved its cyber operations. According to research from the Google Threat Intelligence Group (GTIG), the group abandoned LOSTKEYS just five days after its exposure and began deploying new malware strains that demonstrate a significant escalation in development speed and operational aggression.

COLDRIVER, a persistent threat group targeting high-profile individuals associated with NGOs, policy think tanks, and political dissidents, has shown adaptability and persistence in the face of increased scrutiny. GTIG reports that the group's latest efforts involve a chain of related malware families, delivered via a mechanism mimicking a CAPTCHA prompt, an evolution of its earlier COLDCOPY lures.

NOROBOT and the Infection Chain

At the center of the campaign is NOROBOT, a malicious DLL first distributed using a lure called “ClickFix.” This technique impersonates a CAPTCHA challenge, prompting users to verify that they are "not a robot", hence the malware name. Once the user runs the file via rundll32, NOROBOT initiates a sequence that connects to a hardcoded command-and-control (C2) server to retrieve the next stage of the malware.

GTIG notes that NOROBOT underwent continuous updates between May and September 2025. Initial versions fetched and installed a full Python 3.8 environment, which was then used to run a backdoor dubbed YESROBOT. This method left obvious traces, such as the Python installation, that could trigger alerts. As a result, COLDRIVER later replaced YESROBOT with a more streamlined and stealthier PowerShell-based backdoor: MAYBEROBOT.

NOROBOT’s earlier iterations relied on cryptographic obfuscation, splitting AES keys across various components. For instance, part of the key was stored in the Windows Registry, while the rest was embedded in downloaded Python scripts like libsystemhealthcheck.py. These files, hosted on domains such as inspectguarantee[.]org, were essential to decrypt and activate the final backdoor.

YESROBOT: A Short-Lived Backdoor

YESROBOT, a minimal Python backdoor, was observed only twice over a two-week window in late May 2025. Commands were AES-encrypted and issued over HTTPS, with system identifiers included in the User-Agent string. However, its limitations, such as the need for a full Python interpreter and a lack of extensibility, led COLDRIVER to abandon it quickly.

GTIG believes YESROBOT served as a stopgap solution, hastily deployed after LOSTKEYS was exposed. The effort to maintain operational continuity suggests that COLDRIVER was under pressure to re-establish footholds on previously compromised systems.

MAYBEROBOT: COLDRIVER's New Standard

In early June 2025, GTIG identified a simplified version of NOROBOT that bypassed the need for Python altogether. This new variant fetched a single PowerShell command, which established persistence via a logon script and delivered a heavily obfuscated script known as MAYBEROBOT (also referred to as SIMPLEFIX by Zscaler). A simple hunt for this kind of persistence is sketched at the end of this section.

MAYBEROBOT supports three functions:

- Download and execute code from a specified URL.
- Run commands using cmd.exe.
- Execute PowerShell blocks.

It communicates with the C2 server using a custom protocol, sending acknowledgments and command outputs to predefined paths. Although minimal in built-in functionality, MAYBEROBOT's architecture is more adaptable and stealthy than YESROBOT's.

GTIG assesses that this evolution marks a deliberate shift by COLDRIVER toward a more flexible toolset that avoids detection by skipping the Python installation and minimizing suspicious behavior.
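Below is a minimal Python sketch of one way a defender might hunt for logon-script persistence on a Windows endpoint. It checks the well-known UserInitMprLogonScript registry value; whether MAYBEROBOT uses this exact location is an assumption, since GTIG's write-up only notes that persistence was established via a logon script.

    # Minimal hunt sketch (Windows-only). Logon-script persistence is commonly
    # configured via the UserInitMprLogonScript value under HKCU\Environment;
    # whether COLDRIVER's MAYBEROBOT uses this exact value is an assumption.
    import winreg

    def logon_script_value() -> str | None:
        # Return the current user's UserInitMprLogonScript value, if any.
        try:
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, "Environment") as key:
                value, _ = winreg.QueryValueEx(key, "UserInitMprLogonScript")
                return value
        except FileNotFoundError:
            return None  # value not set: no logon-script persistence via this path

    if __name__ == "__main__":
        script = logon_script_value()
        if script:
            print(f"Logon script configured: {script} (review this path for unexpected content)")
        else:
            print("No UserInitMprLogonScript value set for the current user.")

A populated value is not proof of compromise on its own, but any logon script an administrator did not deliberately configure deserves review.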
COLDRIVER’s Continuous Malware Evolution

From June through September 2025, GTIG observed COLDRIVER continuously refining NOROBOT and its associated delivery chains. These changes include:

- Rotating file names and infrastructure.
- Modifying DLL export names and paths.
- Adjusting complexity to balance between stealth and operational control.

Interestingly, while NOROBOT has seen multiple iterations, MAYBEROBOT has remained largely unchanged, suggesting the group is confident in its current capabilities.

[Image: How to use DeepSeek ...]

 Privacy

We’ve previously written about why neural networks are not the best choice for private conversations. Popular chatbots like ChatGPT, DeepSeek, and Gemini collect user data for training by default, so developers can see all our secrets: every chat you have with the chatbot is stored on company servers. This is precisely why it’s essential to understand what data each neural network collects, and how to set them up for maximum privacy. In our previous post, we covered configuring ChatGPT’s privacy and security in abundant detail. Today, we examine the privacy settings in China’s answer to ChatGPT — DeepSeek. Curiously, unlike in ChatGPT, there aren’t that many at all.

What data DeepSeek collects

- All data from your interactions with the chatbot, images and videos included
- Details you provide in your account
- IP address and approximate location
- Information about your device: type, model, and operating system
- The browser you’re using
- Information about errors

What’s troubling is that the company doesn’t specify how long it keeps personal data, operating instead on the principle of “retain it as long as needed”. The privacy policy states that the data retention period varies depending on why the data is collected, yet no time limit is mentioned. Is this not another reason to avoid sharing sensitive information with these neural networks? After all, dataset leaks containing users’ personal data have become an everyday occurrence in the world of AI.

If you want to keep your IP address private while you work with DeepSeek, use Kaspersky Security Cloud. Be wary of free VPN apps: threat actors frequently use them to create botnets (networks of compromised devices). Your smartphone or computer, and by extension you yourself, could thus become unwitting accomplices in actual crimes.

Who gets your data

DeepSeek is a company under Chinese jurisdiction, so not only the developers but also Chinese law enforcement — as required by local laws — may have access to your chats. Researchers have also discovered that some of the data ends up on the servers of China Mobile — the country’s largest mobile carrier. However, DeepSeek is hardly an outlier here: ChatGPT, Gemini, and other popular chatbots just as easily and casually share user data upon a request from law enforcement.

Disabling DeepSeek’s training on your data

The first thing to do — a now-standard step when setting up any chatbot — is to disable training on your data. Why could this pose a threat to your privacy? Sometimes, large language models (LLMs) can accidentally disclose real data from the training set to other users. This happens because neural networks don’t distinguish between confidential and non-confidential information. Whether it’s a name, an address, a password, a piece of code, or a photo of kittens — it makes little difference to the AI. Although DeepSeek’s developers claim to have taught the chatbot not to disclose personal data to other users, there’s no guarantee this will never happen. Furthermore, the risk of dataset leaks is always there.

The web-based version and the mobile app for DeepSeek have different settings, and the available options vary slightly. First of all, note that the web version only offers three interface languages: English, Chinese, and System. The System option is supposed to use the language set as the default in your browser or operating system. Unfortunately, this doesn’t always work reliably with all languages. Therefore, if you need the ability to switch DeepSeek’s interface to a different language, we recommend using the mobile app, which has no issues displaying the selected user interface language. It’s important to note that your choice of UI language doesn’t affect the language you use to communicate with DeepSeek. You can chat with the bot in any language it supports.
The chatbot itself proudly claims to support more than 100 languages — from common to rare.

DeepSeek web version settings

To access the data management settings, open the left sidebar, click the three dots next to your name at the bottom, select Settings, and then navigate to the Data tab in the window that appears. We suggest you disable the option labeled Improve the model for everyone to reduce the likelihood that your chats with DeepSeek will end up in its training datasets. If you want the model to stop learning from the data you shared with it before turning off this option, you’ll need to email privacy@deepseek.com and specify the exact data or chats.

[Image: Disabling DeepSeek training on your data in the web-based version]

DeepSeek mobile app settings

In the DeepSeek mobile app, you likewise open the left sidebar, tap the three dots next to your name at the bottom, and open the Settings menu. In the menu, open the Data controls section and turn off Improve the model for everyone.

[Image: Disabling DeepSeek training on your data in the app]

Managing DeepSeek chats

All your chats with DeepSeek — both in the web version and in the mobile app — are collected in the left sidebar. You can rename any chat by giving it a descriptive title, share it with anyone by creating a public link, or delete a specific chat entirely.

Sharing DeepSeek chats

The ability to share a chat might seem extremely convenient, but remember that it poses risks to your privacy. Let’s say you used DeepSeek to plan a perfect vacation, and now you want to share the itinerary with your travel companions. You could certainly create a public link in DeepSeek and send it to your friends. However, anyone who gets hold of that link can read your plan and learn, among other things, that you’ll be away from home on specific dates. Are you sure this is what you want?

If you’re using the chatbot for confidential projects (which is not advisable in the first place, as it’s better to use a locally running version of DeepSeek for this kind of data, but more on this later), sharing the chat, even with a colleague, is definitely not a good idea. In the case of ChatGPT, similar shared chats were at one point indexed by search engines — allowing anyone to find and read them.

If you absolutely must send the content of a chat to someone else, it’s easier to copy it by clicking the designated button below the message in the chat window, and then use a conventional method like email or a messaging app to send it, rather than share it with the entire world.

If, despite our warnings, you still wish to share your conversation via a public link, this is currently only possible in the web version of DeepSeek. To create a link to a chat, click the three dots next to the chat name in the left sidebar, select Share, and then, on the main chat board, check the boxes next to the messages you want to share, or check the Select all box at the bottom. After this, click Create public link.

[Image: Sharing DeepSeek chats in the web version]

You can view all the chats you have shared and, if necessary, delete their public links in the web version by going to Settings -> Data -> Shared links -> Manage.

[Image: Managing shared DeepSeek chats in the web version]

Deleting old DeepSeek chats

Why should you delete old DeepSeek chats? The fewer chats you have saved, the lower the risk that your confidential data could become accessible to unauthorized parties if your account is compromised, or if there’s a bug in the LLM itself.
Unlike ChatGPT, DeepSeek doesn’t remember or use data from your past chats in new ones, so deleting them won’t impact your future use of the neural network. However, you can resume a specific chat with DeepSeek at any time by selecting it in the sidebar. Therefore, before deleting a chat, consider whether you might need it again later.

To delete a specific chat: in the web version, click the three dots next to the chat in the left sidebar; in the mobile app, press and hold the chat name. In the window that appears, select Delete. To delete your entire conversation history: in the web version, go to Settings -> Data -> Delete all chats -> Delete all; in the app, go to Settings -> Data controls -> Delete all chats. Bear in mind that this only removes the chats from your account without deleting your data from DeepSeek’s servers.

If you want to save the results of your chats with DeepSeek, in the web version first go to Settings -> Data -> Export data -> Export. Wait for the archive to be prepared, and then download it. All data is exported in JSON format. This feature is not available in the mobile app.

Managing your DeepSeek account

When you first access DeepSeek, you have two options: either sign up with your email and create a password, or log in with a Google account. From a security and privacy standpoint, the first option is better — especially if you create a strong, unique password for your account: you can use a tool like Kaspersky Password Manager to generate and safely store one.

You can subsequently log in with the same account in other browsers and on different devices. Your chat history will be accessible from any device linked to your account. So, if someone learns or steals your DeepSeek credentials, they’ll be able to review all your chats. Sadly, DeepSeek doesn’t yet support two-factor authentication or passkeys.

If you have even the slightest suspicion that your DeepSeek account credentials have been compromised, we recommend taking the following steps. Start by logging out of your account on all devices. In the web version, navigate to Settings -> Profile -> Log out of all devices -> Log out. In the app, the path is Settings -> Data controls -> Log out of all devices. Next, you need to change your password, but DeepSeek doesn’t offer a direct way to do so once you’re logged in. To reset your password, go to the DeepSeek web version or mobile app, select the password login option, and click Forgot password?. DeepSeek will request your email address, send a verification code to that email, and allow you to reset the old password and create a new one.

Deploying DeepSeek locally

Privacy settings for the DeepSeek web version and mobile app are extremely limited and leave much to be desired. Fortunately, DeepSeek is an open-source language model. This means anyone can deploy the neural network locally on their own computer. In this scenario, the AI won’t train on your data, and your information won’t end up on the company’s servers or with third parties. However, there’s a significant downside: when running the AI locally, you’ll be limited to the pre-trained model, and won’t be able to ask the chatbot to find up-to-date information online.

The simplest way to deploy DeepSeek locally is by using the LM Studio application. It allows you to work with models offline, and doesn’t collect any information from your chats with the AI. Download the application, click the search icon, and look for the model you need.
The application will likely offer many different versions of the same model.

[Image: Searching LM Studio for DeepSeek models]

These versions differ in the number of parameters, denoted by the letter B. The more parameters a model has, the more mathematical computations it can perform and the better it performs; consequently, the more resources it requires to run smoothly. For comparison, a modern laptop with 16–32GB of RAM is sufficient for lighter models (7B–13B), but for the largest version, with 70 billion parameters, you’d need to own an entire data center. LM Studio will alert you if the model is too heavy for your device.

[Image: LM Studio warning that the model may be too large for your device]

It’s important to understand that local AI use is not a panacea in terms of privacy and security. It doesn’t hurt to periodically check that LM Studio (or another similar application) is not connecting to external servers. For example, you can use the netstat command for that; a small scripted check is sketched at the end of this article. If you’re not familiar with netstat, simply ask the chatbot to tell you about today’s news. If the chatbot is running locally as designed, the response definitely won’t include any current events.

Furthermore, you mustn’t forget about protecting the devices themselves: malware on your computer can intercept your data. Use Kaspersky Premium: it allows you to examine and block hidden connections, and will alert you to the presence of malicious software.

More on secure AI use:

- Privacy settings in ChatGPT
- How phishers and scammers use AI
- The pros and cons of AI-powered browsers
- Should you disable Microsoft Recall in 2025?
- Trojans masquerading as DeepSeek and Grok clients
- How AI can leak your private data
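For readers who prefer a scripted version of the netstat check mentioned above, here is a minimal sketch using the third-party psutil package. The process names "lm studio"/"lmstudio" are assumptions and may differ by platform, and listing other processes' connections can require elevated privileges on some systems.

    import psutil

    SUSPECT_NAMES = ("lm studio", "lmstudio")  # assumed process names; adjust as needed

    # Walk every inet connection on the machine and flag any owned by the
    # local LLM front end that has an established remote endpoint.
    for conn in psutil.net_connections(kind="inet"):
        if conn.pid is None or not conn.raddr:
            continue  # skip listening sockets and entries without a remote address
        try:
            proc_name = psutil.Process(conn.pid).name().lower()
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        if any(name in proc_name for name in SUSPECT_NAMES):
            print(f"{proc_name} (pid {conn.pid}) -> "
                  f"{conn.raddr.ip}:{conn.raddr.port} [{conn.status}]")

An empty output is the expected result for a purely local setup; any established connection to an external address deserves a closer look.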

 Feed

A new malware attributed to the Russia-linked hacking group known as COLDRIVER has undergone numerous developmental iterations since May 2025, suggesting an increased "operations tempo" from the threat actor. The findings come from Google Threat Intelligence Group (GTIG), which said the state-sponsored hacking crew has rapidly refined and retooled its malware arsenal merely five days following

 Feed

A European telecommunications organization is said to have been targeted by a threat actor that aligns with a China-nexus cyber espionage group known as Salt Typhoon. The organization, per Darktrace, was targeted in the first week of July 2025, with the attackers exploiting a Citrix NetScaler Gateway appliance to obtain initial access. Salt Typhoon, also known as Earth Estries, FamousSparrow,

 Feed

Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can’t match. But realizing that potential depends on securing the systems that make it possible. Every organization experimenting with AI in

 Feed

Meta on Tuesday said it's launching new tools to protect Messenger and WhatsApp users from potential scams. To that end, the company said it's introducing new warnings on WhatsApp when users attempt to share their screen with an unknown contact during a video call so as to prevent them from giving away sensitive information like bank details or verification codes. On Messenger, users can opt to

 Feed

Cybersecurity researchers have shed light on the inner workings of a botnet malware called PolarEdge. PolarEdge was first documented by Sekoia in February 2025, attributing it to a campaign targeting routers from Cisco, ASUS, QNAP, and Synology with the goal of corralling them into a network for an as-yet-undetermined purpose. The TLS-based ELF implant, at its core, is designed to monitor

 AI

In episode 73 of The AI Fix, AI now writes more web content than humans and more books by ex-British prime ministers than ex-British prime ministers. Mark eats a dodgy prawn, Google discovers a new pathway to treating cancer, a lawyer gets skewered for using AI over and over again, and a US general declares that he's outsourced his brain to ChatGPT.

Also in this episode, Graham discovers that LLMs show all the characteristics of pathological gambling, and Mark explains why AI training is like eating a prawn buffet. All this and much more is discussed in the latest edition of "The AI Fix" podcast by Graham Cluley and Mark Stockley.

 Data loss

Former US national security adviser John Bolton is the latest in a line of Donald Trump's critics to find themselves on the sharp end of charges from the US Department of Justice. Bolton, who left the White House in 2019 and wrote a tell-all memoir describing Trump as unfit for office and "stunningly uninformed," has been charged with mishandling classified information.

Specifically, prosecutors allege that Bolton improperly retained and transmitted classified information to members of his family, via an AOL account. Read more in my article on the Hot for Security blog.

Aggregator history: Tuesday, October 21, 2025