Cyber security aggregated RSS news

Cyber security aggregator - feeds history

[Image: CISA Adds Microsoft, ...]

 Cyber News

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) added five CVEs to its Known Exploited Vulnerabilities (KEV) catalog today, including Microsoft, Apple and Oracle vulnerabilities. The vulnerabilities flagged by CISA include:

- CVE-2022-48503, an 8.8-severity vulnerability in multiple Apple products that could lead to arbitrary code execution when processing web content. The issue was addressed with improved bounds checks.
- CVE-2025-33073, an 8.8-rated Microsoft Windows SMB Client improper access control vulnerability that Microsoft had labeled as less likely to be exploited in its June Patch Tuesday update.
- CVE-2025-61884, a 7.5-severity Oracle E-Business Suite Server-Side Request Forgery (SSRF) vulnerability that Oracle issued an emergency patch for on October 11.
- CVE-2025-2746 and CVE-2025-2747, both 9.8-rated password authentication bypass issues in Kentico Xperience Staging Sync Server.

Oracle Vulnerabilities Under Attack

CISA doesn’t provide details on how vulnerabilities are being exploited, but the October 11 announcement of the Oracle E-Business Suite CVE-2025-61884 vulnerability followed an ongoing campaign by the CL0P ransomware group exploiting CVE-2025-61882, a 9.8-severity remote code execution (RCE) flaw in Oracle E-Business Suite that had reportedly been exploited since at least August 9, with “suspicious activity” occurring a month before that. CISA added CVE-2025-61882 to its KEV database on October 6.

CVE-2025-61882 was reportedly weaponized by the CL0P ransomware group in a widespread extortion campaign that included a high volume of emails sent to executives at numerous organizations, claiming the theft of sensitive data from the victims’ Oracle E-Business Suite environments, according to Google Threat Intelligence. CL0P (aka CLOP) has since claimed at least four victims from the Oracle campaign on its Tor data leak site: Harvard University, American Airlines’ Envoy Air subsidiary, and two additional victims that remain unconfirmed.
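CISA publishes the KEV catalog as a machine-readable JSON feed on its website, which makes checks like "which CVEs were added recently?" easy to automate. Below is a minimal sketch in Python: the field names (cveID, vendorProject, dateAdded, knownRansomwareCampaignUse) follow the published catalog schema, but the embedded sample entries and their dates are illustrative stand-ins, not the actual catalog records; a real check would download the live feed from CISA instead.

```python
import json

# Sample mirroring the schema of CISA's KEV JSON feed.
# The entries and dates below are illustrative, not real catalog records.
kev_sample = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2025-61884", "vendorProject": "Oracle",
     "product": "E-Business Suite", "dateAdded": "2025-10-20",
     "knownRansomwareCampaignUse": "Known"},
    {"cveID": "CVE-2025-33073", "vendorProject": "Microsoft",
     "product": "Windows", "dateAdded": "2025-10-20",
     "knownRansomwareCampaignUse": "Unknown"},
    {"cveID": "CVE-2021-44228", "vendorProject": "Apache",
     "product": "Log4j", "dateAdded": "2021-12-10",
     "knownRansomwareCampaignUse": "Known"}
  ]
}
""")

def added_since(catalog: dict, date: str) -> list[str]:
    """Return CVE IDs added to the KEV catalog on or after an ISO date.

    ISO 8601 dates compare correctly as strings, so no date parsing
    is needed.
    """
    return [v["cveID"] for v in catalog["vulnerabilities"]
            if v["dateAdded"] >= date]

print(added_since(kev_sample, "2025-10-01"))
# → ['CVE-2025-61884', 'CVE-2025-33073']
```

Organizations subject to Binding Operational Directive 22-01 could extend the filter with the catalog's dueDate field to flag remediation deadlines.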
The Scattered LAPSUS$ Hunters threat group posted proof-of-concept (PoC) exploit code for CVE-2025-61882 to its Telegram channel on October 3, claiming that it, rather than CL0P, had originated the exploit, according to Cyble dark web researchers. That PoC release preceded Oracle’s patch for CVE-2025-61882 by one day.

Microsoft CVE-2025-33073 Vulnerability Discovered by 8 Researchers

At the time of the June Patch Tuesday update, Microsoft gave credit for discovering CVE-2025-33073 to eight researchers: Keisuke Hirata of CrowdStrike, Wilfried Bécard of Synacktiv, Cameron Stish of GuidePoint Security, Ahamada M'Bamba of BNP Paribas, Stefan Walter and Daniel Isern of SySS GmbH, RedTeam Pentesting GmbH, and James Forshaw of Google Project Zero. Stish’s GuidePoint blog post on CVE-2025-33073 provides some interesting background on the vulnerability.

According to Microsoft, an attacker who successfully exploited the vulnerability could gain SYSTEM privileges. When multiple attack vectors can be used, Microsoft assigns a score based on the scenario with the highest risk. In one scenario for this vulnerability, Microsoft said an attacker could convince a victim to connect to an attacker-controlled malicious application server, such as an SMB server. “Upon connecting, the malicious server could compromise the protocol,” Microsoft said. “To exploit this vulnerability, an attacker could execute a specially crafted malicious script to coerce the victim machine to connect back to the attack system using SMB and authenticate,” the company added. “This could result in elevation of privilege.”
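The severity numbers quoted throughout this article (8.8, 7.5, 9.8) are CVSS base scores, which FIRST's CVSS v3.1 specification maps to qualitative ratings. A small helper showing that mapping, assuming v3.x semantics:

```python
def cvss_rating(score: float) -> str:
    """Map a CVSS v3.x base score to its qualitative severity rating,
    per the ranges in the FIRST CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

# Scores cited in the article above:
for score in (7.5, 8.8, 9.8):
    print(score, cvss_rating(score))
# → 7.5 High / 8.8 High / 9.8 Critical
```

Note that KEV listing is independent of score: a "High" 7.5 like CVE-2025-61884 lands in the catalog on evidence of exploitation, not severity.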

[Image: China Alleges NSA Cy ...]

 Cyber News

China claims it has “irrefutable evidence” that the U.S. National Security Agency (NSA) launched a two-year cyberattack campaign on China's National Time Service Center (NTSC). In a WeChat post, China’s Ministry of State Security (MSS) said an attack on the high-precision keeper of "Beijing Time" could have led to “network communication failures, financial system disruptions, power outages, transportation disruptions, and space launch failures,” and also could have wreaked havoc with international time. The MSS post details what it claims was a more than two-year NSA cyberattack operation involving “42 specialized cyberattack weapons.”

Alleged NSA Cyberattack Exploited SMS Vulnerability

The MSS claims that the NSA campaign was “long-planned and systematic.” Beginning on March 25, 2022, China alleges, the NSA exploited a vulnerability in the SMS service of an “overseas mobile phone brand” to gain control of the mobile phones of multiple NTSC staff members. A year later, beginning on April 18, 2023, the NSA launched multiple attacks using stolen credentials to infiltrate NTSC systems and “spy on the center's network systems,” the MSS post said in translation.

From August 2023 to June 2024, the NSA “deployed a new cyber warfare platform and activated 42 specialized cyberattack weapons to launch a high-intensity cyberattack” against multiple internal NTSC network systems, the MSS post claimed. The NSA “also attempted to penetrate the high-precision ground-based timing system, potentially disabling it.” The MSS did not provide any details on the “42 specialized cyberattack weapons.”

The NSA cyberattacks were often launched late at night or in the early morning, Beijing time, and used VPNs in the U.S., Europe, and Asia to conceal the source of the attacks, the MSS said. The U.S. intelligence agency also used “forged digital certificates” to bypass antivirus software, and used “high-strength” encryption algorithms “to completely erase traces of the attacks.” China said it responded by “securing evidence” of the attacks (which it did not provide in the post), disrupting the attack chain, and improving defensive measures to stop potential threats.

MSS Takes Issue with U.S. Claims of Chinese Cyber Threats

China accused the U.S. of a multi-year campaign “continuously carrying out cyberattacks targeting China, Southeast Asia, Europe, and South America. They have infiltrated and controlled critical infrastructure, stolen vital intelligence, and monitored key personnel.” The MSS also charged that the U.S. has “exploited its technological base” in the Philippines, Japan, and Taiwan to conceal its involvement and shift the blame for cyberattacks elsewhere.

U.S. cyber officials in recent years have alleged that Chinese cyber operations pose a significant threat to U.S. critical infrastructure, a claim the MSS took issue with in the WeChat post. “[T]he US has repeatedly hyped up the ‘China cyber threat’ theory, coercing other countries to hype up so-called ‘Chinese hacker attacks,’ sanctioning Chinese companies, and prosecuting Chinese citizens in an effort to confuse the public and distort the truth,” the MSS post said. “Ironclad facts have proven that the US is the true ‘Matrix’ and the greatest source of chaos in cyberspace.”

The Cyber Express has reached out to the NSA for comment and will update this article with any response.

[Image: How to configure pri ...]

 Privacy

When we interact with artificial intelligence, we often share a significant amount of personal information without giving it much thought. This information can range from dietary preferences and marital status to our home address and even social security number. To ensure the security and privacy of this highly sensitive information, it’s essential to understand exactly what the AI does with your data: where it stores it and whether it uses it for training.

In this post, we take a closer look at the data collection policy of one of the most popular AI apps, ChatGPT, and explain how to configure it to maximize your privacy and security to the extent that OpenAI allows it. This is a long guide, but a comprehensive one.

Table of contents

- What data ChatGPT collects about users
- Why ChatGPT collects data, and whether it’s used for model training
- How you can prohibit ChatGPT from using various types of your data for training
- What ChatGPT remembers about you, and how you can manage AI memory
- How you can enable Temporary Chats in ChatGPT, and why you should use them
- How to configure ChatGPT to work with local apps on your device
- The risks of connecting third-party online services to ChatGPT, and how to disable these connections
- How to protect your ChatGPT account from being hacked

What data ChatGPT collects about you

OpenAI, the owner and developer of ChatGPT, maintains two privacy policies. The specific policy that applies to a user depends on the region where that individual registered their account:

- Privacy policy for individuals in the European Economic Area, the United Kingdom, and Switzerland
- Privacy policy for individuals in all other regions

Because these policies are similar, we’ll first cover the common elements, and then discuss the differences. By default, OpenAI collects an extensive array of personal information and technical data about devices from all ChatGPT users.
- Account information: name, login credentials, date of birth, billing information, and transaction history
- User content: prompts as well as uploaded files, images, and audio
- Communication information: contact details the user provided when reaching out to OpenAI via email or social media
- Log data: IP address, browser type and settings, request date and time, and details about how the user interacts with OpenAI services
- Usage data: information about the user’s interaction with OpenAI services, such as content viewed, features used, actions taken, and technical details like country, time zone, device, and connection type
- Device information: device name, operating system, device identifiers, and the browser used
- Location information: the region determined by the IP address, rather than the exact location
- Cookies and similar technologies: necessary for service operation, user authentication, enabling specific features, and ensuring security; the complete list of cookies and their respective retention periods is available here

What exactly OpenAI does with the data it collects from individual users will be discussed in the next part of this post. Here, we note the key difference between the privacy policies for users from the European Economic Area (EEA) and those from other regions: European users have the right to object to the use of their personal data for direct marketing. They may also challenge data processing where the company justifies this by its “legitimate interests”, such as internal administration or improvements to services.

Note that OpenAI’s handling of data for business accounts is governed by separate rules that apply to ChatGPT Business and ChatGPT Enterprise subscriptions, as well as API access.

What OpenAI does with your data, and whether ChatGPT is trained on your chats

By default, ChatGPT can train its models on user prompts and the content that users upload.
This policy applies to users of both the free version and the Plus and Pro subscriptions. For business accounts (specifically ChatGPT Enterprise, ChatGPT Business, and API access), training on user data is disabled by default. However, in the case of the API (the application programming interface that connects OpenAI models to other applications and services; the simplest use case being ChatGPT-based customer support bots), the company provides developers with the option to voluntarily enable data transmission.

OpenAI outlines a comprehensive list of primary purposes for processing users’ personal information:

- To maintain services: to respond to queries and assist users
- To improve and develop services: to add new features and conduct research
- To communicate with users: to notify users about changes and events
- To protect the security of services: to prevent fraud and ensure security
- To comply with legal obligations: to protect the rights of users, OpenAI, and third parties

The company also states that it may anonymize users’ personal data, though it does not obligate itself to do so. Furthermore, OpenAI reserves the right to transfer user data to third parties (specifically its contractors, partners, or government agencies) if such transfer is necessary for service operation, compliance with the law, or the protection of rights and security. As the company notes on its website: “In some cases, models may learn from personal information to understand how elements like names and addresses function in language, or to recognize public figures and well-known entities”.

It’s important to note that all user data is processed and stored on OpenAI servers in the United States. Although the level of personal information protection may vary from country to country, the company asserts that it applies uniform security measures to all users.
How to prevent ChatGPT from using your data for AI training

To disable the collection of your data within the app, click your account name in the lower-left corner of the screen. Select Settings, then navigate to Data controls. In this section, you can disable the use of your prompts for model training by turning off the toggles next to the following items:

- Improve the model for everyone: disabling this option prevents the use of your prompts and uploads (text, files, images) for model training. Turning this off deactivates the two items below it
- Include your audio recordings: disabling this option prevents voice messages from the dictation feature from being used for model training. It’s disabled by default
- Include your video recordings: this refers to the feature that allows you to include a video stream from your camera during a voice chat in the ChatGPT apps for iOS and Android. This video stream may also be used for model training. You can also disable this option through the web application. It’s disabled by default

By turning off these settings, you prevent the use of new data for model training. However, it’s important to realize: if your prompts or content were already used for training before you disabled the option, it’s impossible to remove them from the trained model.

In this same section, you can delete or archive all chats, and also request to Export Data from your account. This allows you to check what information OpenAI stores about you. A data archive will be sent to your email. Please note that preparing an export may take some time. The Delete account option is also available here. When your account is deleted, only your personal data is erased; information already used for model training remains.

Beyond the in-app settings, you can manage your data through the OpenAI Privacy Portal.
On the portal, you can:

- Request and download all your data stored by OpenAI
- Completely delete your custom GPTs, as well as your ChatGPT account and the personal data associated with it
- Ask OpenAI not to train the AI on your data. If OpenAI approves your request, the AI will stop training on the data you provided before you disabled the Improve the model for everyone option in the settings
- Submit a request to stop ChatGPT from training on personal data from public sources
- Request the deletion of personal data from specific conversations or prompts

Users from the European Economic Area, the UK, and Switzerland have additional rights under the GDPR, the law in effect in European countries that regulates how companies collect and use personal data. These rights are not directly displayed on the OpenAI Privacy Portal, but they can be exercised by submitting a request through the portal, or by writing to dsar@openai.com.

How to clear your data from ChatGPT’s memory

Another critical element of privacy protection is ChatGPT’s memory. Unlike chat history, memory allows the model to recall specific details about you, such as your name, interests, preferences, and communication style. This data persists across sessions and is used to personalize the AI’s responses.

To review exactly what the AI remembers within the app, click your account name in the lower-left corner of the screen. Choose Settings, then navigate to Personalization, and select Manage memories. This section displays all stored information; under Personalization, you can also temporarily disable memory or prevent the model from referring to chat history when responding. If you wish for ChatGPT to forget a specific detail, click the trash can icon next to that memory. Important: for a memory to be completely erased, you also need to delete the specific chat the information was saved from.
If you delete only the chat but not the memory, the data remains stored.

In Personalization, you can also configure what data ChatGPT will store about you in future conversations. To do this, you should familiarize yourself with the two types of memory available in the AI:

- Saved memories are fixed recollections about you, such as your name, interests, or communication style, which remain in the system until you manually delete them. These are created when you explicitly ask the chat to remember something
- Chat history is the model’s ability to consider specific details from past conversations to produce more personalized responses. In this case, ChatGPT doesn’t store every detail; instead, it selects only fragments that it deems useful. These types of memories can change and adapt over time

You can disable one or both of these memory types in the ChatGPT settings. To deactivate saved memories, turn off the toggle next to Reference saved memories. To do the same for chat history, turn off the toggle next to Reference chat history. Disabling these features doesn’t delete previously saved information. The data remains within the system, but the model ceases to reference it in new responses. To completely delete saved memories, go to the Manage memories section as described above.

The Personalization menu in the web-based version of ChatGPT is slightly different, with an additional option: Record mode. This allows the AI to reference transcripts of your past recordings when generating responses. You can disable this feature within the web interface. In addition, the web version displays a memory usage indicator, such as “87% full”, which shows how much space is occupied by memories.

For sensitive conversations, you can use special Temporary Chats, which the AI won’t remember.
How to use Temporary Chats in ChatGPT

Temporary Chats in ChatGPT are designed to resemble incognito mode in a web browser. If you want to discuss something particularly intimate or confidential with the AI, this mode helps reduce the risks. Temporary Chats are not saved in the history, they don’t become part of the memory, and they’re not used to train the models. This last point holds true for all Temporary Chats regardless of the settings selected in the Data controls section discussed above. Once a session ends, its contents disappear and cannot be recovered. This means Temporary Chats won’t appear in your history, and ChatGPT won’t remember their content.

However, OpenAI warns that for security purposes, a copy of a Temporary Chat may be stored on the company’s servers for up to 30 days. Moreover, in June 2025, a court ordered OpenAI to preserve all user chats with ChatGPT indefinitely. The decision has already taken effect, and even though the company plans to appeal it, at the time of this publication OpenAI is compelled to store Temporary Chat data permanently in a special secure repository that “can only be accessed under strict legal protocols”. This largely nullifies the entire concept of “Temporary Chats”, and confirms the old adage, “There’s nothing more permanent than the temporary”.

It’s important to note that when creating a Temporary Chat, you’re starting a conversation with the AI from a blank slate: the chatbot won’t remember any information from its previous chats with you. To initiate a Temporary Chat in the web-based version of ChatGPT, open a new chat and click the Turn on temporary chat button in the upper-right corner of the page.
To activate a Temporary Chat in the ChatGPT applications for macOS and Windows, click the AI model selection, and a Temporary Chat toggle will appear at the bottom of the window that opens. After a Temporary Chat is activated, a special screen will open, which looks slightly different in the desktop and web versions. If you see this screen, it means things are working correctly: the chat won’t be saved in history, used to update memory, or utilized for model training.

Integrating ChatGPT with your device applications

The ChatGPT application includes a feature named Work with Apps. This allows you to interact with the AI beyond the ChatGPT interface itself, extending its functionality into other apps on your device. Specifically, the model can connect to text editors and various development environments. When you use this feature, you can receive AI suggestions and make edits directly within those apps, eliminating the need to copy text to a separate chat window. The core concept is to embed the AI into your existing, familiar workflows.

However, along with the convenience, this feature introduces privacy risks. By connecting to applications, ChatGPT gains access to the content of the files you’re working on. These files may include personal documents, work projects or reports, notes containing confidential information, and other similar content. A portion of this data may be sent to OpenAI’s servers for analysis and response generation. Therefore, the more applications you grant access to, the higher the probability that sensitive information will be exposed to OpenAI.
At the time of this post, the ChatGPT application for macOS can connect to the following applications:

- Text-editing and note-taking apps: Apple Notes, Notion, TextEdit, Quip
- Development environments: Xcode, Script Editor, VS Code (Code, Code Insiders, VSCodium, Cursor, Windsurf)
- JetBrains IDEs: Android Studio, IntelliJ IDEA, PyCharm, WebStorm, PhpStorm, CLion, Rider, RubyMine, AppCode, GoLand, DataGrip
- Command-line interfaces: Terminal, iTerm, Warp, Prompt

No comparable list has been published for the Windows version of the app yet.

To check whether this feature is currently enabled on your device, click your account name in the lower-left corner of the screen. Select Settings and scroll down to Work with Apps. If the toggle switch next to Enable Work with Apps is on, the feature is turned on.

It’s important to emphasize that enabling the feature doesn’t immediately give the ChatGPT app access to the applications on your device. For ChatGPT to analyze and make changes to content in other apps, the user must explicitly grant a separate permission to each individual app. If you’re unsure whether you’ve granted ChatGPT any access permissions, you can verify this within the same section by selecting Manage Apps. The window that opens displays every app on your device that ChatGPT can potentially interact with. If each app shows Requires permissions underneath it, and Enable Permission on the right, ChatGPT currently has no access to any apps.

On macOS, should you choose to grant ChatGPT access to an application, you must also enable the AI app to control your computer via the accessibility features in the system settings.
This permission grants ChatGPT extensive extra capabilities: monitoring your activities, managing other applications, simulating keystrokes, and interacting with the user interface. For this very reason, these permissions are granted only manually and require the user’s explicit confirmation.

If you’re concerned about the uncontrolled sharing of your data with ChatGPT, we recommend you disable the Enable Work with Apps toggle switch and forgo using this feature. However, if you want ChatGPT to be able to work with applications on your device, you should pay attention to the following three settings, and configure them according to your personal balance of privacy and convenience:

- Automatically pair with apps from chat bar allows ChatGPT to automatically connect to supported applications directly from the chat UI without requiring manual selection each time. This speeds up your workflow, but increases the risk that the model will gain access to an application that you didn’t intend to connect it to
- Generate suggested edits allows ChatGPT to propose changes to text or code within the connected application, but you’ll need to apply those changes manually. This is the safer option, because you retain control over the changes being made
- Automatically apply suggested edits allows the model to immediately implement changes to files. While this maximizes process automation, it carries additional risks, as modifications could be applied without confirmation, potentially affecting important documents or work projects

How to connect ChatGPT to third-party online services

ChatGPT can also be connected to third-party online services for greater customization: this allows the AI to offer more precise answers and execute tasks better by considering, for example, your email correspondence in Gmail or your schedule in Google Calendar.
Unlike Work with Apps, which enables ChatGPT to interact with locally installed applications, this feature involves external online platforms like GitHub, Gmail, Google Calendar, Teams, and many others. The exact list of available services depends on your plan. The most extensive selection is available in the Business, Enterprise, and Edu tiers; a slightly more limited set is found in Pro; and the roster of services is significantly more modest in Plus. Free users have no access to this feature. Some regional restrictions also apply. You can view the full list for all plans by following the link.

When connecting to third-party services, it’s crucial to understand exactly what data OpenAI will process, how, and for what purposes. If you haven’t disabled training on your data, information received from connected services may also be used for model training. Furthermore, with the memory option enabled, ChatGPT is capable of remembering details obtained from third-party services and using them in future chats.

To view the list of online services available for connection, click your account name in the lower-left corner of the screen. Then, select Settings and, in the Account section, navigate to Connectors. There you’ll see services that are already connected, as well as those that are available for activation. To disconnect ChatGPT from a service, select the service and click Disconnect. The settings for each connector allow you to disable ChatGPT’s access to the service, view the date when it was connected, and allow or disallow the automatic use of its data in chats.

To mitigate privacy risks, we recommend connecting only the absolutely necessary services, and configuring the memory and data controls within ChatGPT in advance.

How to set up secure login to ChatGPT

If you are a frequent ChatGPT user, the service likely stores significantly more information about you than even social media.
Therefore, if your account is compromised, attackers could gain access to data they can use for doxing, blackmail, fraud, theft of funds, and other types of attacks. To mitigate these risks, it’s essential to set a complex password and enable two-factor authentication for logging in to ChatGPT. By “complex”, we mean a password that meets all of the following criteria:

- A minimum length of 16 characters
- A combination of uppercase and lowercase letters, numbers, and special characters
- Ideally, no dictionary words, no simple sequences like “12345” or “qwerty”, and no repeating characters
- Uniqueness: a different password for each website or online service

If your current ChatGPT password doesn’t satisfy these criteria, we strongly recommend you change it. While there’s no option to change the password as such in the ChatGPT settings, you can use the password reset procedure. To do this, log out of your account, select Forgot password? on the login screen, and follow the instructions to set a new password.

You may be tempted to use the AI model itself to generate a password. However, we don’t recommend this: our research suggests that chatbots are often not very effective at this task, and frequently generate highly insecure passwords. Furthermore, even if you explicitly ask the neural network to create a random password, it won’t be truly random, and will therefore be more vulnerable.

For additional account protection, we also recommend enabling two-factor authentication: navigate to Settings, select Security, and turn on the Multi-factor authentication toggle switch. After this, scan the QR code in an authenticator application, or manually enter the secret key that appears on the screen, and verify the action with the one-time code.

In the Security section of the web version, you can also log out of all active sessions on all devices, including your current one. We recommend using this feature if you suspect that someone may have gained unauthorized access to your account. Unfortunately, you cannot view the login history.

Final tips to secure your data

When using AI chatbots, it’s important to remember that these applications create new privacy challenges. To protect our data, we now must account for things that were not a concern when setting up accounts in traditional apps and web services, or even in social media and messaging apps. We hope that this comprehensive guide to privacy and security settings in ChatGPT will help you with this tricky task.

Also, please remember to safeguard your ChatGPT account against hijacking. The best way to do this is by using an app that generates and securely stores strong passwords, while also managing two-factor authentication codes. Kaspersky Password Manager helps you create unique, complex passwords, autofill them when logging in, and generate one-time codes for two-factor authentication. Passwords, one-time codes, and other data encrypted in Kaspersky Password Manager can be synchronized across all your devices. This helps provide robust protection for your account in ChatGPT and other online services.

If you’re looking for more information on the secure use of artificial intelligence, here are some more useful posts:

- The pros and cons of AI-powered browsers
- Three approaches to workplace “shadow AI” from the cybersecurity standpoint
- New types of attacks on AI-powered assistants and chatbots
- Should you disable Microsoft Recall in 2025?
- Trojans masquerading as DeepSeek and Grok clients
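The two mechanisms recommended above, strong random passwords and authenticator-app one-time codes, can both be sketched in a few lines of Python. This is only an illustrative sketch, not how Kaspersky Password Manager or ChatGPT actually implement them: the character-class rules follow the criteria listed in this article, and the one-time code follows the standard TOTP algorithm (RFC 6238 over RFC 4226 HOTP with HMAC-SHA1), except that real authenticator apps additionally base32-decode the secret shown in the QR code.

```python
import hashlib
import hmac
import secrets
import string
import struct

SPECIALS = "!@#$%^&*()-_=+"  # assumed special-character set for this sketch

def meets_criteria(pwd: str) -> bool:
    """Check the criteria listed above: at least 16 characters with
    uppercase and lowercase letters, digits, and special characters."""
    return (len(pwd) >= 16
            and any(c.isupper() for c in pwd)
            and any(c.islower() for c in pwd)
            and any(c.isdigit() for c in pwd)
            and any(c in SPECIALS for c in pwd))

def generate_password(length: int = 16) -> str:
    """Generate a password with the OS CSPRNG (the secrets module),
    retrying until all four character classes are present."""
    alphabet = string.ascii_letters + string.digits + SPECIALS
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        if meets_criteria(pwd):
            return pwd

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password: HMAC-SHA1 over the 30-second time
    counter, with dynamic truncation per RFC 4226."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(meets_criteria("qwerty12345"))        # → False: too short, no upper/special
print(meets_criteria(generate_password()))  # → True
# RFC 6238 reference secret and test time (T = 59 s):
print(totp(b"12345678901234567890", 59))    # → 287082
```

Note why `secrets` rather than a chatbot or `random` is used here: it draws from the operating system's cryptographic random source, which is exactly the "truly random" property the article says LLM-generated passwords lack.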

 Feed

China on Sunday accused the U.S. National Security Agency (NSA) of carrying out a "premeditated" cyber attack targeting the National Time Service Center (NTSC), as it described the U.S. as a "hacker empire" and the "greatest source of chaos in cyberspace." The Ministry of State Security (MSS), in a WeChat post, said it uncovered "irrefutable evidence" of the agency's involvement in the intrusion

 Feed

It’s easy to think your defenses are solid — until you realize attackers have been inside them the whole time. The latest incidents show that long-term, silent breaches are becoming the norm. The best defense now isn’t just patching fast, but watching smarter and staying alert for what you don’t expect. Here’s a quick look at this week’s top threats, new tactics, and security stories shaping

 Feed

ClickFix, FileFix, fake CAPTCHA — whatever you call it, attacks where users interact with malicious scripts in their web browser are a fast-growing source of security breaches.  ClickFix attacks prompt the user to solve some kind of problem or challenge in the browser — most commonly a CAPTCHA, but also things like fixing an error on a webpage.  The name is a little misleading, though

 Feed

Cybersecurity researchers have uncovered a coordinated campaign that leveraged 131 rebranded clones of a WhatsApp Web automation extension for Google Chrome to spam Brazilian users at scale. The 131 spamware extensions share the same codebase, design patterns, and infrastructure, according to supply chain security company Socket. The browser add-ons collectively have about 20,905 active users. "

 Feed

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) on Monday added five security flaws to its Known Exploited Vulnerabilities (KEV) Catalog, officially confirming a recently disclosed vulnerability impacting Oracle E-Business Suite (EBS) has been weaponized in real-world attacks. The security defect in question is CVE-2025-61884 (CVSS score: 7.5), which has been described as a
