Cyber security aggregated RSS news

Cyber security aggregator - feeds history


 Technology

I'm a bit tired by now of all the AI news, but I guess I'll have to put up with it a bit longer, for it's sure to be talked about non-stop for at least another year or two. Not that AI will then stop developing, of course; it's just that journalists, bloggers, TikTokers, tweeters and other talking heads out there will eventually tire of the topic. But for now their zeal is fueled not only by the tech giants but by governments as well: the UK is planning to introduce three-way AI regulation; China has put draft AI legislation up for public debate; the U.S. is calling for algorithmic accountability; the EU is discussing, but has not yet passed, draft laws on AI; and so on and so forth. Lots of plans for the future, but to date the creation and use of AI systems haven't been limited in any way whatsoever; it looks like that's going to change soon. Which raises a plainly debatable question: do we need government regulation of AI at all? If we do, why, and what should it look like?

What to regulate

What is artificial intelligence? (No) thanks to marketing departments, the term's been used for lots of things: from cutting-edge generative models like GPT-4 to the simplest machine-learning systems, including some that have been around for decades. Remember T9 on push-button cellphones? Heard about automatic spam and malicious-file classification? Do you check out film recommendations on Netflix? All of those familiar technologies are based on machine-learning (ML) algorithms, aka AI. Here at Kaspersky we've been using such technologies in our products for close on 20 years, always preferring to modestly refer to them as machine learning, if only because "artificial intelligence" calls to most everyone's mind talking supercomputers on spaceships and other stuff straight out of science fiction. Such talking, thinking computers and droids would need fully human-like thinking: artificial general intelligence (AGI) or artificial superintelligence (ASI). Neither AGI nor ASI has been invented yet, and will hardly be so in the foreseeable future.

Anyway, if all types of AI are measured with the same yardstick and fully regulated, the whole IT industry and many related ones aren't going to fare well at all. For example, if we (Kaspersky) are ever required to get consent from all our training-set authors, we, as an information security company, will find ourselves up against the wall. We learn from malware and spam and feed the knowledge gained into our machine learning, and their authors tend to prefer to withhold their contact data (who knew?!). Moreover, considering that data has been collected and our algorithms trained for nearly 20 years now, quite how far into the past would we be expected to go? Therefore it's essential for lawmakers to listen not to marketing folks but to machine-learning/AI industry experts, and to discuss potential regulation in a specific and focused manner: for example, focusing on multi-function systems trained on large volumes of open data, or on decision-making systems that carry high levels of responsibility and risk. And new AI applications will necessitate frequent revisions of the regulations as they arise.

Why regulate?

To be honest, I don't believe in a superintelligence-assisted Judgement Day within the next hundred years. But I do believe in a whole bunch of headaches from thoughtless use of the computer black box. As a reminder to those who haven't read our articles on both the splendor and misery of machine learning, there are three main issues with any AI:

- It's not clear just how good the training data used for it were/are.
- It's not clear what the AI has managed to comprehend from that stock of data, or how it makes its decisions.
- And most importantly: the algorithm can be misused by its developers and its users alike.

Thus anything at all could happen, from malicious misuse of AI to unthinking compliance with AI decisions. Graphic real-life examples: fatal autopilot errors; deepfakes (1, 2, 3), by now habitual in memes and even the news; a silly error in school-teacher contracting; the police apprehending a shoplifter, but the wrong one; and a misogynous AI recruiting tool. Besides, any AI can be attacked with custom-made hostile data samples: vehicles can be tricked using stickers, personal information can be extracted from GPT-3, and anti-virus or EDR can be deceived too (a minimal code sketch of this idea follows the article). And, by the way, attacks on combat-drone AI as described in science fiction don't appear all that far-fetched any more.

In a nutshell, the use of AI hasn't given rise to any truly massive problems yet, but there is clearly a lot of potential for them. Therefore the priorities of regulation should be clear:

- Preventing critical infrastructure incidents (factories, ships, power transmission lines, nuclear power plants).
- Minimizing physical threats (driverless vehicles, misdiagnosed illnesses).
- Minimizing personal damage and business risks (arrests or hirings based on skull measurements, miscalculation of demand or procurement, and so on).

The objective of regulation should be to compel users and AI vendors to take care not to increase the risks of the above happening; and the more serious the risk, the more actively they should be compelled. There's another concern often aired regarding AI: the need to observe moral and ethical norms and to cater to psychological comfort, so to say. To this end we see warnings so folks know they're viewing a non-existent (AI-drawn) object or communicating with a robot rather than a human, and notices stating that copyright was respected during AI training, and so on. And why? So lawmakers and AI vendors aren't targeted by angry mobs. And this is a very real concern in some parts of the world (recall the protests against Uber, for instance).

How to regulate

The simplest way to regulate AI would be to prohibit everything, but it looks like this approach isn't on the table yet; and anyway, it's not much easier to prohibit AI than it is computers. Therefore all reasonable regulation attempts should follow the principle of "the greater the risk, the stricter the requirements". Machine-learning models used for something rather trivial, like retail buyer recommendations, can go unregulated; but the more sophisticated the model, or the more sensitive the application area, the more drastic the requirements on system vendors and users can be. For example:

- Submitting a model's code or training dataset for inspection by regulators or experts.
- Proving the robustness of a training dataset, including in terms of bias, copyright and so forth.
- Proving the reasonableness of the AI's output; for example, that it is free of hallucinations.
- Labelling AI operations and results.
- Updating a model and its training dataset; for example, screening out people of a given skin color from the source data, or suppressing chemical formulas for explosives in the model's output.
- Testing the AI against hostile data, and updating its behavior as necessary.
- Controlling who is using a specific AI and why.
- Denying specific types of use.
- Training large AI, or AI applied to a particular area, only with the permission of the regulator.
- Proving that it's safe to use AI to address a particular problem.

This last approach is very exotic for IT, but more than familiar to, for example, pharmaceutical companies, aircraft manufacturers and many other industries where safety is paramount: first come five years of thorough tests, then the regulator's permission, and only then can a product be released for general use. That measure appears excessively strict, but only until you learn about incidents in which AI messed up treatment priorities for acute asthma and pneumonia patients and tried to send them home instead of to an intensive care unit.

The enforcement measures may range from fines for violations of AI rules (along the lines of European penalties for GDPR violations) to licensing of AI-related activities and criminal sanctions for breaches of legislation (as proposed in China).

But what's the right way? Below are my own personal opinions, based on 30 years of actively pursuing advanced technological development in the cybersecurity industry: from machine learning to secure-by-design systems.

First, we do need regulation. Without it, AI will end up resembling highways without traffic rules. Or, more relevantly, resembling the online personal-data-collection situation of the late 2000s, when nearly everyone collected all they could lay their hands on. Above all, regulation promotes self-discipline in market players.

Second, we need to maximize international harmonization and cooperation in regulation, the same way as with technical standards in mobile communications, the internet and so on. That sounds utopian given the modern geopolitical reality, but it doesn't make it any less desirable.

Third, regulation needn't be too strict: it would be short-sighted to strangle a dynamic young industry like this one with overregulation. That said, we need a mechanism for frequent revision of the rules to stay abreast of technology and market developments.

Fourth, the rules, risk levels and levels of protection measures should be defined in consultation with a great many relevantly experienced experts.

Fifth, we don't have to wait ten years. I've been banging on about the serious risks inherent in the Internet of Things and about vulnerabilities in industrial equipment for over a decade already, while documents like the EU Cyber Resilience Act first appeared (as drafts!) only last year.

But that's all for now, folks. Well done to those of you who've read this to the end: thank you all! And here's to an interesting, safe, AI-enhanced future!
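As promised above, here is a minimal, self-contained sketch of the "hostile data sample" idea: an FGSM-style perturbation against a toy logistic-regression scorer. Every number and parameter here is made up for illustration only; real attacks target real, trained models.

```python
# Toy adversarial-example sketch: a tiny, targeted nudge to the input
# flips the decision of a made-up logistic-regression scorer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1      # toy model parameters (assumed, not real)
x = rng.normal(size=8)              # a benign input sample

score_before = sigmoid(w @ x + b)   # say, the model's "malicious" probability

# The gradient of the score w.r.t. the input is proportional to w, so moving
# each feature by epsilon against that gradient pushes the score the other way.
epsilon = 0.5
direction = -np.sign(w) if score_before > 0.5 else np.sign(w)
x_adv = x + epsilon * direction

score_after = sigmoid(w @ x_adv + b)
print(f"score before: {score_before:.3f}, after perturbation: {score_after:.3f}")
```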


 Application Development

Researchers at ReversingLabs said they discovered two npm open source packages that contained malicious code linked to open source malware known as TurkoRat. The post "Researcher: malicious packages lurked on npm for months" appeared first on The Security Ledger with Paul F. Roberts.

Related stories:
- The surveys speak: supply chain threats are freaking people out
- Malicious Automation is driving API Security Breaches
- Episode 249: Intel Federal CTO Steve Orrin on the CHIPS Act and Supply Chain Security

 Incident Response, Learnings

Patient information left exposed in the MedEvolve incident included names, billing addresses, telephone numbers, primary health insurer and doctor's office account numbers, and some Social Security numbers, HHS OCR said.

 Trends, Reports, Analysis

H2 2022 marked a turning point in the security landscape. In several high-profile incidents, APIs emerged as a primary attack vector, posing a new and significant threat to organizations’ security posture, according to Cequence Security.

 Geopolitical, Terrorism

The NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) announced on Wednesday that four more countries have joined as members: Ukraine, Ireland, Japan, and Iceland.

 Expert Blogs and Opinion

Since .zip and .mov are both common file extensions as well as newly delegated top-level domains, security experts are concerned that a miscreant could employ these TLDs to trick people into visiting a malicious website when they believe they are opening a file, among other threat scenarios.
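A quick way to see the ambiguity: once .zip and .mov are delegated TLDs, a string that reads like a file name is also a syntactically valid hostname. A small illustrative sketch (the names below are hypothetical examples, not known-malicious sites):

```python
# Each "file name" parses cleanly as a hostname, so auto-linkers and users
# can be misled into treating a web destination as a local file.
from urllib.parse import urlparse

candidates = ["q1-report.zip", "holiday-video.mov"]  # hypothetical names
for name in candidates:
    parsed = urlparse("https://" + name)
    print(f"{name!r} -> hostname {parsed.hostname!r}")
```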

 Trends, Reports, Analysis

The number of organizations that experienced ransomware attacks over the past year has remained the same, but the average cost of recovery has increased, whether in ransom payments or in restoring lost data.

 Trends, Reports, Analysis

Described in a report called “The Growing Threat From Infostealers,” the new findings from Secureworks shed light on the thriving infostealer market, which plays a pivotal role in facilitating cybercrime activities such as ransomware attacks.

 Threat Actors

A hacking group known as OilAlpha has been identified in connection with a cyber espionage campaign that specifically targets development, humanitarian, media, and non-governmental organizations in the Arabian Peninsula. The group employed remote access tools, such as SpyNote and SpyMax, to install mobile spyware.

 Malware and Vulnerabilities

The Royal ransomware group, which spun off from Conti in early 2022, is refining its downloader using tactics and techniques that appear to draw directly from other post-Conti groups, says Yelisey Bohuslavskiy, chief research officer at Red Sense.

 Trends, Reports, Analysis

Trellix has observed a surge in malicious emails targeted toward Taiwan, starting April 7 and continuing until April 10. The number of malicious emails during this time increased to over four times the usual amount.

 Trends, Reports, Analysis

Cyber-resilience has become a top priority for global organizations, but over half (52%) of those with programs are struggling because they lack a comprehensive assessment approach, according to Osterman Research.

 Identity Theft, Fraud, Scams

BEC 3.0 attacks originate from legitimate services: in this campaign, hackers create free Dropbox accounts and leverage the domain's legitimacy to create pages with phishing content embedded in them.

 Malware and Vulnerabilities

The latest iteration of the Sotdas malware has emerged, showcasing a variety of innovative features and advanced techniques for evading detection. This malware family is written in C++. After achieving persistence and collecting system information, Sotdas leverages this data for optimizing resource utilization and initiating cryptomining operations.

 Feed

This Metasploit module exploits a command injection vulnerability in IBM AIX invscout set-uid root utility present in AIX 7.2 and earlier. The undocumented -rpm argument can be used to install an RPM file; and the undocumented -o argument passes arguments to the rpm utility without validation, leading to command injection with effective-uid root privileges. This module has been tested successfully on AIX 7.2.
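For administrators wanting a quick exposure check, a rough sketch along these lines could help. The binary path is an assumption (invscout typically lives under /usr/sbin on AIX), and this only confirms that a set-uid-root copy is present, not that it is exploitable:

```python
# Rough exposure check: is a set-uid-root invscout binary present?
# (Path is an assumption; adjust for your AIX installation.)
import os
import stat

PATH = "/usr/sbin/invscout"

try:
    st = os.stat(PATH)
except FileNotFoundError:
    print(f"{PATH}: not found")
else:
    is_setuid_root = bool(st.st_mode & stat.S_ISUID) and st.st_uid == 0
    print(f"{PATH}: setuid-root={is_setuid_root}")
```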

 Feed

On May 11 2023, Essential Addons for Elementor, a WordPress plugin with over one million active installations, released a patch for a critical vulnerability that made it possible for any unauthenticated user to reset arbitrary user passwords, including user accounts with administrative-level access. Versions 5.7.1 and below are affected.

 Feed

Debian Linux Security Advisory 5405-1 - It was discovered that missing input sanitizing in the implementation of the OIDCStripCookie option in mod_auth_openidc could result in denial of service.

 Feed

Red Hat Security Advisory 2023-3221-01 - Mozilla Thunderbird is a standalone mail and newsgroup client. This update upgrades Thunderbird to version 102.11.0. Issues addressed include a bypass vulnerability.

 Feed

Red Hat Security Advisory 2023-3220-01 - Mozilla Firefox is an open-source web browser, designed for standards compliance, performance, and portability. This update upgrades Firefox to version 102.11.0 ESR. Issues addressed include a bypass vulnerability.

 Feed

Red Hat Security Advisory 2023-3223-01 - Red Hat AMQ Streams, based on the Apache Kafka project, offers a distributed backbone that allows microservices and other applications to share data with extremely high throughput and extremely low latency. This release of Red Hat AMQ Streams 2.4.0 serves as a replacement for Red Hat AMQ Streams 2.3.0, and includes security and bug fixes, and enhancements. Issues addressed include denial of service, deserialization, information leakage, memory exhaustion, and resource exhaustion vulnerabilities.

 Feed

Ubuntu Security Notice 6087-1 - It was discovered that Ruby incorrectly handled certain regular expressions. An attacker could possibly use this issue to cause a denial of service. This issue only affected Ubuntu 16.04 ESM.
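The underlying failure mode, catastrophic regex backtracking (ReDoS), is easy to reproduce in any backtracking engine. Here is a sketch of the general class in Python, not the specific Ruby expression from the advisory:

```python
# Catastrophic backtracking demo: nested quantifiers plus a non-matching tail.
# Matching time roughly doubles with every extra "a", so small inputs already
# stall the engine.
import re
import time

evil = re.compile(r"^(a+)+$")

for n in (20, 24, 26):
    payload = "a" * n + "!"          # the "!" guarantees the match fails
    start = time.perf_counter()
    evil.match(payload)
    print(f"n={n}: {time.perf_counter() - start:.3f}s")
```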

 Feed

Ubuntu Security Notice 6088-1 - It was discovered that runC incorrectly made /sys/fs/cgroup writable when in rootless mode. An attacker could possibly use this issue to escalate privileges. It was discovered that runC incorrectly performed access control when mounting /proc to non-directories. An attacker could possibly use this issue to escalate privileges. It was discovered that runC incorrectly handled /proc and /sys mounts inside a container. An attacker could possibly use this issue to bypass AppArmor, and potentially SELinux.
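A quick way to sanity-check a running container against the first issue is to look at how /sys/fs/cgroup is mounted; a read-only mount is the expected hardened state. A minimal sketch, assuming it is run inside the container being inspected:

```python
# Check whether /sys/fs/cgroup is mounted read-only inside this container.
def cgroup_mount_readonly(mounts="/proc/self/mounts"):
    with open(mounts) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) >= 4 and fields[1] == "/sys/fs/cgroup":
                return "ro" in fields[3].split(",")
    return None  # no such mount found

state = cgroup_mount_readonly()
print({True: "read-only (expected)",
       False: "writable (investigate)",
       None: "not mounted"}[state])
```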

 Feed

Ubuntu Security Notice 6086-1 - It was discovered that minimatch incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service.

 Feed

Red Hat Security Advisory 2023-1329-01 - Red Hat build of MicroShift is Red Hat's light-weight Kubernetes orchestration solution designed for edge device deployments and is built from the edge capabilities of Red Hat OpenShift. MicroShift is an application that is deployed on top of Red Hat Enterprise Linux devices at the edge, providing an efficient way to operate single-node clusters in these low-resource environments. This advisory contains the RPM packages for Red Hat build of MicroShift 4.13.0. Issues addressed include a man-in-the-middle vulnerability.

 Feed

Red Hat Security Advisory 2023-2138-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. This advisory contains the extra low-latency container images for Red Hat OpenShift Container Platform 4.13. Issues addressed include a bypass vulnerability.

 Feed

Red Hat Security Advisory 2023-3205-01 - OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform. This advisory contains OpenShift Virtualization 4.13.0 images. Issues addressed include a denial of service vulnerability.

 Feed

Red Hat Security Advisory 2023-1325-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. This advisory contains the RPM packages for Red Hat OpenShift Container Platform 4.13.0. Issues addressed include bypass, denial of service, and information leakage vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1328-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. Issues addressed include denial of service and out of bounds read vulnerabilities.

 Feed

Red Hat Security Advisory 2023-3204-01 - OpenShift Virtualization is Red Hat's virtualization solution designed for Red Hat OpenShift Container Platform. This advisory contains OpenShift Virtualization 4.13.0 RPMs. Issues addressed include a denial of service vulnerability.

 Feed

Red Hat Security Advisory 2023-2695-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. This advisory contains the RPM packages for Red Hat OpenShift Container Platform 4.11.40.

 Feed

Red Hat Security Advisory 2023-3198-01 - Jenkins is a continuous integration server that monitors executions of repeated jobs, such as building a software project or jobs run by cron. Issues addressed include bypass, code execution, cross site request forgery, cross site scripting, denial of service, deserialization, information leakage, and insecure permissions vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1326-01 - Red Hat OpenShift Container Platform is Red Hat's cloud computing Kubernetes application platform solution designed for on-premise or private cloud deployments. This advisory contains the container images for Red Hat OpenShift Container Platform 4.13.0. Issues addressed include bypass, denial of service, information leakage, out of bounds read, and remote SQL injection vulnerabilities.

 Feed

Ubuntu Security Notice 6085-1 - It was discovered that some AMD x86-64 processors with SMT enabled could speculatively execute instructions using a return address from a sibling thread. A local attacker could possibly use this to expose sensitive information. Zheng Wang discovered that the Intel i915 graphics driver in the Linux kernel did not properly handle certain error conditions, leading to a double-free. A local attacker could possibly use this to cause a denial of service.
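Since the AMD issue only matters with SMT enabled, a first triage step on a Linux host is simply to read the kernel's reported SMT state. A small sketch (the sysfs path below exists on reasonably recent kernels; older kernels may not expose it):

```python
# Report whether SMT (simultaneous multithreading) is active on this Linux host.
from pathlib import Path

smt = Path("/sys/devices/system/cpu/smt/active")
if smt.exists():
    print("SMT active:", smt.read_text().strip() == "1")
else:
    print("SMT state unknown (sysfs entry not present)")
```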

 Feed

Debian Linux Security Advisory 5404-1 - Multiple security issues were discovered in Chromium, which could result in the execution of arbitrary code, denial of service or information disclosure.

 Feed

Ubuntu Security Notice 6050-2 - USN-6050-1 fixed several vulnerabilities in Git. This update provides the corresponding updates for CVE-2023-25652 and CVE-2023-29007 on Ubuntu 16.04 LTS. It was discovered that Git incorrectly handled certain commands. An attacker could possibly use this issue to overwrite paths.

 Feed

Ubuntu Security Notice 6084-1 - Jordy Zomer and Alexandra Sandulescu discovered that the Linux kernel did not properly implement speculative execution barriers in usercopy functions in certain situations. A local attacker could use this to expose sensitive information. Xingyuan Mo discovered that the x86 KVM implementation in the Linux kernel did not properly initialize some data structures. A local attacker could use this to expose sensitive information.

 Feed

Red Hat Security Advisory 2023-3191-01 - This is a kernel live patch module which is automatically loaded by the RPM post-install script to modify the code of a running kernel. Issues addressed include denial of service and use-after-free vulnerabilities.

 Feed

Red Hat Security Advisory 2023-3177-01 - The Apache Portable Runtime is a portability library used by the Apache HTTP Server and other projects. apr-util is a library which provides additional utility interfaces for APR; including support for XML parsing, LDAP, database interfaces, URI parsing, and more. Issues addressed include an out of bounds write vulnerability.

 Feed

Red Hat Security Advisory 2023-3189-01 - GNU Emacs is a powerful, customizable, self-documenting text editor. It provides special code editing features, a scripting language, and the capability to read e-mail and news. Issues addressed include a code execution vulnerability.

 Feed

Cisco has released updates to address a set of nine security flaws in its Small Business Series Switches that could be exploited by an unauthenticated, remote attacker to run arbitrary code or cause a denial-of-service (DoS) condition. "These vulnerabilities are due to improper validation of requests that are sent to the web interface," Cisco said, crediting an unnamed external researcher for

 Feed

The rising geopolitical tensions between China and Taiwan in recent months have sparked a noticeable uptick in cyber attacks on the East Asian island country. "From malicious emails and URLs to malware, the strain between China's claim of Taiwan as part of its territory and Taiwan's maintained independence has evolved into a worrying surge in attacks," the Trellix Advanced Research Center said 

 Feed

The notorious cryptojacking group tracked as 8220 Gang has been spotted weaponizing a six-year-old security flaw in Oracle WebLogic servers to ensnare vulnerable instances into a botnet and distribute cryptocurrency mining malware. The flaw in question is CVE-2017-3506 (CVSS score: 7.4), which, when successfully exploited, could allow an unauthenticated attacker to execute arbitrary commands

 Feed

A U.S. national has pleaded guilty in a Missouri court to operating a darknet carding site and selling financial information belonging to tens of thousands of victims in the country. Michael D. Mihalo, aka Dale Michael Mihalo Jr. and ggmccloud1, has been accused of setting up a carding site called Skynet Market that specialized in the trafficking of credit and debit card data. Mihalo and his

 Feed

Apple has announced that it prevented over $2 billion in potentially fraudulent transactions and rejected roughly 1.7 million app submissions for privacy and security violations in 2022. The computing giant said it terminated 428,000 developer accounts for potential fraudulent activity, blocked 105,000 fake developer account creations, and deactivated 282 million bogus customer accounts. It

 Feed

Cybersecurity is constantly evolving, but complexity can give hostile actors an advantage. To stay ahead of current and future attacks, it's essential to simplify and reframe your defenses. Zscaler Deception is a state-of-the-art next-generation deception technology seamlessly integrated with the Zscaler Zero Trust Exchange. It creates a hostile environment for attackers and enables you to track

 Feed

Digitalization initiatives are connecting once-isolated Operational Technology (OT) environments with their Information Technology (IT) counterparts. This digital transformation of the factory floor has accelerated the connection of machinery to digital systems and data. Computer systems for managing and monitoring digital systems and data have been added to the hardware and software used for

 Feed

A cybercrime enterprise known as Lemon Group is leveraging millions of pre-infected Android smartphones worldwide to carry out their malicious operations, posing significant supply chain risks. "The infection turns these devices into mobile proxies, tools for stealing and selling SMS messages, social media and online messaging accounts and monetization via advertisements and click fraud,"

Aggregator history: Thursday, May 18, 2023