Cyber security aggregated RSS news

Cyber security aggregator - feeds history


 Privacy

Your (neural) networks are leaking

Researchers at universities in the U.S. and Switzerland, in collaboration with Google and DeepMind, have published a paper showing how data can leak from image-generation systems built on the machine-learning models DALL-E, Imagen and Stable Diffusion. All of them work the same way on the user side: you type in a specific text query — for example, an armchair in the shape of an avocado — and get a generated image in return.

Image generated by the DALL-E neural network. Source.

All these systems are trained on a vast number (tens or hundreds of thousands) of images with pre-prepared descriptions. The idea behind such neural networks is that, by consuming a huge amount of training data, they can create new, unique images. However, the main takeaway of the new study is that these images are not always so unique. In some cases it's possible to force the neural network to reproduce, almost exactly, an original image previously used for training. And that means neural networks can unwittingly reveal private information.

Image generated by the Stable Diffusion neural network (right) and the original image from the training set (left). Source.

More data for the data god

The output of a machine-learning system in response to a query can seem like magic to a non-specialist: woah, it's like an all-knowing robot! But there's no magic really… All neural networks work more or less the same way: an algorithm is created that's trained on a data set — for example, a series of pictures of cats and dogs — with a description of what exactly is depicted in each image. After the training stage, the algorithm is shown a new image and asked to work out whether it's a cat or a dog. From these humble beginnings, the developers of such systems moved on to a more complex scenario: an algorithm trained on lots of pictures of cats creates, on demand, an image of a pet that never existed. Such experiments are carried out not only with images, but also with text, video and even voice: we've already written about the problem of deepfakes (whereby digitally altered videos of (mostly) politicians or celebrities seem to say stuff they never actually did).

For all neural networks, the starting point is a set of training data: neural networks cannot invent new entities from nothing. To create an image of a cat, the algorithm must study thousands of real photographs or drawings of these animals. There are plenty of arguments for keeping these data sets confidential. Some of them are in the public domain; other data sets are the intellectual property of the developer company, which invested considerable time and effort into creating them in the hope of achieving a competitive advantage. Still others, by definition, constitute sensitive information. For example, experiments are underway to use neural networks to diagnose diseases based on X-rays and other medical scans. This means the training data contains the actual health data of real people, which, for obvious reasons, must not fall into the wrong hands.

Diffuse it

Although machine-learning algorithms look the same to the outsider, they are in fact different. In their paper, the researchers pay special attention to diffusion models. These work like this: the training data (again, images of people, cars, houses and so on) is distorted by adding noise, and the neural network is then trained to restore such images to their original state. This method makes it possible to generate images of decent quality, but a potential drawback (in comparison with generative adversarial networks, for example) is their greater tendency to leak data.
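To make the noising-and-denoising idea concrete, here is a minimal sketch of a single diffusion training step. It assumes PyTorch; the model, the toy linear noise schedule and the function names are illustrative simplifications, not the actual architecture behind DALL-E, Imagen or Stable Diffusion.

```python
# Minimal sketch of one diffusion training step (assumes PyTorch).
# Real systems use far more elaborate schedules and architectures.
import torch
import torch.nn.functional as F

def train_step(model, images, num_timesteps=1000):
    batch = images.size(0)
    # Pick a random amount of distortion for each training image.
    t = torch.randint(0, num_timesteps, (batch,))
    # Toy linear schedule: later timesteps keep less of the image.
    alpha = (1.0 - t.float() / num_timesteps).view(-1, 1, 1, 1)
    noise = torch.randn_like(images)
    # Distort the training image by mixing in Gaussian noise...
    noisy = alpha.sqrt() * images + (1.0 - alpha).sqrt() * noise
    # ...and train the network to predict what was added, i.e. to
    # restore the image to its original state.
    predicted_noise = model(noisy, t)
    return F.mse_loss(predicted_noise, noise)
```

Because the network is rewarded for reconstructing training images exactly, it can end up memorizing them outright, and that memorization is precisely the tendency to leak that the researchers exploit.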
The original data can be extracted from diffusion models in at least three different ways. First, using specific queries, you can force the neural network to output not something unique generated from thousands of pictures, but a specific source image. Second, the original image can be reconstructed even if only a part of it is available. Third, it's possible to simply establish whether or not a particular image is contained within the training data.

Very often, neural networks are… lazy, and instead of a new image they produce something from the training set if it contains multiple duplicates of the same picture. Besides the above example with the Ann Graham Lotz photo, the study gives quite a few other similar results:

Odd rows: the original images. Even rows: images generated by Stable Diffusion v1.4. Source.

If an image is duplicated in the training set more than a hundred times, there's a very high chance of it leaking in near-original form. However, the researchers also demonstrated ways to retrieve training images that appeared only once in the original set. This method is far less efficient: out of five hundred tested images, the algorithm randomly recreated only three. The most artistic method of attacking a neural network involves recreating a source image using just a fragment of it as input: the researchers deleted part of a picture and asked the neural network to complete it. Doing this can be used to determine fairly accurately whether a particular image was in the training set. If it was, the machine-learning algorithm generated an almost exact copy of the original photo or drawing. Source.

At this point, let's divert our attention to the issue of neural networks and copyright.

Who stole from whom?

In January 2023, three artists sued the creators of image-generating services that used machine-learning algorithms. They claimed (justifiably) that the developers of the neural networks had trained them on images collected online without any respect for copyright. A neural network can indeed copy the style of a particular artist, and thus deprive them of income. The paper hints that in some cases algorithms can, for various reasons, engage in outright plagiarism, generating drawings, photographs and other images that are almost identical to the work of real people.

The study makes recommendations for strengthening the privacy of the original training set:
Get rid of duplicates (one possible approach is sketched below).
Reprocess training images, for example by adding noise or changing the brightness; this makes data leakage less likely.
Test the algorithm with special training images, then check that it doesn't inadvertently reproduce them accurately.
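As an illustration of the first recommendation, here is a sketch of near-duplicate removal using perceptual hashing. The Pillow and imagehash packages, the file layout and the distance threshold are all assumptions made for the example; the paper does not prescribe a specific tool.

```python
# Sketch: drop near-duplicate images from a training set using a
# perceptual hash (assumes the Pillow and imagehash packages).
from pathlib import Path

import imagehash
from PIL import Image

def deduplicate(image_dir, max_distance=5):
    kept_hashes = []  # hashes of the images we decided to keep
    kept_paths = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        h = imagehash.phash(Image.open(path))
        # The Hamming distance between perceptual hashes approximates
        # visual similarity: a small distance means a near-duplicate.
        if any(h - prev <= max_distance for prev in kept_hashes):
            continue  # near-duplicate of an image we already kept
        kept_hashes.append(h)
        kept_paths.append(path)
    return kept_paths
```

A pass like this mainly catches exact and near-exact repeats, which matters because, per the study, images duplicated more than a hundred times are the ones most likely to resurface near-verbatim.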
What next?

The ethics and legality of generative art certainly make for an interesting debate — one in which a balance must be sought between artists and the developers of the technology. On the one hand, copyright must be respected. On the other, is computer art so different from human art? In both cases, the creators draw inspiration from the works of colleagues and competitors.

But let's get back down to earth and talk about security. The paper provides a specific set of facts about only one machine-learning model. Extending the concept to all similar algorithms, we arrive at an interesting situation. It's not hard to imagine a scenario whereby a smart assistant of a mobile operator hands out sensitive corporate information in response to a user query: after all, it was in the training data. Or, for example, a cunning query tricks a public neural network into generating a copy of someone's passport. The researchers stress that such problems remain theoretical for the time being.

But other problems are already with us. As we speak, the text-generating neural network ChatGPT is being used to write real malicious code that (sometimes) works. And GitHub Copilot is helping programmers write code using a huge amount of open-source software as input; the tool doesn't always respect the copyright and privacy of the authors whose code ended up in the sprawling set of training data. As neural networks evolve, so too will the attacks on them — with consequences that no one yet fully understands.


 Feed

Consider adding some security-through-obscurity tactics to your organization's defensive arsenal. Mask your attack surface behind additional zero-trust layers to remove AI's predictive advantage.

 Companies to Watch

The round, which brought the total raised to $93M, was led by Lightspeed Venture Partners, with participation from previous investors Felicis Ventures, Redpoint Ventures, and Sequoia Capital.

 Threat Actors

Google's Threat Analysis Group (TAG) revealed that the Chinese nation-state group APT41 targeted an unnamed Taiwanese media firm to deploy Google Command and Control (GC2), an open-source red-teaming tool. To initiate the attack, the attackers sent phishing emails with links to password-protected files hosted on Google Drive.

 Security Products & Services

During the public beta, private vulnerability reporting could be enabled by maintainers and repository owners only on individual repositories. Starting this week, they can enable it for all repositories within their organization.

 Malware and Vulnerabilities

AhnLab discovered Trigona ransomware operators targeting unsecured, internet-exposed Microsoft SQL (MS-SQL) servers. They breach the servers via brute-force attacks that crack account credentials. Before encrypting data, the attackers claim to steal sensitive documents, which they threaten to publish on dark web leak sites if the ransom is not paid.

 New Cyber Technologies

CYFIRMA detected a cyberattack in Kashmir, India, linked to the DoNot APT group, which used third-party file-sharing websites to spread malware disguised as chat apps named Ten Messenger and Link Chat QQ. The malware's source code was heavily obfuscated and protected with the ProGuard code obfuscation utility. Implementing multiple layers of security is suggested to minimize the impact of this threat.

 Malware and Vulnerabilities

Morphisec found a campaign using a highly evasive loader, named in2al5d p3in4er, to disseminate the Aurora info-stealer via links in YouTube video descriptions. The loader is compiled with Embarcadero RAD Studio, which lets attackers create executables for multiple platforms with a range of configuration options.

 Breaches and Incidents

The “unauthorized access” that prompted the Guam Memorial Hospital to shut down its network in March is now being investigated by the U.S. Department of Health and Human Services, according to an acceptance letter addressed to a whistleblower.

 Malware and Vulnerabilities

A new backdoor, named DevOpt, was discovered that uses hard-coded names for persistence and offers features such as keylogging, browser-credential theft, and clipboard hijacking (clipper). Multifunctional malware like DevOpt is becoming increasingly common. Organizations must continuously improve their defenses and implement a multi-layered defense architecture.

 Malware and Vulnerabilities

Secureworks analyzed the findings in a report published on Thursday, saying the infection chain for several of these attacks relied on a malicious Google Ad that sent users to a fake download page via a compromised WordPress site.

 Malware and Vulnerabilities

Uptycs found a new credential stealer, named Zaraza bot, being advertised on Telegram while simultaneously using the messaging service as its C2 server. It can target 38 web browsers. Zaraza bot is a lightweight piece of malware consisting of a single 64-bit binary. Some of its code and logs are written in Russian. As a precaution, users should be wary of links received over social media and avoid downloading files from unknown sources.

 Breaches and Incidents

Eurocontrol confirmed its website has been "under attack" since April 19, and said "pro-Russian hackers" had claimed responsibility for it. "The attack is causing interruptions to the website and web availability," a spokesperson told The Register.

 Malware and Vulnerabilities

At the beginning of March, ReversingLabs researchers encountered a malicious package on the Python Package Index (PyPI) named termcolour, a three-stage downloader published in multiple versions.

 Malware and Vulnerabilities

As generative AI tools like OpenAI ChatGPT and Google Bard continue to dominate the headlines—and pundits debate whether the technology has taken off too quickly without necessary guardrails—cybercriminals are showing no hesitance in exploiting them.

 Threat Actors

The threat actor targets government and diplomatic entities in the CIS. The few victims discovered in other regions (Middle East or Southeast Asia) turn out to be foreign representations of CIS countries, illustrating Tomiris’s narrow focus.

 Malware and Vulnerabilities

The Play ransomware group has added two custom tools written in .NET to expand the effectiveness of its attacks. Named Grixba and VSS Copying Tool (the latter abusing the Volume Shadow Copy Service), these tools enable attackers to enumerate users in compromised networks and gather information about security, backup, and remote administration software.

 Threat Actors

The Blind Eagle cyberespionage group was identified as the source of a new multi-stage attack chain that ultimately results in the deployment of NjRAT on compromised systems. In this campaign, Blind Eagle leverages social engineering, custom malware, and spear-phishing attacks. Upgrading your security posture is therefore recommended, as is training employees to detect phishing emails.

 Malware and Vulnerabilities

Sophos X-Ops uncovered a defense evasion tool called AuKill. The tool exploits an outdated version of the driver used by version 16.32 of the Microsoft utility Process Explorer to disable EDR processes before deploying either a backdoor or ransomware on the targeted system. Since the beginning of 2023, the tool has been used to drop the Medusa Locker and LockBit ransomware strains.

 Breaches and Incidents

Microsoft connected the Iranian Mint Sandstorm APT group (aka PHOSPHORUS) to a wave of attacks, between late 2021 and mid-2022, targeting U.S. critical infrastructure. The group targets private and public organizations, including activists, journalists, the Defense Industrial Base (DIB), political dissidents, and employees of various government agencies.

 Threat Actors

Researchers found the 8220 Gang exploiting the Log4Shell vulnerability to install CoinMiner on VMware Horizon servers of Korean energy-related companies. The gang uses a PowerShell script to download ScrubCrypt and establishes persistence by modifying registry entries. System administrators are advised to verify whether their VMware servers are susceptible and to apply the latest patches.

 Feed

Debian Linux Security Advisory 5393-1 - Multiple security issues were discovered in Chromium, which could result in the execution of arbitrary code, denial of service or information disclosure.

 Feed

Debian Linux Security Advisory 5392-1 - Multiple security issues were discovered in Thunderbird, which could result in denial of service or the execution of arbitrary code.

 Feed

Red Hat Security Advisory 2023-1931-01 - GNU Emacs is a powerful, customizable, self-documenting text editor. It provides special code editing features, a scripting language, and the capability to read e-mail and news. Issues addressed include a code execution vulnerability.

 Feed

Red Hat Security Advisory 2023-1930-01 - GNU Emacs is a powerful, customizable, self-documenting text editor. It provides special code editing features, a scripting language, and the capability to read e-mail and news. Issues addressed include a code execution vulnerability.

 Feed

Red Hat Security Advisory 2023-1816-01 - Red Hat OpenShift Data Foundation is software-defined storage integrated with and optimized for the Red Hat OpenShift Container Platform. Red Hat OpenShift Data Foundation is a highly scalable, production-grade persistent storage for stateful applications running in the Red Hat OpenShift Container Platform.

 Feed

A new "all-in-one" stealer malware named EvilExtractor (also spelled Evil Extractor) is being marketed for sale for other threat actors to steal data and files from Windows systems. "It includes several modules that all work via an FTP service," Fortinet FortiGuard Labs researcher Cara Lin said. "It also contains environment checking and Anti-VM functions. Its primary purpose seems to be to

 Feed

Print management software provider PaperCut said that it has "evidence to suggest that unpatched servers are being exploited in the wild," citing two vulnerability reports from cybersecurity company Trend Micro. "PaperCut has conducted analysis on all customer reports, and the earliest signature of suspicious activity on a customer server potentially linked to this vulnerability is 14th April 01…

 Feed

A recent review by Wing Security, a SaaS security company that analyzed the data of over 500 companies, revealed some worrisome information. According to this review, 84% of the companies had employees using an average of 3.5 SaaS applications that were breached in the previous 3 months. While this is concerning, it isn't much of a surprise. The exponential growth in SaaS usage has security and…

 Feed

Threat actors have been observed leveraging a legitimate but outdated WordPress plugin to surreptitiously backdoor websites as part of an ongoing campaign, Sucuri revealed in a report published last week. The plugin in question is Eval PHP, released by a developer named flashpixx. It allows users to insert PHP code into pages and posts of WordPress sites that's then executed every time the posts are…

 Feed

The Russian-speaking threat actor behind a backdoor known as Tomiris is primarily focused on gathering intelligence in Central Asia, fresh findings from Kaspersky reveal. "Tomiris's endgame consistently appears to be the regular theft of internal documents," security researchers Pierre Delcher and Ivan Kwiatkowski said in an analysis published today. "The threat actor targets government and…

 Feed

Threat actors are employing a previously undocumented "defense evasion tool" dubbed AuKill that's designed to disable endpoint detection and response (EDR) software by means of a Bring Your Own Vulnerable Driver (BYOVD) attack. "The AuKill tool abuses an outdated version of the driver used by version 16.32 of the Microsoft utility, Process Explorer, to disable EDR processes before deploying…

Aggregator history: Monday, April 24, 2023