Cyber security aggregated RSS news

Cyber security aggregator - feeds history


 Technology

Disclaimer: the opinions of the author are his own and may not reflect the official position of Kaspersky (the company). Beyond the various geopolitical events that defined 2022, on the technological level, it was the year of AI. I might as well start by coming clean: until very recently, whenever I'd be asked about AI in cybersecurity, I'd dismiss it as vaporware. I always knew machine learning had many real-world applications; but for us in the infosec world, AI had only ever been used in the cringiest of product pitches. To me, "AI-powered" was just an elegant way for vendors to say "we have no existing knowledge base or telemetry, so we devised a couple of heuristics instead". I remain convinced that in more than 95% of cases, the resulting products contained little actual AI either.

But the thing is, while marketing teams were busy slapping AI stickers on any product that involved k-means calculations as part of its operation, the real AI field was actually making progress. The day of reckoning for me came when I first tried DALL-E 2 (and soon thereafter, Midjourney). Both projects allow you to generate images based on textual descriptions, and have already caused significant turmoil in the art world.

This art is generated with Midjourney using the prompt "All hail our new AI overlords"

Then, in December of last year, ChatGPT took the world by storm. Simply put, ChatGPT is a chatbot. I assume most people have already tried it at this point, but if you haven't, I strongly suggest you do (just be sure not to confuse it with a virus). No words can convey how much it improves over previous projects, and hearing about it just isn't enough. You have to experience it to get a feel for everything that's coming…

ChatGPT speaks for itself

Language models

In the words of Arthur C. Clarke, "any sufficiently advanced technology is indistinguishable from magic". I love how technology can sometimes bring this sense of wonder into our lives, but this feeling unfortunately gets in the way when we attempt to think about the implications or limits of a new breakthrough. For this reason, I think we first need to spend some time understanding how these technologies work under the hood. Let's start with ChatGPT. It's a language model; in other words, it's a representation of our language.
As is the case with many large machine learning projects, nobody really knows how this model works (not even OpenAI, its creators). We know how the model was created, but it's way too complex to be formally understood. ChatGPT, being the largest (public?) language model to date, has over 175 billion parameters. To grasp what that means, imagine a giant machine that has 175 billion knobs you can tweak. Every time you send text to ChatGPT, this text is converted into a setting for each of those knobs. And finally, the machine produces output (more text) based on their position. There's also an element of randomness, to ensure that the same question won't always lead to the exact same answer (but this can be tweaked as well). This is the reason why we perceive such models as black boxes: even if you were to spend your life studying the machine, it's unclear that you'd ever be able to figure out the purpose of a single knob (let alone all of them).

Still, we know what the machine does because we know the process through which it was generated. The language model is an algorithm that can process text, and it was fed lots of it during its training phase: all of Wikipedia, scraped web pages, books, etc. This allowed for the creation of a statistical model that knows the likelihood of having one word follow another. If I say "roses are red, violets are", you can guess with a relatively high degree of confidence that the next word will be "blue". This is, in a nutshell, how any language model works. To such a model, finishing your sentence is no different from guessing which sequence of words is likely to follow your question, based on everything it's read before. In the case of ChatGPT, there was actually one more step involved, called supervised fine-tuning: human AI trainers had numerous chats with the bot and flagged all answers deemed problematic (inaccurate, biased, racist, etc.) so it would learn not to repeat them.
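The "likelihood of having one word follow another" idea can be illustrated with a toy bigram model. This is only a caricature (ChatGPT is a transformer with billions of learned parameters, not a word-pair counter), but the prediction principle is the same:

```python
import random
from collections import Counter, defaultdict

# Toy corpus; a real model trains on a large chunk of the internet.
corpus = ("roses are red violets are blue "
          "roses are pretty violets are nice").split()

# Count, for every word, which words follow it and how often.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict(word):
    """Return the statistically most likely next word."""
    return follows[word].most_common(1)[0][0]

def sample(word):
    """Add the element of randomness mentioned above: pick a next word
    with probability proportional to how often it was observed."""
    candidates = follows[word]
    return random.choices(list(candidates), weights=candidates.values())[0]

print(predict("violets"))  # → are
```

Scaled up from word pairs to whole contexts, and from simple counting to 175 billion learned parameters, the machine is still fundamentally a next-token predictor.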
If you can't wrap your head around AI, file it under math or statistics: the goal of these models is prediction. When using ChatGPT, we very easily develop the feeling that the AI knows things, since it's able to return contextually relevant and domain-specific information for queries it sees for the first time. But it doesn't understand what any of the words mean: it's only capable of generating more text that feels like it would be a natural continuation of whatever was given. This explains why ChatGPT can lay out a complex philosophical argument, but often trips up on basic arithmetic: it's harder to predict the result of a calculation than the next word in a sentence. Besides, it doesn't have any memory: its training ended in 2021 and the model is frozen. Updates come in the form of new models (e.g., GPT-4) trained on new data. In fact, ChatGPT doesn't even remember the conversations you're having with it: the recent chat history is sent along with any new text you type so that the dialog feels more natural. Whether this still qualifies as intelligence (and whether this is significantly different from human intelligence) will be the subject of heated philosophical debates in the years to come.

Diffusion models

Image generation tools like Midjourney and DALL-E are based on another category of models. Their training procedure, obviously, focuses on generating images (or collections of pixels) instead of text. There are actually two components required to generate a picture based on a textual description, and the first one is very intuitive. The model needs a way to associate words with visual information, so it's fed collections of captioned images. Just like with ChatGPT, we end up with a giant, inscrutable machine that's very good at matching pictures with textual data. The machine has no idea what Brad Pitt's face looks like, but if it's seen enough photos of him, it knows that they all share common properties.
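This picture-to-caption matching can be caricatured with feature vectors: each image and each caption is reduced to a list of numbers, and "matching" means measuring how close those lists are. Everything below is hand-made for illustration; real models learn thousands of opaque feature dimensions from billions of captioned images, and the file names and three-word vocabulary are purely hypothetical:

```python
import numpy as np

# Hand-made "feature vectors" standing in for what the model learns.
image_features = {
    "photo_of_a_cat.jpg": np.array([0.9, 0.1, 0.0]),
    "photo_of_a_dog.jpg": np.array([0.1, 0.9, 0.0]),
    "photo_of_a_car.jpg": np.array([0.0, 0.1, 0.9]),
}

def embed_caption(caption):
    """Stand-in text encoder: maps a caption into the same feature space."""
    vocab = {"cat": 0, "dog": 1, "car": 2}
    v = np.zeros(3)
    for word in caption.lower().split():
        if word in vocab:
            v[vocab[word]] = 1.0
    return v

def best_match(caption):
    """Return the image whose features are most similar to the caption's
    (cosine similarity: normalized dot product of the two vectors)."""
    c = embed_caption(caption)
    def similarity(v):
        return float(np.dot(c, v) / (np.linalg.norm(c) * np.linalg.norm(v)))
    return max(image_features, key=lambda name: similarity(image_features[name]))

print(best_match("a small cat"))  # → photo_of_a_cat.jpg
```

The "yup, that's him again" moment is just this comparison: a new photo lands close, in feature space, to everything the model has already associated with the same words.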
And if someone submits a new Brad Pitt photo, the model is able to recognize him and go "yup, that's him again". The second part, which I found more surprising, is the ability to enhance images. For this, we use a diffusion model, trained on clean images to which (visual) noise is gradually added until they become unrecognizable. This allows the model to learn the correspondence between a blurry, low-quality picture and its higher-resolution counterpart, again on a statistical level, and recreate a good image from the noisy one. There are actually AI-powered products dedicated to de-noising old photos or increasing their resolution.

An example of the increasingly low-quality images used to train diffusion models, with my trusty avatar

Putting everything together, we are able to synthesize images: we start from random noise, and enhance it gradually while making sure it contains the characteristics that match the user's prompt (a much more detailed description of DALL-E's internals can be found here).

The wrong issues

The emergence of all the tools mentioned in this article led to a strong public reaction, some of which was very negative. There are legitimate concerns to be had about the abrupt irruption of AI into our lives, but in my opinion, much of the current debate focuses on the wrong issues. Let us address those first, before moving on to what I think should be the core of the discussion surrounding AI.

DALL-E and Midjourney steal from real artists

On a few occasions, I have seen these tools described as programs that "make patchworks of images they've seen before, and then apply some kind of filter that allows them to imitate the style of the requested artist". Anyone making such a claim is either ignorant of the technical realities of the underlying models, or arguing in bad faith. As explained above, the model is completely incapable of extracting images, or even simple shapes, from the images it is trained on. The best it can do is extract mathematical features.
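The gradual-noising training procedure described for diffusion models can be sketched in a few lines. Real diffusion models use a carefully tuned variance schedule and then learn to reverse each step; the linear blend below only illustrates the forward, noise-adding half, on a made-up 8x8 "image":

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image": an 8x8 grayscale gradient with values in [0, 1].
image = np.linspace(0.0, 1.0, 64).reshape(8, 8)

def add_noise(x, t, num_steps=10):
    """Simplified forward diffusion step: blend the image toward pure
    Gaussian noise as t goes from 0 to num_steps. Real models use a
    tuned variance schedule; this linear blend just shows the idea."""
    alpha = 1.0 - t / num_steps          # how much of the signal survives
    noise = rng.normal(size=x.shape)
    return alpha * x + (1.0 - alpha) * noise

# t=0 is the clean image; by t=10 it is indistinguishable from noise.
noisy_versions = [add_noise(image, t) for t in range(11)]
```

Training pairs each noisy version with its cleaner neighbor, which is what lets the model later run the process in reverse: from pure random noise all the way back to a plausible image.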
What people believe DALL-E starts from (left) versus what DALL-E actually starts from (right)

There's no denying that many copyrighted works were used in the training phase without the original authors' explicit consent, and maybe there's a discussion to be had about this. But it's also worth pointing out that human artists follow the exact same process during their studies: they copy paintings from masters and draw inspiration from the artwork they encounter. And what is inspiration, if not the ability to capture the essence of an art piece combined with the drive to re-explore it? DALL-E and Midjourney introduce a breakthrough in the sense that they're theoretically able to gain inspiration from every picture produced in human history (and, likely, any they produce from now on), but it's a change in scale only, not in nature.

Compelling evidence of Wolfgang Amadeus Mozart stealing from artists during his training phase

AI makes things too easy

Such criticism usually implies that art should be hard. This has always been a surprising notion to me, since the observer of an art piece usually has very little idea of how much (or how little) effort it took to produce. It's not a new debate: years after Photoshop was released, a number of people are still arguing that digital art is not "real" art. Those who say it is put forward that using Photoshop still requires skill, but I think they're also missing the point. How much skill did Robert Rauschenberg need to put white paint on a canvas? How much music practice do you need before you can perform John Cage's infamous 4′33″? Even if we were to introduce skill as a criterion for art, where would we draw the line in the sand? How much effort is enough effort? When photography was invented, Charles Baudelaire called it "the refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies" (and he was not alone in this assessment). Turns out he was wrong.
ChatGPT helps cybercriminals

With the rise of AI, we're going to see productivity gains across the board. Right now, a number of media outlets and vendors are doing everything they can to hitch a ride on the ChatGPT hype, which has led to some of the most shameful clickbait in recent history. As we wrote earlier, ChatGPT may help criminals draft phishing emails or write malicious code, but neither of these has ever been a limiting factor. People familiar with the existence of GitHub know that malware availability is not an issue for malicious actors, and anyone worried about speeding up development should have raised those concerns when Copilot was released. I realize it's silly to debunk a media frenzy born of petty economic considerations instead of genuine concerns, but the fact is: AI is going to have a tremendous impact on our lives, and there are real issues to be addressed. All this noise is just getting in the way.

There's no going back

No matter how you feel about all the AI-powered tools that were released in 2022, know that more are coming. If you believe the field will be regulated before it gets out of control, think again: the political response I've witnessed so far has mostly been governments deciding to allocate more funds to AI research while they can still catch up. No one in power has any interest in slowing this thing down.

The fourth industrial revolution

AI will lead to, or has probably already led to, productivity gains. How massive they are or will be is hard to envision just yet. If your job consists in producing semi-inspired text, you should be worried. This applies if you're a visual designer working on commission, too: there'll always be clients who want the human touch, but most will go for the cheap option. But that's not all: reverse engineers, lawyers, teachers, physicians and many more should expect their jobs to change in profound ways. One thing to keep in mind is that ChatGPT is a general-purpose chatbot.
In the coming years, specialized models will emerge and outperform ChatGPT on specific use cases. In other words, if ChatGPT can't do your job now, it's likely that a new AI product released in the next five years will. Our jobs, all our jobs, will involve supervising AI and making sure its output is correct, rather than doing the work ourselves. It's possible that AI will hit a complexity wall and not progress any further, but after being wrong a number of times, I've learned not to bet against the field. Will AI change the world as much as the steam engine did? We should hope that it doesn't, because brutal shifts in the means of production change the structure of human society, and this never happens peacefully.

AI bias and ownership

Plenty has been said about biases in AI tools, so I won't get back into it. A more interesting subject is the way OpenAI fights those biases. As mentioned above, ChatGPT went through a supervised learning phase where the language model basically learns not to be a bigot. While this is a desirable feature, one can't help but notice that this process effectively teaches a new bias to the chatbot. The conditions of this fine-tuning phase are opaque: who are the unsung heroes flagging the bad answers? Underpaid workers in third-world countries, or Silicon Valley engineers on acid? (Spoiler: it's the former.) It's also worth remembering that these AI products won't work for the common good. The various products being designed at the moment are owned by companies that will always be driven, first and foremost, by profits that may or may not overlap with humankind's best interests. Just like a change in Google's search results has a measurable effect on people, AI companions or advisors will have the ability to sway users in subtle ways.

What now?

Since the question no longer seems to be whether AI is coming into our lives but when, we should at least discuss how we can get ready for it.
We should be extremely wary of ChatGPT (or any of its scions) ending up in a position where it's making unsupervised decisions: ChatGPT is extremely good at displaying confidence, but still gets a lot of facts wrong. Yet there'll be huge incentives to cut costs and take humans out of the loop. I also predict that over the next decade, the majority of all content available online (first text and pictures, then videos and video games) will be produced with AI. I don't think we should count too much on automatic flagging of such content working reliably either: we'll just have to remain critical of what we read online and wade through ten times more noise. Most of all, we should be wary of the specialized models that are coming our way. What happens when one of the Big Four trains a model on the tax code and starts asking about loopholes? What happens when someone from the military plays with ChatGPT and goes: "yeah, I want some of that in my drones"? AI will be amazing: it will take over many boring tasks, bring new abilities to everyone's fingertips and kickstart whole new art forms (yes). But AI will also be terrible. If history is any indication, it will lead to a further concentration of power and push us further down the path of techno-feudalism. It will change the way work is organized, and maybe even our relationship with mankind's knowledge pool. We won't get a say in it. Pandora's box is now open.

 Malware and Vulnerabilities

A new malware stealer called WhiteSnake has surfaced to steal credit card numbers and other sensitive information from Windows and Linux users. The Windows variant, the older and more mature of the two, is capable of stealing sensitive data from different browsers. The info-stealer can also steal files from various cryptocurrency wallets, such as Atomic, Bitcoin, Coinomi, Electrum, Exodus, and Guarda.

 Malware and Vulnerabilities

Because of the game's age, patching the vulnerabilities does not appear to be a priority for its publisher Activision, so two gamers-turned-hackers have taken it upon themselves to patch the game's vulnerabilities and make it safer to play.

 Trends, Reports, Analysis

Successful attacks on systems no longer require zero-day exploits, as attackers now focus on compromising identities through methods such as bypassing MFA, hijacking sessions, or brute-forcing passwords, according to Oort.

 Trends, Reports, Analysis

While phishing, business email compromise (BEC), and ransomware still rank among the most popular cyberattack techniques, a mix of new-breed attacks is gaining steam, according to a new report from cybersecurity and compliance company Proofpoint.

 Malware and Vulnerabilities

Security researchers at Quarkslab have identified a pair of serious security defects in the Trusted Platform Module (TPM) 2.0 reference library specification, prompting a massive cross-vendor effort to identify and patch vulnerable installations.

 Threat Actors

The Blackfly espionage group (aka APT41, Winnti Group, Bronze Atlas) has continued to mount attacks against targets in Asia and recently targeted two subsidiaries of an Asian conglomerate, likely attempting to steal intellectual property.

 Identity Theft, Fraud, Scams

“The attackers contact the victims via phone call, SMS and/or email to say that there’s been a security breach or suspicious activity on their Trezor account,” the firm warned in a Twitter post.

 Malware and Vulnerabilities

The critical flaws addressed by Aruba this time can be separated into two categories: command injection flaws and stack-based buffer overflow problems in the PAPI protocol (Aruba Networks access point management protocol).

 Trends, Reports, Analysis

Lookout also investigated the evolution of mobile phishing on professional devices, and since 2021 mobile phishing encounter rates have increased by roughly 10% for enterprise phones.

 Feed

Ubuntu Security Notice 5482-2 - USN-5482-1 fixed several vulnerabilities in SPIP. This update provides the corresponding updates for Ubuntu 20.04 LTS for CVE-2021-44118, CVE-2021-44120, CVE-2021-44122 and CVE-2021-44123. It was discovered that SPIP incorrectly validated inputs. An authenticated attacker could possibly use this issue to execute arbitrary code. This issue only affected Ubuntu 18.04 LTS.

 Feed

Ubuntu Security Notice 5907-1 - It was discovered that c-ares incorrectly handled certain sortlist strings. A remote attacker could use this issue to cause c-ares to crash, resulting in a denial of service, or possibly execute arbitrary code.

 Feed

This Metasploit module can be used to execute a payload on Lucee servers that have an exposed administrative web interface. It's possible for an administrator to create a scheduled job that queries a remote ColdFusion file, which is then downloaded and executed when accessed. The payload is uploaded as a cfm file when queried by the target server. When executed, the payload will run as the user specified during the Lucee installation. On Windows, this is a service account; on Linux, it is either the root user or lucee.

 Feed

Ubuntu Security Notice 5906-1 - Jacob Champion discovered that the PostgreSQL client incorrectly handled Kerberos authentication. If a user or automated system were tricked into connecting to a malicious server, a remote attacker could possibly use this issue to obtain sensitive information.

 Feed

Ubuntu Security Notice 5904-1 - Helmut Grohne discovered that SoX incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service. This issue only affected Ubuntu 14.04 LTS, Ubuntu 16.04 LTS, and Ubuntu 18.04 LTS. Helmut Grohne discovered that SoX incorrectly handled certain inputs. If a user or an automated system were tricked into opening a specially crafted input file, a remote attacker could possibly use this issue to cause a denial of service.

 Feed

Red Hat Security Advisory 2023-1047-01 - A new image is available for Red Hat Single Sign-On 7.6.2, running on Red Hat OpenShift Container Platform from the release of 3.11 up to the release of 4.12.0. Issues addressed include code execution, cross site scripting, denial of service, deserialization, html injection, memory exhaustion, server-side request forgery, and traversal vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1045-01 - Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. This release of Red Hat Single Sign-On 7.6.2 on RHEL 9 serves as a replacement for Red Hat Single Sign-On 7.6.1, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References. Issues addressed include code execution, cross site scripting, denial of service, deserialization, html injection, memory exhaustion, server-side request forgery, and traversal vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1049-01 - Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. This release of Red Hat Single Sign-On 7.6.2 serves as a replacement for Red Hat Single Sign-On 7.6.1, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References. Issues addressed include code execution, cross site scripting, denial of service, deserialization, html injection, memory exhaustion, open redirection, server-side request forgery, and traversal vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1043-01 - Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. This release of Red Hat Single Sign-On 7.6.2 on RHEL 7 serves as a replacement for Red Hat Single Sign-On 7.6.1, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References. Issues addressed include code execution, cross site scripting, denial of service, deserialization, html injection, memory exhaustion, server-side request forgery, and traversal vulnerabilities.

 Feed

Red Hat Security Advisory 2023-1044-01 - Red Hat Single Sign-On 7.6 is a standalone server, based on the Keycloak project, that provides authentication and standards-based single sign-on capabilities for web and mobile applications. This release of Red Hat Single Sign-On 7.6.2 on RHEL 8 serves as a replacement for Red Hat Single Sign-On 7.6.1, and includes bug fixes and enhancements, which are documented in the Release Notes document linked to in the References. Issues addressed include code execution, cross site scripting, denial of service, deserialization, html injection, memory exhaustion, server-side request forgery, and traversal vulnerabilities.

 Feed

Ubuntu Security Notice 5810-4 - USN-5810-1 fixed several vulnerabilities in Git. This update provides the corresponding update for Ubuntu 14.04 ESM. Markus Vervier and Eric Sesterhenn discovered that Git incorrectly handled certain gitattributes. An attacker could possibly use this issue to cause a crash or execute arbitrary code.

 Feed

Cisco on Wednesday rolled out security updates to address a critical flaw impacting its IP Phone 6800, 7800, 7900, and 8800 Series products. The vulnerability, tracked as CVE-2023-20078, is rated 9.8 out of 10 on the CVSS scoring system and is described as a command injection bug in the web-based management interface arising due to insufficient validation of user-supplied input. Successful

 Feed

The threat actor known as Lucky Mouse has developed a Linux version of a malware toolkit called SysUpdate, expanding on its ability to target devices running the operating system. The oldest version of the updated artifact dates back to July 2022, with the malware incorporating new features designed to evade security software and resist reverse engineering. Cybersecurity company Trend Micro said

 Feed

A sophisticated attack campaign dubbed SCARLETEEL is targeting containerized environments to perpetrate theft of proprietary data and software. "The attacker exploited a containerized workload and then leveraged it to perform privilege escalation into an AWS account in order to steal proprietary software and credentials," Sysdig said in a new report. The advanced cloud attack also entailed the

 Feed

Misconfigured Redis database servers are the target of a novel cryptojacking campaign that leverages a legitimate and open source command-line file transfer service to implement its attack. "Underpinning this campaign was the use of transfer[.]sh," Cado Security said in a report shared with The Hacker News. "It's possible that it's an attempt at evading detections based on other common code

 Feed

As a primary working interface, the browser plays a significant role in today's corporate environment. The browser is constantly used by employees to access websites, SaaS applications and internal applications, from both managed and unmanaged devices. A new report published by LayerX, a browser security vendor, finds that attackers are exploiting this reality and are targeting it in increasing

 Feed

A malicious Python package uploaded to the Python Package Index (PyPI) has been found to contain a fully-featured information stealer and remote access trojan. The package, named colourfool, was identified by Kroll's Cyber Threat Intelligence team, with the company calling the malware Colour-Blind. "The 'Colour-Blind' malware points to the democratization of cybercrime that could lead to an

 Law & order

Who has been warning Italian criminals that their phones are wiretapped? Can you trust your voice to protect your bank account? And why is TikTok being singled out by investigators? All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Dinah Davis.

 Guest blog

Willie Sutton, the criminal who became legendary for stealing from banks during a forty-year career, was once asked, "Why do you keep robbing banks?" His answer? "Because that's where the money is." However, today there's a better target for robbers than banks, which are typically well-defended against theft... cryptocurrency wallets. Read more in my article on the Tripwire State of Security blog.
