Have you ever wondered how we know who we're talking to on the phone? It's obviously more than just the name displayed on the screen. If we hear an unfamiliar voice when being called from a saved number, we know right away something's wrong. To determine who we're really talking to, we unconsciously note the timbre,
manner and intonation of speech. But how reliable is our own hearing in the digital age of artificial intelligence? As the latest news shows, what we hear isn't always worth trusting – because voices can be faked: deepfaked.

Help, I'm in trouble

In spring 2023, scammers in Arizona attempted to extort money from a woman over the phone. She heard the voice of her 15-year-old daughter begging for help before an unknown man grabbed the phone and demanded a ransom, all while her daughter's screams could still be heard in the background. The mother was positive that the voice was really her child's. Fortunately, she quickly found out that her daughter was fine, and realized she was the victim of scammers.

It can't be proven beyond doubt that the attackers used a deepfake to imitate the teenager's voice. Maybe the scam was of a more traditional kind, with the call quality, the unexpectedness of the situation, stress, and the mother's imagination all playing their part to make her think she heard something she didn't. But even if neural-network technologies weren't used in this case, deepfakes can and do occur, and as the technology develops they become ever more convincing and more dangerous. To fight the exploitation of deepfake technology by criminals, we need to understand how it works.

What are deepfakes?

Deepfake ("deep learning" + "fake") artificial intelligence has been developing at a rapid rate over the past few years. Machine learning can be used to create compelling fakes of images, video, or audio content. For example, neural networks can be used in photos and videos to replace one person's face with another while preserving facial expressions and lighting. While initially these fakes were low quality and easy to spot, as the algorithms developed the results became so convincing that it's now difficult to distinguish them from reality. In 2022, the world's first deepfake TV show was released in Russia, with deepfakes of Jason Statham, Margot Robbie, Keanu Reeves and Robert Pattinson playing the main characters.

Deepfake versions of Hollywood stars in the Russian TV series PMJason. (Source)

Voice conversion

But today our focus is on the technology used for creating voice deepfakes. This is also known as voice conversion (or voice cloning, if you're creating a full digital copy of a voice). Voice conversion is based on autoencoders – a type of neural network that first compresses input data (the encoder part) into a compact internal representation, and then learns to decompress it back from this representation (the decoder part) to restore the original data. This way the model learns to represent data in a compressed format while highlighting the most important information.

Autoencoder scheme. (Source)

To make a voice deepfake, two audio recordings are fed into the model, and the voice from the second recording is converted to the first. A content encoder determines what was said in the first recording, and a speaker encoder extracts the main characteristics of the voice from the second recording – that is, how the second person talks. The compressed representations of what must be said and how it's said are combined, and the result is generated by the decoder. Thus, what's said in the first recording is voiced by the person from the second recording.

The process of making a voice deepfake. (Source)

There are also approaches that don't rely on autoencoders – for example, ones based on generative adversarial networks (GANs) or diffusion models.
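To make the autoencoder-based conversion scheme above concrete, here's a minimal sketch in PyTorch. Everything about it – the module names, the layer sizes, and the use of mel-spectrogram frames as input – is an illustrative assumption rather than the architecture of any real voice-conversion system, which would be far larger and trained on hours of speech:

```python
# Minimal voice-conversion skeleton (illustrative only).
# Assumes mel-spectrogram frames as input; all dimensions are made up.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Compresses 'what was said' into a compact per-frame representation."""
    def __init__(self, n_mels=80, content_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 256), nn.ReLU(),
            nn.Linear(256, content_dim),
        )
    def forward(self, mel):               # (time, n_mels)
        return self.net(mel)              # (time, content_dim)

class SpeakerEncoder(nn.Module):
    """Summarizes 'how it was said' into a single speaker embedding."""
    def __init__(self, n_mels=80, speaker_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_mels, 256), nn.ReLU(),
            nn.Linear(256, speaker_dim),
        )
    def forward(self, mel):               # (time, n_mels)
        return self.net(mel).mean(dim=0)  # time-averaged -> (speaker_dim,)

class Decoder(nn.Module):
    """Reconstructs audio features from content + speaker embeddings."""
    def __init__(self, n_mels=80, content_dim=64, speaker_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(content_dim + speaker_dim, 256), nn.ReLU(),
            nn.Linear(256, n_mels),
        )
    def forward(self, content, speaker):
        # Broadcast the single speaker embedding across every time step.
        speaker = speaker.expand(content.shape[0], -1)
        return self.net(torch.cat([content, speaker], dim=-1))

# Conversion: the words come from recording A, the voice from recording B.
mel_a = torch.randn(120, 80)  # stand-in for recording A's spectrogram
mel_b = torch.randn(200, 80)  # stand-in for recording B's spectrogram
content_enc, speaker_enc, dec = ContentEncoder(), SpeakerEncoder(), Decoder()
converted = dec(content_enc(mel_a), speaker_enc(mel_b))
print(converted.shape)  # torch.Size([120, 80])
```

The key design point is the bottleneck: the content encoder keeps a per-frame representation of what is said, while the speaker encoder collapses its input over time into a single "how it's said" vector, so the decoder is forced to take the words from one recording and the voice from the other.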
Research into how to make deepfakes is supported in particular by the film industry. Think about it: with audio and video deepfakes, it's possible to replace the faces of actors in movies and TV shows, and to dub movies – with synchronized facial expressions – into any language.

How it's done

While researching deepfake technologies, we wondered how hard it might be to make one's own voice deepfake. It turns out there are lots of free open-source tools for voice conversion, but getting a high-quality result with them isn't easy. It takes Python programming experience and good processing skills, and even then the quality is far from ideal. In addition to open source, there are also proprietary and paid solutions. For example, in early 2023 Microsoft announced an algorithm that can reproduce a human voice from an audio sample just three seconds long! This model also works with multiple languages, so you can even hear yourself speaking a foreign language. All this looks promising, but so far it's only at the research stage. The ElevenLabs platform, however, lets users make voice deepfakes with no effort at all: just upload an audio recording of the voice and the words to be spoken, and that's it. Of course, as soon as word got out, people started playing with this technology in all sorts of ways.

Hermione's battle and an overly trusting bank

In full accordance with Godwin's law, Emma Watson was made to read Mein Kampf, and another user used ElevenLabs' technology to "hack" his own bank account. Sounds creepy? It does to us – especially when you add to the mix the popular horror stories about scammers collecting voice samples over the phone by getting people to say "yes" or "confirm" while pretending to be a bank, government agency or poll service, and then stealing money using voice authorization.

But in reality things aren't so bad. Firstly, it takes about five minutes of audio to create an artificial voice in ElevenLabs, so a simple "yes" isn't enough. Secondly, banks also know about these scams, so voice can only be used to initiate certain operations that aren't related to the transfer of funds (for example, checking your account balance). So money can't be stolen this way.

To its credit, ElevenLabs reacted to the problem fast: it rewrote the service rules, prohibiting free (i.e., anonymous) users from creating deepfakes based on their own uploaded voices, and started blocking accounts reported for offensive content. While these measures may be useful, they still don't solve the problem of voice deepfakes being used for suspicious purposes.

How else deepfakes are used in scams

Deepfake technology in itself is harmless, but in the hands of scammers it can become a dangerous tool offering plenty of opportunities for deception, defamation and disinformation. Fortunately, there haven't been any mass scams involving voice alteration, but there have been several high-profile cases involving voice deepfakes.

In 2019, scammers used the technology to shake down a UK-based energy firm. In a telephone conversation, the scammer pretended to be the chief executive of the firm's German parent company and requested the urgent transfer of €220,000 ($243,000) to the account of a certain supplier company. After the payment was made, the scammer called twice more – the first time to put the UK office staff at ease and report that the parent company had already sent a refund, and the second time to request another transfer.
All three times, the UK CEO was absolutely positive he was talking to his boss, because he recognized his German accent as well as his tone and manner of speech. The second transfer wasn't sent only because the scammer messed up and called from an Austrian number instead of a German one, which made the UK CEO suspicious.

A year later, in 2020, scammers used deepfakes to steal up to $35,000,000 from an unnamed Japanese company (the company's name and the full details of the theft weren't disclosed by the investigation). It's unknown which solutions – open source, paid, or even their own – the scammers used to fake voices, but in both of the above cases the companies clearly suffered badly from deepfake fraud.

What's next?

Opinions differ about the future of deepfakes. Currently, most of this technology is in the hands of large corporations, and its availability to the public is limited. But as the history of much more popular generative models like DALL-E, Midjourney and Stable Diffusion shows – and even more so that of large language models (ChatGPT, anybody?) – similar technologies may well appear in the public domain in the foreseeable future. This is confirmed by a recent leak of internal Google correspondence in which representatives of the internet giant fear they'll lose the AI race to open solutions. This will obviously result in an increase in the use of voice deepfakes – including for fraud.

The most promising step in the development of deepfakes is real-time generation, which would ensure explosive growth of deepfakes (and of fraud based on them). Can you imagine a video call with someone whose face and voice are completely fake? However, this level of data processing requires huge resources available only to large corporations, so the best technologies will remain private and fraudsters won't be able to keep up with the pros. The high quality bar will also help users learn how to identify fakes more easily.

How to protect yourself

Now back to our very first question: can we trust the voices we hear (that is, if they're not the voices in our head)? Well, it's probably overdoing it to be paranoid all the time and to start inventing secret code words to use with friends and family; however, in more serious situations such paranoia might be appropriate. If everything develops according to the pessimistic scenario, deepfake technology in the hands of scammers could grow into a formidable weapon in the future, but there's still time to get ready and build reliable methods of protection against counterfeiting: there's already a lot of research into deepfakes, and large companies are developing security solutions. In fact, we've already talked in detail about ways to combat video deepfakes here.

For now, protection against AI fakes is only just beginning, so it's important to keep in mind that deepfakes are just another kind of advanced social engineering. The risk of encountering fraud like this is small, but it's still there, so it's worth knowing about and keeping in mind. If you get a strange call, pay attention to the sound quality. Is the voice an unnatural monotone? Is it unintelligible? Are there strange noises? Always double-check information through other channels, and remember that surprise and panic are what scammers rely on most.
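None of this replaces judgment, but as a purely illustrative exercise, here's a short Python sketch of the kind of basic audio inspection a curious user could run on a suspicious recording. The file name and the thresholds are arbitrary assumptions, this is in no way a reliable deepfake detector (real detection uses trained models), and unusual numbers are only a cue to call back through another channel:

```python
# Naive audio-inspection sketch (illustrative only - NOT a deepfake detector).
# Assumes a local recording "call.wav"; thresholds are arbitrary examples.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=16000)

# Spectral flatness: values near 1.0 are noise-like, near 0 are tonal.
flatness = librosa.feature.spectral_flatness(y=y)

# Fraction of the recording that is near-silence (e.g., odd gaps or splices).
rms = librosa.feature.rms(y=y)[0]
silence_ratio = float(np.mean(rms < 0.01))

print(f"mean spectral flatness: {flatness.mean():.4f}")
print(f"silence ratio:          {silence_ratio:.2%}")
# Strange values are a reason to double-check through another channel,
# never proof that the voice is fake.
```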
The attack involves a multi-stage infection chain with custom malware hosted on Amazon EC2 that ultimately steals critical system and browser data; so far, targets have been located in Latin America.
A consumer finance journalist and television personality took to Twitter to warn his followers about advertisements using his name and face to scam victims.
“The team is not sure what happened and is currently investigating. It is recommended that all users suspend the use of Multichain services and revoke all contract approvals related to Multichain,” the crypto platform said in a statement.
Rambler Gallo (53), a man from Tracy, California, has been charged with intentionally causing damage to a computer after he allegedly breached the network of the Discovery Bay Water Treatment Facility.
The TOITOIN Trojan utilizes advanced techniques such as XOR decryption, system reboots, and process injection to evade detection and gather sensitive information from infected systems.
All the lawsuits were filed in the U.S. District Court for the Eastern District of Pennsylvania. They seek relief including monetary damages and an injunctive order compelling Onix to improve its security practices to prevent future incidents.
Viktor Markopoulos, a researcher at Bitcrack Cyber Security, said he accidentally discovered the leak on June 27, and shortly after contacted the Bangladeshi e-Government CERT. He said the leak includes data of millions of Bangladeshi citizens.
Ransomware attacks on educational institutions have increased significantly in recent years, with the Vice Society ransomware gang especially targeting the education sector in the United States and the United Kingdom.
The Russian hacking group Killnet aims to transform into a private military hacking company that conducts cybercrime on behalf of the Russian state. The group plans to expand its capabilities and hire skilled hackers for more destructive attacks.
CISOs predominantly report to CIOs and are less likely to report to CEOs now than in previous years, according to a Heidrick & Struggles survey. Despite a slight year-over-year decrease, over one-third of CISOs report directly to the CIO.
Even before the FBI seized domains related to BreachForums, the notorious online bazaar where cybercriminals bought and sold hacked or stolen data, a replacement marketplace was taking shape.
Indonesian security researcher Teguh Aprianto revealed on Twitter last week that a hacker had put up for sale Indonesian passport holders' details including their full names, birth dates, gender, passport numbers, and passport validity dates.
ISACA is joining the European Cyber Security Organisation (ECSO). The membership will work to accelerate ECSO and ISACA’s shared commitment to advancing cybersecurity, fostering collaboration and driving digital trust across Europe.
The HHS HIPAA Breach Reporting Tool shows that 336 major health data breaches affected nearly 41.4 million individuals between January 1st and June 30th this year - nearly double the number affected during the same period last year.
According to researchers at Vade, the attack email includes a harmful HTML attachment with JavaScript code. This code is designed to gather the recipient’s email address and modify the page using data from a callback function’s variable.
The Law Foundation of Silicon Valley notified regulators in California and Maine last week that the February ransomware attack on its offices resulted in the leak of Social Security numbers and other personal information.
Germany’s new cybersecurity chief, Claudia Plattner, told journalists on Friday that the country needed to defend itself amidst a surge in attacks on hospitals, local government authorities and private sector businesses in the country.
The "Letscall" group consists of Android developers, designers, frontend and backend developers, as well as call operators specializing in voice social engineering attacks.
Incidents of online extortion reported to the police increased by nearly two-fifths in 2022 compared to a year previously, according to law firm RPC. The findings, which cover the full year to December 2022, were sourced from the UK's Action Fraud.
The threat actors behind the RomCom RAT have been suspected of phishing attacks targeting the upcoming NATO Summit in Vilnius as well as an identified organization supporting Ukraine abroad.
An international financial institution owned by the world’s central banks has published a new framework designed to help members mitigate cyber risks associated with their digital currencies.
Unlike its competitors, Genesis Market did not just sell stolen data and credentials but also provided a platform to criminals that allowed them to weaponize that data using a custom browser extension to impersonate victims.
A recently patched vulnerability in Ubiquiti EdgeRouter and AirCube devices could be exploited to execute arbitrary code, vulnerability reporting firm SSD Secure Disclosure warns.
Honeywell has agreed to acquire SCADAfence for an undisclosed amount and plans on integrating its solutions into the company’s Forge Cybersecurity+ suite. The deal is expected to close in the second half of the year.
Log management tools help IT and security teams monitor and improve a system's performance by identifying bugs, cybersecurity breaches, and other issues that can create outages or compliance problems.
SCADAfence will integrate into the Honeywell Forge Cybersecurity+ suite providing expanded asset discovery, threat detection, and compliance management capabilities.
Mozilla has announced that some add-ons may be blocked from running on certain sites as part of a new feature called Quarantined Domains. "We have introduced a new back-end feature to only allow some extensions monitored by Mozilla to run on specific websites for various reasons, including security concerns," the company said in its Release Notes for Firefox 115.0, released last week.
Businesses operating in the Latin American (LATAM) region have been targeted by a new Windows-based banking trojan called TOITOIN since May 2023. "This sophisticated campaign employs a trojan that follows a multi-staged infection chain, utilizing specially crafted modules throughout each stage," Zscaler researchers Niraj Shivtarkar and Preet Kamal said in a report published last week.
Brick-and-mortar retailers and e-commerce sellers may be locked in a fierce battle for market share, but one area both can agree on is the need to secure their SaaS stack. From communications tools to order management and fulfillment systems, much of today's critical retail software lives in SaaS apps in the cloud. Securing those applications is crucial to ongoing operations.
The threat actors behind the RomCom RAT have been suspected of phishing attacks targeting the upcoming NATO Summit in Vilnius as well as an identified organization supporting Ukraine abroad. The findings come from the BlackBerry Threat Research and Intelligence team, which found two malicious documents submitted from a Hungarian IP address on July 4, 2023.
Malicious actors exploited an unknown flaw in Revolut's payment systems to steal more than $20 million of the company's funds in early 2022. The development was reported by the Financial Times, citing multiple unnamed sources with knowledge of the incident. The breach has not been disclosed publicly. The fault stemmed from discrepancies between Revolut's U.S. and European systems.