Dark Side of AI - How Hackers use AI & Deepfakes | Mark T. Hofmann | TEDxAristide Demetriade Street

29 Oct 2024

Introduction

  • Artificial intelligence is compared to a neutral tool, like a knife, which can be used for beneficial or harmful purposes depending on who wields it. (1s)
  • Mark T. Hofmann, a crime analyst and business psychologist, explores the dark side of AI by engaging with hackers on various online platforms to understand their use of AI and deepfakes. (40s)
  • Profiling in crime analysis is often misrepresented in media; in reality, it relies heavily on accurate data, similar to AI, where incorrect or incomplete data leads to flawed outcomes. (1m16s)
  • An example is given of a picture-generating AI producing unexpected results due to biased training data, highlighting the importance of input data quality. (2m13s)
  • AI can produce humorous or erroneous outputs, such as suggesting people eat rocks, which raises concerns about human gullibility and the influence of perceived authority. (2m52s)

The Cybercrime Landscape

  • Cybercrime is described as a significant global industry, projected to cost over $10 trillion annually; if cybercrime were a country, that figure would rank it behind only the United States and China as the third-largest economy in the world. (4m21s)
  • Ransomware is identified as the leading business model in cybercrime, where hackers encrypt files and systems to demand payment. (4m55s)
  • Ransomware attacks can paralyze companies by encrypting files, blocking internet access, and halting production, demanding ransoms in Bitcoin that can range from $2,000 for individuals to $240 million for large companies. (5m4s)
  • Some ransomware groups offer customer support, including live chat and phone assistance, to guide victims through the ransom payment process, highlighting the organized and business-like nature of these criminal operations. (5m39s)
  • These criminal organizations have various departments such as technical support, customer service, financial, and recruitment, and even operate affiliate systems where others can use their software to commit cybercrimes for a commission. (6m12s)

AI's Impact on Cybercrime

  • The cybercrime industry is a trillion-dollar business, and as artificial intelligence (AI) evolves, it will impact both legitimate and illicit economies, potentially increasing the sophistication and frequency of cyberattacks. (6m31s)
  • The FBI's Cyber Most Wanted list predominantly features young, qualified men, but AI advancements could lead to more diversity in cybercriminal profiles, as AI enables individuals to perform complex tasks without traditional skills. (6m45s)
  • AI can be used to write books, generate music, and create phishing emails without requiring specific skills, suggesting that future cyberattacks could be more sophisticated and accessible to a broader range of people. (7m16s)

Motivations and Methods of Cybercriminals

  • Motivations for cybercrime extend beyond financial gain to include opposing authority, seeking challenges, thrill-seeking, ego, and humor, as exemplified by a hacker known as Ransom Boris who mocked the FBI by wearing a t-shirt with his Most Wanted poster. (7m49s)
  • Most cyberattacks are facilitated by human error, such as clicking on malicious links, opening dangerous attachments, or using unsecured networks, and AI could make these attacks more sophisticated. (8m27s)
  • Hackers use AI in various ways; one method is reverse psychology: since AI models refuse direct requests for unethical help, such as providing malware code, attackers rephrase the request (for example, asking what they should avoid) so that the refusal logic works in their favor. (9m13s)
  • Cybersecurity experts discuss how AI models, such as GPT, can be manipulated through "jailbreak prompts" to bypass ethical guidelines and reveal restricted information. One well-known prompt is called "DAN," short for "Do Anything Now," which instructs the AI to operate without its usual constraints. (9m36s)
  • Hackers are continuously developing new jailbreak prompts, leading to an ongoing cat-and-mouse game between cybersecurity efforts and malicious actors. (11m8s)
  • Beyond misusing existing AI, hackers are creating their own AI models, such as "WormGPT" and "FraudGPT," specifically designed to generate malware, malicious code, and phishing emails. This trend is expected to grow in the coming years. (11m21s)

The Potential of AI as a Perpetrator

  • The potential for AI to act as a perpetrator is discussed, with the possibility of automating tasks like ransomware distribution. Although not currently feasible, it is suggested that AI could eventually choose victims and execute attacks autonomously. (12m8s)

Deepfakes and Their Dangers

  • Deepfake technology has advanced to the point where distinguishing between real and artificial videos is challenging. A single high-resolution picture can be used to create a video, and only 15 to 30 seconds of audio are needed to clone a person's voice, posing significant risks for identity manipulation. (13m28s)
  • Using just 30 seconds of real voice material, it is possible to create a convincing voice clone, as demonstrated with a cloned sample of Joe Biden's voice. (14m25s)
  • The technology can make anyone appear to say anything in any language, enabling malicious uses such as spreading disinformation or fabricating evidence. (14m47s)
  • Deepfakes can be used to create fake videos of people saying racist or radical things, which can have serious consequences for the person being impersonated. (15m14s)
  • Deepfakes have already been used for political disinformation, such as a video of Zelensky calling on Ukrainians to surrender, and for CEO fraud, where a faked CEO voice calls a CFO to authorize a money transfer. (15m27s)
  • The technology can also be used to create deepfake porn, which has targeted celebrities such as Taylor Swift. (15m52s)
  • Short-selling attacks against listed companies can also be carried out with deepfakes: a fake video of a CEO making a damaging statement is spread to trigger a drop in the stock price. (16m0s)
  • Romance scams can likewise be carried out with deepfakes, where a victim is tricked into falling in love with a fake persona created using AI. (16m37s)
  • The technology challenges the concept of video evidence, as it becomes difficult to determine what is real and what is fake. (17m1s)

Protecting Against AI-Powered Scams

  • To become a human firewall against these attacks, it is essential to recognize the tactics scammers use, such as claiming to be someone or something else while applying time pressure and emotion. (18m34s)
  • Scammers may use various channels, including phone calls, emails, and links, to trick people into doing something they shouldn't, so it is crucial to be cautious and verify the authenticity of any communication. (18m49s)
  • To protect against voice-cloning scams, it is recommended to establish a code word within the family and to ask security questions to verify identity: a voice can be mimicked, but shared knowledge cannot be stolen (see the sketch after this list). (18m59s)
  • Public figures and those with publicly available voices should brief their families and employees on security measures, such as using code words or security questions, to prevent fraud. (19m30s)
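To make the code-word idea concrete, below is a minimal sketch of shared-secret (challenge-response) verification in Python. The HMAC-based flow, the function names, and the example secret are assumptions added for illustration; the talk itself only proposes agreeing on a code word and security questions in advance.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: verify a caller with a shared code word agreed
# in person, so a cloned voice alone is never enough to pass the check.
SHARED_SECRET = b"family-code-word-agreed-offline"  # never sent over the phone

def make_challenge() -> str:
    """Issue a fresh random challenge so recorded answers cannot be replayed."""
    return secrets.token_hex(8)

def response_for(challenge: str) -> str:
    """Both sides derive the answer from the shared secret plus the challenge."""
    return hmac.new(SHARED_SECRET, challenge.encode(), hashlib.sha256).hexdigest()[:8]

def verify(challenge: str, answer: str) -> bool:
    """Constant-time comparison: a mimicked voice cannot guess this value."""
    return hmac.compare_digest(response_for(challenge), answer)

# Usage: the callee issues a challenge; only someone who knows the shared
# secret can produce the matching response, regardless of how they sound.
challenge = make_challenge()
assert verify(challenge, response_for(challenge))  # genuine family member
assert not verify(challenge, "deadbeef")           # voice clone without the secret
```

In practice the same effect is achieved verbally: the "challenge" is a security question, and the "response" is knowledge that only the real person has.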

Raising Cybersecurity Awareness

  • Cybersecurity awareness should target people who are indifferent to the topic, and the key to engaging them is to make cybersecurity discussions entertaining and relatable. (20m2s)
  • The focus of cybersecurity should be on people rather than just business, emphasizing personal stories and impacts, such as CEO fraud and romance scams, to make the subject more relatable. (20m23s)

Embracing AI Safely

  • Artificial intelligence presents significant opportunities, and the greatest risk is failing to seize these opportunities, so it is important to embrace AI while maintaining safety. (20m43s)
