In this interview, Levi Gundert (CSO, Recorded Future) discusses how Gen AI is both exploited by threat actors and leveraged by cyber defenders.
He explains that LLMs benefit security analysts by automating tasks such as report writing and code generation, as well as change management tasks like tracking configuration changes and patching vulnerabilities.
However, he points out that attackers are also using LLMs for malicious purposes, such as creating deepfakes and synthetic voices to deceive victims and launch large-scale attacks with alarming ease.
While LLMs are powerful, they have limitations, such as producing false information ("hallucinations") or inheriting biases from training data. Security is also a concern, as training LLMs on sensitive data can be risky. The challenge, he notes, is balancing innovation and productivity with data security (e.g. exercising caution about what information is provided to these models).
Gundert emphasizes that LLMs can be highly useful for cybersecurity professionals if trained on the right data and storytelling techniques. Communicating complex security risks to non-technical audiences is a major challenge, and storytelling is a powerful tool for building confidence in security strategies and investments. By training LLMs to assist in this area, security professionals can better convey risks to boards and other governing bodies.
Gundert also shares lessons from his experience with government crime-fighting agencies (UK Serious Fraud Office, FBI, US Secret Service), where proactive intelligence gathering allowed threats to be identified before crimes occurred. In the private sector, he sees a similar mission: bringing intelligence to businesses and emphasizing its growing importance due to the complex geopolitical landscape and the convergence of physical, cyber, and geopolitical threats.
----------------------------------------
00:46 – How does Gen AI help transform the creation of cyber threat reports?
01:39 – “putting a premium on humans and human brains”
02:21 – How are threat actors using Gen AI to scale?
02:58 – nation states using it for influence operations
03:13 – LLMs are increasing the volume and velocity of content
04:04 – LLMs are fueling more realistic deepfakes and social engineering attacks.
05:00 – Criminals are selling access to LLMs (without guardrails) in the underground market.
05:37 – How are cyber defenders using LLMs to scale?
06:33 – Using LLMs for code production
07:04 – Gen AI will be a game-changer for future security by streamlining change management tasks.
07:38 – Do LLMs in threat intelligence complicate or amplify concerns about false positives and false negatives?
08:08 – LLM outputs may not be reliable for security decisions without knowing the data source (traceability)
08:52 – When should we involve humans (human-in-the-loop) in AI decision-making?
09:37 – Can AI hallucinations be helpful in identifying potential threats?
09:58 – Companies need to leverage intelligence (information gathering) for risk assessment, which requires deeper analysis (second-order thinking)
10:24 – LLMs struggle with non-obvious threats, but hallucinations can help brainstorm potential threats if context is carefully considered.
11:22 – Given the communication challenges discussed in your book, could LLMs be powerful storytelling tools for IT security professionals presenting to boards?
11:42 – The key challenge is more about communicating security risk to the business (less about hands-on work)
13:03 – Beyond traditional data security, what additional factors should organizations consider when training LLMs with sensitive cyber intelligence data?
15:16 – LLM training needs to consider protecting sensitive internal data.
17:00 – From your experience in public sector security, what key lessons apply to cybersecurity?
18:03 – One key lesson is the power of proactive intelligence in stopping cyber threats before they happen.
19:48 – Given rapid AI developments since your 2023 annual report, what would be some additional predictions for the future?
21:21 – Expect a surge in social engineering attacks fueled by AI
----------------------------------------
Recorded 28th May 2024. Singapore Raffles City.
----------------------------------------
Levi Gundert is Recorded Future’s Chief Security Officer, leading the continuous effort to measurably decrease operational risk internally and for clients. Levi has spent the past 20 years in the public and private sectors, defending networks, arresting international criminals, and uncovering nation-state adversaries. Levi previously led senior information security functions at technology and financial enterprises. He is a trusted risk advisor to Fortune 500 companies and a prolific speaker, blogger, and columnist.
Stay with us:
LinkedIn ➡️ https://www.linkedin.com/in/lojane/
YouTube ➡️ https://cutt.ly/U2B0yVi
#misscyberpenny
#cybersecurity
#cyberthreatintelligence