Episodes

Thursday Jul 24, 2025
AI, Testing and Red Teaming, with Peter Garraghan
Artificial intelligence is often described as a "black box": we can see what we put in and what comes out, but not how the model arrives at its results.
And, unlike conventional software, large language models are non-deterministic. The same inputs can produce different results.
This makes it hard to secure AI systems, and to assure their users that they are secure.
There is already growing evidence that malicious actors are using AI to find vulnerabilities, carry out reconnaissance, and fine-tune their attacks.
But the risks posed by AI systems themselves could be even greater.
Our guest this week has set out to secure AI by developing red-team testing methods that take into account both the nature of AI and the unique risks it poses.
Peter Garraghan is a professor at Lancaster University, and the founder and CEO of Mindgard.
Interview by Stephen Pritchard