AI Security Ops

About

Join us for a weekly podcast that illuminates how AI is transforming cybersecurity, exploring emerging threats, tools, and trends while equipping viewers with knowledge they can apply practically (e.g., for secure coding or business risk mitigation).

Episodes

  • Vercel Breach | Episode 50

    In this episode, the AI Security Ops team analyzes the Vercel breach, detailing the attack chain from a Roblox cheat script to a multi-hop compromise involving an AI productivity tool. They highlight the vulnerabilities introduced by AI in…

  • Claude Mythos | Episode 49

    This episode of AI Security Ops discusses Anthropic’s unreleased Claude Mythos Preview model, examining its potential to revolutionize AI-powered cybersecurity. The team explores the implications of AI-driven vulnerability discovery and th…

  • Holocron OpenBrain with Alex Minster | Episode 48

    Alex Minster introduces Holocron OpenBrain, a persistent, model-agnostic memory layer designed to enhance AI workflows by providing centralized memory across various AI models and tools. He explains how this system helps overcome the "cold…

  • LiteLLM Supply Chain Compromise | Episode 47

    This episode of AI Security Ops discusses the LiteLLM supply chain compromise, detailing how AI systems were breached through software supply chain weaknesses. It covers the attack chain, the role of CI/CD pipelines, and the impact of mali…

  • Model Ablation | Episode 46

    In this episode, the AI Security Ops team examines model ablation, a process where specific components of an AI model are disabled to remove safety features. The discussion covers how this technique functions, its risks to AI security, and…

  • Embedding Space Attacks | Episode 45

    This episode of AI Security Ops examines embedding space attacks, focusing on how attackers target the mathematical foundations of vector spaces and data representation. The team discusses the mechanics of embeddings, the risks of data…

  • Indirect Prompt Injection | Episode 44

    This episode of AI Security Ops examines indirect prompt injection, the top risk in the OWASP Top 10 for LLM Applications. The team discusses how this threat works, its real-world impact on AI tools like Microsoft 365 Copilot, and current…

  • Top AI Security Concerns | Episode 43

    In this episode of AI Security Ops, Bronwen Aker and Dr. Brian Fehrman analyze current AI security concerns, including agentic AI threats, large-scale deepfakes, and persistent prompt injection risks. The discussion focuses on differentiat…

  • Claude Cowork Discussion | Episode 42

    In this episode, Derek Banks, Bronwen Aker, and Brian Fehrman discuss Anthropic’s Claude Cowork, an agentic desktop tool. They analyze the tool's functionality, its local file access, and the associated security implications for defenders.

  • OpenClaw and Moltbook with Guests Beau Bullock and Hayden Covington | Episode 41

    In this episode of AI Security Ops, guests Beau Bullock and Hayden Covington discuss the OpenClaw autonomous AI agent and the Moltbook platform. The conversation covers the security implications of these tools, including vulnerabilities, a…

  • AI in the SOC: Interview with Hayden Covington and Ethan Robish from the BHIS SOC | Episode 40

    In this episode of AI Security Ops, Hayden Covington and Ethan Robish from the BHIS SOC discuss the practical application of AI in defensive security, including machine learning techniques and LLMs. They explore how these tools assist with…

  • AI News | Episode 39

    In this episode of AI Security Ops, hosts Brian Fehrman and Bronwen Aker discuss recent developments in AI security, including LLM-generated phishing scripts, identity governance for AI agents, NIST critical infrastructure guidance, and ne…

  • Questions From the Community | Episode 38

    In this episode of AI Security Ops, hosts Brian Fehrman, Joff Thyer, and Derek Banks answer questions from the community.

  • A.I. Frameworks and Databases | Episode 37

    In this episode of AI Security Ops, the team explores AI security frameworks and vulnerability databases used for tracking risks in machine learning and LLMs. The discussion covers MITRE ATLAS, the OWASP Top 10 for LLMs, and the challenges…

  • AI News Stories | Episode 36

    The AI Security Ops team discusses recent AI-related security threats, including an n8n zero-day vulnerability, prompt injection risks via ChatGPT memory, and malicious browser extensions. They also address indirect prompt injection scenar…

  • 2026 Predictions | Episode 35

    In this episode, the BHIS panel discusses their 2026 predictions for AI, covering topics such as energy limitations, advancements in drug development, agentic AI in software creation, and emerging cybersecurity threats.

  • AI Security Ops - Why Did We Create This Podcast? | Podcast Trailer

    The BHIS team discusses the purpose and mission of the AI Security Ops podcast. They outline future episode topics, including AI news, industry insights, community Q&A, and practical demonstrations for those working in AI and cybersecurity.

  • Community Q&A on AI Security | Episode 34

    In this episode of AI Security Ops, the panel answers community questions regarding AI security, hallucinations, privacy, and practical use cases for large language models. The discussion covers topics including legal liability, memory fea…

  • AI News Stories | Episode 33

    In this episode of AI Security Ops, the panel discusses recent developments in AI security, including state-sponsored AI cyber-espionage, critical RCE vulnerabilities in AI tools, the emergence of malicious LLMs, and polymorphic malware po…

  • Model Evasion Attacks | Episode 32

    This episode discusses model evasion attacks, where attackers manipulate AI inputs to bypass security classifiers. It covers various tactics, defensive measures like adversarial training, and the future of AI security threats and regulatio…

  • Data Poisoning | Episode 31

    This episode of BHIS Presents: AI Security Ops discusses data poisoning, a method where attackers corrupt AI training data. It covers how poisoned data affects classifiers and LLMs, risks from open-source repositories, and defensive strate…

  • AI News Stories | Episode 30

    This episode of BHIS Presents: AI Security Ops covers AI cybersecurity news from November 2025. The panel discusses public AI awareness, the security risks associated with local LLMs, and emerging AI-driven threats.

  • A Conversation with Dr. Colin Shea-Blymyer | Episode 29

    Dr. Colin Shea-Blymyer joins AI Security Ops to discuss AI governance, cybersecurity, and red teaming, covering regulatory differences between the U.S. and EU, historical lessons from AI, and emerging risks in the field.

  • Questions from the Community | Episode 28

    This episode of AI Security Ops features a Q&A with the community, discussing practical, ethical, and technical challenges of AI in cybersecurity, including open-source red teaming tools, AI threat modeling, and prompt privacy.

  • Azure AI Foundry Guardrails | Episode 27

    This episode of AI Security Ops discusses configuring content filters in Azure AI Foundry using guardrails and controls. It details how to block unwanted content, enforce policy, and maintain compliance by adjusting default filters, settin…

  • Questions from the Community | Episode 26

    In this episode of BHIS Presents: AI Security Ops, the panel addresses viewer questions regarding AI security, privacy, and risk. Topics discussed include prompt guardrails, the difference between hallucination and confabulation, AI's inte…

  • AI News Stories | Episode 25

    This episode of BHIS Presents: AI Security Ops covers major AI cybersecurity headlines from late September 2025. Topics include government oversight, Accenture…

  • Model Extraction Attacks | Episode 24

    This episode of AI Security Ops, hosted by Brian Fehrman, covers model extraction attacks, a threat where adversaries can clone AI models by querying their APIs. The discussion includes how these attacks work, their risks to intellectual p…

  • News of the Month | Episode 23

    In this episode, hosts Brian Fehrman and Joff Thyer review the latest AI news impacting cybersecurity. They discuss AI…

  • Insider Threat 2.0 - Prompt Leaks & Shadow AI | Episode 22

    This episode of AI Security Ops discusses Insider Threat 2.0, focusing on prompt leaks and Shadow AI. It covers the risks of employees using public AI tools, the dangers of unauthorized Shadow AI, and the need for clear company policies on…

  • Deepfakes and Fraudulent Interviews In Remote Hiring | Episode 21

    Episode 21 of AI Security Ops discusses the increasing use of deepfakes and fraudulent interviews in remote hiring. The hosts cover how cybercriminals impersonate candidates using AI and provide strategies for securing hiring processes aga…

  • The Hallucination Problem | Episode 20

    In Episode 20 of AI Security Ops, Joff Thyer and Brian Fehrman explore the hallucination problem in AI large language models and generative AI. They discuss the causes, risks, security implications, and mitigation strategies for these AI-g…

  • News of the Month | Episode 19

    In Episode 19, "News of the Month," Brian and Derek discuss a zero-click prompt injection attack against ChatGPT connectors and Google Calendar events exploiting Gemini to control smart homes. They also cover Microsoft's patch for an Azure…

  • Malware in the Age of AI | Episode 18

    This episode of AI Security Ops, "Malware in the Age of AI," features hosts Joff Thyer, Derek Banks, and Brian Fehrman discussing AI-powered malware. They cover topics like polymorphic keyloggers, the use of LLMs like ChatGPT for cyberatta…

  • Community Q&A | Episode 17

    In episode 17, hosts Joff Thyer, Derek Banks, Brian Fehrman, and Bronwen Aker address viewer questions on system prompts, prompt injection risks, AI hallucinations, deep fakes, and the application of AI in cybersecurity. They cover prompt…

  • A Conversation with Daniel Miessler | Episode 16

    In episode 16 of AI Security Ops, Joff and his team discuss AI in cybersecurity with innovator Daniel Miessler. They cover topics including intent engineering, the Fabric AI framework, the shift toward spec coding, and the implications of…

  • News of the Month – Episode 15

    Episode 15 of AI Security Ops covers the acquisition of Protect AI by Palo Alto Networks, the emergence of Shadow AI, and significant AI-related security incidents including data leaks and issues with AI coding agents.

  • Questions From the Community – Episode 14

    In Episode 14 of the AI Security Ops Podcast, hosts Joff Thyer, Derek Banks, and Brian Fehrman answer community questions. They discuss prompt engineering, comparing AI tools such as Claude, ChatGPT, and NotebookLM, and emphasize the neces…

  • Augmenting Red Teaming with AI – Episode 13

    In Episode 13 of AI Security Ops, hosts Joff Thyer, Derek Banks, and Brian Fehrman explore the use of Agentic AI in Red Teaming. They discuss how AI can automate penetration testing, identify vulnerabilities, and enhance security coverage,…

  • Global AI Laws and the Impact of GDPR – Episode 12

    Episode 12 discusses the global challenges of regulating AI, focusing on the EU's GDPR framework for data privacy and accountability. It contrasts the EU's regulatory approach with the US's innovation focus and notes the fragmented state o…

  • A.I. News of the Month – Episode 11

    AI Security Ops discusses recent AI developments like the Scale AI data leak affecting Google and Meta, a new jailbreak technique called Echo Chamber, and Anthropic's Claude-Gov for national security. The episode also touches on AI for det…

  • Agentic AI Threats, Challenges, and Defenses | Episode 10

    Episode 10 of AI Security Ops features experts discussing agentic AI threats, including prompt injection vulnerabilities. They cover mitigation strategies like guardrails and granular logging for cybersecurity professionals and AI develope…

  • AI Model Usage and Comparisons – Episode 9

    Episode 9 of AI Security Ops compares popular AI models including OpenAI, Claude, Gemini, and Copilot, discussing their uses, strengths, weaknesses, and integration into cybersecurity workflows.

  • AEO vs SEO | Episode 8

    Episode 8 of AI Security Ops discusses the shift from Search Engine Optimization (SEO) to Answer Engine Optimization (AEO), exploring AI's role in search results. It covers security and ethical concerns, including misinformation and data p…

  • R.A.G. [Retrieval Augmented Generation] – Episode 7

    This episode of AI Security Ops discusses Retrieval Augmented Generation (RAG), a method to improve Large Language Models (LLMs) by using external data. Hosts explore how RAG enhances the reliability and relevance of AI systems, addressing…

  • LLM Guardrails | Episode 6

    Episode 6 discusses the essential role of LLM guardrails in securing large language models. It covers implementation challenges, current methods resembling early InfoSec practices, and the need for layered defenses, including input/output…

  • Harmful Content | Episode 5

    This episode of AI Security Ops covers the challenges of AI-generated harmful content, focusing on models like GPT. It stresses the need for detection, ethical oversight, and regulation to ensure responsible AI use.

  • A.I. News of the Month

    This episode discusses the application of AI, particularly classic machine learning models like logistic regression and SVMs, in revolutionizing spam detection for improved cybersecurity. It covers the role of NLP and the process of buildi…

  • AI Deepfakes

    This episode of AI Security Ops explores AI-generated deepfakes, covering their creation with GANs and diffusion models, real-world fraud incidents, detection methods, and the associated ethical and legal challenges.

  • Introduction to Prompt Injection

    This episode of AI Security Ops features Joff Thyer, Derek Banks, Brian Fehrman, and Ben Bowman exploring prompt injection attacks. They cover how large language models work, the mechanics of prompt injection, differentiating it from jailb…