All Episodes

Displaying 81 - 100 of 113 in total

Episode 81 — Evaluate AI threats and vulnerabilities that do not exist in normal IT (Domain 2F)

This episode explains AI-specific threats and vulnerabilities that go beyond normal IT risk, which matters for Domain 2F because AAIA expects you to recognize failure ...

Episode 82 — Understand data poisoning, evasion, and model theft in plain language (Domain 2F)

This episode breaks down three high-yield AI attack categories—data poisoning, evasion, and model theft—in plain language so you can recognize them in AAIA scenarios a...

Episode 83 — Evaluate AI threat and vulnerability management programs for real coverage (Task 19)

This episode teaches you how to evaluate whether an AI threat and vulnerability management program has real coverage, because Task 19 scenarios often describe “we have...

Episode 84 — Build threat monitoring that catches abuse of models and prompts early (Task 19)

This episode focuses on threat monitoring that detects abuse of models and prompt interfaces early, because Task 19 expects monitoring to catch misuse patterns before ...

Episode 85 — Evaluate identity and access management for AI models, data, and keys (Task 16)

This episode teaches you how to evaluate identity and access management for AI systems, because Task 16 scenarios often test whether you protect the most sensitive ass...

Episode 86 — Audit least privilege for pipelines, service accounts, and model endpoints (Task 16)

This episode focuses on auditing least privilege in the places where AI systems most often break it: pipelines, service accounts, and model endpoints. You’ll learn how...

Episode 87 — Evaluate AI vendors and supply chain controls where your visibility ends (Task 10)

This episode explains how to evaluate AI vendors and supply chain controls when your visibility ends at the contract boundary, because Task 10 often tests whether you ...

Episode 88 — Audit AI vendor claims, contracts, and control evidence without getting sold (Task 10)

This episode teaches you how to audit AI vendor claims, contracts, and control evidence without getting sold by polished marketing metrics and generic security stateme...

Episode 89 — Evaluate AI problem and incident management programs for fast containment (Task 20)

This episode focuses on evaluating AI problem and incident management programs with an emphasis on fast containment, because Task 20 scenarios often involve harmful ou...

Episode 90 — Run AI incident response: detect, triage, contain, recover, and learn (Domain 2G)

This episode walks through AI incident response as a complete lifecycle—detect, triage, contain, recover, and learn—because Domain 2G expects you to treat AI incidents...

Episode 91 — Spaced Retrieval Review: Domain 2 operations and controls, simplified (Review: Domain 2)

This review episode reinforces Domain 2 by pulling operations and controls into a compact, easy-to-recall mental model that matches how AAIA questions are written. You...

Episode 92 — Plan an AI audit: scope, criteria, stakeholders, and timing choices (Domain 3A)

This episode explains how to plan an AI audit in a way that produces a workable scope, clear criteria, the right stakeholders, and timing that fits the AI lifecycle. Y...

Episode 93 — Build AI audit objectives that connect directly to business risk (Domain 3A)

This episode teaches you how to build audit objectives that connect directly to business risk, because AAIA scenarios often test whether you can write objectives that ...

Episode 94 — Choose audit criteria for AI using policy, risk, and outcomes (Domain 3A)

This episode explains how to choose audit criteria for AI by using policy, risk, and outcomes, because AAIA expects you to build criteria that can be proven with evide...

Episode 95 — Use audit techniques tailored to AI systems, not generic checklists (Domain 3B)

This episode teaches audit techniques that are tailored to AI systems, because Domain 3B often tests whether you can select methods that match AI realities like data d...

Episode 96 — Design sampling for AI decisions that reveals bias and failure modes (Domain 3B)

This episode focuses on designing sampling approaches that reveal bias and failure modes in AI decisions, because AAIA questions often ask what sampling plan best supp...

Episode 97 — Test AI controls with evidence, not opinions or vendor demos (Domain 3B)

This episode teaches you how to test AI controls using evidence, because Domain 3B scenarios often tempt you to accept “trust me” statements, impressive demos, or subj...

Episode 98 — Collect AI audit evidence: logs, lineage, artifacts, and change records (Domain 3C)

This episode explains how to collect AI audit evidence across logs, lineage, artifacts, and change records, because Domain 3C expects you to prove what happened, when ...

Episode 99 — Validate evidence integrity when models and data change over time (Domain 3C)

This episode focuses on validating evidence integrity in environments where models and data change over time, because AI auditing fails quickly when you cannot prove w...

Episode 100 — Audit data quality before trusting any AI output or model score (Domain 3D)

This episode teaches you why auditing data quality must happen before you trust any AI output or model score, because Domain 3D scenarios often hinge on the fact that ...
