All Episodes

Displaying 61–80 of 113 in total

Episode 61 — Audit AI deployment controls: approvals, gates, and rollback readiness (Task 8)

This episode focuses on deployment controls for AI, because Task 8 scenarios often test whether you treat deployment as a controlled release with approvals, gates, and...

Episode 62 — Audit AI monitoring controls: drift, performance, and incident triggers (Task 8)

This episode teaches you how to audit AI monitoring controls for drift, performance, and incident triggers, because Task 8 expects monitoring to be designed and proven...

Episode 63 — Audit AI decommissioning: retirement criteria and data cleanup duties (Task 8)

This episode focuses on AI decommissioning, because Task 8 scenarios sometimes test whether you can manage the end of the lifecycle with the same discipline as develop...

Episode 64 — Evaluate algorithms and models for alignment to business objectives (Task 9)

This episode teaches you how to evaluate whether an algorithm or model aligns to business objectives, because Task 9 questions often focus on fit-for-purpose decisions...

Episode 65 — Test model alignment to policy: what it should do versus what it does (Task 9)

This episode focuses on testing model alignment to policy by comparing what the model should do to what it actually does, which is a common AAIA scenario pattern when ...

Episode 66 — Evaluate model explainability expectations without overpromising certainty (Task 9)

This episode teaches you how to evaluate explainability expectations without overpromising certainty, because Task 9 questions often test whether you can set realistic...

Episode 67 — Evaluate model performance claims using audit-grade skepticism (Task 9)

This episode focuses on evaluating model performance claims with audit-grade skepticism, because AAIA scenarios often include impressive numbers that are meaningless w...

Episode 68 — Evaluate change management for AI where “updates” can change outcomes (Task 13)

This episode explains why change management for AI must be stricter than typical software change management, because in AI, “updates” can silently change outcomes even...

Episode 69 — Audit model update approvals, testing evidence, and release readiness (Task 13)

This episode focuses on auditing model updates by verifying approvals, testing evidence, and release readiness, because Task 13 scenarios often revolve around a model ...

Episode 70 — Audit emergency changes for AI when risk forces fast decisions (Task 13)

This episode teaches you how to audit emergency changes for AI when risk forces fast decisions, because AAIA questions often test whether you can balance urgency with ...

Episode 71 — Evaluate configuration management for AI across code, data, and models (Task 14)

This episode explains why configuration management for AI must cover more than application settings, because Task 14 expects you to control anything that can change ou...

Episode 72 — Prove reproducibility: model versions, parameters, and training snapshots (Task 14)

This episode teaches you how to prove reproducibility for AI systems, because Task 14 scenarios often test whether the organization can recreate a model’s behavior whe...

Episode 73 — Audit access to model artifacts, pipelines, and configuration repositories (Task 14)

This episode focuses on auditing access controls for model artifacts, pipelines, and configuration repositories, because Task 14 expects you to protect the elements th...

Episode 74 — Supervise AI outputs: detect harmful decisions before customers do (Domain 2D)

This episode explains how to supervise AI outputs so harmful decisions are detected internally before customers, employees, or regulators surface the problem, which is...

Episode 75 — Build human oversight triggers for AI decisions that need escalation (Domain 2D)

This episode teaches you how to build human oversight triggers that route the right AI decisions to review and escalation, because Domain 2D frequently tests whether y...

Episode 76 — Validate supervision of AI impacts on fairness, safety, and quality (Domain 2D)

This episode focuses on validating whether supervision actually covers fairness, safety, and quality impacts, because Domain 2D expects oversight to detect harm patter...

Episode 77 — Test AI solutions for accuracy, robustness, bias, and safety (Domain 2E)

This episode explains how to test AI solutions across four dimensions—accuracy, robustness, bias, and safety—because Domain 2E questions often require you to choose a ...

Episode 78 — Choose AI testing methods that match the risk of the use case (Domain 2E)

This episode teaches you how to choose testing methods that match use-case risk, because Domain 2E expects you to scale testing depth based on impact, not apply a one-...

Episode 79 — Evaluate the design and effectiveness of AI-specific controls (Task 12)

This episode focuses on evaluating the design and effectiveness of AI-specific controls, because Task 12 is about proving that controls exist for AI risks that traditi...

Episode 80 — Prove AI controls work over time, not only on launch day (Task 12)

This episode teaches you how to prove AI controls work over time, because Task 12 often tests whether you can validate continuous control effectiveness in a world wher...
