Episode 99 — Validate evidence integrity when models and data change over time (Domain 3C)

This episode focuses on validating evidence integrity in environments where models and data change over time, because AI auditing fails quickly when you cannot prove which version produced which outcome. You’ll learn how to confirm that evidence is complete, consistent, and tied to specific model versions, configuration states, and data snapshots, so findings cannot be dismissed as “from before the update.” We’ll cover integrity risks like missing logs, overwritten configuration records, undocumented retraining, uncontrolled dataset changes, and vendor updates that alter behavior without clear notification. You’ll also learn practical integrity checks, such as reconciling timestamps across systems, verifying immutable logging where appropriate, sampling change events back to approvals, and validating that lineage artifacts match actual pipeline behavior. The goal is to help you answer AAIA scenarios by selecting the approach that preserves chain-of-custody thinking for AI evidence, enabling defensible conclusions even in fast-moving operational environments.

Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. Also, if you want to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use, and a daily podcast you can commute with.
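To make the immutable-logging idea concrete, here is a minimal, hypothetical sketch of one such check: a hash-chained evidence log in which each entry ties an outcome to a model version and a data snapshot. The record fields and version labels are invented for illustration; the point is that recomputing the chain detects any retroactive edit to evidence.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous entry's hash,
    # so altering any earlier record breaks the chain from that point on.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(chain: list, record: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "hash": record_hash(record, prev)})

def verify_chain(chain: list) -> bool:
    # Recompute every hash; a single edited record invalidates the rest.
    prev = "0" * 64
    for entry in chain:
        if record_hash(entry["record"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

# Each evidence record ties an outcome to a specific model version and data snapshot.
chain = []
append_entry(chain, {"model": "fraud-v1.3", "data_snapshot": "2024-06-01", "outcome": "approved"})
append_entry(chain, {"model": "fraud-v1.4", "data_snapshot": "2024-07-01", "outcome": "flagged"})
print(verify_chain(chain))  # True for the untampered chain

chain[0]["record"]["model"] = "fraud-v1.4"  # a retroactive "from before the update" edit
print(verify_chain(chain))  # now False
```

An auditor sampling this log could re-run the verification independently, which is the chain-of-custody property the episode describes: evidence that cannot be silently rewritten after a model or dataset changes.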