Episode 25 — Identify data risks across the AI life cycle: leaks and tampering (Task 14)

This episode targets Task 14 by teaching you to identify data risks across the AI life cycle, with a focus on leaks and tampering. AAISM expects you to reason about where data can be exposed or altered from intake through training, evaluation, deployment, and ongoing operations. You’ll define key risk types such as unauthorized disclosure through prompts and outputs, exposure through logs and telemetry, poisoning or manipulation of training data, and integrity failures that lead to unsafe or misleading outputs.

We’ll walk through scenarios including a vendor-hosted model that stores conversation history, a dataset sourced from multiple business units with inconsistent controls, and a retraining event that introduces unvetted external content. Exam practice emphasizes selecting the best next action that establishes control and evidence, such as tightening data-handling rules, adding validation steps, restricting access paths, and documenting decisions so that risk acceptance is explicit rather than accidental.

Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your educational path. And to stay up to date with the latest news, visit DailyCyber.News for a newsletter you can use and a daily podcast you can commute with.