Episode 104 — Follow up AI audits so fixes stick and risk stays reduced (Domain 3E)
In this episode, we focus on the part of auditing that often separates a meaningful risk reduction program from a program that produces polished reports but little lasting change. Follow-up is what turns an A I audit from a moment in time into a sustained improvement cycle, because controls can look fixed right after a project, then gradually slip back as people get busy, staff turns over, or priorities shift. For brand-new learners, it helps to think of follow-up as the difference between cleaning your room once and building habits that keep it clean. A I environments make follow-up even more important because models, data, and business use cases change, and a fix that worked last quarter may not be sufficient next quarter. Follow-up is not about nagging teams; it is about confirming that commitments were met, that the remediation actually reduced the risk described in the finding, and that the organization can prove the improvement when leaders or regulators ask. By the end, you should understand what strong follow-up looks like, what evidence makes a fix credible, and how to keep risk reduced over time instead of letting it quietly creep back.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A good follow-up process starts with clarity about what was promised, because it is impossible to verify progress if remediation goals were vague. When an audit finding recommends remediation, it should have a clear expected outcome and a way to measure whether that outcome happened. Follow-up begins by restating that outcome in practical terms, such as requiring evidence that release approvals are recorded for each deployment, or requiring evidence that drift monitoring thresholds are defined and reviewed on a schedule with escalation. Beginners sometimes think follow-up means checking whether someone completed a task, but audit follow-up is about checking whether the control is operating, not merely whether a document exists. A policy update alone is often not enough, because the real question is whether people actually follow the policy when under time pressure. The follow-up mindset is therefore operational: show me that the fix lives in the workflow, not just in a folder.
Another essential concept is timing, because the right moment to check depends on what kind of fix was made. Some fixes can be verified quickly, like updating access permissions or disabling a risky feature. Other fixes need time to demonstrate consistent behavior, like monitoring review cycles, incident response triggers, or release gates that must be passed repeatedly. A strong follow-up plan matches timing to the control being validated. For example, if the fix involves approvals and gates, you might review evidence across several releases to confirm the new process is actually being used. If the fix involves incident triggers, you may need to see how alerts are handled over a period long enough that alerts occur naturally, or you may need to review drill records if the organization uses simulations. Auditors also avoid waiting so long that memory fades and evidence becomes harder to collect. The point is to choose a follow-up window that is long enough to test durability but soon enough to catch backsliding early.
Follow-up also requires a clear division between implementation evidence and operating evidence. Implementation evidence shows the fix was put in place, such as a new policy, a revised workflow, an updated role assignment, or a configured gate that blocks deployment without approval. Operating evidence shows the fix is being used, such as completed approvals tied to specific releases, records of monitoring reviews, incident tickets created from triggers, or periodic reports showing drift signals are evaluated and acted on. In A I governance, operating evidence is the stronger proof because it demonstrates behavior, not intention. A common failure is when teams produce beautiful implementation evidence but cannot show actual use. An audit follow-up that accepts implementation evidence alone can create a false sense of security. The goal is to confirm both, with special weight on evidence that demonstrates the control is functioning in everyday operations.
A particularly important follow-up challenge in A I audits is verifying outcomes without relying on luck. Sometimes a team claims the fix worked because no incidents happened, but absence of incidents does not prove controls are strong. A system might simply be in a calm period, or the monitoring might be broken and therefore not detecting issues. Follow-up should therefore include checks that controls would detect and respond if a problem occurred. For example, if monitoring was the weakness, follow-up evidence might include alert logs, review schedules, escalation records, and examples of issues that were detected and triaged. If access control was the weakness, follow-up might include access reviews, changes to permissions, and verification that privileged actions are logged and reviewed. The key is that follow-up should validate capability, not just a quiet period. Beginners can think of it like checking a smoke alarm by pressing the test button, not by assuming it works because there has not been a fire.
Another part of making fixes stick is ownership and accountability, because controls weaken when responsibilities are unclear. Follow-up should confirm that owners were assigned, that they understand the responsibility, and that the responsibility remains stable even when people change roles. In practice, this often means confirming that ownership is tied to a role rather than to a specific person and that there is an escalation path if owners are unavailable. A I controls often span teams, such as model owners, data governance, security, and compliance, so follow-up must also check coordination. If a fix depends on one team doing work and another team reviewing it, follow-up should verify that both sides are happening. Auditors also pay attention to whether ownership is supported with time and resources, because an owner without capacity is a weak control in disguise. Durable remediation requires not only assignment, but also realistic support.
Follow-up is also where you test whether remediation truly addressed the cause, not just the symptom. If the audit finding identified that gates were bypassed because the workflow allowed exceptions without documentation, a fix that adds a new form but still allows undocumented bypass does not resolve the cause. If the finding identified unclear roles as the cause of inconsistent approvals, a fix that updates a policy but does not assign clear accountable approvers may not solve the real problem. Follow-up should therefore revisit the original cause and check whether the remediation changed the conditions that created it. This is one reason good findings tie cause and remediation together, because it makes follow-up more objective. When cause is addressed, you usually see behavior change in evidence, like fewer exceptions, consistent approvals, or more timely escalations. If behavior does not change, follow-up is a chance to say the fix is incomplete and risk remains higher than acceptable.
A I systems also introduce the challenge that fixes can drift, just like models can drift. A monitoring process might be strong at first, then gradually become less consistent as alert noise increases or as staff rotate. A review meeting might start happening weekly, then get skipped during busy periods, then become monthly without anyone formally approving the change. Follow-up should therefore include checks for consistency and for early signs of decay, such as missed reviews, unresolved alerts, or increasing reliance on informal communication instead of formal records. This does not mean expecting perfection, but it does mean expecting the organization to notice and correct slippage. A mature control environment has a meta-control: it monitors its own control performance and intervenes when it weakens. Follow-up helps confirm whether that self-correction exists.
Another key follow-up idea is validating that risk was actually reduced, not merely that tasks were completed. Risk reduction can sometimes be measured directly, such as fewer high-severity incidents, fewer harmful outputs, improved stability, or reduced exception rates. In other cases, risk reduction is demonstrated through stronger controls that are logically connected to the risk, such as enforced approvals, improved access restrictions, and documented incident response actions. Auditors should be careful not to promise that fixes eliminate risk, because risk is rarely eliminated, especially in A I. Instead, follow-up should confirm that the organization moved risk from an unacceptable level to a more controlled and acceptable level. This is where clear risk statements in findings matter, because they define what unacceptable looked like and what improved control should achieve. Follow-up can then show the organization is more resilient, more transparent, and more capable of preventing or responding to issues.
Communication during follow-up is also important, because follow-up often involves tension between audit teams and operational teams. The best follow-up conversations are specific, evidence-driven, and collaborative without giving up independence. Auditors explain what evidence they need, why it matters, and how it ties to the original finding, and they remain open to learning about practical constraints. Operational teams explain what changed, what is working, and what barriers remain. If barriers remain, follow-up can highlight that additional decisions or resources are needed, rather than treating incomplete fixes as personal failure. This style of communication keeps attention on risk and control rather than on defensiveness. For beginners, it is helpful to see that strong auditing is firm on evidence but respectful in tone, because the goal is sustained improvement. Follow-up is where that balance is most visible.
Follow-up should also account for the possibility that the environment changed since the audit, because A I use cases evolve quickly. A model might have been updated, a new data source added, a product expanded to a new market, or a policy updated due to new requirements. These changes can affect whether the original remediation is still sufficient. A follow-up that ignores context can give a false sense of completion, because the fix might have been adequate for the old environment but not for the new one. This does not mean re-auditing everything, but it does mean confirming that the fix still applies and that new risks were not introduced. Auditors often ask whether changes occurred that affect the finding, whether new exceptions were granted, and whether monitoring shows stable operation after changes. Follow-up therefore acts as a checkpoint that connects remediation to the reality of ongoing change.
Finally, a strong follow-up process closes the loop with documentation and accountability that leaders can understand. This does not require long reports, but it does require clear status, clear evidence, and clear statements about whether the issue is resolved, partially resolved, or unresolved. Leaders need to know whether risk is still elevated and whether additional action is required. Teams need to know what evidence was accepted and what gaps remain. The organization also benefits from capturing lessons learned, such as which remediation approaches worked well and which did not, because that improves future audits and future fixes. In A I governance, where the same patterns repeat across systems, those lessons can significantly reduce effort and improve control consistency. Follow-up is therefore not only about one finding; it is about building an organization that gets better at sustaining controls over time.
Following up A I audits so fixes stick and risk stays reduced is about verifying real operational change, not just checking boxes. You start with clear remediation outcomes, choose timing that tests durability, and gather evidence that shows controls are implemented and operating consistently. You validate capability, not luck, and you ensure remediation addresses the original cause so the same problem does not recur in a different form. You watch for control decay, confirm ownership and resources, and consider changes in the environment that might affect whether a fix is still adequate. Throughout the process, you communicate in a way that is evidence-driven and constructive, keeping the focus on measurable risk reduction. When follow-up is done well, audits become part of a continuous improvement system where controls remain strong even as models and data evolve. That is how organizations move from temporary compliance to lasting, defensible governance.