Episode 48 — Run the AI risk management life cycle from intake to monitoring (Task 4)

In this episode, we treat A I standards and regulations as something very practical: constraints that shape what an organization is allowed to do and what it must prove it has done. Beginners sometimes hear the words standards and regulations and imagine a thick stack of legal language that only attorneys can understand. While legal interpretation does matter, there is a simpler way to approach this topic as an auditor or evaluator: standards and regulations usually translate into repeatable requirements about governance, risk management, data handling, transparency, and accountability. If you can identify those requirements and ask for evidence that they are met, you can audit compliance in a realistic way without needing to memorize every detail. Another reason this matters is that standards and regulations do not only exist to avoid penalties; they also capture lessons learned from past harms and failures, which means they often point directly at the places where A I systems go wrong. Your goal here is to understand the big categories of constraints and how to turn them into concrete audit questions.

Before we continue, a quick note: this audio course pairs with our two companion books. The first book covers the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A helpful place to start is understanding the difference between a standard and a regulation, because they influence organizations in different ways. A standard is typically a set of recommended practices or requirements created by a recognized body, and organizations may adopt it voluntarily, or they may be required to follow it by contracts, customer expectations, or industry norms. A regulation is a rule backed by a government authority, and it can create legal obligations with penalties for noncompliance. In real life, these often blend together because a regulation might reference a standard, and a standard might become a requirement in a contract. For auditing, the key is that both create expectations that must be met and documented. Your job is not to debate whether a rule is good or bad, but to confirm whether the organization knows which constraints apply to it and has built controls that satisfy them. Ignorance is not a control, and assuming the rules do not apply is rarely safe.

Standards and regulations can feel overwhelming because there are many of them, and they can differ by country, industry, and type of A I use case. A beginner-friendly approach is to focus on the themes that show up repeatedly across frameworks. One theme is risk-based governance, meaning the higher the impact of the A I system, the stronger the controls and oversight must be. Another theme is transparency, meaning people should know when A I is used and should have understandable information about it. Another is fairness and non-discrimination, meaning systems should not produce systematically unequal harm, especially in sensitive areas like employment, lending, housing, education, and public services. Another is privacy and data protection, meaning personal data must be handled with lawful purpose, minimization, and security. Another is accountability, meaning organizations must be able to explain decisions, assign responsibility, and respond to incidents. When you audit, you look for these themes translated into the organization’s policies, processes, and evidence.

A strong audit mindset begins by confirming scope: which A I systems are in play and what kinds of rules might apply to them. A system that generates marketing text is different from a system that screens job applicants. A system used internally for low-impact analytics is different from a system used to make decisions about individuals. Many regulations and standards care deeply about whether an A I system influences rights, opportunities, or access to services. So a basic audit question is how the organization classifies its A I use cases by impact and risk. If everything is treated as low risk, that can be a red flag that the organization is ignoring its responsibilities. A second audit question is where the systems are deployed and who they affect, because jurisdiction and audience often determine which rules apply. A third question is whether the organization is using third-party services, because vendor use can create obligations about data transfers, transparency, and contractual controls.
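To make that classification question concrete, here is a minimal sketch in Python of an A I use-case inventory with a coarse impact tier. The field names, tiers, and tiering rules are illustrative assumptions for this episode, not a schema from any standard.

    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str
        affects_individuals: bool  # influences rights, opportunities, or access?
        jurisdiction: str          # where it is deployed shapes which rules apply
        uses_third_party: bool     # vendor use creates extra obligations

    def risk_tier(uc: AIUseCase) -> str:
        """Assign a coarse tier: decisions about individuals are high impact."""
        if uc.affects_individuals:
            return "high"
        if uc.uses_third_party:
            return "limited"
        return "minimal"

    inventory = [
        AIUseCase("marketing-copy-generator", False, "US", True),
        AIUseCase("job-applicant-screener", True, "EU", False),
    ]

    for uc in inventory:
        print(f"{uc.name}: tier={risk_tier(uc)} (jurisdiction={uc.jurisdiction})")

    # Red flag from the audit perspective: everything rated minimal.
    if all(risk_tier(uc) == "minimal" for uc in inventory):
        print("Warning: no use case rated above minimal; review the classification.")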

Once you know the scope, you can treat standards and regulations as constraint categories and ask for evidence in each category. The first category is governance structure, meaning there is a defined way the organization makes decisions about A I. This often includes policies, approval gates, risk assessments, and roles with authority to stop or limit a deployment. Audit evidence here includes documented responsibilities, decision logs, and records of review meetings or approvals. You are looking for a repeatable process, not heroic efforts by a single person. Standards and regulations frequently require that governance exists and that it is effective, which means it must be used consistently and it must influence outcomes. If a governance process exists but projects routinely bypass it, the organization may be exposed even if it has impressive documents. Constraints are only real when they shape behavior.

The second category is documentation and traceability, which is the idea that organizations should be able to explain what they built and how they built it. In A I, traceability can include the purpose of the model, the data sources used, the assumptions made, the evaluation results, and the limitations known at the time of release. Many frameworks require documentation because without it, accountability is impossible. Audit evidence here includes a model description, data source records, test results, and change history. For beginners, the simplest way to test traceability is to ask whether the organization can answer basic questions without guessing, such as why the model exists, what it is allowed to be used for, what data it relies on, and what happens when it fails. If answers depend on institutional memory rather than written records, traceability is weak. Weak traceability often correlates with hidden risk because nobody can clearly see what the system is doing.
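One way to turn those basic questions into a repeatable check is a simple completeness test over the model record. This is a hypothetical sketch; the required fields are taken from the questions above, not from any mandated documentation schema.

    # Fields an auditor would expect a model record to answer without guessing.
    REQUIRED_FIELDS = [
        "purpose",           # why the model exists
        "permitted_uses",    # what it is allowed to be used for
        "data_sources",      # what data it relies on
        "failure_handling",  # what happens when it fails
        "known_limitations",
    ]

    def traceability_gaps(model_record: dict) -> list[str]:
        """Return the questions the record cannot answer."""
        return [f for f in REQUIRED_FIELDS if not model_record.get(f)]

    record = {"purpose": "rank support tickets", "data_sources": "CRM exports"}
    print("traceability gaps:", traceability_gaps(record) or "none")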

The third category is data governance, which standards and regulations treat as foundational because data is where many A I problems start. Constraints here typically include lawful basis or legitimacy for using personal data, minimization, purpose limits, retention, access controls, and vendor restrictions. A I also raises special concerns about training data, such as whether sensitive information is included, whether data was collected with proper notice, and whether data quality supports fair outcomes. Audit evidence includes data inventories, data flow diagrams, retention schedules, access logs or access control documentation, and records of data source approvals. You also look for evidence that prompt logs and outputs are handled responsibly because they can create a new dataset of personal information. A strong program treats data governance as a living discipline rather than a one-time cleanup. Regulations and standards become audit constraints when the organization can show how data rules are enforced in everyday operations.
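As a sketch of what enforcement in everyday operations could look like, the following Python fragment flags inventory rows that lack a lawful basis for personal data or that have been held past retention. The row fields and the two checks are illustrative assumptions, not a complete data governance program.

    from datetime import date, timedelta

    # Illustrative data inventory rows; field names are assumptions for this sketch.
    inventory = [
        {"dataset": "prompt_logs", "contains_personal_data": True,
         "lawful_basis": None, "collected": date(2023, 1, 10), "retention_days": 365},
        {"dataset": "training_corpus_v2", "contains_personal_data": False,
         "lawful_basis": "n/a", "collected": date(2024, 6, 1), "retention_days": 730},
    ]

    def data_governance_findings(rows, today=None):
        """Flag rows missing a lawful basis or held past their retention period."""
        today = today or date.today()
        findings = []
        for row in rows:
            if row["contains_personal_data"] and not row["lawful_basis"]:
                findings.append(f"{row['dataset']}: personal data, no recorded lawful basis")
            if today > row["collected"] + timedelta(days=row["retention_days"]):
                findings.append(f"{row['dataset']}: held past its retention period")
        return findings

    for finding in data_governance_findings(inventory):
        print(finding)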

The fourth category is risk management and impact assessment, which is where many A I regulations focus. Instead of requiring perfection, a lot of frameworks require that organizations identify foreseeable risks, evaluate likelihood and impact, and implement controls proportionate to those risks. In practice, that means high-impact A I use cases should trigger deeper assessment, more testing, and stronger monitoring. Audit evidence includes risk assessments, impact assessments, mitigation plans, and proof that mitigations were implemented before deployment. A beginner can audit this by checking whether the assessments are specific to the use case and whether they include concrete outcomes. If every assessment says the same generic risks and the same generic mitigations, it may be theater. Another key evidence point is follow-through, such as change records that show a system was modified or delayed due to risk findings. Risk management is real only when it changes decisions.
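A common shape for such an assessment is a likelihood-by-impact score that drives proportionate controls. The sketch below assumes simple one-to-three scales and made-up thresholds; real programs define their own criteria and responses.

    # Minimal likelihood-by-impact scoring. The scales and thresholds are
    # illustrative assumptions, not values from any framework.
    LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
    IMPACT = {"low": 1, "moderate": 2, "severe": 3}

    def risk_score(likelihood: str, impact: str) -> int:
        return LIKELIHOOD[likelihood] * IMPACT[impact]

    def required_response(score: int) -> str:
        """Controls should scale with the score: deeper assessment, more testing."""
        if score >= 6:
            return "deep assessment + pre-deployment sign-off + enhanced monitoring"
        if score >= 3:
            return "documented mitigation plan + standard testing"
        return "record and monitor"

    finding = {"risk": "discriminatory screening outcome",
               "likelihood": "possible", "impact": "severe"}
    score = risk_score(finding["likelihood"], finding["impact"])
    print(finding["risk"], "->", score, "->", required_response(score))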

The fifth category is evaluation and testing, which often includes requirements around accuracy, robustness, fairness, and security. Standards and regulations differ in detail, but many expect organizations to test models and confirm they perform acceptably for the intended purpose. Fairness testing can be required or strongly expected in sensitive contexts, especially where discrimination laws are relevant. Robustness testing considers whether the model behaves reasonably when inputs change or when users try to misuse it. Security testing includes protecting the system from abuse and protecting data from leakage. Audit evidence includes evaluation plans, test results, and documented acceptance criteria. You also look for monitoring plans because many frameworks recognize that models can drift over time and need ongoing oversight. If testing is described only as informal experimentation, that may not satisfy constraints that require controlled evaluation.
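Controlled evaluation usually means written acceptance criteria compared against measured results before release. Here is a minimal sketch; the metric names and thresholds are placeholders an auditor would expect to find documented, not values from any standard.

    # Documented acceptance criteria versus measured evaluation results.
    acceptance_criteria = {
        "accuracy": 0.90,              # minimum acceptable accuracy
        "max_subgroup_gap": 0.05,      # fairness: largest gap across groups
        "robustness_pass_rate": 0.95,  # behavior under perturbed inputs
    }

    measured = {"accuracy": 0.92, "max_subgroup_gap": 0.08,
                "robustness_pass_rate": 0.97}

    def evaluate_release(criteria, results):
        failures = []
        if results["accuracy"] < criteria["accuracy"]:
            failures.append("accuracy below threshold")
        if results["max_subgroup_gap"] > criteria["max_subgroup_gap"]:
            failures.append("subgroup gap exceeds fairness limit")
        if results["robustness_pass_rate"] < criteria["robustness_pass_rate"]:
            failures.append("robustness pass rate below threshold")
        return failures

    failures = evaluate_release(acceptance_criteria, measured)
    if failures:
        print("release blocked:", failures)
    else:
        print("all acceptance criteria met")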

The sixth category is transparency and user-facing obligations, which many standards and regulations treat as essential for trust and rights. Transparency constraints often include notifying people when A I is used, avoiding deceptive interaction, and providing understandable information about what the system does. In higher-impact settings, transparency can also include providing explanations for decisions and offering a way to contest or appeal. Audit evidence includes user notices, internal scripts or guidance for staff, and records showing how users can raise concerns or request human review where applicable. A practical audit question is whether transparency is consistent across channels, such as websites, apps, and customer support, because inconsistency can undermine trust. Another question is whether the organization discourages over-reliance by clearly stating limitations and uncertainty. Transparency is not about exposing trade secrets; it is about ensuring people are not misled about automation that affects them.

The seventh category is accountability and oversight, which becomes especially important when A I is used to influence decisions about individuals. Many frameworks require that organizations maintain human responsibility, even when automated systems assist or decide. That can include training staff, maintaining clear roles, and ensuring escalation paths exist when issues are found. Audit evidence includes role definitions, governance charters, training materials, escalation procedures, and records of incidents and responses. You also look for evidence that the organization can pause or roll back a system if harm is detected. Oversight is not real if the organization cannot intervene quickly. Accountability constraints are auditable when you can point to who owns the system, what they are responsible for, and what they have done when risks appeared.

A critical point for beginners is that you do not need to name every standard or regulation to audit constraints effectively, but you do need to ensure the organization has done that mapping for itself. A practical audit test is to ask the organization to show how it identifies applicable rules and translates them into internal controls. Evidence here includes a compliance mapping document, a register of obligations, or a governance record that links requirements to policies and tests. If the organization cannot show how it knows what applies, it may be compliant by accident rather than by design. Another practical test is to look for change management, because standards and regulations evolve, and organizations need a way to update controls as rules change. Compliance is not a one-time project; it is ongoing alignment. An organization that treats it as a one-time checkbox often falls behind.
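A register of obligations can be as simple as a table that links each requirement to an internal control and to evidence that the control operates. The sketch below uses made-up entries to show the audit test: any obligation without a control, or without evidence, is a finding.

    # A toy register of obligations; entries are illustrative, not a real mapping.
    register = [
        {"obligation": "notify users when AI is used",
         "control": "POL-07 transparency notice",
         "evidence": "notices verified across web, app, and support scripts"},
        {"obligation": "assess impact of high-risk use cases",
         "control": "PROC-12 impact assessment gate",
         "evidence": None},  # missing evidence is an audit finding
    ]

    def unmapped_or_unevidenced(rows):
        """Flag obligations with no control or no evidence of operation."""
        return [r["obligation"] for r in rows
                if not r.get("control") or not r.get("evidence")]

    for obligation in unmapped_or_unevidenced(register):
        print("finding: no working control evidence for:", obligation)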

To close, explaining A I standards and regulations as constraints you can audit is really about converting complexity into clear categories and evidence expectations. You identify the A I use cases and their impact, then you look for governance decisions and documentation that show responsible oversight. You examine data governance, risk assessment, evaluation and testing, transparency, and accountability as recurring constraint themes that appear across frameworks. You then ask for evidence that those constraints are enforced, such as approval records, data inventories, test results, notices, and incident responses. When you approach the topic this way, you do not need to be a lawyer to be useful; you need to be disciplined about asking what applies, what controls exist, and what proof shows those controls are working. That is how standards and regulations become practical guardrails rather than intimidating stacks of text, and it is how organizations build A I that can survive scrutiny when pressure is high.
