Episode 10 — Apply ethical principles when AI outcomes create real business risk (Task 3)

In this episode, we take a skill that sounds abstract and make it practical: turning a business goal into A I requirements that an auditor can verify. Beginners often hear business goals like "improve efficiency" or "reduce risk" and assume those goals are too vague to audit, but the opposite is true. Vague goals are a signal that requirements work has not been done yet, and without requirements, an organization cannot honestly claim the system is successful, safe, or controlled. Requirements are where the organization commits to what the system should do, what it should not do, and how it will prove those claims. This matters in A I because models can look impressive while still failing the real goal, and they can also create unintended harm if constraints are not made explicit. As an auditor, you are not rewriting the business strategy, but you are evaluating whether the strategy has been translated into testable, documented expectations. That translation is what makes auditing possible, because auditing is always a comparison between what is happening and what should be happening. Once you can do this translation in plain language, Task 7 stops feeling like theory and starts feeling like a repeatable method.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how to pass it best. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

Start with the simplest idea: a business goal is a desired outcome, while a requirement is a specific expectation that can be checked with evidence. A goal might be "reduce customer wait time," but a requirement might specify that average first response time must be under a defined number of minutes for certain ticket categories, with clear definitions for categories, measurement method, and acceptable exceptions. Another goal might be "improve fraud detection," but a requirement might specify acceptable false positive and false negative thresholds, escalation rules for high-risk alerts, and a minimum standard for investigation documentation. The transformation from goal to requirement is essentially moving from a wish to a contract the organization makes with itself. Beginners sometimes think requirements are only for engineers, but in audits, requirements include governance expectations like who approves deployment, who reviews performance, and what monitoring triggers intervention. Requirements also include constraints like privacy, fairness, safety, and explainability expectations, because those constraints define acceptable behavior. If the goal is what the organization wants, the requirements are the rules that define success without creating harm.

A helpful audit approach is to translate any business goal into four requirement categories: purpose, performance, constraints, and governance. Purpose requirements clarify what the system is used for and what decisions it influences, which sets scope boundaries and prevents mission creep. Performance requirements define what good looks like in measurable terms, such as accuracy targets, response time improvements, or error rate limits, while acknowledging tradeoffs that may exist. Constraint requirements define what must not be violated, such as privacy boundaries, fairness expectations, safety limitations, and appropriate-use rules for outputs. Governance requirements define who owns the system, who approves changes, who monitors outcomes, and what happens when something goes wrong. You do not need to memorize these as labels to use the method, but keeping these four categories in mind makes the translation process feel manageable. It also prevents a common mistake where organizations define performance but forget constraints and governance, which is how high-performing systems become high-risk systems. For Task 7 thinking, the exam will often reward answers that include constraints and accountability, not just performance metrics.
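For readers following along in the written companion, the four categories can be sketched as a simple record. This is only an illustrative sketch: the class name, fields, and example threshold below are assumptions for teaching purposes, not an ISACA template.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: names and fields are assumptions, not an ISACA template.
@dataclass
class AIRequirementSet:
    """Groups requirements into the four audit categories discussed above."""
    purpose: list = field(default_factory=list)      # what the system may be used for
    performance: list = field(default_factory=list)  # measurable success criteria
    constraints: list = field(default_factory=list)  # what must never be violated
    governance: list = field(default_factory=list)   # ownership, approvals, escalation

    def gaps(self):
        """Return the categories the organization forgot to fill in."""
        return [name for name in ("purpose", "performance", "constraints", "governance")
                if not getattr(self, name)]

# A high-performing but high-risk system: performance defined, everything else missing.
reqs = AIRequirementSet(performance=["average first response under a defined minute count"])
print(reqs.gaps())  # → ['purpose', 'constraints', 'governance']
```

The point of the sketch is the common mistake named above: an organization that fills in only the performance field has, by this check, three open gaps, and those gaps are exactly where high-performing systems become high-risk systems.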

Let’s look more closely at purpose requirements, because purpose is the anchor that makes every other requirement meaningful. Purpose answers questions like what problem is being solved, which users are involved, and what decisions will be influenced by the A I output. If the purpose is unclear, the system can expand into new uses without proper review, which increases risk. A purpose requirement might specify that the model is used only to prioritize customer support tickets, not to deny service, and that final decisions remain with human staff for certain categories. Another purpose requirement might specify that the model provides recommendations, not automatic approvals, and that recommendations must be presented with limitations so users do not treat them as facts. For auditors, purpose requirements create scope, and scope creates a clear boundary for evidence gathering. If the system is used outside that boundary, that is a governance issue and potentially an audit finding. Beginners should remember that purpose is not just about what the model can do, it is about what the organization permits it to do. That permission must be documented, communicated, and enforced.

Performance requirements are where the organization defines measurable success, but performance is often misunderstood as just accuracy. Accuracy can matter, but in many business goals, the most important performance measure is operational impact, such as reduced wait time or improved throughput, not just prediction correctness. A system can be accurate but slow, expensive, or disruptive to workflows, which means it fails the business goal. Another point is that performance should be defined at the right level of detail, because overall averages can hide failures in critical categories. For example, a model might perform well for common tickets but poorly for urgent cases, and urgent cases might matter most. So a good performance requirement often includes segmentation, meaning separate expectations for different types of inputs or different risk tiers. Performance requirements should also include tolerance for errors, because every model makes mistakes, and the organization must define what level of mistakes is acceptable and what happens when thresholds are exceeded. In audit terms, performance requirements are auditable because you can compare real metrics to documented thresholds over time. If there are no thresholds, performance becomes a matter of opinion and cannot be audited honestly.
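The segmentation idea can be made concrete with a minimal sketch. The segment names, metric, and thresholds below are hypothetical; the audit point is that once thresholds are documented per segment, a strong overall average can no longer hide a failure in a critical category.

```python
# Illustrative sketch: segment names and thresholds are made-up examples.
documented_thresholds = {           # minimum acceptable accuracy per ticket segment
    "common_tickets": 0.90,
    "urgent_tickets": 0.95,         # stricter bar where errors matter most
}

observed_accuracy = {
    "common_tickets": 0.93,         # passes its threshold
    "urgent_tickets": 0.88,         # fails, even if the overall average looks fine
}

def exceptions(thresholds, observed):
    """Return segments whose observed metric falls below the documented threshold."""
    return [seg for seg, floor in thresholds.items() if observed.get(seg, 0.0) < floor]

print(exceptions(documented_thresholds, observed_accuracy))  # → ['urgent_tickets']
```

In audit terms, each item that function returns is a candidate finding: a documented threshold, a measured value, and a gap between them.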

Constraint requirements are where many organizations get uncomfortable, because constraints force them to state what they are willing to trade off and what they will not compromise. In A I, constraints often include privacy, fairness, safety, and explainability expectations. A privacy constraint might specify that certain types of personal information are excluded from training data or excluded from inference inputs, and that access to sensitive data is restricted and logged. A fairness constraint might specify that the organization will test for disparate outcomes across relevant groups and will take action when disparities exceed a defined threshold. A safety constraint might specify that the model cannot be used for fully automated decisions in high-risk contexts without human review, or that certain outputs require confirmation before action. An explainability constraint might specify that the organization must be able to provide a human-understandable rationale or traceability path for decisions that affect people. From an audit perspective, constraints are crucial because they define what unacceptable looks like, not just what successful looks like. If an organization cannot state constraints, it may not have thought through the risk, and that is a signal of immaturity in governance.
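Here is what the fairness constraint above looks like once the measure, the groups, and the remediation trigger are written down. The group names, rates, and the 0.05 trigger are invented for illustration; a real constraint would define its own disparity measure and threshold.

```python
# Illustrative sketch: group names, rates, and the 0.05 trigger are assumptions.
# A constraint becomes auditable once the disparity measure, the groups compared,
# and the remediation trigger are all documented.

approval_rates = {"group_a": 0.81, "group_b": 0.73}  # hypothetical outcome rates
max_allowed_disparity = 0.05                          # documented remediation trigger

disparity = max(approval_rates.values()) - min(approval_rates.values())
needs_remediation = disparity > max_allowed_disparity

print(round(disparity, 2), needs_remediation)  # → 0.08 True
```

Without the documented trigger, the same 0.08 gap would be a matter of opinion; with it, the gap is a defined event with a defined response.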

Governance requirements complete the picture because even the best purpose, performance, and constraint requirements can fail if nobody owns them. Governance requirements identify an owner who is accountable for outcomes, not just an operator who runs the system. They define who approves deployment, who approves changes, and who reviews performance reports. They also define escalation paths, meaning who must be informed when thresholds are exceeded and who has authority to pause, roll back, or modify the system. Governance requirements should include documentation expectations, such as maintaining model version records, training data lineage, validation results, and monitoring logs. They should also include periodic review expectations, because requirements can become outdated as business goals change. For Task 7 thinking, this governance layer is often what separates a good-looking A I initiative from an auditable one. Auditors want to see that requirements are not just written once but are embedded into decision making and operational control. When governance requirements are missing, the system may be running on hope rather than on accountable oversight.

Now let’s practice translating a simple business goal to show how the method works. Suppose the goal is to reduce customer churn by identifying at-risk customers earlier. Purpose requirements might define that the model produces a risk score used to prioritize outreach, not to automatically change pricing or deny service. Performance requirements might define how early the system should identify risk, what acceptable error rates are, and how success is measured, such as improved retention in a controlled rollout. Constraint requirements might define that the model cannot use certain sensitive attributes directly, that it must be tested for unfair outcomes, and that outreach must not become harassment. Governance requirements might define who owns the model, who reviews monthly performance and fairness reports, and what triggers a pause in usage. Notice how this translation creates concrete audit points: documented thresholds, documented prohibited uses, documented reviews, and documented authority. It also exposes gaps quickly, such as if nobody can explain how success will be measured or who will respond to negative outcomes. This is the value of requirements translation: it turns a vague ambition into a set of checkable commitments.
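The churn translation above can be rendered as a single checkable document. Every field name and value in this sketch is a hypothetical placeholder, not a standard, but notice how each entry corresponds to an audit point named in the walkthrough: prohibited uses, measurable success, an outreach cap, an owner, and a pause trigger.

```python
# Illustrative sketch of the churn example, rendered as a checkable document.
# All names and values are hypothetical placeholders, not a standard.

churn_model_requirements = {
    "purpose": {
        "allowed_uses": ["prioritize retention outreach"],
        "prohibited_uses": ["automatic pricing changes", "service denial"],
    },
    "performance": {
        "measure": "retention lift versus control group in a staged rollout",
        "early_warning_window_days": 30,
        "max_false_positive_rate": 0.20,
    },
    "constraints": {
        "excluded_attributes": ["sensitive attributes per policy"],
        "fairness_testing": "periodic disparate-outcome review",
        "outreach_contact_cap_per_month": 2,   # keeps outreach from becoming harassment
    },
    "governance": {
        "owner": "accountable business owner",
        "review_cadence": "monthly performance and fairness report",
        "pause_trigger": "any threshold breach or unresolved fairness finding",
    },
}

# An auditor's quick completeness check: each category must carry concrete commitments.
missing = [cat for cat, body in churn_model_requirements.items() if not body]
print(missing)  # → []
```

A document like this exposes gaps quickly: an empty category, a missing owner, or an undefined pause trigger is visible at a glance instead of surfacing months later as a finding.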

A common misconception is that requirements must be extremely detailed and technical to be legitimate, but audit-ready requirements can be plain language as long as they are specific and testable. A requirement does not need to describe the model architecture, but it should describe what outputs are allowed, how outputs are used, what constraints apply, and what evidence proves ongoing oversight. Another misconception is that requirements reduce innovation, when in fact requirements can protect innovation by preventing chaotic outcomes and by building trust. If stakeholders trust that the A I system is governed, they are more willing to use it responsibly. Another misconception is that requirements are only about preventing harm, when requirements also protect the business goal by defining success clearly enough to evaluate return on investment. In A I, success can be hard to measure if the organization does not define it, and that leads to wasted effort and confusion. An auditor will often find that unclear requirements are the root cause of later issues, such as misaligned performance metrics or uncontrolled use. The exam may test whether you recognize unclear requirements as a fundamental risk factor and choose actions that clarify and document them.

Another important audit skill is spotting requirements that are not actually requirements, even though they sound like they are. Statements like "the model must be accurate" or "the model must be fair" are not auditable unless accuracy and fairness are defined. Auditable requirements include measurement methods, thresholds, and scope boundaries. If a requirement says the model must not create unfair outcomes, the auditor asks what unfair means, what groups are considered, what test method is used, and what triggers remediation. If a requirement says the model must protect privacy, the auditor asks what data is sensitive, what access controls apply, what retention rules exist, and what monitoring detects leakage. If a requirement says the model must improve efficiency, the auditor asks which process metrics will be tracked, over what time period, and what baseline is used for comparison. This is not nitpicking, it is the difference between a statement that inspires and a requirement that can be verified. Task 7 expects you to recognize this difference and to push toward requirements that create evidence trails. On the exam, answer choices that add clarity, thresholds, and documented processes often represent the stronger audit approach.
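That difference between an aspiration and a verifiable requirement can be expressed as a small validity check. The rule sketched here, that a requirement must name a measurement method, a threshold, and a scope, is a simplification for illustration, and the field names are assumptions.

```python
# Illustrative sketch: a statement counts as auditable here only when it names
# a measurement method, a threshold, and a scope. Field names are assumptions.

def is_auditable(req):
    """True when the statement carries the elements an auditor can verify."""
    return all(req.get(k) for k in ("measurement_method", "threshold", "scope"))

aspiration = {"statement": "the model must be fair"}
requirement = {
    "statement": "disparate approval rates must stay within a defined limit",
    "measurement_method": "periodic outcome-rate comparison across defined groups",
    "threshold": "0.05 absolute difference",
    "scope": "all consumer-facing decisions",
}

print(is_auditable(aspiration), is_auditable(requirement))  # → False True
```

The aspiration fails not because the sentiment is wrong but because nothing in it tells the auditor what to measure, against what limit, or where it applies.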

As you carry this forward, it helps to remember that requirements are also the foundation for later audit steps like enterprise architecture fit and hidden assumptions, which are the topics coming next. If requirements are unclear, it is impossible to judge whether the system fits the organization’s environment and controls, and it is hard to detect assumptions that could become future findings. Strong requirements create a stable target for validation and monitoring. They also help stakeholders communicate, because everyone can refer to the same documented expectations instead of debating vague goals. For beginners, this can feel like a big leap from A I model basics, but it is actually a natural progression: once you understand what models do, you can define what you want them to do and how you will keep them within safe boundaries. That is the bridge from A I knowledge to audit practice. In the next episode, we will move from requirements into how those requirements align with enterprise architecture, which is a way of asking whether the organization can actually operate and govern the system it wants.
