Episode 12 — Surface hidden assumptions in AI requirements before they become audit findings (Task 7)
In this episode, we take a quiet problem that causes loud failures and we learn how to catch it early: hidden assumptions inside Artificial Intelligence (A I) requirements. When beginners first hear the word assumption, they often think it means a guess, but in audits an assumption is more like an invisible rule that someone is relying on without writing it down. Those invisible rules can feel harmless during planning because everyone thinks they agree, yet they often become the exact reason a system fails in production or becomes risky in ways nobody intended. When requirements contain hidden assumptions, you cannot really audit the system, because you cannot verify expectations that were never stated clearly. Even worse, teams may believe they met the requirements because the assumptions were never tested, so problems are discovered only after deployment when the costs are higher and the impacts are real. The goal here is to give you a beginner-friendly method for hearing requirements and immediately asking, what must be true for this to work safely, and did anyone actually write that down.
Before we continue, a quick note: this audio course is a companion to our two course books. The first book focuses on the exam and explains in detail how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A useful starting point is recognizing that requirements are always written in human language, and human language naturally leaves out details that feel obvious to the writer. In everyday life, that works because people can clarify in conversation, but in A I governance it creates risk because systems do exactly what they are built to do, not what people vaguely meant. Hidden assumptions often appear when requirements use words like accurate, fair, real time, and secure without defining what those words mean and how they will be measured. Another common signal is when a requirement promises a result but does not mention the conditions that must exist for that result to be reliable, such as data quality, stable inputs, or consistent workflows. Auditors care because assumptions are not just communication gaps, they are control gaps, and control gaps can produce harm. If a requirement assumes that users will treat model outputs as suggestions, but the interface encourages them to treat outputs as facts, the assumption collapses and the risk grows. Catching these assumption patterns early is part of Task 7 because evaluating requirements is not only checking whether they sound reasonable, but checking whether they are complete enough to be auditable.
One of the biggest categories of hidden assumptions is data assumptions, because A I systems live or die based on what data is available, what it means, and how consistent it is. A requirement might assume that customer records are clean and complete, yet in many organizations customer identities are split across systems and fields are missing or inconsistent. Another requirement might assume that the input data used during model operation will look like the training data, but real-world data often shifts due to seasonality, product changes, or policy changes. There are also assumptions about labels and ground truth, such as assuming that past outcomes represent correct decisions when they may reflect biased or incomplete processes. If these assumptions remain unstated, the organization may treat model performance metrics as proof of readiness even when the underlying data quality is weak. An auditor would want requirements that explicitly define data sources, quality expectations, allowed use, and what happens when data quality drops. This matters because if the data assumption is wrong, no amount of model tuning will fix the core problem, and the audit finding becomes that the organization built a decision system on an unstable foundation.
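To make that auditable expectation concrete, here is a minimal Python sketch of a data quality check that turns an unstated assumption, such as customer records being clean and complete, into a written, testable expectation. The field names, the completeness threshold, and the check itself are hypothetical illustrations, not requirements from any specific framework.

```python
# Hypothetical sketch: turning an unstated data assumption ("customer records
# are clean and complete") into an explicit, testable expectation.
# Field names and thresholds are illustrative assumptions, not a standard.

REQUIRED_FIELDS = ["customer_id", "email", "signup_date"]
MIN_COMPLETENESS = 0.98  # assumed acceptable completeness rate

def check_data_quality(records):
    """Return a list of findings when the declared expectations are not met."""
    findings = []
    if not records:
        return ["No records received; data feed assumption violated."]
    for field in REQUIRED_FIELDS:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        completeness = present / len(records)
        if completeness < MIN_COMPLETENESS:
            findings.append(
                f"Field '{field}' is {completeness:.1%} complete; "
                f"expected at least {MIN_COMPLETENESS:.0%}."
            )
    return findings

# Example: two records, one missing an email address.
sample = [
    {"customer_id": "A1", "email": "a@example.com", "signup_date": "2024-01-05"},
    {"customer_id": "A2", "email": "", "signup_date": "2024-02-11"},
]
print(check_data_quality(sample))
```

The point of the sketch is not the code itself but the shift it represents: the expectation is named, measurable, and produces evidence when it is not met.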
Closely related are assumptions about measurement, which show up when requirements promise improvement but do not define how success will be measured and compared. A requirement like improve efficiency can hide assumptions about what efficiency means, which part of the process is measured, and what baseline is used. Even a more specific requirement like reduce average response time can hide the assumption that average is the right statistic, even though averages can improve while the worst cases get worse. There are also assumptions about tradeoffs, such as assuming the organization is willing to accept more false alarms to catch more true issues, when stakeholders may disagree once they feel the pain of extra workload. Auditors care because measurements are the proof mechanism, and proof must be planned, not improvised. If measurement assumptions are hidden, the organization may select metrics that make the project look successful rather than metrics that reflect real outcomes. A strong requirements evaluation pushes measurement into the open by defining metrics, thresholds, segmentation by risk level, and decision triggers when metrics fall outside acceptable ranges.
Another category is assumptions about users and decision behavior, because requirements often describe what the system does but not how humans will interact with it. A requirement might assume that staff will always review model outputs before acting, yet under time pressure people may follow the output automatically. A requirement might assume that users understand uncertainty, but if the output is presented as a single confident answer, users can overtrust it. There can also be assumptions about training and adoption, such as assuming all users know how to interpret a risk score or a recommendation, when beginners or new employees may not. From an audit perspective, human behavior is not a side detail, it is part of the system, and hidden human assumptions create predictable failure modes. This is why audit-ready requirements often specify decision boundaries, such as which outputs require human review, which decisions cannot be fully automated, and what escalation looks like when outcomes are unclear. It also explains why documentation and communication requirements matter, because the system is not safe if only the development team understands its limitations. When you evaluate requirements, you listen for implied human behavior and ask whether the organization has written controls that make that behavior realistic and consistent.
Architecture assumptions are also common, especially when people assume technology capabilities that do not exist yet. In the previous episode we discussed Enterprise Architecture (E A) fit, and hidden assumptions often appear as unspoken beliefs about integration, identity, and operational support. A requirement might assume real-time access to multiple data sources, but the organization may only have batch feeds or limited interfaces. Another requirement might assume that access can be tightly controlled, but the current identity system may not support granular role-based controls for the new service. There can be assumptions about logging and monitoring, such as assuming every inference request will be logged with the right context, when the system design may not capture that detail. Auditors care because these assumptions turn into control gaps when the system is deployed through workarounds, such as manual exports, shared accounts, or undocumented integrations. A requirement that depends on architecture capabilities should either confirm those capabilities exist or include requirements to build them before deployment. If it does not, the requirement is effectively a wish, not an auditable expectation, and that mismatch becomes an audit finding waiting to happen.
A subtle but high-impact category is assumptions about scope boundaries, which is where projects quietly expand beyond their original purpose. A requirement might state the model will be used to prioritize work, but it may not explicitly forbid using it to deny access or make final decisions. Over time, teams may reuse the model in new ways because it seems convenient, and that reuse can introduce new risks without new validation. This is especially common when model outputs are easy to call through an interface and people assume the model is generally intelligent rather than purpose-specific. Auditors care because scope creep breaks governance, since controls and testing were designed for one context, not for every possible context. Audit-ready requirements should define allowed uses, prohibited uses, and what triggers a review before expanding use. They should also define who has authority to approve a scope change, because scope changes are risk changes. When you evaluate requirements, you should listen for what is not said, especially around what the model must not be used for, because omissions there are often the root cause of later harm.
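As a small illustration, here is a hypothetical Python sketch of scope boundaries recorded as data, with allowed uses, prohibited uses, and a trigger for review when a new use appears. The use names and the approving body are invented for the example.

```python
# Hypothetical sketch: scope boundaries recorded as data instead of shared
# understanding. The use names and the approver role are illustrative.

SCOPE = {
    "allowed_uses": {"prioritize_support_tickets"},
    "prohibited_uses": {"deny_account_access", "make_final_credit_decisions"},
    "change_approver": "AI governance committee",
}

def check_proposed_use(use):
    """Classify a proposed use against the documented scope."""
    if use in SCOPE["allowed_uses"]:
        return "allowed: within the validated scope"
    if use in SCOPE["prohibited_uses"]:
        return "prohibited: explicitly out of scope"
    return (f"not yet assessed: requires review and approval by the "
            f"{SCOPE['change_approver']} before use")

print(check_proposed_use("prioritize_support_tickets"))
print(check_proposed_use("deny_account_access"))
print(check_proposed_use("rank_job_applicants"))
```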
Assumptions also hide inside words that sound objective but are actually loaded with interpretation, and fairness is a clear example. A requirement might say the system must be fair, but fairness can mean different things depending on the context and the stakeholders. It can involve equal outcomes, equal opportunity, consistent treatment, or appropriate use of sensitive attributes, and different fairness goals can conflict. If the requirement does not define what fairness means, what groups are considered, how it will be tested, and what action is taken when disparities appear, the requirement cannot be audited and the organization cannot prove it met its promise. Another term that hides assumptions is explainable, because explainability expectations vary from showing key factors to providing policy-aligned reasons to enabling meaningful challenge of decisions. In audit logic, these are not philosophical debates, they are control design questions, because the organization needs a measurable standard for what counts as acceptable fairness and acceptable explanation. The exam may present answer choices that treat fairness as a vague intention, but the stronger audit response is to require definitions, metrics, thresholds, and governance. Turning loaded words into testable expectations is one of the most practical ways to surface hidden assumptions.
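Here is a minimal Python sketch of one way, among several, to turn fairness into a testable expectation, using a selection-rate disparity ratio as the example metric. Other fairness definitions exist and can conflict, and the groups, counts, and threshold shown are illustrative assumptions that governance would need to set.

```python
# Hypothetical sketch: one way to turn "the system must be fair" into a testable
# expectation, using a selection-rate disparity ratio as the example metric.
# Groups, counts, and the threshold are illustrative assumptions.

outcomes_by_group = {
    "group_a": {"selected": 120, "total": 400},
    "group_b": {"selected": 70,  "total": 350},
}
MIN_DISPARITY_RATIO = 0.80  # assumed acceptable ratio, to be set by governance

rates = {g: v["selected"] / v["total"] for g, v in outcomes_by_group.items()}
disparity_ratio = min(rates.values()) / max(rates.values())

print("selection rates:", rates)
print("disparity ratio:", round(disparity_ratio, 2))
if disparity_ratio < MIN_DISPARITY_RATIO:
    print("Trigger: disparity exceeds the agreed threshold; escalate for review.")
else:
    print("Within the documented threshold; record the result as evidence.")
```

Notice that the requirement now names the metric, the groups, the threshold, and the action taken when the threshold is crossed, which is exactly what makes it auditable.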
Operational assumptions are another major source of future findings because operations determines whether controls actually happen day after day. A requirement might assume that the model will be reviewed monthly, but it may not specify who performs the review, what they review, and what actions they are empowered to take. A requirement might assume that monitoring alerts will be acted on, but if there is no clear on-call responsibility or no incident process for model performance issues, the alerts become noise. There can be assumptions about retraining, such as assuming retraining can happen whenever needed, without acknowledging that retraining changes behavior and therefore requires validation and approval. Auditors care because operational reality is where governance becomes measurable, and hidden operational assumptions often lead to controls that exist only on paper. Audit-ready requirements should describe review cadence, ownership, escalation, change approval, version control, and evidence retention. When you evaluate requirements, you should ask whether the requirement describes not only what should happen, but how it will happen consistently under stress, because stress is when systems reveal their true design.
A powerful way to spot hidden assumptions is to ask a simple series of questions that turn implicit beliefs into explicit requirements. When you hear a requirement, ask what inputs must exist, what outputs are produced, who uses the outputs, and what decision follows. Then ask what could go wrong, how you would notice it, and who must act. Finally, ask what evidence would prove each part of that chain is controlled, such as documentation, logs, review records, approvals, and monitoring reports. This method works because assumptions tend to live in the gaps between these questions, such as assuming inputs are clean, assuming users behave consistently, or assuming monitoring exists. You do not need to turn this into a formal checklist to benefit, because the value is in the habit of probing for missing links. On exam questions, this habit helps you choose answers that strengthen requirements and governance rather than answers that jump to technical fixes. It also helps you recognize when the best next step is to clarify requirements before evaluating control effectiveness, because you cannot evaluate controls against a moving or unstated target.
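Here is a small Python sketch of that questioning habit written as a reusable probe list that flags which links in the chain have no documented answer. The example requirement and answers are made up.

```python
# Hypothetical sketch: the questioning habit written as a reusable probe list.
# The questions mirror the chain described above; the example answers are invented.

PROBES = [
    "What inputs must exist?",
    "What outputs are produced?",
    "Who uses the outputs?",
    "What decision follows?",
    "What could go wrong?",
    "How would we notice it?",
    "Who must act?",
    "What evidence proves each step is controlled?",
]

def find_gaps(answers):
    """Return the probes that have no documented answer."""
    return [q for q in PROBES if not answers.get(q)]

answers = {
    "What inputs must exist?": "Ticket text and product metadata",
    "What outputs are produced?": "A suggested queue and a confidence score",
}
print("Requirement: Tickets must be routed correctly with minimal delay")
for gap in find_gaps(answers):
    print("Unanswered:", gap)
```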
Let’s ground this in a concrete example that shows how hidden assumptions appear and how an auditor would surface them. Suppose the business goal is to use A I to automatically route incoming customer support tickets to the right team. The requirement might say tickets must be routed correctly with minimal delay, but hidden assumptions quickly appear. It may assume ticket text is always clear enough to classify, even though customers often write messy, emotional messages with missing context. It may assume routing teams are stable, even though organizations reorganize and rename queues. It may assume that routing mistakes are low impact, even though a misrouted urgent safety issue could be serious. It may assume that users will correct errors consistently, even though under pressure they might accept the routing to save time. An audit-ready response would add requirements about how uncertainty is handled, what confidence thresholds trigger human review, how misroutes are measured and corrected, and how monitoring detects drift when ticket patterns change. The important lesson is that hidden assumptions are not abstract, they are practical risk drivers that become visible when you imagine the system operating on a bad day.
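To show the monitoring side of this example, here is a hedged Python sketch that measures misroutes and watches for drift when ticket patterns change, with decision triggers when thresholds are crossed. The thresholds and weekly counts are illustrative assumptions, not a prescribed monitoring design.

```python
# Hypothetical sketch of the monitoring side of the ticket-routing example:
# measure misroutes and watch for drift as ticket patterns change.
# The thresholds and weekly counts are illustrative assumptions.

MISROUTE_THRESHOLD = 0.05        # assumed acceptable misroute rate
LOW_CONFIDENCE_THRESHOLD = 0.20  # assumed acceptable share of low-confidence tickets

weekly_stats = {
    "tickets_routed": 1000,
    "misroutes_reported": 80,       # corrected by support staff
    "low_confidence_tickets": 260,  # sent to the human review queue
}

misroute_rate = weekly_stats["misroutes_reported"] / weekly_stats["tickets_routed"]
low_conf_share = weekly_stats["low_confidence_tickets"] / weekly_stats["tickets_routed"]

if misroute_rate > MISROUTE_THRESHOLD:
    print(f"Misroute rate {misroute_rate:.1%} exceeds the agreed threshold; "
          "investigate and report to the model owner.")
if low_conf_share > LOW_CONFIDENCE_THRESHOLD:
    print(f"Low-confidence share {low_conf_share:.1%} suggests ticket patterns "
          "have drifted; trigger a revalidation review.")
```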
Another important skill is recognizing the difference between an assumption you can accept and an assumption you must control, because not every assumption is equally dangerous. Some assumptions are low risk and can be documented as context, such as assuming a certain ticket volume range, while others are high risk and require explicit controls, such as assuming the model will not be used to deny service. The audit mindset is risk-based, which means you focus on assumptions that affect people, safety, privacy, compliance, and material business impact. When an assumption touches those areas, you want it written as a requirement with evidence expectations, not left as a shared understanding. This is also where ownership matters, because someone must be accountable for deciding whether an assumption is acceptable and for revisiting that decision when conditions change. If a system’s environment changes, an assumption that was reasonable can become false, and the organization must have a mechanism to detect and respond. Task 7 thinking is about building requirements that survive reality, and reality changes. Making assumptions explicit is a way of designing requirements that can be audited and updated over time rather than quietly breaking.
As we close, the main idea to carry forward is that hidden assumptions are the invisible gaps between what people say they want and what a system can safely and reliably deliver. In A I requirements, those gaps often involve data quality, measurement definitions, human behavior, architecture capabilities, scope boundaries, fairness and explainability meaning, and operational ownership. Auditors care because you cannot verify what was never defined, and you cannot control what was never acknowledged. When you learn to surface assumptions early, you reduce the chance that the first real test of the system happens in production with real consequences. You also create stronger audit evidence because requirements become specific, testable, and tied to accountability and monitoring. In the next episode, we will build on this by evaluating A I solutions for opportunity, impact, and business risk tradeoffs, which depends on your ability to see what is promised, what is assumed, and what must be controlled. For now, keep practicing the habit of listening for what must be true for a requirement to work, and pushing those invisible truths into explicit, auditable requirements.