Episode 91 — Spaced Retrieval Review: Domain 2 operations and controls, simplified
In this episode, we slow down and deliberately strengthen your memory of Domain 2 by practicing recall in a way that feels calm and structured instead of stressful. Domain 2 can feel like a long hallway of topics because it includes operations, controls, monitoring, identity, vendors, and incident response, and beginners often understand each topic on its own but struggle to connect them into one mental picture. That is exactly what spaced retrieval is for, because the goal is not to reread explanations, but to pull the ideas back out of your mind, notice what is missing, and then rebuild the connections. Artificial Intelligence (A I) systems add a few new twists compared to normal Information Technology (I T), yet the core discipline is still about managing access, reducing exposure, watching for abuse, and responding quickly when something goes wrong. As you listen, keep one simple intention in mind: try to answer each recall prompt in your head before I expand it, because that effort is what builds durable understanding.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book focuses on the exam itself and provides detailed guidance on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A useful way to begin recall is to rebuild the Domain 2 map in your own words, starting with the difference between threats and vulnerabilities. A threat is the who and how of harm, such as an attacker probing a model endpoint, a user misusing a capability, or a vendor change introducing risk. A vulnerability is the weakness that makes the threat easier, such as overbroad access, weak monitoring, unsafe defaults, or unreviewed model updates. Now pause and ask yourself what makes A I threats different from normal I T threats, and try to name at least two. A strong answer includes the idea that language itself can steer behavior through prompts, and that data can reshape behavior through training or updates, which is not how traditional software usually works. If you can recall those influence channels, you are already thinking like an evaluator rather than a casual user of A I, and that shift is the foundation of Domain 2.
Next, bring back the three plain-language attack ideas that were tied closely to Domain 2 operations: data poisoning, evasion, and model theft. Before you hear any explanation, try to define each one using a single sentence. Data poisoning is about tampering with what the model learns from so its future behavior is shaped in a harmful way, even if the servers are perfectly patched. Evasion is about crafting inputs at runtime that trick the model into making the wrong decision or bypassing a guardrail without changing the model. Model theft is about taking the model’s value, either by stealing artifacts, extracting sensitive behavior through systematic querying, or recreating the model’s behavior well enough to compete or to attack it more effectively. If any of those definitions felt fuzzy, notice which one, because the fuzziness usually points to a missing distinction about timing, where poisoning changes learning, evasion changes inputs in the moment, and theft targets the model as an asset.
Now shift from individual threats to the idea of a program, because Domain 2 is not just about knowing what can go wrong, but about whether an organization manages risk consistently over time. In your own mind, try to describe what a real threat and vulnerability management program does for A I, beyond scanning and patching. A complete answer includes inventorying A I assets, tracking which models are used where, controlling data sources and updates, testing model behavior for known abuse patterns, and monitoring for interaction signals that indicate probing or misuse. It also includes a method for prioritizing work based on exposure and impact, because not every model and not every integration carries the same level of risk. If you only remember scanning and patching, that is a normal beginner starting point, but Domain 2 expects you to broaden the lens to include model behavior, prompt surfaces, data flows, and the operational controls that shape how the system behaves day to day.
Threat monitoring is another area where recall should be active, because it is easy to nod along to the idea of monitoring while still missing what you would actually watch for. Close your eyes for a moment and try to name three monitoring signals that are especially meaningful for A I abuse. A strong set includes repeated prompt bypass attempts, iterative small variations that suggest probing, unusual retrieval access to sensitive sources, abnormal tool-call sequences if tools are connected, and high-volume structured queries that resemble extraction attempts. The reason these matter is that many A I incidents do not start with malware or loud technical errors, so you need behavioral visibility into how the system is being used. Monitoring also depends on good telemetry, so ask yourself what must be logged to make those signals detectable, such as user identity, model endpoint, time windows, policy blocks, retrieval sources touched, and any privileged configuration changes. If you can recall both the signals and the supporting telemetry, you are thinking at the level of real coverage rather than marketing claims.
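To make the link between telemetry and signals concrete, here is a minimal sketch of one detection rule, repeated policy blocks as a proxy for prompt-bypass attempts. The event fields and the threshold are illustrative assumptions, not a real product's schema.

```python
from collections import Counter

# Hypothetical telemetry events: each record carries fields the episode says
# must be logged for abuse signals to be detectable (identity, endpoint, blocks).
events = [
    {"user": "u1", "endpoint": "chat-v2", "policy_block": True},
    {"user": "u1", "endpoint": "chat-v2", "policy_block": True},
    {"user": "u1", "endpoint": "chat-v2", "policy_block": True},
    {"user": "u2", "endpoint": "chat-v2", "policy_block": False},
]

def repeated_bypass_attempts(events, threshold=3):
    """Flag users whose policy-block count meets the threshold,
    a rough proxy for repeated prompt-bypass attempts."""
    blocks = Counter(e["user"] for e in events if e["policy_block"])
    return [user for user, count in blocks.items() if count >= threshold]

print(repeated_bypass_attempts(events))  # ['u1']
```

The same pattern extends to the other signals mentioned above: iterative probing, unusual retrieval access, and high-volume structured queries are all counts or sequences over the same kind of logged record.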
Identity and access is the next anchor point, and it is one of the most reliable ways to reduce risk quickly. Try to explain Identity and Access Management (I A M) in one sentence that goes beyond passwords. A good answer is that I A M is the set of practices and controls that ensure only the right identities can access the right resources with the minimum privileges required, and that every meaningful action is attributable and auditable. Now apply that to A I by naming the resources that need protection, and try to do it without drifting back to only servers. You should be thinking about model endpoints, training and retrieval data, prompt and configuration repositories, and the keys or tokens that allow systems to talk to those services. The beginner trap is to protect the infrastructure while leaving the model management plane and the data connectors loosely controlled, and Domain 2 expects you to notice that gap because it is exactly where attackers and accidents create high-impact outcomes.
Least privilege is a specific I A M principle, and it deserves its own recall moment because it is easy to say and easy to violate. In your head, try to describe why pipelines and service accounts are often the biggest least privilege weakness. The key idea is that automated identities run continuously and are frequently granted broad access for convenience, which turns them into powerful targets if compromised. A careful evaluation asks whether pipeline stages are separated by role, whether each service account has narrow permissions, whether access is scoped by environment so development cannot touch production, and whether there is a clear owner and review process for every non-human identity. Now add model endpoints to the picture by recalling why endpoint access is not just about who can call it, but also about what the endpoint can do once called, especially if it can retrieve documents or trigger tools. Least privilege in A I means narrowing both reach and capability, so compromise impact stays small even when something slips.
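One way to picture least privilege for non-human identities is as a comparison between what a service account has been granted and what its role in its environment actually requires. This is a toy sketch; the role names, environments, and permission strings are all invented for illustration.

```python
# Hypothetical permission policy: what each pipeline role may do per
# environment. Scoping by environment keeps development away from production.
ALLOWED = {
    ("train", "dev"):  {"read:training-data", "write:model-artifacts"},
    ("serve", "prod"): {"read:model-artifacts", "invoke:endpoint"},
}

def excess_privileges(account):
    """Return grants that exceed the allowed scope for the account's
    role and environment, i.e. least-privilege violations to review."""
    allowed = ALLOWED.get((account["role"], account["env"]), set())
    return sorted(set(account["grants"]) - allowed)

svc = {"role": "serve", "env": "prod",
       "grants": ["invoke:endpoint", "write:training-data"]}
print(excess_privileges(svc))  # ['write:training-data']
```

A review process for non-human identities can run exactly this kind of check periodically, with a named owner responsible for removing each excess grant.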
Vendor and supply chain controls form another major Domain 2 theme, and the recall skill here is to resist vague assurances. Try to remember the three framing questions that keep vendor evaluation grounded: what do we share, what do they control, and what could go wrong. If you can answer those for a specific vendor relationship, you can identify where your visibility ends, which is the moment you stop relying on hope and start relying on evidence and enforceable commitments. In A I systems, what you share often includes prompts, outputs, and connected documents, and what they control may include model behavior changes, infrastructure security, employee access, and incident detection in their environment. What could go wrong includes data exposure, silent model changes that alter risk, and dependency issues in the vendor’s own supply chain. When you recall vendor evaluation, focus on what the organization can verify and what it must contractually require, because that is how supply chain risk is managed when you cannot directly inspect the internal controls.
Contracts and evidence are the practical tools for avoiding being sold by confident vendor language, so test your memory by turning a generic claim into a specific question. If a vendor says they do not train on your data, what follow-up questions should you ask? A solid recall includes asking what data is stored, how long it is retained, who can access it, whether it is used for troubleshooting or improvement, and how segregation between customers is enforced. If a vendor says they are secure, you should ask what controls they use for privileged access, how they monitor for abuse patterns, how they handle incidents, and how they communicate changes to the service. The key is that claims are not controls, and controls require evidence, such as descriptions of enforcement and monitoring, independent assessments, and clear contractual commitments. When you can consistently convert claims into testable control questions, you are demonstrating the mindset that Domain 2 is training.
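The claim-to-question habit can be captured as a simple lookup that a review team might keep. The mapping below just restates the follow-up questions from this episode; the structure and function name are illustrative.

```python
# Illustrative mapping from common vendor claims to testable control
# questions, following the "claims are not controls" framing.
CLAIM_QUESTIONS = {
    "we do not train on your data": [
        "What data is stored, and how long is it retained?",
        "Who can access it, and is it used for troubleshooting or improvement?",
        "How is segregation between customers enforced?",
    ],
    "we are secure": [
        "What controls govern privileged access?",
        "How do you monitor for abuse patterns?",
        "How are incidents handled and service changes communicated?",
    ],
}

def follow_ups(claim):
    """Convert a vendor claim into the control questions that test it."""
    return CLAIM_QUESTIONS.get(
        claim.lower().rstrip("."),
        ["Ask for evidence of the specific control behind this claim."],
    )

for question in follow_ups("We do not train on your data"):
    print(question)
```

The default branch matters as much as the known claims: any assurance without a mapped question should still trigger a request for evidence.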
Now recall the purpose of incident management in an A I context, and try to say it in one sentence that includes fast containment. A good answer is that incident management is the coordinated process of detecting an event, deciding its severity, containing harm quickly, restoring safe operation, and communicating appropriately, all while collecting enough evidence to learn and prevent repeats. In A I, incidents might include data exposure through outputs, prompt injection success, misuse of tool integrations, or compromise of keys that provide access to model services. The tricky part is that the incident may look like normal use, so detection and triage require attention to patterns and system state, such as which model version was active and which data connectors were enabled. Also recall that containment in A I often means restricting capabilities, disconnecting risky data sources, disabling tools, or rotating secrets, rather than only patching software. If your mental picture of incident response is only malware removal, this is where Domain 2 expands it into a broader operational discipline.
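The containment levers named above can be sketched as a small playbook over a hypothetical system state. The field names and severity handling are assumptions for illustration, not a real incident-response tool.

```python
# Minimal containment sketch: restrict capabilities, disconnect risky data
# sources, and rotate secrets, rather than only patching software.
def contain(system, severity):
    """Apply containment levers to a system state dict and
    return the list of actions taken."""
    actions = []
    if system["tools_enabled"]:
        system["tools_enabled"] = False
        actions.append("disable tools")
    if system["risky_connectors"]:
        system["risky_connectors"].clear()
        actions.append("disconnect risky data sources")
    if severity == "high":
        system["secrets_rotated"] = True
        actions.append("rotate secrets")
    return actions

state = {"tools_enabled": True,
         "risky_connectors": ["hr-docs"],
         "secrets_rotated": False}
print(contain(state, "high"))
# ['disable tools', 'disconnect risky data sources', 'rotate secrets']
```

Note that every lever reduces capability rather than removing malware, which is the shift in mindset this episode is asking for.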
Triage deserves its own recall because it is where time is either saved or wasted. Ask yourself what triage must establish quickly in an A I incident, and try to name at least four elements. You want to identify what system is affected, what type of misuse or failure is happening, what the likely impact is, whether it is ongoing, and whether it is easily repeatable by others. You also want to determine whether the trigger is malicious activity, an accidental exposure, or a change-induced behavior shift from an update, because that changes the response path. Logs and telemetry matter here, because you need to tie outputs and actions to a user identity, an endpoint, a configuration state, and any connected data sources. Beginners sometimes think triage is about finding the full root cause immediately, but strong triage is about making safe, timely decisions with partial information. Domain 2 expects you to value speed with discipline, because fast containment prevents small issues from becoming large incidents.
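The triage elements above can be treated as a checklist: a triage record is complete enough to act on when each question has some answer, even a provisional one. The field names below are illustrative.

```python
# What triage must establish quickly, per the episode: affected system,
# type of misuse or failure, likely impact, whether it is ongoing, whether
# it is repeatable, and what kind of trigger caused it.
REQUIRED = ["affected_system", "failure_type", "likely_impact",
            "ongoing", "repeatable", "trigger"]

def triage_gaps(record):
    """Return the triage questions that are still unanswered (None)."""
    return [field for field in REQUIRED if record.get(field) is None]

incident = {"affected_system": "support-bot",
            "failure_type": "prompt injection",
            "likely_impact": None,
            "ongoing": True,
            "repeatable": None,
            "trigger": "malicious"}
print(triage_gaps(incident))  # ['likely_impact', 'repeatable']
```

The point of the sketch is that triage tolerates partial information: the gaps list tells you what to chase next, while containment can already proceed on what is known.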
Recovery and learning complete the incident cycle, and recalling them correctly prevents a common organizational mistake, which is declaring victory too early. Recovery in A I is not only restoring uptime; it is restoring trust in the system’s behavior, which often means validating configurations, verifying data connector restrictions, ensuring tools are re-enabled safely, and confirming that compromised credentials are fully rotated. Learning is where problem management steps in, because you do not want to fight the same fire again next month. Try to recall what a good post-incident review produces, and aim for something more concrete than lessons learned. It should identify root causes and contributing factors, assign owners to corrective actions, set timelines, and verify that improvements are actually implemented. In A I contexts, improvements might include tightening access boundaries, enhancing monitoring for abuse patterns, strengthening change control for prompts and model versions, and expanding adversarial testing based on what the incident revealed. When learning is disciplined, incident response becomes a force that steadily improves the program instead of a series of exhausting emergencies.
A final recall technique for Domain 2 is to practice connecting topics, because the exam is rarely about isolated definitions. Take a moment and imagine a scenario where a model starts leaking sensitive information, and then ask yourself which Domain 2 controls should work together. Monitoring should detect unusual retrieval or output patterns, I A M should limit what data the model can reach, least privilege should prevent a service account from seeing everything, vendor controls should define responsibilities if the service is hosted, and incident response should contain quickly by disconnecting or restricting risky connectors. The point is that no single control saves you, and the program succeeds when multiple controls reinforce each other. Beginners sometimes look for the one perfect safeguard, but Domain 2 teaches you to build layered defenses that assume something will fail. If you can narrate how controls interact, you are moving from memorization into operational understanding, which is exactly what makes your knowledge durable and useful.
As we wrap up, keep one simple mental test that you can reuse anytime you review Domain 2: can you explain what could go wrong, what would detect it early, what would stop it quickly, and what would prevent it from happening again? What could go wrong includes A I-specific issues like prompt abuse, data poisoning, evasion, model theft, and unsafe integrations. Detection includes interaction monitoring and telemetry that reveals probing, misuse, and unusual data access. Stopping quickly includes containment levers such as disabling tools, restricting connectors, tightening policies, and rotating secrets. Preventing repeats includes problem management actions like improving least privilege, strengthening change control, demanding vendor evidence, and expanding testing and monitoring coverage. If you can answer those four questions with confidence, you have a strong, simplified grasp of Domain 2 operations and controls, and you are ready to move forward without losing what you have already learned.
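For readers who like a written artifact to review against, the four-question test can be laid out as a checklist. The entries below simply restate this episode's answers; the structure is an illustrative study aid, not an official framework.

```python
# The episode's reusable four-question test for Domain 2, as a checklist.
FOUR_QUESTIONS = {
    "what could go wrong": [
        "prompt abuse", "data poisoning", "evasion",
        "model theft", "unsafe integrations"],
    "what detects it early": [
        "interaction monitoring", "telemetry on probing and data access"],
    "what stops it quickly": [
        "disable tools", "restrict connectors",
        "tighten policies", "rotate secrets"],
    "what prevents repeats": [
        "improve least privilege", "strengthen change control",
        "demand vendor evidence", "expand testing and monitoring"],
}

for question, answers in FOUR_QUESTIONS.items():
    print(f"{question}: {', '.join(answers)}")
```

Running through the checklist for any scenario, such as the leaking model from the previous segment, is a quick way to practice connecting controls rather than recalling them in isolation.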