Episode 17 — Evaluate the environmental and operational sustainability impacts of AI (Task 3)

In this episode, we widen the idea of impact beyond software and stakeholders and look at something that is easy to ignore until it becomes expensive or controversial: the environmental and operational sustainability impacts of AI. Beginners sometimes assume this topic is only about politics or public relations, but in audit work it is much more practical than that. Sustainability here means whether the organization can operate the AI solution responsibly over time without creating avoidable waste, runaway cost, fragile dependencies, or hidden resource burdens. Environmental impact includes energy use, hardware use, and the indirect effects of scaling compute. Operational sustainability includes whether the system can be maintained, monitored, updated, and supported without burning out teams or creating constant emergency work. Task 3 asks you to evaluate impacts, and these impacts matter because AI systems, especially complex ones, can increase energy consumption, drive infrastructure expansion, and change operational practices in ways that affect cost, reliability, and risk. As an auditor, you are not making climate policy, but you are evaluating whether the organization understands and manages the resource consequences of its AI choices. Clear thinking here helps you separate responsible adoption from careless adoption.

Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam in depth and explains how best to prepare for and pass it. The second book is a Kindle-only eBook containing 1,000 flashcards that you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.

A useful starting point is recognizing that AI systems consume resources in two major phases: the phase where a model is built and trained, and the phase where it is used day to day for inference. Training can be resource-intensive because it may require large amounts of computing, often concentrated over a period of time, and it may be repeated as models are improved. Inference can be resource-intensive in a different way because it can run continuously and at scale, especially in customer-facing systems where many users request outputs. Auditors care because resource use ties directly to cost, reliability, and environmental footprint, and those factors become business risks when they are unmanaged. A system that is too expensive to run may be quietly downgraded or operated without proper monitoring to save cost, which can increase risk. A system that requires specialized hardware may introduce supply chain and resilience concerns, because replacements and upgrades may be harder to obtain. A system that spikes energy use can create operational stress, especially in environments with capacity constraints. Understanding the resource profile is therefore part of understanding impact, and it should be considered during evaluation rather than discovered after deployment.
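To make the two-phase resource profile concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure and name in it (`train_gpu_hours`, `kwh_per_gpu_hour`, and so on) is a hypothetical assumption for illustration, not an audit benchmark or a real measurement.

```python
def resource_profile(train_gpu_hours, retrains_per_year,
                     inferences_per_day, kwh_per_1k_inferences,
                     kwh_per_gpu_hour=0.7):
    """Rough annual energy estimate (kWh) split into the two phases:
    periodic training runs versus continuous day-to-day inference."""
    training_kwh = train_gpu_hours * retrains_per_year * kwh_per_gpu_hour
    inference_kwh = inferences_per_day * 365 / 1000 * kwh_per_1k_inferences
    return {"training_kwh": training_kwh, "inference_kwh": inference_kwh}

# Hypothetical numbers for a customer-facing system at scale.
profile = resource_profile(train_gpu_hours=5_000, retrains_per_year=4,
                           inferences_per_day=2_000_000,
                           kwh_per_1k_inferences=0.05)
```

Under these assumed numbers, continuous high-volume inference ends up with a larger annual footprint than the training runs, which is why both phases belong in the evaluation.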

Energy use is often the most visible environmental factor, but auditors should approach it in a grounded way. You do not need to calculate exact emissions, but you do need to understand whether the AI solution materially changes energy demand and whether the organization has thought about that change. Training large models can consume significant energy, and repeated training cycles can multiply that consumption. Inference can also be costly, especially for models that generate content or perform complex analysis per request. Auditors care because energy use is tied to operating cost and because operating cost influences behavior. If a model is expensive to run, teams may reduce monitoring frequency, limit validation, or avoid retraining even when drift is detected, simply to control expense. That creates risk because governance and safety practices are often the first things cut under cost pressure. A responsible evaluation asks what the expected compute demand is, how it will scale with usage, and what the plan is for operating within budget without weakening controls. Environmental impact and operational risk often travel together through cost.

Hardware and infrastructure impacts matter because AI workloads can drive decisions about data centers, cloud capacity, and specialized compute. Some AI systems depend on accelerators and specialized hardware that may have longer supply chains and higher embodied environmental costs. Even when the organization uses shared infrastructure, increased demand can lead to expansions that have real resource consequences. Auditors care because infrastructure decisions create long-term commitments and long-term risks, including vendor dependence, capacity planning challenges, and recovery constraints during outages. If a system requires a particular hardware setup to run safely and efficiently, then resilience planning must include how that hardware is maintained and replaced. Another practical sustainability question is equipment lifecycle, because frequent upgrades can increase waste if hardware is replaced prematurely. A thoughtful evaluation considers whether the organization can meet performance needs using efficient configurations and whether it can reuse infrastructure rather than constantly expanding it. This is not about preventing growth; it is about ensuring that growth is planned and controlled. In audit terms, unplanned infrastructure growth can be a sign of weak requirements and weak governance.

Operational sustainability includes people and processes, and this is often the most immediate and most overlooked impact. An AI system is not a set-and-forget asset, because it requires monitoring, periodic validation, incident response, and change management. If the organization does not plan for these activities, the workload lands on a few individuals who become single points of failure and who may burn out. Burnout is an operational risk because tired teams miss signals, delay corrective action, and make mistakes during changes. Auditors evaluate whether responsibilities are clearly assigned, whether there is coverage for key roles, and whether the operational plan is realistic given the organization’s staffing. They also evaluate whether operational processes are mature enough to support the system, such as whether there is a reliable way to deploy updates, roll back changes, and document decisions. Sustainability here means the organization can keep doing the right things over time, not just in the first month after launch. A system that depends on heroics is not sustainable, and heroics are not a control. This is why operational sustainability is part of impact evaluation: it predicts whether controls will actually operate.

Another operational sustainability factor is the stability of the model supply chain, which includes vendors, external services, and supporting tools. Many AI solutions depend on third-party components, external model services, or proprietary platforms. Auditors care because dependence can create lock-in, cost surprises, and limited transparency, which can complicate governance. If a vendor changes pricing, changes service behavior, or discontinues a feature, the organization may be forced into rushed changes that increase risk. If the organization relies on external services for inference, network reliability and service availability become part of operational sustainability. A sustainability evaluation therefore asks what parts of the system are controlled internally and what parts are controlled by external parties, and what contingency plans exist if external conditions change. This connects to environmental impact as well, because external providers may have different energy profiles and reporting practices, and the organization may still be accountable for overall footprint claims. The audit mindset is to treat dependency sustainability as a governance issue, not just a procurement detail.

A related concept is model efficiency, which means achieving the desired outcome using an appropriate level of complexity and compute. There is a temptation to pick the biggest or most advanced model because it sounds safer or more capable, but bigger models often cost more to run and can be harder to monitor and govern. Auditors care because unnecessary complexity increases resource use without necessarily increasing value, and it can create a sustainability risk if the organization cannot afford to operate the system responsibly. A key evaluation question is whether the model choice matches the problem, meaning the organization selected a solution proportional to the use case rather than a solution chosen for prestige. For example, a simple classification task might not require a heavy deep learning model, and using a heavy model could waste resources and increase operational burden. Efficiency is not about cutting corners; it is about aligning compute and complexity with the business goal and risk profile. If a smaller, well-controlled model meets requirements with lower cost and lower footprint, that can be a more responsible choice. Sustainability evaluation encourages this kind of proportional design thinking.
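Proportional model choice can be pictured as a simple selection rule: take the cheapest candidate that meets the stated requirement, not the most capable one. The candidate names, accuracy figures, and costs below are entirely hypothetical and exist only to illustrate the idea.

```python
# Hypothetical candidates: (name, accuracy, cost in dollars per 1,000 requests).
candidates = [
    ("logistic_regression", 0.91, 0.002),
    ("small_transformer",   0.93, 0.05),
    ("large_llm",           0.94, 1.20),
]

def proportional_choice(candidates, required_accuracy):
    """Return the cheapest candidate meeting the requirement, or None
    if nothing qualifies (which forces the requirement conversation)."""
    adequate = [c for c in candidates if c[1] >= required_accuracy]
    return min(adequate, key=lambda c: c[2]) if adequate else None
```

With a 0.90 accuracy requirement, the rule selects the lightweight model; only when the requirement genuinely demands it does the heavy model get picked, which keeps cost and footprint aligned with the use case.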

Environmental impact also includes the idea of scaling behavior, because AI systems often start small and then expand quickly. A pilot might run a few thousand inferences a day, but full deployment might run millions, and resource use scales accordingly. Auditors evaluate whether the organization understands this scaling curve and whether requirements include capacity and cost constraints. They also evaluate whether monitoring includes resource consumption metrics, because you cannot manage what you do not measure. In some contexts, organizations may have sustainability goals or reporting obligations, and AI systems can affect those commitments. Even without formal reporting, resource spikes can cause budget overruns and operational stress, leading to rushed changes that weaken controls. A thoughtful evaluation asks how the system will be optimized for efficiency as usage grows, such as by batching requests where appropriate, caching results where safe, or using tiered approaches where simpler checks handle routine cases and heavier computation is reserved for harder cases. These are high-level design choices, not implementation instructions, and they are relevant because they affect sustainability and risk. The audit lens focuses on whether such choices are considered and governed.
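The tiered-plus-caching idea can be sketched in a few lines. The `heavy_model` function below is a hypothetical stand-in for an expensive model call, and the routing rule is deliberately crude; note that caching is only safe where outputs are deterministic and contain no per-user data.

```python
import functools
import re

def cheap_check(text):
    """Tier 1: a rule-based screen that handles short, routine requests."""
    if re.fullmatch(r"[\w\s]{1,200}", text):
        return "routine"
    return None  # escalate to the heavier tier

@functools.lru_cache(maxsize=10_000)
def heavy_model(text):
    """Tier 2: hypothetical stand-in for an expensive model call, cached so
    that repeated identical requests consume compute only once."""
    return "analyzed:" + text[:20]

def handle(text):
    quick = cheap_check(text)
    return quick if quick is not None else heavy_model(text)
```

The design point is that routine traffic never reaches the expensive tier at all, so compute, cost, and footprint grow with the hard cases rather than with total volume.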

Another important area is the sustainability of monitoring and incident response, because oversight itself consumes resources and can be neglected when budgets tighten. Monitoring requires collecting metrics, analyzing trends, investigating anomalies, and documenting outcomes, and those activities require time and tooling. If an AI system is expensive to run, the organization may be tempted to reduce monitoring depth, which increases the chance that drift or harmful outcomes go undetected. Auditors care because this creates a dangerous feedback loop where higher cost leads to weaker controls, which leads to higher risk, which can lead to incidents that are even more expensive. Sustainable operation means building a monitoring plan that is realistic, prioritized, and aligned to impact, so that critical signals are always tracked even if less critical ones are reduced. It also means ensuring incident processes include model-related issues, such as what triggers a pause in usage and who approves it. If a model causes harm, the organization must be able to respond quickly without needing an emergency governance process created on the fly. Sustainability therefore includes the ability to handle predictable failures without chaos. For beginners, it helps to see that operational sustainability is about calm, repeatable response under stress.
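The rule that critical signals are never dropped, even when monitoring is scaled back, can be expressed as a prioritized plan. The signal names and intervals below are hypothetical examples, not a prescribed metric set.

```python
# Hypothetical monitoring plan: each signal carries a priority, and budget
# cuts may only stretch the collection interval of non-critical signals.
SIGNALS = [
    {"name": "harmful_output_rate", "priority": "critical", "interval_min": 5},
    {"name": "drift_score",         "priority": "critical", "interval_min": 60},
    {"name": "latency_p95",         "priority": "normal",   "interval_min": 15},
    {"name": "token_cost",          "priority": "normal",   "interval_min": 60},
]

def apply_budget_cut(signals, factor=4):
    """Stretch collection intervals for non-critical signals only,
    leaving critical signals untouched."""
    reduced = []
    for s in signals:
        s = dict(s)  # copy so the original plan is preserved
        if s["priority"] != "critical":
            s["interval_min"] *= factor
        reduced.append(s)
    return reduced
```

Encoding the priority explicitly is what breaks the feedback loop described above: cost pressure can thin out nice-to-have metrics, but it cannot silently switch off the signals that detect harm or drift.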

Let’s ground these ideas with a scenario that makes sustainability impacts obvious. Imagine an organization wants an AI system that generates personalized customer emails at scale to increase engagement. The opportunity might be improved marketing performance, but the sustainability impacts include running a large number of inference requests, storing prompts and outputs, and monitoring for quality and privacy issues. If the model is heavy, inference costs could become significant, and those costs might push teams to reduce oversight or to skip periodic reviews of output quality. Environmental impacts could increase due to continuous compute use, especially if usage spikes during campaigns. Operational sustainability impacts include needing staff to review a sample of outputs, handle customer complaints about inappropriate messaging, and respond to incidents if sensitive information leaks. There may also be vendor dependence if the organization uses an external model service, creating risk if pricing changes. An audit evaluation would ask whether the organization has defined limits on usage, established monitoring for quality and privacy, and chosen an approach proportional to the benefit. This scenario shows that sustainability is not abstract: it affects cost, oversight, and the ability to maintain trust over time.
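The "defined limits on usage" control from this scenario can be as simple as a budget guard in front of the inference call. The dollar figures below are hypothetical, and in practice such a guard would live in platform tooling or billing alerts rather than application code; this sketch only shows the shape of the control.

```python
class UsageBudget:
    """Minimal sketch of a usage-limit guard with hypothetical cost figures."""

    def __init__(self, monthly_limit_usd, cost_per_request_usd):
        self.limit = monthly_limit_usd
        self.cost = cost_per_request_usd
        self.spent = 0.0

    def allow(self):
        """Permit a request only if it fits the remaining monthly budget."""
        if self.spent + self.cost > self.limit:
            return False  # refuse rather than silently overrun the budget
        self.spent += self.cost
        return True

# Tiny illustrative budget: two requests fit, a third would exceed the limit.
budget = UsageBudget(monthly_limit_usd=1.0, cost_per_request_usd=0.4)
```

The audit-relevant point is not the code but the policy it encodes: overruns are made visible and refused by design, instead of being absorbed quietly until they pressure teams into cutting oversight.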

Another scenario is an internal AI tool that helps engineers search documentation and summarize incident notes. The environmental footprint might be smaller if usage is limited, but operational sustainability still matters because data access controls, privacy, and monitoring are needed. If the tool is trained or fine-tuned frequently, training costs and energy use could increase. If the tool becomes central to operations, downtime could disrupt work, requiring resilience planning. Auditors would evaluate whether the organization has a clear lifecycle plan, including how updates are approved and how outputs are validated for accuracy and confidentiality. They would also evaluate whether the tool’s benefits justify its ongoing resource use and governance burden. A common beginner misunderstanding is assuming internal tools are always low risk, but internal tools can still expose sensitive information and influence critical decisions. Sustainability evaluation helps ensure internal AI is not deployed casually and then left unmanaged. It emphasizes that responsible operation is a continuous commitment, not a one-time launch event.

As we close, remember that evaluating environmental and operational sustainability impacts is about understanding whether the organization can operate the AI solution responsibly over time without creating hidden costs, fragile dependencies, or weakened oversight. Environmental impact includes energy and infrastructure demands, especially across training and inference, and those demands often scale as usage grows. Operational sustainability includes staffing, processes, monitoring, incident response, change management, and dependency stability, because these determine whether controls remain effective month after month. Task 3 expects you to recognize these impact dimensions and to favor evaluation steps that make resource assumptions explicit, align model complexity to the use case, and ensure monitoring and governance are sustainable rather than heroic. In the next episode, we will evaluate AI impacts on humans, safety, and real-world outcomes, which builds on the idea that impact is not only technical; it is lived. For now, keep a simple habit: when you hear an AI proposal, ask not only whether it can work today, but whether the organization can afford to run it, govern it, and monitor it responsibly for years.
