CISA Publishes Guide for AI Critical Infrastructure Integration
Document Outlines Risks of AI in OT Environments

CISA's guidance is meant to help critical infrastructure owners and operators integrate artificial intelligence into their systems securely.
In new guidance, the Cybersecurity and Infrastructure Security Agency sought to help utilities integrate artificial intelligence into their operational technology environments.

To help critical infrastructure owners address the “opportunity and risk” of artificial intelligence, the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, along with the FBI and several overseas counterparts, released guidance for incorporating AI into operational technology systems.

CISA and the FBI developed the Principles for the Secure Integration of AI in OT document in collaboration with the National Security Agency’s AI Security Center, as well as security centers representing Canada, Germany, New Zealand, the United Kingdom and the Netherlands.

OT consists of hardware and software that interact with the physical environment, or manage devices that do so, according to the National Institute of Standards and Technology. These systems include industrial control systems, building management systems, fire control systems and physical access control systems.

The agencies wrote in an alert that while AI promises multiple benefits for OT environments, such as “increased efficiency, enhanced decision-making and cost savings,” it also poses “unique risks” for safety, security and reliability. The document focuses on machine learning, large language models and AI agents, but the authors wrote it also can be applied to “systems using traditional statistical modeling and logic-based automation.”

“While AI can enhance the performance of OT systems that power vital public services, it also introduces new avenues for adversarial threats,” Nick Andersen, CISA’s executive assistant director for cybersecurity, said in a news release. “That’s why CISA, in close coordination with our U.S. and international partners, is committed to providing clear, actionable guidance. We strongly encourage OT owners and operators to apply the principles in this joint guide to ensure AI is implemented safely, securely and responsibly.”

The guidance is organized into four key principles:

    • understanding the risks and impacts of AI in OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle;
    • considering the business cases for integrating AI into OT spaces, its short- and long-term challenges, and the role of vendors;
    • establishing governance mechanisms and testing procedures for AI models; and
    • embedding oversight mechanisms to ensure safe operation and cybersecurity of AI-enabled OT systems.

Risks posed by AI include the potential for manipulation of data, models and deployment software that causes incorrect outcomes or bypasses security and physical safety guardrails. Even without external manipulation, the authors observed that “AI models can only be as effective as the quality of their training data.” Collecting high-quality sensor data for the AI program can be difficult in distributed OT environments, they wrote, while centralizing operational data can create a target for cyber threat actors.

AI models also can become less accurate over time, the authors continued, as they encounter data that was not part of their initial training sets. In addition, operators may have trouble understanding a model’s decision-making process, making it hard to diagnose and correct errors. Finally, operators may miss crucial information if they become too reliant on AI to manage their systems.

Regarding the business case for AI, the agencies recommended that infrastructure owners and operators determine whether AI is the most appropriate solution for their needs and requirements. This assessment should include security, performance, complexity, cost and effects on physical safety of the OT environment, along with the organization’s capacity for maintaining an AI system compared to established technologies.

OT vendors “play a crucial role in advancing AI integration,” the authors continued, writing that “some OT devices now come with built-in AI technology, which may require internet connectivity to function.” Operators “should demand transparency and security considerations” from vendors regarding their use of AI and connectivity, with contractual guarantees of open communication.

Governance mechanisms for AI should outline clear roles for leadership, OT and information technology subject matter experts, and cybersecurity teams. They also should provide data governance policies, audits and compliance testing to validate and verify performance.

“AI holds tremendous promise for enhancing the performance and resilience of operational technology environments — but that promise must be matched with vigilance,” CISA acting Director Madhu Gottumukkala wrote. “OT systems are the backbone of our nation’s critical infrastructure, and integrating AI into these environments demands a thoughtful, risk-informed approach. This guidance equips organizations with actionable principles to ensure that AI adoption strengthens, not compromises, the safety, security and reliability of essential services.”
