
Earlier this month, the Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), in collaboration with federal and international partners, released Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT).
Co-authored by CISA and ACSC, and developed with contributions from the U.S. National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the Federal Bureau of Investigation (FBI), the Canadian Centre for Cyber Security (Cyber Centre), Germany’s Federal Office for Information Security (BSI), the Netherlands’ NCSC-NL, New Zealand’s NCSC-NZ, and the U.K.’s NCSC-UK, the guidance reflects an unprecedented level of international coordination around the safe deployment of AI in industrial environments.
The document provides critical-infrastructure owners and operators with practical principles for integrating AI into OT systems without compromising safety, reliability, or compliance.
You can read the full document via the link at the end of this post.
Industrial operators are exploring AI to improve predictive maintenance, optimize production, and detect anomalies—but unlike enterprise IT, OT systems run physical processes where unintended behavior can have real-world consequences.
CISA and partners developed this guidance because most existing AI security frameworks focus on IT and cloud environments, not on deterministic control networks governed by safety standards. The paper adapts those ideas to the OT domain, showing how long-standing cybersecurity fundamentals—asset visibility, network segmentation, change management—can be extended to address the unique characteristics of AI systems.
The agencies also signal that AI is now considered a distinct risk category within critical-infrastructure protection, warranting its own assurance and governance lens.
The publication organizes its recommendations around four principles. Each combines familiar OT best practices with AI-specific considerations.
1. Understand AI. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle.
2. Consider AI Use in the OT Domain. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration.
3. Establish AI Governance and Assurance Frameworks. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance.
4. Embed Safety and Security Practices Into AI and AI-Enabled OT Systems. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans.
Does a lot of this sound basic? Granted, much of the guide restates what is already considered standard or best practice across the usual OT cybersecurity frameworks and regulations. Below are the points we think are genuinely AI-specific, or that deserve extra emphasis in the context of AI.
Model-centric risks and behavior
Unique AI risks described in the guidance include model drift, degraded accuracy over time, and abnormal or adversarially manipulated model behavior.
To address these risks, the guidance advises operators to establish safe operating bounds, monitor models for drift or abnormal behavior, and validate outputs in simulated environments before redeployment. It specifically recommends using anomaly detection, logging, and regular AI red-teaming to identify vulnerabilities and ensure models remain accurate over time.
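To make “monitor for drift” concrete, here is a minimal sketch (our own illustration, not code from the guidance) of how out-of-bounds outputs and shifts in a model’s error distribution might be flagged. It assumes Python with NumPy and SciPy, and the safe bounds, alert threshold, and choice of a Kolmogorov-Smirnov test are all assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative assumptions: bounds and thresholds would come from process engineers.
SAFE_MIN, SAFE_MAX = 0.0, 100.0   # safe operating bounds for the model's output
DRIFT_P_VALUE = 0.01              # alert threshold for the distribution test

def check_outputs(predictions: np.ndarray) -> list[str]:
    """Flag model outputs that fall outside the agreed safe operating envelope."""
    alerts = []
    out_of_bounds = (predictions < SAFE_MIN) | (predictions > SAFE_MAX)
    if out_of_bounds.any():
        alerts.append(f"{int(out_of_bounds.sum())} predictions outside safe bounds")
    return alerts

def check_drift(baseline_errors: np.ndarray, recent_errors: np.ndarray) -> list[str]:
    """Compare recent prediction errors against a validation-time baseline."""
    _, p_value = ks_2samp(baseline_errors, recent_errors)
    if p_value < DRIFT_P_VALUE:
        return [f"error distribution shifted (KS p={p_value:.4f}); revalidate the model"]
    return []
```

The point is not the specific statistics but that alerts feed existing OT monitoring and trigger revalidation, which is what the guidance asks for.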
AI data lifecycle and “training risk”
The document introduces new expectations around how OT data is used for training and the risks that come with it, from unauthorized reuse of operational data by vendors to poisoned or corrupted training datasets.
CISA and ACSC recommend implementing strict data governance controls, including encryption, access control, and defined data retention, as well as reviewing vendor practices to prevent unauthorized reuse of operational data. They also urge organizations to validate and sanitize datasets to detect data poisoning or synthetic data corruption before retraining models.
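As a rough illustration of what “validate and sanitize datasets” can look like, the sketch below drops physically impossible sensor readings and flags statistical outliers before a dataset is used for retraining. The column names, physical limits, and z-score heuristic are assumptions on our part, not requirements from the guidance.

```python
import pandas as pd

# Illustrative assumptions: column names and physical limits are invented here.
PHYSICAL_LIMITS = {"temperature_c": (-40.0, 150.0), "pressure_kpa": (0.0, 1000.0)}

def validate_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Drop physically impossible readings and flag suspect outliers before retraining."""
    clean = df.copy()
    for column, (lo, hi) in PHYSICAL_LIMITS.items():
        impossible = ~clean[column].between(lo, hi)
        if impossible.any():
            print(f"Removing {int(impossible.sum())} rows with impossible {column} values")
            clean = clean[~impossible]
    # Crude poisoning heuristic: values far from the historical mean deserve review.
    for column in PHYSICAL_LIMITS:
        z_scores = (clean[column] - clean[column].mean()) / clean[column].std()
        suspects = z_scores.abs() > 6
        if suspects.any():
            print(f"{int(suspects.sum())} suspect {column} outliers; review before retraining")
    return clean
```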
Treating AI as an embedded, safety-relevant component
The guidance integrates AI directly into functional-safety thinking.
The guide directs operators to test AI systems as rigorously as any new OT system, verifying latency, interoperability, and their effect on safety boundaries. It recommends limiting active control of OT assets by AI without a human in the loop and periodically revalidating model performance to ensure accuracy and safe behavior.
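One way to picture “no active control without a human in the loop” is a simple approval gate between a model’s recommendation and the control system. The sketch below is hypothetical; the tag naming, safe range, and step limit are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class SetpointProposal:
    tag: str          # e.g. a pump speed or valve position tag (hypothetical naming)
    current: float
    proposed: float

# Illustrative assumptions: limits would come from the site's safety analysis.
SAFE_RANGE = (0.0, 100.0)   # allowable setpoint range
MAX_STEP = 5.0              # largest change allowed per approval cycle

def gate_proposal(proposal: SetpointProposal, operator_approved: bool) -> bool:
    """Allow the change only if it is in bounds, a small step, and approved by a human."""
    in_bounds = SAFE_RANGE[0] <= proposal.proposed <= SAFE_RANGE[1]
    small_step = abs(proposal.proposed - proposal.current) <= MAX_STEP
    return in_bounds and small_step and operator_approved
```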
AI-specific development and testing guidance
The guide adopts a secure AI system development lifecycle, emphasizing continuous validation and threat modeling throughout. Operators are encouraged to continuously test AI models for adversarial behavior and manipulation attempts, and to use simulated environments for retraining and performance verification before production deployment.
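As a hedged example of pre-deployment testing for manipulation, the sketch below perturbs model inputs within an assumed sensor-noise band and checks that predictions stay within a tolerance before the model is promoted out of the simulated environment. The model interface (a scikit-learn-style predict method), noise scale, and tolerance are assumptions, not prescriptions from the guidance.

```python
import numpy as np

def robustness_check(model, X: np.ndarray, noise_scale: float = 0.01,
                     tolerance: float = 2.0, trials: int = 20) -> bool:
    """Return True if small input perturbations never move predictions beyond `tolerance`."""
    baseline = model.predict(X)                 # assumes a scikit-learn-style predict()
    rng = np.random.default_rng(0)
    for _ in range(trials):
        perturbed = X + rng.normal(0.0, noise_scale, size=X.shape)
        deviation = float(np.abs(model.predict(perturbed) - baseline).max())
        if deviation > tolerance:
            return False                        # fails the check; do not promote the model
    return True
```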
Explainability, human training, and cognitive load
A unique human-factors perspective runs throughout the guidance.
To mitigate these human-factors risks, the guide calls for cross-disciplinary training to help OT personnel interpret AI decisions correctly and maintain manual skills. It also recommends adopting explainable or interpretable AI tools that make model reasoning traceable to operators and auditors, strengthening safety and regulatory compliance.
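The guidance does not mandate a particular explainability tool; as one plausible illustration, the sketch below uses scikit-learn’s permutation importance on a synthetic predictive-maintenance model to show operators which sensor inputs most drive its predictions. The feature names and data are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

# Synthetic stand-in data; feature names and the maintenance model are assumptions.
feature_names = ["vibration", "bearing_temp", "load", "rpm"]
rng = np.random.default_rng(0)
X = rng.random((500, len(feature_names)))
y = 3 * X[:, 0] + 2 * X[:, 1] + 0.1 * rng.random(500)

model = RandomForestRegressor(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report, in plain terms, which sensor inputs the model's predictions depend on most.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Even a simple ranking like this gives operators and auditors something traceable to review, which is the intent behind the explainability recommendation.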
AI-aware vendor and SBOM expectations
While SBOMs and supply-chain risk management are already established practices, the guidance extends them with AI-specific expectations.
The guidance encourages owners and operators to demand secure-by-design AI systems, integrate vendor oversight into procurement, and include AI model details within SBOMs. It also suggests setting clear contractual expectations around model transparency, data use, and update notifications, mirroring standard OT vendor risk management but extended to AI.
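To make “AI model details within SBOMs” more tangible, here is an illustrative sketch, written as a Python dictionary, of the kind of fields an operator could request from a vendor. The field names and values are assumptions, loosely inspired by ML-BOM efforts such as CycloneDX’s machine-learning-model component type, not a schema from the guidance.

```python
# Illustrative sketch only: the kind of AI model metadata an operator could ask a
# vendor to supply alongside a conventional SBOM. Field names and values are
# assumptions, not a schema defined in the CISA/ACSC guidance.
model_sbom_entry = {
    "component_type": "machine-learning-model",
    "name": "pump-anomaly-detector",            # hypothetical model name
    "version": "2.3.1",
    "supplier": "Example Vendor Pty Ltd",
    "training_data_sources": ["site historian, 2022-2024 (example)"],
    "data_use_terms": "vendor may not retain or reuse operational data",
    "software_dependencies": ["onnxruntime", "numpy"],
    "update_notifications": "30 days notice before model or dependency changes",
    "assurance": {"red_team_tested": True, "last_validated": "2025-06-01"},
}
```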
If you’re already aligned with frameworks like NERC CIP, IEC 62443, and NIST 800-82, much of the remaining content will feel familiar and not especially specific to AI. However, it’s always good to reinforce the basics. The familiar ground includes:
Classic security lifecycle and governance
Risk assessment and business case
Classic data protection and segmentation
Integration and interoperability
Monitoring, logging, and incident response
Training, SOPs, and human factors (in general)
Developed by CISA, Australia’s ACSC, and seven other U.S. and international partner agencies, this publication marks a coordinated global step toward a common foundation for securing AI in critical infrastructure. Its message is straightforward: AI can enhance reliability and efficiency, but only when it’s governed like any other critical control system.
The fundamentals of cybersecurity still apply; they just need to extend to data, models, and human oversight. For operators in power generation, water, manufacturing, and other essential sectors, the document offers both reassurance and practical direction. You don’t need to reinvent your security program for AI; just evolve it with intention.
Read the full joint guidance:
CISA – Principles for the Secure Integration of Artificial Intelligence in Operational Technology