
CISA & Partners Release “Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT)”

December 8, 2025

Earlier this month, the Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), in collaboration with federal and international partners, released Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT).

Co-authored by CISA and ACSC, and developed with contributions from the U.S. National Security Agency’s Artificial Intelligence Security Center (NSA AISC), the Federal Bureau of Investigation (FBI), the Canadian Centre for Cyber Security (Cyber Centre), Germany’s Federal Office for Information Security (BSI), the Netherlands’ NCSC-NL, New Zealand’s NCSC-NZ, and the U.K.’s NCSC-UK, the guidance reflects an unprecedented level of international coordination around the safe deployment of AI in industrial environments.

The document provides critical-infrastructure owners and operators with practical principles for integrating AI into OT systems without compromising safety, reliability, or compliance.

You can read the full document here.

Risks and Opportunities of AI in Operational Technology

Industrial operators are exploring AI to improve predictive maintenance, optimize production, and detect anomalies—but unlike enterprise IT, OT systems run physical processes where unintended behavior can have real-world consequences.

CISA and partners developed this guidance because most existing AI security frameworks focus on IT and cloud environments, not on deterministic control networks governed by safety standards. The paper adapts those ideas to the OT domain, showing how long-standing cybersecurity fundamentals—asset visibility, network segmentation, change management—can be extended to address the unique characteristics of AI systems.

The agencies also signal that AI is now considered a distinct risk category within critical-infrastructure protection, warranting its own assurance and governance lens.

Principles for the Secure Integration of Artificial Intelligence in Operational Technology (OT)

The publication organizes its recommendations around four principles. Each combines familiar OT best practices with AI-specific considerations.

1. Understand AI. Understand the unique risks and potential impacts of AI integration into OT environments, the importance of educating personnel on these risks, and the secure AI development lifecycle.

2. Consider AI Use in the OT Domain. Assess the specific business case for AI use in OT environments and manage OT data security risks, the role of vendors, and the immediate and long-term challenges of AI integration.

3. Establish AI Governance and Assurance Frameworks. Implement robust governance mechanisms, integrate AI into existing security frameworks, continuously test and evaluate AI models, and consider regulatory compliance.

4. Embed Safety and Security Practices Into AI and AI-Enabled OT Systems. Implement oversight mechanisms to ensure the safe operation and cybersecurity of AI-enabled OT systems, maintain transparency, and integrate AI into incident response plans.

How This Guidance Expands on Existing OT Standards

Does a lot of this sound basic? Admittedly, much of the guide restates what is already considered standard or best practice across the usual OT cybersecurity frameworks and regulations. Below are the points we think are genuinely AI-specific, or that deserve extra emphasis in an AI context.

What’s really specific to AI

Model-centric risks and behavior
Unique AI risks described in the guidance include:

  • Model drift – a model’s accuracy degrades as the process or environment changes, requiring continuous monitoring and re-validation.
  • Hallucination – LLMs fabricating plausible but false outputs; the guidance explicitly warns that such systems “almost certainly should not be used to make safety decisions for OT environments.”
  • Lack of explainability as a risk in itself – the difficulty of understanding how a model arrived at a decision, and how that affects troubleshooting, auditability, and recovery.
  • Prompt injection and AI-specific TTPs – explicit mention of AI-oriented attacks, with a recommendation to use MITRE ATLAS alongside ATT&CK for threat modeling.

To address these risks, the guidance advises operators to establish safe operating bounds, monitor models for drift or abnormal behavior, and validate outputs in simulated environments before redeployment. It specifically recommends using anomaly detection, logging, and regular AI red-teaming to identify vulnerabilities and ensure models remain accurate over time.
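To make the monitoring recommendation concrete, here is a minimal Python sketch (ours, not taken from the guidance) of gating an AI recommendation behind safe-operating-bound and drift checks before it reaches an operator. The class names, thresholds, and error metric are illustrative assumptions.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SafeOperatingBounds:
    """Hard limits derived from the process safety analysis, not from the model."""
    min_setpoint: float
    max_setpoint: float

def drift_detected(recent_errors: list[float], baseline_error: float,
                   tolerance: float = 0.25) -> bool:
    """Flag drift when recent prediction error exceeds the validated baseline by a margin."""
    return mean(recent_errors) > baseline_error * (1 + tolerance)

def gate_recommendation(recommendation: float, bounds: SafeOperatingBounds,
                        recent_errors: list[float], baseline_error: float) -> tuple[bool, str]:
    """Return (accept, reason); anything rejected is logged and routed to a human."""
    if not (bounds.min_setpoint <= recommendation <= bounds.max_setpoint):
        return False, "recommendation outside safe operating bounds"
    if drift_detected(recent_errors, baseline_error):
        return False, "model drift suspected; revalidation required"
    return True, "accepted for operator review"

# Example: a setpoint suggestion checked before it is shown to the operator.
ok, reason = gate_recommendation(72.4, SafeOperatingBounds(60.0, 80.0),
                                 recent_errors=[0.8, 1.1, 0.9], baseline_error=0.7)
print(ok, reason)
```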

AI data lifecycle and “training risk”

The document introduces new expectations around how OT data is used for training and the risks that come with it:

  • Data assurance & sovereignty: where training data resides, who can access it, and whether vendors can reuse it to train their own models.
  • Exposure of sensitive process data: operational data can become “statistically” embedded in models, persisting beyond its normal retention period.
  • Data poisoning and synthetic data risks: both treated as credible threats to model reliability.

CISA and ACSC recommend implementing strict data governance controls, including encryption, access control, and defined data retention, as well as reviewing vendor practices to prevent unauthorized reuse of operational data. They also urge organizations to validate and sanitize datasets to detect data poisoning or synthetic data corruption before retraining models.
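As a rough illustration of what such dataset validation could look like in practice, the sketch below runs a few sanity checks on OT training data before retraining is allowed to proceed. The column names, approved sources, and limits are our own assumptions, not requirements from the guidance.

```python
import pandas as pd

# Sources permitted by the (hypothetical) data-governance policy.
APPROVED_SOURCES = {"historian", "validated_simulation"}

def validate_training_set(df: pd.DataFrame) -> list[str]:
    issues = []
    # Physically implausible sensor values can indicate corruption or poisoning.
    if ((df["temperature_c"] < -50) | (df["temperature_c"] > 500)).any():
        issues.append("temperature readings outside plausible physical range")
    # Records from unapproved sources violate data-governance controls.
    unapproved = set(df["source"]) - APPROVED_SOURCES
    if unapproved:
        issues.append(f"records from unapproved sources: {sorted(unapproved)}")
    # Large blocks of duplicated rows can skew the model toward one operating state.
    if df.duplicated().mean() > 0.10:
        issues.append("more than 10% duplicated rows")
    return issues

df = pd.DataFrame({
    "temperature_c": [82.0, 85.5, 999.0],
    "source": ["historian", "historian", "vendor_upload"],
})
for issue in validate_training_set(df):
    print("BLOCK RETRAINING:", issue)
```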

Treating AI as an embedded, safety-relevant component

The guidance integrates AI directly into functional-safety thinking:

  • Explicitly states that AI/LLMs should not make safety decisions autonomously; a human-in-the-loop must be maintained.
  • Requires safety thresholds for reverting to non-AI systems when performance or safety metrics aren’t met.
  • Instructs operators to update safety and incident-response plans to include AI-specific failure modes and malicious activity.

The guide directs operators to test AI systems as rigorously as any new OT system, verifying latency, interoperability, and their effect on safety boundaries. It recommends restricting AI from actively controlling OT assets without a human in the loop, and periodically revalidating model performance to ensure accuracy and safe behavior.
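One way to picture the revert-to-non-AI requirement is a supervisory check that only allows the AI-advisory path when every safety condition holds. The sketch below is purely illustrative; the accuracy floor and the shape of the interface are assumptions on our part.

```python
from enum import Enum

class ControlMode(Enum):
    AI_ADVISORY = "ai_advisory"      # AI suggests, human approves
    CONVENTIONAL = "conventional"    # pre-existing non-AI control logic

def select_control_mode(model_accuracy: float,
                        safety_interlocks_healthy: bool,
                        operator_confirmed: bool,
                        accuracy_floor: float = 0.95) -> ControlMode:
    """Revert to the non-AI path unless every safety condition holds."""
    if not safety_interlocks_healthy:
        return ControlMode.CONVENTIONAL
    if model_accuracy < accuracy_floor:
        return ControlMode.CONVENTIONAL
    if not operator_confirmed:          # human-in-the-loop requirement
        return ControlMode.CONVENTIONAL
    return ControlMode.AI_ADVISORY

print(select_control_mode(0.97, True, True))   # ControlMode.AI_ADVISORY
print(select_control_mode(0.91, True, True))   # reverts: accuracy below floor
```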

AI-specific development and testing guidance

The guide adopts a secure AI system development lifecycle, emphasizing:

  • How models are trained and updated, and how updates are validated to avoid unsafe changes.
  • The value of AI red-teaming and “offensive” testing focused on model behavior.
  • Cross-references to NIST’s AI RMF and ETSI’s SAI standards as complementary frameworks.

The paper advises adopting a secure AI development lifecycle, integrating continuous validation and threat modeling. Operators are encouraged to continuously test AI models for adversarial behavior and manipulation attempts, and to use simulated environments for retraining and performance verification before production deployment.
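A simple way to operationalize that last point is a promotion gate: a retrained model must match or beat the currently deployed model on a held-out dataset from the simulated environment before it goes anywhere near production. The stand-in model, data, and thresholds below are purely illustrative assumptions.

```python
class ThresholdModel:
    """Stand-in for a trained anomaly classifier: flags readings above a cutoff."""
    def __init__(self, cutoff: float):
        self.cutoff = cutoff
    def predict(self, reading: float) -> int:
        return int(reading > self.cutoff)

def accuracy(model, readings, labels):
    return sum(model.predict(r) == y for r, y in zip(readings, labels)) / len(labels)

def can_promote(candidate, current, readings, labels,
                min_accuracy: float = 0.95, max_regression: float = 0.01) -> bool:
    """Reject a candidate that is weak in absolute terms or worse than what runs today."""
    cand = accuracy(candidate, readings, labels)
    curr = accuracy(current, readings, labels)
    return cand >= min_accuracy and cand >= curr - max_regression

# Held-out data from the simulated environment (illustrative values).
readings = [0.2, 0.4, 0.9, 1.3, 0.1, 1.1]
labels   = [0,   0,   1,   1,   0,   1]
print(can_promote(ThresholdModel(0.8), ThresholdModel(0.5), readings, labels))
```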

Explainability, human training, and cognitive load

A unique human-factors perspective runs throughout the guidance:

  • Train operators to validate AI outputs using alternate sensors or manual checks.
  • Manage AI-generated alarm noise to avoid operator fatigue.
  • Prioritize explainable or transparent AI so humans can understand decisions and auditors can trace them.

To mitigate these human-factors risks, the guide calls for cross-disciplinary training to help OT personnel interpret AI decisions correctly and maintain manual skills. It also recommends adopting explainable or interpretable AI tools that make model reasoning traceable to operators and auditors, strengthening safety and regulatory compliance.
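As one illustrative approach to alarm-noise management, the sketch below suppresses repeats of the same AI-generated advisory within a cooldown window, so only new conditions, or ones that recur after a quiet period, reach the operator. The cooldown value and alarm keys are assumptions, not prescriptions from the guidance.

```python
import time

class AlarmThrottle:
    """Suppress repeats of the same AI-generated advisory within a cooldown window."""
    def __init__(self, cooldown_seconds: float = 300.0):
        self.cooldown = cooldown_seconds
        self._last_raised: dict[str, float] = {}

    def should_raise(self, alarm_key: str, now: float | None = None) -> bool:
        now = time.time() if now is None else now
        last = self._last_raised.get(alarm_key)
        if last is not None and now - last < self.cooldown:
            return False    # duplicate within the window: log it, don't page the operator
        self._last_raised[alarm_key] = now
        return True

throttle = AlarmThrottle(cooldown_seconds=300)
print(throttle.should_raise("pump3/vibration_anomaly", now=0))    # True, first occurrence
print(throttle.should_raise("pump3/vibration_anomaly", now=120))  # False, inside cooldown
print(throttle.should_raise("pump3/vibration_anomaly", now=400))  # True, cooldown elapsed
```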

AI-aware vendor and SBOM expectations

While SBOMs and supply-chain risk are already established, the guidance adds:

  • SBOMs should explicitly describe AI components and hosting locations.
  • Vendors should notify customers when AI produces unsafe or misleading recommendations, not just when CVEs are disclosed.
  • Operators should be able to disable AI features, run them offline, and control whether operational data is reused for training.

The guidance encourages owners and operators to demand secure-by-design AI systems, integrate vendor oversight into procurement, and include AI model details within SBOMs. It also suggests setting clear contractual expectations around model transparency, data use, and update notifications, mirroring standard OT vendor risk management but extended to AI.
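What might an AI-aware SBOM entry look like? The sketch below is loosely modeled on the idea of a machine-learning component in formats such as CycloneDX; every field name and value here is an illustrative assumption on our part, not taken from the guidance or any vendor.

```python
import json

# Illustrative AI-aware inventory entry; all values are hypothetical.
ai_component = {
    "type": "machine-learning-model",
    "name": "pump-anomaly-detector",
    "version": "2.3.1",
    "supplier": {"name": "ExampleVendor"},
    "properties": [
        {"name": "hosting-location", "value": "on-premises, level 3 DMZ"},
        {"name": "training-data-reuse", "value": "customer data not reused by vendor"},
        {"name": "offline-capable", "value": "true"},
        {"name": "can-be-disabled", "value": "true"},
    ],
}
print(json.dumps(ai_component, indent=2))
```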

Reinforcing the OT Cybersecurity Basics

If you’re already aligned with frameworks like NERC CIP, IEC 62443, and NIST 800-82, much of the remaining content will feel familiar and not so specific to AI. However, it’s always good to reinforce the basics.

Classic security lifecycle and governance

  • Secure design → procurement → deployment → operations follows the same lifecycle management seen in CIP-010 and IEC 62443.
  • Governance roles, RACI models, and audit cycles align with standard internal control structures.

Risk assessment and business case

  • “Assess whether AI is the right tool” mirrors traditional technology risk analysis: weigh risk, cost, complexity, and performance.
  • Define clear success metrics and thresholds — equally valid for any new OT capability.

Classic data protection and segmentation

  • Protect OT data in transit and at rest, manage access, and avoid unsegmented data aggregation.
  • Maintain IT/OT separation with DMZs and one-way transfer patterns—straight from CIP-005 and IEC 62443.

Integration and interoperability

  • Evaluate timing, protocol compatibility, and latency before integrating new technologies.
  • Test in non-production environments first — the long-standing mantra of NERC and IEC.

Monitoring, logging, and incident response

  • Inventory components, log access and data flow, and apply KPIs for continuous monitoring.
  • Incorporate AI scenarios into existing incident-response playbooks, rather than reinventing the process.

Training, SOPs, and human factors (in general)

  • Update standard operating procedures, reinforce manual skills, and clarify responsibilities.
  • Avoid over-reliance on automation — consistent with control-center training and NERC PER-005 principles.

Helping Critical Infrastructure Keep Pace with AI Around The World

Developed by CISA, Australia’s ACSC, and seven other national cybersecurity agencies, this publication marks a coordinated global step toward a common foundation for securing AI in critical infrastructure. Its message is straightforward: AI can enhance reliability and efficiency, but only when it’s governed like any other critical control system.

The fundamentals of cybersecurity still apply; they just need to extend to data, models, and human oversight. For operators in power generation, water, manufacturing, and other essential sectors, the document offers both reassurance and practical direction. You don’t need to reinvent your security program for AI; you just need to evolve it with intention.

Read the full joint guidance:
CISA – Principles for the Secure Integration of Artificial Intelligence in Operational Technology