November 18, 2024
CISA Releases AI Security Guidelines
CISA and the U.K.'s National Cyber Security Centre published the "Guidelines for security AI system development."
CISA and the U.K.'s National Cyber Security Centre published the "Guidelines for security AI system development." | CISA
|
The rapid spread of AI software prompted CISA and other global cybersecurity agencies to create new guidelines for safe development of machine learning tools.

With artificial intelligence systems undergoing a rapid pace of development and deployment in recent years across multiple industries, and security often “a secondary consideration,” developers must be active in preventing “novel security vulnerabilities” from taking root in their software, cybersecurity agencies from multiple countries warned in new guidance issued over the weekend.

The “Guidelines for secure AI system development” document was published Nov. 26 by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) and the U.K.’s National Cyber Security Centre (NCSC). Similar organizations in other countries also signed on, including Australia, Canada, Chile, France, Germany, Israel, Japan, Poland, South Korea and Singapore. In addition, multiple technology companies and groups contributed to the document, many with their own AI initiatives, such as Amazon, IBM, Google, OpenAI and Microsoft.

“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure and trustworthy,” Homeland Security Secretary Alejandro Mayorkas said in a press release. “Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.”

CISA and the NCSC’s goal with the document was to provide a framework for developers to “build AI systems that function as intended, are available when needed and work without revealing sensitive data to unauthorized parties.” The guidelines can be used by developers of AI systems created from scratch or by those adding AI to existing tools or systems, with a focus on machine learning applications that can detect patterns in data that are not explicitly programmed by humans and can generate predictions, recommendations or decisions based on statistical reasoning.

New Vulnerabilities in Software Stack

Along with known cybersecurity threats, the addition of AI and machine learning elements to a system introduces new vulnerabilities that malicious actors can exploit. Attackers’ strategies can include prompt injection attacks, which involve manipulating large language models to produce unintended responses and actions, or corrupting the system’s training data or user feedback. If successful, such actions may allow users to perform unauthorized actions, extract sensitive information about the model or alter the model’s classification or regression performance.

Complicating the security picture is the fact that modern software products, including AI applications, integrate components from third parties such as data, models and remote services, making it “harder for end users to understand where responsibility for secure AI lies.” CISA and the NCSC said component providers “should take responsibility for the security outcomes of users further down the supply chain” and, when known risks cannot be mitigated, inform users of the risks and advise them on how to use the components securely.

The agencies arranged the guidelines into four key areas with the aim of covering the entire life cycle of AI system development and ensuring a “secure by default” approach: secure design, secure development, secure deployment, and secure operation and maintenance.

The authors sought to align the guidelines to the NCSC’s “Secure development and deployment guidance” and the National Institute of Standards and Technology’s Secure Software Development Framework.

Secure design covers identifying risks and threat modeling, along with “specific topics and trade-offs to consider on system and model design.” Recommendations in this section for managers include ensuring staff members are aware of the risks facing AI and that threats to the system are understood and adequately modeled. Project leaders should ensure developers consider security as important as functionality and performance, judging the security benefits and trade-offs of elements as fundamental as the choice of AI model.

Under secure development, the authors grouped guidelines relating to supply chain security, documentation and asset management. Developers must identify, track and protect both their in-house assets and those from outside parties, ensuring third-party software is sourced from “verified commercial [and] open-source … developers” and prepared to switch to alternate solutions for mission-critical systems if outside components are compromised.

Technical debt management also falls in the second category; this concept applies to the practice of making engineering decisions for short-term results that fall short of best practices. The document’s authors acknowledged that “like financial debt, technical debt is not inherently bad, but [it] should be managed from the earliest stages of development” so numerous small decisions over rapid development cycles do not add up to major vulnerabilities.

Responsibilities Continue After Release

Secure deployment guidelines apply to the stage in which the AI system has been released to end users. Recommendations in this phase include releasing software only after subjecting it to thorough security evaluations; securing the infrastructure, such as application programming interfaces, models, data and the training and processing pipelines; and developing incident-management procedures.

Finally, once the system has been deployed, developers enter the secure operation and maintenance stage. A developer’s responsibility at this point is to monitor the system’s behavior in order to detect changes that could affect security, including malicious intrusions and natural data drift. Update procedures should be secure, the updates themselves must be tested before they are released, and developers should continue to participate in information-sharing communities to share lessons learned and best practices.

“As nations and organizations embrace the transformative power of AI, this international collaboration … underscores the global dedication to fostering transparency, accountability and secure practices,” said CISA Director Jen Easterly. “This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of international partnership in securing our digital future.”

FERC & Federal

Leave a Reply

Your email address will not be published. Required fields are marked *