How to craft a generative AI security policy that works

The rapid evolution of generative AI is spurring interest in ways to prevent these technologies from doing more harm than good. This is a key concern in cybersecurity, as organizations wrestle with the role GenAI might play in enabling or supporting a security breach.

One way to combat such attacks is to establish a cybersecurity policy that includes AI. Let’s discuss some key security issues GenAI presents and examine what to include in a generative AI security policy.

How does AI affect cybersecurity measures?

AI, and GenAI in particular, introduces a number of cybersecurity risks. Cyberadversaries use GenAI to craft convincing social engineering and phishing scams, including deepfakes. Organizations unable to manage AI-associated risks open themselves to data loss, unauthorized system access, and malware and ransomware attacks, among other threats.

GenAI is also subject to prompt injection attacks, where malicious actors use specially crafted input to bypass a large language model’s normal restrictions, and data poisoning attacks, in which attackers alter or corrupt the data training an LLM.
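One common first-line mitigation for prompt injection is screening user input before it reaches the model. The sketch below is a minimal, illustrative Python example; the phrase list and function name are hypothetical, and a keyword screen alone is a weak control that real deployments layer with model-side guardrails, output filtering and human review.

```python
import re

# Hypothetical deny-list of phrasings commonly seen in prompt injection
# attempts. Illustrative only -- attackers routinely rephrase to evade
# simple pattern matching.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (rules|guidelines|system prompt)",
    r"reveal your system prompt",
]

def looks_like_prompt_injection(user_input: str) -> bool:
    """Flag input that matches known injection phrasing."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A screened-out prompt can then be logged and reviewed rather than silently dropped, which feeds the monitoring procedures discussed later in this article.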

Organizations must also be aware of other GenAI-related challenges, including employees' exposure of sensitive data, shadow AI use, vulnerabilities in AI tools and breaches of compliance obligations.

AI standards and frameworks

Standards and frameworks play a key role in helping organizations develop and deploy secure AI. Each of the following ISO standards addresses AI risk in varying degrees:

  • ISO/IEC 22989:2022 — Information technology: Artificial intelligence: Artificial intelligence concepts and terminology.
  • ISO/IEC 23053:2022 — Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML).
  • ISO/IEC 23894:2023 — Information technology: Artificial intelligence: Guidance on risk management.
  • ISO/IEC 42001:2023 — Information technology: Artificial intelligence: Management system.

NIST developed the Artificial Intelligence Risk Management Framework, an essential document for organizations developing and deploying secure, trustworthy AI.

Dozens of independently developed frameworks, many of which address risk and cybersecurity, are also available to help organizations create and deploy AI-based systems.

Consult the above references and tools when developing an AI-based system, especially when cybersecurity is a concern. These guidelines and their recommended activities can translate to controls that can be built into a GenAI security policy document. They can then be translated into detailed procedures for managing AI-based cyberthreats.

What goes into an AI security policy?

The first decision is whether to update an existing cybersecurity policy to include AI, use a separate AI cybersecurity policy that includes GenAI or develop a separate GenAI cybersecurity policy. For our purposes, the goal is to develop a GenAI security policy.

If the policy defines an overall approach to AI security management, it could be dubbed “AI Security Policy.” If it focuses on GenAI, the name might be “Generative AI Security Policy.” Logically, the former could also include GenAI.

Next, consider the following four principles:

  1. Accept that security breaches happen.
  2. Establish procedures to identify suspicious activity and its source, and address it.
  3. Collaborate with departments such as legal and HR.
  4. Initiate activities and procedures to reduce the likelihood of such security events and to mitigate their severity and impact on the organization.

The following sections identify people, process, technology and other issues to factor into a GenAI security policy.

People

  • Establish a process for identifying suspicious activity that can be associated with GenAI activities.
  • Work with HR to set up procedures to identify and deal with employees suspected of GenAI-based security exploits.
  • Work with the legal department to determine how to pursue legal action over GenAI security breaches.
  • Identify how the company responds to such activities — e.g., reprimand or termination — based on HR policies.
  • Determine legal implications if perpetrators fight legal action.
  • Identify outside experts — e.g., legal and insurance specialists — who can assist with responses to GenAI security attacks.

Process

  • Examine existing procedures for recovering and reestablishing disrupted IT operations to see if they can be used for AI-based breaches.
  • Examine existing disaster recovery (DR) and incident response plans to see if they can be used to recover operations from GenAI-based events.
  • Develop or update existing procedures to recover, replace and reactivate IT systems, networks, databases and data affected by GenAI-based security breaches.
  • Develop or update existing procedures to address the business impact — e.g., lost revenue, reputational damage — from GenAI-based security breaches.
  • Consider using external experts to assist in the aftermath of GenAI-based events.
  • Determine if any standards or regulations have been violated by GenAI-based cyberattacks, as well as how to reestablish compliance.

Technology operations

  • Examine technology that can identify and track cybersecurity activities with suspected GenAI signatures, whether within the firm’s IT infrastructure or with outside firms, e.g., cloud services.
  • Establish methods to shut down GenAI-based activities once they are detected and verified. Quarantine affected resources until the issues have been resolved.
  • Review and update existing network security policies and procedures following GenAI-based attacks.
  • Update or replace existing cybersecurity software and systems to be more effective against GenAI-based cyberattacks.
  • Repair or replace hardware devices that have been damaged by attacks.
  • Repair or replace systems and data affected by attacks.
  • Ensure critical systems, data, network services and other assets are backed up.
  • Ensure data at rest and in transit is encrypted.
  • Recover IT operations, applications and systems that might have been affected by GenAI-based attacks.
  • If additional expertise is needed, consider retaining external vendors or consultants.
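The monitoring items above hinge on being able to spot anomalous activity in usage logs. The following is a minimal sketch, assuming a hypothetical log-event shape and an arbitrary byte threshold; real detection would run inside SIEM or DLP tooling rather than a standalone script.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    """Hypothetical record of one GenAI tool interaction."""
    user: str
    action: str      # e.g., "prompt", "file_upload"
    bytes_out: int   # data leaving the environment in this event

def flag_heavy_senders(events, max_bytes=1_000_000):
    """Return users whose cumulative outbound data exceeds a threshold,
    a crude proxy for possible data exfiltration via a GenAI tool."""
    totals = {}
    for event in events:
        totals[event.user] = totals.get(event.user, 0) + event.bytes_out
    return sorted(user for user, total in totals.items() if total > max_bytes)
```

Flagged users would feed the quarantine and HR procedures described elsewhere in this policy, not trigger automatic action on their own.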

Security operations

  • Establish and regularly test procedures for dealing with physical and logical breaches caused by GenAI-based security events.
  • Establish and regularly test procedures to prevent theft of intellectual property and personally identifiable information.
  • Establish and regularly test procedures to address attacks on physical security systems — e.g., closed-circuit television cameras, building access systems — originating from GenAI-based exploits.
  • Establish and regularly test an incident response plan that addresses all types of cybersecurity events, including those from GenAI-based breaches.
  • If additional expertise is needed, consider retaining external vendors or consultants.

Facilities operations

  • Develop, document and regularly test procedures to repair, replace and reactivate data center and other facilities that might have been disrupted by GenAI-based security breaches.
  • Establish and regularly test procedures to address GenAI-based attacks on physical security systems.
  • Establish and regularly test a DR plan that addresses all types of cybersecurity events, including those from GenAI-based attacks.
  • If additional expertise is needed, consider retaining external vendors or consultants.

Financial performance

  • Develop and regularly review procedures for evaluating the impact of GenAI-based security attacks on financial and general business operations.
  • Define potential legal and regulatory penalties for failure to comply with specific regulations as an outcome of GenAI-based security breaches.
  • Identify potential insurance implications of GenAI-based cybersecurity attacks with the company’s insurance provider(s).
  • If additional expertise is needed, consider retaining external vendors or consultants.

Company performance

  • Develop procedures to repair potential reputational and other damage from GenAI-based cyberattacks.
  • Develop procedures for responding to media inquiries about reported AI-based security breaches.
Click here to download the Generative Artificial Intelligence Security Policy Template.

Policy template

A generative AI security policy template that covers GenAI-based attacks largely incorporates the same line items as a standard cybersecurity policy. It also recognizes that the organization must be able to identify security breaches that exhibit signatures that indicate something other than a “normal” attack.

Use the accompanying template as a starting point to create a policy to address GenAI-based attacks and exploits. Again, the result could be a standalone policy, or the AI content might be added to an established cybersecurity policy.

Paul Kirvan is an independent consultant, IT auditor, technical writer, editor and educator. He has more than 25 years of experience in business continuity, disaster recovery, security, enterprise risk management, telecom and IT auditing.
