The integration of artificial intelligence into companies’ business practices poses increased cybersecurity risks, which we have previously written about here. As AI systems become ubiquitous, they also become targets for cyberattacks because of their valuable data and operational significance, and because their rapid development may leave certain AI systems outside the scope of a company’s otherwise robust cybersecurity controls.

As the U.S. Department of Treasury noted in its recent report on managing AI cybersecurity risks for financial institutions, discussed further here, “applying appropriate risk management principles to AI development is critical from a cybersecurity perspective, as data poisoning, data leakage, and data integrity attacks can take place at any stage of the AI development and supply chain. AI systems are more vulnerable to these concerns than traditional software systems because of the dependency of an AI system on the data used to train and test it.”

A joint report authored by the U.S. National Security Agency’s Artificial Intelligence Security Center (AISC), the Cybersecurity and Infrastructure Security Agency (CISA), the Federal Bureau of Investigation (FBI), and government agencies from Australia, Canada, New Zealand, and the United Kingdom (the Joint Report) offers several best practices for deploying secure and resilient AI systems that were developed externally. In this post, we examine the top 10 tips for securing AI systems from the Joint Report.

Top 10 Measures That Companies Should Consider Taking to Protect AI Systems

  1. Access Controls

Enforcing access controls is critical to a strong cybersecurity program. Access controls help prevent unauthorized access to, or tampering with, AI models, and they can be implemented in a number of ways. Such controls may include role-based access controls (RBAC) or attribute-based access controls (ABAC), where feasible, to limit access to only necessary personnel. Distinguishing between users and administrators, as well as requiring multifactor authentication (MFA) and privileged-access workstations (PAWs) for administrative access, are also best practices.
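
By way of illustration only, the Python sketch below shows how a simple RBAC check might gate access to model artifacts. The roles, permissions, and users are hypothetical, and a production deployment would typically rely on an identity provider or cloud IAM service rather than hand-rolled logic.

```python
# Minimal sketch of a role-based access control (RBAC) check for AI model
# artifacts. Role names, permissions, and users below are hypothetical.

ROLE_PERMISSIONS = {
    "ml_admin": {"read_model", "update_model", "delete_model"},
    "ml_engineer": {"read_model", "update_model"},
    "analyst": {"read_model"},
}

USER_ROLES = {
    "alice": {"ml_admin"},
    "bob": {"analyst"},
}


def is_authorized(user: str, permission: str) -> bool:
    """Return True only if one of the user's roles grants the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)


# Example: an analyst can read the model but cannot modify it.
assert is_authorized("bob", "read_model")
assert not is_authorized("bob", "update_model")
```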

  2. Training

Training AI users, administrators, and developers on security best practices is also important. Strong password management, phishing prevention, and secure data handling all go a long way toward reducing risk. Where possible, companies should use a credential-management system to limit, manage, and monitor credential use for their AI systems. More generally, promoting a security-aware culture will minimize the risk of human error.

  3. Audits and Penetration Tests

Retaining external security experts to conduct audits and penetration testing is also a best practice. Conducting these types of assessments on ready-to-deploy AI systems provides an additional layer of security by identifying vulnerabilities and weaknesses that may have been overlooked during the development process.

  4. A Secure Deployment Environment

Before a company deploys an AI system, it should ensure that the IT environment is based on sound security principles, a well-designed architecture, and secure configurations. The best practices that a company would typically follow for any IT environment are equally important for its AI systems.

  5. Continuous Protection

AI systems should be validated before and during use. Validation may take any number of forms, including creating hashes and encrypted copies of each release of an AI model or system, storing all forms of code in a version-control system with proper access controls, evaluating and securing the supply chain for any external AI models and data, and carefully inspecting models inside a secure development zone prior to deployment.
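
As an illustration of the first of these practices, the sketch below records a SHA-256 hash for each model release and verifies it before deployment. The file paths and manifest format are assumptions made for the example, not part of the Joint Report.

```python
# Minimal sketch: record a SHA-256 hash for each model release and verify it
# before deployment. File paths and the manifest format are hypothetical.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_release(model_path: Path, manifest_path: Path) -> None:
    """Store the model's hash at release time (ideally in version control)."""
    manifest_path.write_text(
        json.dumps({"model": model_path.name, "sha256": sha256_of(model_path)}))


def verify_before_deploy(model_path: Path, manifest_path: Path) -> bool:
    """Recompute the hash at deployment time and compare it to the recorded value."""
    expected = json.loads(manifest_path.read_text())["sha256"]
    return sha256_of(model_path) == expected
```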

  6. Logging and Monitoring

Implementing robust logging and monitoring of AI systems’ behavior, inputs, and outputs will help detect any abnormal behavior or potential security incidents. The monitoring should include watching for data drift and for high-frequency or repetitive inputs, which could be signs of model compromise or automated compromise attempts. Without sufficiently robust logging, companies may be unable to identify, investigate, and address suspected security issues to regain confidence in their AI systems.
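
For example, a monitoring layer might log each input and flag high-frequency or repetitive prompts, which the Joint Report identifies as possible signs of automated compromise attempts. The sketch below is a minimal illustration; the thresholds, window size, and fingerprinting approach are assumptions, not prescribed values.

```python
# Minimal sketch: log each request and flag high-frequency or repetitive inputs.
# Thresholds and window sizes below are illustrative only.
import hashlib
import logging
import time
from collections import Counter, deque

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

WINDOW_SECONDS = 60     # how far back to look for repeated inputs
REPEAT_THRESHOLD = 20   # how many repeats within the window trigger a warning

_recent = deque()  # (timestamp, input_fingerprint)


def log_and_check(user: str, prompt: str) -> None:
    """Log the request and warn if the same input recurs unusually often."""
    fingerprint = hashlib.sha256(prompt.encode()).hexdigest()[:16]
    now = time.time()
    logger.info("user=%s input_fp=%s", user, fingerprint)

    _recent.append((now, fingerprint))
    while _recent and now - _recent[0][0] > WINDOW_SECONDS:
        _recent.popleft()

    counts = Counter(fp for _, fp in _recent)
    if counts[fingerprint] >= REPEAT_THRESHOLD:
        logger.warning("Repetitive input detected (fp=%s); possible probing.", fingerprint)
```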

  7. Alert Systems

Because logging and monitoring are most valuable when anomalies are discovered and addressed quickly, companies should also establish alert systems to notify administrators of potential oracle-style adversarial compromise attempts, security breaches, and anomalies. Early detection of security concerns and anomalies can help limit the scope of harm, minimize impact, and allow companies to confirm or restore their AI systems’ integrity, each of which is critical in safeguarding AI systems.
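
A simple way to operationalize this is to route detected anomalies into whatever paging or ticketing system administrators already monitor. In the sketch below, the webhook URL and payload format are hypothetical placeholders.

```python
# Minimal sketch: turn detected anomalies into administrator alerts. The webhook
# URL and payload fields are hypothetical placeholders for an actual paging system.
import json
import urllib.request

ALERT_WEBHOOK = "https://alerts.example.com/hook"  # hypothetical endpoint


def send_alert(event_type: str, detail: str) -> None:
    """Post a structured alert so on-call administrators are notified promptly."""
    payload = json.dumps({"event": event_type, "detail": detail}).encode()
    req = urllib.request.Request(
        ALERT_WEBHOOK, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)


# Example: escalate the repetitive-input finding from the monitoring sketch above.
# send_alert("possible_model_probing", "20+ identical inputs within 60 seconds")
```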

  8. Regular Updating and Patching

Like other information systems, certain AI systems must be updated and patched regularly. After installing updates and deploying patches, companies should run a full evaluation to ensure that accuracy, performance, and security test results are within acceptable limits before redeploying.
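
One way to implement this is a post-patch evaluation gate that blocks redeployment unless test results fall within predefined limits. The metrics and thresholds in the sketch below are illustrative assumptions only.

```python
# Minimal sketch: gate redeployment of a patched AI system on evaluation results.
# Metric names and thresholds are illustrative assumptions, not prescribed values.

ACCEPTANCE_THRESHOLDS = {
    "accuracy": 0.90,        # minimum acceptable accuracy after the patch
    "latency_p95_ms": 250,   # maximum acceptable 95th-percentile latency
}


def within_limits(results: dict) -> bool:
    """Return True only if every post-patch test result is within acceptable limits."""
    return (results["accuracy"] >= ACCEPTANCE_THRESHOLDS["accuracy"]
            and results["latency_p95_ms"] <= ACCEPTANCE_THRESHOLDS["latency_p95_ms"]
            and results["security_tests_passed"])


# Example: block redeployment if any check fails.
post_patch_results = {"accuracy": 0.93, "latency_p95_ms": 180, "security_tests_passed": True}
if within_limits(post_patch_results):
    print("Redeploying patched model.")
else:
    print("Holding deployment; results outside acceptable limits.")
```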

  9. Secure Deletion Capabilities

The volume and variety of data fed to AI systems mean that ensuring secure deletion is critical. Companies should plan secure deletion capabilities that perform autonomous and irretrievable deletion of components, such as training and validation models or cryptographic keys, without any retention or remnants at the completion of any process where data and models are exposed or accessible. Sound deletion practices can effectively limit the scope of potential security incidents.
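
One common approach, sometimes called crypto-shredding, is to keep sensitive artifacts encrypted at rest and destroy the key once the process completes, leaving any remnants irretrievable. The sketch below uses the third-party cryptography package and hypothetical file paths; in practice, key destruction would occur in a key-management system.

```python
# Minimal sketch of "crypto-shredding": keep sensitive artifacts encrypted at rest
# and destroy the key when the process completes, so any remnants are irretrievable.
# Uses the `cryptography` package (pip install cryptography); paths are hypothetical.
from pathlib import Path

from cryptography.fernet import Fernet


def write_encrypted(data: bytes, path: Path) -> bytes:
    """Encrypt data before it ever touches disk; return the key."""
    key = Fernet.generate_key()
    path.write_bytes(Fernet(key).encrypt(data))
    return key


def shred(key: bytes, path: Path) -> None:
    """Delete the ciphertext and discard the key; without the key the data is unrecoverable."""
    path.unlink(missing_ok=True)
    del key  # in practice the key would be destroyed in the key-management system
```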

  10. High Availability/Disaster Recovery

As AI systems become ingrained in companies’ operations and business practices, they should be carefully considered as part of business continuity and disaster recovery planning. Depending on the requirements of the system, companies can prepare for high availability and disaster recovery by using an immutable backup storage system, which ensures that every backed-up object, especially log data, cannot be changed or deleted.
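
By way of example, the sketch below writes log backups to write-once, read-many (WORM) storage using Amazon S3 Object Lock in compliance mode, so the objects cannot be altered or deleted before the retention date expires. The bucket name and retention period are hypothetical, and the approach assumes a bucket created with Object Lock enabled and appropriately configured AWS credentials.

```python
# Minimal sketch: write log backups to WORM storage with S3 Object Lock in
# compliance mode. Bucket name and retention period are hypothetical, and the
# bucket must have been created with Object Lock enabled.
import datetime

import boto3

s3 = boto3.client("s3")


def backup_immutable(key: str, data: bytes, retain_days: int = 365) -> None:
    """Store a backup object that cannot be modified or deleted until the retention date."""
    retain_until = (datetime.datetime.now(datetime.timezone.utc)
                    + datetime.timedelta(days=retain_days))
    s3.put_object(
        Bucket="example-ai-log-backups",   # hypothetical bucket
        Key=key,
        Body=data,
        ObjectLockMode="COMPLIANCE",       # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,
    )
```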

* * *

Financial institutions and other companies that have already invested heavily in cybersecurity programs and risk management should not overlook the additional risks posed by the growing use of AI. By assessing the specific AI risks and evaluating where existing measures should be expanded or supplemented, companies can mitigate the cyber risks while benefiting from the advances that AI offers.

To subscribe to the Data Blog, please click here.

The cover art used in this blog post was generated by DALL-E.

Author

Charu A. Chandrasekhar is a litigation partner based in the New York office and a member of the firm’s White Collar & Regulatory Defense and Data Strategy & Security Groups. Her practice focuses on securities enforcement and government investigations defense and cybersecurity regulatory counseling and defense.

Author

Avi Gesser is Co-Chair of the Debevoise Data Strategy & Security Group. His practice focuses on advising major companies on a wide range of cybersecurity, privacy and artificial intelligence matters. He can be reached at agesser@debevoise.com.

Author

Erez Liebermann is a litigation partner and a member of the Debevoise Data Strategy & Security Group. His practice focuses on advising major businesses on a wide range of complex, high-impact cyber-incident response matters and on data-related regulatory requirements. Erez can be reached at eliebermann@debevoise.com.

Author

Stephanie Cipolla is an associate in the Litigation Department and a member of the firm’s Data Strategy & Security practice. Her practice focuses on cybersecurity and data privacy issues, including incident preparation and response. She can be reached at smcipolla@debevoise.com.

Author

Noah L. Schwartz is an associate in the Litigation Department and a member of the Data Strategy & Security practice group. His practice focuses on incident response, crisis management and regulatory counselling. He can be reached at nlschwartz@debevoise.com.