On Monday 27 November, 18 nations signed an agreement on AI safety based on the AI Security Guidelines. Led by the UK's National Cyber Security Centre (NCSC) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA), it was the first agreement of its kind to address the secure development of AI systems. The full set of guidelines can be accessed from the Australian Signals Directorate.
Aim of the guidelines
The overall aim of these guidelines is to ensure the secure development of AI technology, creating an environment in which cyber security is an essential component of AI system development. The U.S. Secretary of Homeland Security, Alejandro Mayorkas, described the guidelines as a pathway for AI providers to design, develop, deploy and operate AI in a manner that places cyber security at its core. Implementing these guidelines will help providers realise the opportunities of AI without exposing sensitive data to unauthorised parties.
What are the guidelines?
The guidelines are broken down into four key areas:
- Secure design
- Secure development
- Secure deployment
- Secure operation and maintenance
These areas provide specific, applicable guidance for each of the four corresponding stages of the AI development life cycle, from design through to operation and maintenance.
The introduction of these guidelines responds to both the novel security vulnerabilities and the standard cyber security threats to which AI systems are constantly subject. As the pace of AI development accelerates, threats and vulnerabilities multiply, while safety is too often relegated to a secondary consideration.
Key Takeaways for Australian businesses
The AI security guidelines are a further form of regulation that will affect key industries providing AI systems. To ensure compliance, AI providers will need to review and apply the relevant guidelines across their entire systems, and develop internal AI policies that implement both these guidelines and any ensuing legislation implemented by the Federal Government in due course.