The NCSC, together with CISA and 21 other international cyber agencies and ministries, has released the Guidelines for Secure AI System Development, which they hope will help developers make informed cybersecurity decisions when building and deploying new Artificial Intelligence systems.
The guidelines are split into four key areas:
Secure Design - guidance on understanding risks and threat modelling, as well as the trade-offs to consider in system and model design
Secure Development - information on supply chain security, documentation, and asset and technical debt management
Secure Deployment - protecting infrastructure and models from compromise, threat or loss, developing incident management processes, and releasing responsibly
Secure Operation and Maintenance - actions to take once a system has been deployed, including logging and monitoring, update management and information sharing
“We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up,” said NCSC CEO Lindy Cameron.
This guidance is aimed primarily at providers of AI systems, whether those systems are hosted by the organisation itself or accessed through external application programming interfaces (APIs). The NCSC urges all stakeholders (including data scientists, developers, managers, decision-makers and risk owners) to read the guidelines to help them make informed decisions about the design, development, deployment and operation of their AI systems.
The full guidance document is available to read on the NCSC website.
Report all Fraud and Cybercrime to Action Fraud by calling 0300 123 2040 or online. Forward suspicious emails to firstname.lastname@example.org. Report SMS scams by forwarding the original message to 7726 (spells SPAM on the keypad).