New multi-national AI security guidelines are toothless and weak


The United States, United Kingdom, and 16 other countries want to keep the development of AI systems secure, but the framework issued by the group offers only common-sense recommendations and lacks firm action points.




Introduced on Sunday by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK's National Cyber Security Centre (NCSC), the "Guidelines for Secure AI System Development" is a document that advises on the development of artificial intelligence systems.

The intention is to help developers of systems that use AI to "make informed cybersecurity decisions at every stage of the development process," explains CISA. The guidelines also provide recommendations that "emphasize the importance of adhering to Secure by Design principles."

The guidelines were created in conjunction with 22 other security agencies around the world, with contributions from various companies involved in the AI landscape. The list includes Amazon, Google, IBM, Microsoft, OpenAI, and Palantir, but Apple isn't mentioned in the acknowledgments, despite its history of work in the field.

"We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy," according to Secretary of Homeland Security Alejandro N. Mayorkas. "By integrating secure by design' principles, these guidelines represent an historic agreement that developers must invest in, protecting customers at each step of a system's design and development."

Mayorkas continued, adding "Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology."

"We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up," said NCSC CEO Lindy Cameron. "These Guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.

The guidelines



Laid out in a 20-page PDF hosted on the NCSC website, the guidelines are broken down into four areas of concern.

For Secure Design, the guidelines cover understanding risks and threat modeling, along with further discussion on trade-offs in system and model design. Secure Development deals with supply chain security, documentation, and asset and technical debt management.

Secure Deployment covers protecting infrastructure and models from compromise, threat, or loss, along with incident management processes. Lastly, Secure Operation and Maintenance handles post-deployment actions such as logging, monitoring, updating, and information sharing.

Grandiose and ineffective



Guidelines from security agencies on a hot-button topic such as AI sound important. The concept is, but the document doesn't even amount to a good foundation.

These are guidelines, not rules that must be obeyed. Companies developing their own AI systems could read the document, but they have no obligation to put any of it into practice.

There are no penalties for ignoring what's outlined, and no new laws are being introduced. The document is simply a wish list of things governments want AI makers to think about.

As with any framework that offers only guidance, there's no guarantee that anyone will actually take the recommendations on board. It's also unclear when, or if, legislation mandating what's in the document will arrive.

That said, companies working on AI are almost certainly taking security concerns into account when developing their products. Given the potential importance of AI in the future, and the chance of it being misused or attacked, many developers have already added security measures.

The new inter-agency framework certainly offers common-sense guidance to AI producers, but that information is likely to be ignored by developers already keenly aware of the need for security.

Read on AppleInsider

Comments

FileMakerFeller · Reply 1 of 1
    Guidance is probably the best first step. Mandating approaches can be done once people with expertise agree on the best path forward; not only do governments currently lack that expertise, but the industry is experimenting with multiple approaches to see what works best, so it's very unlikely that consensus will be reached.

    I'm not an expert in the field by any means, but all of the developments I've seen so far indicate that artificial general intelligence - machines that can think and have agency - is far, far in the future. The biggest threat from the current landscape is misuse by human actors, so a focus on security and "defence in depth" allows for commercialisation with a measure of safety.