Developing AI Systems That Are Safe And Secure

What is Secure AI? 

Keeping AI systems robust and secure

Many organisations are concerned about potential cyber threats directed at AI components, so it is important to assess how easily adversarial techniques could be used to ‘fool’ AI systems in applications such as video surveillance.

We developed adversarial patches – 2D images designed to be attached to the flat surface of a 3D object. These patches can fool AI image classifiers, creating false positives when placed on roads, attached to walls, or displayed on clothing or accessories such as bags and hats.
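To make the idea concrete, below is a minimal sketch of how such a patch can be trained, assuming PyTorch and torchvision; the model (a pretrained ResNet-18), the patch size and placement, the target class and the random stand-in images are all illustrative choices, not our delivered implementation.

```python
# Minimal sketch of training an adversarial patch (assumes PyTorch + torchvision).
# The patch is a small trainable image region, optimised so that the classifier
# is pushed towards an attacker-chosen target class wherever the patch appears.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
for p in model.parameters():
    p.requires_grad_(False)          # only the patch is trained, not the model

patch = torch.rand(3, 50, 50, requires_grad=True)   # illustrative 50x50 patch
optimizer = torch.optim.Adam([patch], lr=0.05)
target_class = 859                                   # illustrative ImageNet class

def apply_patch(images, patch, x=80, y=80):
    """Paste the patch onto a batch of images at a fixed (illustrative) location."""
    patched = images.clone()
    _, h, w = patch.shape
    patched[:, :, y:y + h, x:x + w] = patch
    return patched

# Stand-in batch; a real attack would iterate over genuine training images
# (with the model's normalisation applied) and randomise patch placement,
# scale and rotation so the printed patch transfers to the physical world.
images = torch.rand(8, 3, 224, 224)

for step in range(200):
    optimizer.zero_grad()
    logits = model(apply_patch(images, patch))
    targets = torch.full((images.size(0),), target_class)
    loss = F.cross_entropy(logits, targets)   # maximise target-class probability
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)           # keep the patch a valid (printable) image
```

Optimising over many images and randomised placements is what makes a patch robust enough to work when printed and placed in a real scene rather than edited into a single digital image.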

Key features & benefits

Provides a clear understanding of the potential for physical-world adversarial attacks on AI systems

Provides the capability to experiment with these attacks and demonstrate them in operational environments (a sketch of such an evaluation follows this list)

Offers the potential to build further adversarial attacks and novel defence measures, all operating in real-world environments and against realistic cyber threats

Creates a clear understanding of the threats to complex systems that incorporate AI components, ensuring a holistic approach to system security

Applies a methodology for protecting AI systems that addresses explainability, assurance and security in the design of the system under consideration.
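As one illustration of how such experimentation might look, the sketch below continues the hypothetical example above and measures how often the trained patch redirects the classifier to the target class; the evaluation batch and success criterion are illustrative assumptions.

```python
# Continuing the hypothetical sketch above: measure how often the trained
# patch redirects the classifier to the attacker's target class.
import torch

@torch.no_grad()
def attack_success_rate(model, images, patch, target_class):
    """Fraction of patched images classified as the attacker's target class."""
    preds = model(apply_patch(images, patch)).argmax(dim=1)
    return (preds == target_class).float().mean().item()

eval_images = torch.rand(32, 3, 224, 224)   # stand-in for a real evaluation set
rate = attack_success_rate(model, eval_images, patch, target_class)
print(f"Attack success rate: {rate:.1%}")
```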


Talk to the experts

Interested in Secure AI or other artificial intelligence capabilities? Talk to an expert. 

Get in touch