
Roke meets Katy

Artificial intelligence (AI) is now a fixture in our working and personal lives. From automated diagnosis of medical images through to vision processing in self-driving cars, machine learning (ML) algorithms are making sense of ever larger quantities of complex, unstructured data.

We invited Katy, Head of Profession for artificial intelligence and machine learning at Roke, to share more about her career, explore whether AI can be made foolproof and discuss the publication of her book, ‘Strengthening Deep Neural Networks’.

Tell us about your entry into the industry

I became interested in programming in my early teens. I started practising on a ZX81, which was state of the art at the time, and thoroughly enjoyed it. I then went on to read for a degree in AI and computer science at Edinburgh University. AI was in its infancy in those days, and the bulk of the course focused on robotics, basic computer vision, knowledge representation, natural language processing, and more philosophical questions such as whether it is possible to produce artificial consciousness. That was a far broader interpretation of ‘AI’ than the one commonly used nowadays.

Today, AI is largely synonymous with machine learning, in particular deep neural networks (DNNs). DNNs require huge amounts of data and compute power, which were not readily available until recent years. There are good arguments for replacing the term ‘AI’ with more meaningful terms for specific aspects of the discipline. For example, ‘intelligent automation’ for automotive systems that include learned or intelligent behaviours, or ‘augmented intelligence’ to articulate how AI can assist humans.

When I left university there were very few job opportunities in AI, so I went straight into a job at IBM, where I spent many years working in software development, specialising in enterprise software solutions.

How has Roke helped develop your career as an engineer?

I joined around nine years ago, looking to try something totally different. I found myself working in big data, and then AI and machine learning. Much of this required learning again from scratch, as it had all changed so much since my time at university.

There’s been a lot to learn, and getting up to speed with modern AI has been a tough but rewarding challenge. Lots of resources are online, and being able to pick the brains of my colleagues at Roke has been invaluable. There are so many tech-minded people here to bounce ideas off of, which creates a really innovative environment. Roke’s also a good size – it’s not so huge that it’s impersonal, but it’s big enough to make a difference and provide variety.

You’ve recently published your book, Strengthening Deep Neural Networks – how was it conceived?

I have a background in software security and am interested in how we use AI in the real world – in understanding its robustness and the risks (or not) of incorporating AI components into bigger systems. Much of the research into adversarial examples has been done in laboratory environments where the DNN is considered in isolation. In practice, DNNs are simply algorithms – small parts in bigger processing chains. So there is an interesting question: in what circumstances might the incorporation of ML components into an organisation’s software introduce this type of cyber-threat, and how can such threats be mitigated?

I couldn’t find a book that answered this question and, almost on a whim, I sent an email to O’Reilly (the publisher), because I thought it would be great if there was such a book on the subject that was accessible to a wide audience – not just those with expertise in deep learning. I was pleased, surprised, and a little nervous when they came back and asked me to write it!

To get your hands on a digital copy of Strengthening Deep Neural Networks visit the O’Reilly website. Hardcopies are also available from Amazon.

Tell us about how deep neural networks (DNNs) can be tricked

It’s possible to fool a DNN by presenting it with unexpected data that appears normal – essentially, data that it has not been properly trained to understand. For example, perturbing a few specific pixels in an image of a cat might cause a DNN to misclassify it as a ‘dog’, even though the image still looks like a cat to us. The important thing is that such a change would not fool a human, which suggests that although DNNs are ‘biologically inspired’, they do not really mimic human thinking. An input deliberately crafted to fool AI in this way is known as an ‘adversarial example’.
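To make the idea concrete (this is an illustrative sketch, not an excerpt from the book), one widely known way of generating an adversarial perturbation is the fast gradient sign method: nudge every pixel slightly in whichever direction increases the classifier’s loss. The snippet below assumes a hypothetical PyTorch classifier called model, an image tensor with pixel values in [0, 1] and its correct label; the names and the epsilon value are placeholders.

```python
# Minimal fast gradient sign method (FGSM) sketch.
# Assumes: `model` is a PyTorch classifier, `image` is a [1, C, H, W] tensor
# with values in [0, 1], and `label` is a tensor holding the true class index.
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged to increase the loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a small amount in the direction that raises the loss,
    # then clamp so the result is still a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

For a well-chosen epsilon the perturbed image typically looks unchanged to a person, yet the model’s prediction can flip – exactly the cat-to-‘dog’ behaviour described above.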

The implications of this are interesting. For example, if you had some kind of web filter that relied on DNN technology to stop people uploading specific types of images to a particular website, such as images containing firearms, then it might be possible to change an image so that it wasn’t caught by the filter, while to a human viewer the image appeared unchanged. Similarly, consider a surveillance camera checking for the presence of firearms in a public place. If it was possible to place some seemingly innocuous pattern in the scene captured by the camera, and if that pattern confused the DNN, someone could potentially trick the AI into wrongly reporting false-positive sightings of firearms.

How can these issues be tackled?

The first thing is to decide whether or not there’s actually a risk. A DNN can only be tricked if the attacker is able to change the data that’s presented to it. For example, if an attacker has access to the digital content, or to the real-world cameras or microphones that collect the data, then there’s potential for them to trick the DNN that processes that data. If there is a risk, it may be that simple measures can be put in place as part of a broader processing chain in an organisation, like having a human in the loop.
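As a simple illustration of that kind of measure (a sketch under assumed names, not a recipe from the book), the pattern below sends any prediction the model is not confident about to a human reviewer instead of acting on it automatically. It assumes a hypothetical PyTorch classifier model and a review_queue that a human operator works through; the 0.9 threshold is arbitrary.

```python
# Minimal human-in-the-loop sketch.
# Assumes: `model` is a PyTorch classifier applied to a single image, and
# `review_queue` is any collection (e.g. a list) that a human operator reviews.
import torch

def classify_with_review(model, image, review_queue, threshold=0.9):
    """Act on confident predictions automatically; escalate the rest."""
    with torch.no_grad():
        probabilities = torch.softmax(model(image), dim=1)
    confidence, predicted_class = probabilities.max(dim=1)
    if confidence.item() < threshold:
        # The model is unsure: defer the decision to a person.
        review_queue.append(image)
        return None
    return predicted_class.item()
```

A raw softmax score on its own is a weak signal – adversarial inputs are often misclassified with high confidence – so this only illustrates the escalation pattern, not a robust defence.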

There are some very interesting methods for testing whether DNNs are giving confident results – that is, whether an input is ‘safe’ for the model, in the sense that it is representative of the data the model was trained on. These ideas are very new and are developing all the time because the situation is fluid: defenders can strengthen a DNN against a threat, but then an attacker develops something more sophisticated in order to work around it. It’s all about being aware that the attack is possible.
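One commonly cited way of strengthening a network is adversarial training, in which each training batch is augmented with perturbed copies of itself, generated much as in the earlier FGSM sketch. The snippet below is again only a sketch: model, optimizer, images (with pixel values in [0, 1]) and labels are assumed PyTorch objects, and the epsilon value is a placeholder.

```python
# Minimal adversarial-training sketch: one optimisation step over a batch
# together with an FGSM-perturbed copy of it. All names are placeholders.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    """Train on clean and perturbed versions of the same batch."""
    # Build perturbed copies of the batch (fast gradient sign method).
    perturbed = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(perturbed), labels).backward()
    perturbed = (perturbed + epsilon * perturbed.grad.sign()).clamp(0.0, 1.0).detach()

    # Optimise on clean and perturbed examples together.
    optimizer.zero_grad()
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(perturbed), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on perturbed examples tends to make this particular kind of attack harder, but, as Katy notes above, attackers then look for more sophisticated ways around the defence.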