Like it or not, artificial intelligence (AI) is becoming ubiquitous. Algorithms are increasingly being tested and used for decisions in areas as diverse as policing patterns, educational outcomes, and loan approvals.

Unfortunately, that doesn’t mean there is a growing understanding of how AI works. Instead, we are seeing a widening divide between those who understand AI and those who don’t, coupled with increasing fear in both groups about its consequences. We therefore need a way to educate people about AI on an ongoing basis. Enter Mieke De Ketelaere, programme director of AI for the Interuniversity Microelectronics Centre (imec) in Belgium and author of the book Wanted: Human–AI Translators.

AI is not just a technical issue

De Ketelaere argues that while we must not underestimate the technical challenges of AI, we must also recognise that there are other, greater issues involved.

“Engineers will never be able to foresee all possible future scenarios or fully understand the complexity of our world. The AI systems they build will therefore never be perfect. This means we must allow as many disciplines as possible to take part in the debate. And when something goes wrong, we should not blame the engineer, because they are just one part of a more complex whole.”

De Ketelaere suggests that we need people who can ‘translate’ between different groups, and build bridges between them. Wanted: Human–AI Translators is, in itself, an excellent first step in bridge-building. It aims to explain and demystify AI for non-technical readers. De Ketelaere has been working in and on AI for many years, since doing her master’s thesis on the subject back in the 1990s. She has seen huge changes in the field since then, and is particularly concerned about the social aspects of the technology.

“The book brings together all my experience to provide a manual for future AI translators. It discusses the technical aspects of AI, but also its social implications. I look at technical issues, but more importantly, I look at what they mean for our privacy and our futures, and especially how people and machines can work better together.”

Not ‘apocalypse now’

De Ketelaere has no time for apocalyptic scenarios, where robots start to take over the world. She does not even believe that they will ever become ‘more intelligent’ than humans.

“I don’t share that view. There is a principle in robotics and AI that states that tasks that are easy for humans, which often revolve around our motor skills, are very difficult for computers and require a lot of computing power. However, tasks that require high-level thinking, such as playing a complex board game, are much less intensive for computers, and more difficult for humans. AlphaGo might have been able to beat Lee Sedol at Go, but that doesn’t mean it can drive. That combination, however, would be no problem for Lee Sedol himself.”

However, dismissing the ‘apocalypse scenario’ does not mean that De Ketelaere has no concerns about AI; quite the reverse. Her concerns centre around the ethics of AI. She describes several issues that may be problematic, from classification systems to bias in algorithms. In particular, she warns against the perils of assuming that AI is always correct. 

“Just as human ‘black boxes’ are sometimes wrong, AI systems can also fail in certain situations. We must dare to question our black boxes. Our world is complex, and it is not easy to replicate it in computer systems.”

Squaring the circle

There is no doubt that AI has huge potential to improve our world, but it also raises serious questions. De Ketelaere believes that the answer lies partly in what she calls AI translators: people who can build bridges between groups so that they can work together on the problems AI poses. However, we also need to remember that there are far more forms of intelligence in the world than our own narrow definition, and that many of these may be far more useful to us than trying to replicate ‘human’ intelligence in computers.

“The time when we only needed closed intelligence, where researchers focused on one area, is over. We now need to build a world of collective intelligence in which people and machines can collaborate with each other in a safe, respectful and transparent way.”

As De Ketelaere herself says, this book may well prove to be both the “starting pistol and a modest manual for that process”.