Saturday, April 20, 2019


At the Barbican

Last week I attended an event at the Barbican, London. The event was “Collisions”, a virtual reality production by Lynette Wallworth. “Collisions” takes the immersed viewer into the traditional lands of Australian indigenous elder Nyarri Nyarri Morgan. Without didacticism or preaching, but with great storytelling, Wallworth provides insight into Britain’s nuclear testing in Australia. “Collisions” is ‘part of “Life Rewired”, a season at the Barbican exploring what it means to be human when technology is changing everything’. “Collisions” ends 20 April.


While I was at the Barbican I saw promotion (image above) for a forthcoming program, also part of the ‘Life Rewired’ season. This program is called “AI: more than human”. I wish I were still going to be in London for this.

The title “AI: more than human” got me thinking. Without a question mark it presents as an assumption: that artificial intelligence is ‘more than human’, the word ‘more’ indicating abundance, betterment, enhancement. But what is enhanced or bettered, and in abundance? Does it mean the good, the bad, or both? While I am sure serious questions will be raised by the artists, scientists and researchers involved in the program, I wondered if a simple question mark at the end of the title might have been more speculatively interesting.

Another immediate thought I had was: why not “AI: other than human”? And with a question mark, “AI: other than human?”, the speculative possibilities are further opened up.

Then I got to thinking about a whole range of phrases, with and without question marks, that prompt speculation about AI. I have written some alternative phrases about AI in the mind-map image below. The question marks in brackets indicate that one could be applied or not. While a question mark only subtly changes a phrase, I suggest it prompts further perspectives. I propose that these phrases or questions highlight the assumptive limitations of “AI: more than human”.

The second mind-map image (below) expresses thoughts that are informed by my research into contemporary militarised and militarisable technologies. The fact that machine learning and artificial intelligence are already incorporated into aspects of modern surveillance, targeting and weapon capabilities raises questions about the role assumption plays in how we might accept, or not, accelerating developments in, and uses for, artificial intelligence. From this perspective, language that presents assumption is a significant risk. By presenting as a fait accompli, an assumption can blind us to alternative ways of thinking and acting.

In the third mind-map (below), the A and I are returned to full words: ‘artificial’ and ‘intelligence’. This enables a game of word separation and alternative word coupling. Thus, further questions that disarm assumption are posed.

I think we have to be careful with the way we use language to describe contemporary technological capabilities such as artificial intelligence. Many terms anthropomorphise technology, and in doing so they draw us into a relationship that may not allow the critical space we need to identify and critique assumption.

I'll leave it up to you now.