Friday, October 31, 2014


 Newly prepared canvases [linen] This is the first stage.
Yesterday I received an official request from the State Library of Queensland to allow PANDORA [Australia's web archive - National Library of Australia and partners] to archive my Blog...
YES this one you are reading now! 
PANDORA is an official site for archiving 'online publications and websites of lasting significance' and 'research value' in perpetuity. Check out the State Library of Queensland's selection criteria page and you will see why I am really so very happy that my eight-year-old Blog has been acknowledged this way. 
My proposal for a paper/presentation was accepted and I am one of the speakers at:
Queensland University of Technology, Kelvin Grove, Queensland, Australia.
Thursday 20 November 8.30am - 4.30pm
My topic is:
Cosmic Perspectives

 Newly prepared canvases [linen] and some pristine blank canvases
Have you noticed an increase in concern about how technology might negatively affect us in the future? Well, maybe it's what I read, but there has recently been more 'talk' about artificial intelligence [AI] and the potential for it to overtake human intelligence. So, what's the problem with that, you might ask? Well, apparently one scenario is that super-genius AI types could ultimately view us as merely ant-like in importance and not think twice about wiping us out!
Nick Bostrom [Professor, Faculty of Philosophy and Oxford Martin School Director, Future of Humanity Institute and Director, Programme on the Impacts of Future Technology at the University of Oxford] asks very serious questions about technology generally and specifically AI and super intelligence. He has just published a new book Superintelligence: Paths, Dangers, Strategies which I intend to read. However, I have listened to a one [and a bit] hour presentation he made about the book. You can read more about his thoughts, on a range of topics, at his fascinating website.
There are others cautioning us about AI and technology. These include the enterprising and thoughtful Jaan Tallinn, co-founder of Skype, as well as Elon Musk, a very 21st century entrepreneur. Stephen Hawking has also voiced concerns, as has cosmologist and astronomer Martin Rees. And there are others. In fact, there seems to be a heightened concern amongst scientists, cosmologists and philosophers about existential risk from a variety of possible human-made and natural circumstances. There are think-tanks and research centres popping up, with very serious thinkers as team members, advisors and founders. This is absolutely fascinating. Why? Because, whilst they speak about being cautious, they do not exclude discussion about the benefits of technology, including AI. Also, by thinking critically, across disciplines, about how research and development takes place, risk is analysed with broader brushstrokes. This illuminates risks that may not have been identified with a narrower focus. And, of course, with broader cross-disciplinary thinking there is the potential for new ideas about significant risk mitigation!
A few examples of these research centres and think tanks are:
Future of Humanity Institute University of Oxford
The Future of Life Institute in the USA.
Centre For The Study Of Existential Risk at Cambridge University 
Thank goodness for people like the ones mentioned above AND thanks to all of the others involved in seriously thinking and talking about the potential ramifications, good and bad, of technology in the future.

Newly prepared canvases [linen] and some blank canvases
This week I went to the fantastic musical The Lion King at the Queensland Performing Arts Centre [QPAC]. The singing, staging, costumes and puppetry were all just fabulous, like many performances I've been to at QPAC.
Watching The Lion King I felt a great sense of awe for human endeavour and creativity. Whilst the performance seemed flawless to me, it probably was not...each performance will bring its own set of issues and surprises for the performers and support crew...and that's what makes creativity, exhibition and performance so exhilarating. It's professional practice to deal with surprises [good and bad], 'mistakes', the unintended. But, could artificial intelligence deal with them? With each challenge, could AI improvise? If there was a group of AI beings, would there be a divergence amongst them in reaction to surprise and the unintended? If so, how would they choose which improvisation to work with? Above all, would they feel exhilaration in response to what is essentially an intrinsic part of the creative process?

Many years ago I was a guest artist in a grade 4 class. The children were going to paint on paper. Within the first couple of minutes of starting, one young fellow put up his hand to ask if he could have another piece of paper 'because he had made a mistake'. I said, 'No, you cannot, because you have been given an opportunity to problem solve and improvise.' I counselled him that artists do not give up that easily. We reflect upon the unexpected to see how we can develop its potential.

Well...that's what I do...and I am sure others do as well.

And, you can imagine my kids' frustrations with me over the years!

After the class the teacher commented on my approach to a 'mistake'. She said she would have given the young fellow another piece of paper, but my response had really made her think.

How would an AI being cope with paint dropping or smudging? And, think about this question literally and metaphorically.

Maybe if we lose the ability to see the accidental, mistakes, the unintended, surprise and the unexpected as holding creative potential, then we are no longer really human and thus more vulnerable to technological manipulation? We are, in fact, no longer important...

Maybe if this happens technology can stomp on us like we are ants?

So, why the photographs above? Each photo shows newly prepared and stretched canvases. The prepared canvases show my process of embracing accident! What you see is just the first stage. I allow the paint to do its own thing. How will they end up...who knows?!

