On 11th May 1997, Deep Blue, a chess-playing computer developed by IBM, beat the then world champion Garry Kasparov. It won the six-game match with two wins, one loss, and three draws. Kasparov accused IBM of cheating and demanded a rematch, but IBM refused and dismantled Deep Blue. Kasparov had beaten a previous version of Deep Blue in 1996.
Today, IBM's Watson is marketed as a tool for people to explore and use. Watson is not alone: Microsoft have launched Azure ML, their machine learning platform, and every day new companies are opening for business, promising to provide the answers to humanity's toughest problems.
Computer scientists like Geoffrey Hinton, Yann LeCun and Andrew Ng are leading the way with improved machine learning techniques that have recently led to great advances in deep learning systems.
Software advances are being matched in hardware by the seemingly unstoppable Moore's law: the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon Moore, who described the trend in a 1965 paper. His prediction has proven remarkably accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development. The capabilities of many digital electronic devices are strongly linked to Moore's law: processing speed, memory capacity, sensors and even the number and size of pixels in digital cameras are all improving at roughly exponential rates as well.
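To get a feel for what "doubling every two years" means, here is a minimal sketch of the arithmetic. The starting point of ~2,300 transistors for the 1971 Intel 4004 is an assumption used purely for illustration:

```python
def transistors(years_elapsed, start=2300, doubling_period=2.0):
    """Project a transistor count, assuming one doubling every
    `doubling_period` years (Moore's observation)."""
    return start * 2 ** (years_elapsed / doubling_period)

# Starting from ~2,300 transistors (Intel 4004, 1971, an assumed baseline),
# 40 years of doubling every two years is 20 doublings - roughly a
# million-fold increase, into the billions of transistors.
print(f"{transistors(40):,.0f}")
```

Twenty doublings is a factor of 2^20 ≈ 1,000,000, which is why exponential hardware growth so quickly dwarfs intuition.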
Where will it all end? Each stage of technical development and each computerised victory brings us inevitably closer to the day that machines will outsmart humans…
There are those who call themselves Singularitarians who believe that the creation of a superintelligence, the Singularity, will happen in the near future and that deliberate action ought to be taken to ensure that this intelligence benefits humans. Singularitarians are distinguished from other futurists who speculate on a technological singularity by their belief that the Singularity is not only possible, but desirable if guided prudently.
On the flip side, there are some prominent figures, including Elon Musk and Stephen Hawking, who warn against major advances in artificial intelligence. In a recent interview with the BBC Hawking stated:
“The primitive forms of artificial intelligence we already have, have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
I’m certainly not on the same intelligence scale as Stephen Hawking, but I do have a view. We are undoubtedly developing computers that are becoming more intelligent, and the problems these computers solve are genuinely useful: self-driving cars, speech recognition – where would I be without Siri?!
However, these computers are in no way sentient: they are merely very good at recognising patterns, and they have no personal goals or desires. Animals made this jump with the evolution of the neocortex, which in many ways is what allows mammals to learn new behaviours and humans to develop conscious thought and language.
To match a human-level intelligence, with goals and desires, we must make monumental advances in learning algorithms and develop fundamentally new approaches. We must learn to create the equivalent of a neocortex that sits over lower-level learning algorithms.
That’s not to say we won’t get there one day – I’m certain we will – but we’re a long way from it yet, and have plenty of time to think through the necessary safety concerns.
I, for one, welcome our new machine overlords..!