With recent news revealing the interest of large corporations in massive supercomputers running Artificial Intelligence, such as IBM's $1 billion investment in its Watson A.I. and Google's acquisition of the startup DeepMind, now is as good a time as any to discuss the implications of Moore's Law, the integration of technology into every aspect of our lives, and the possibility of a "singularity".
Currently, the greatest known "supercomputer" would have to be the human brain. As much as we have CPUs with billions of transistors performing billions of floating point operations per second (FLOPS), no software has yet gained sentience or been able to consistently emulate a human being. Programs like Cleverbot and Evie come close when scored on the Turing Test, making us question whether it is indeed just a bot on the other end of the screen, but even then they're just applications spitting back contextually learned phrases stolen from past user interactions. There's no "life" there, no concept of self and understanding. Such has been the trend of the past decades in the evolution of software design; we've become better and better at tricking users into empathizing with simulations, but the simulations themselves are fundamentally unchanged.
The greatest example of this evolution in technological sophistication comes from the gaming industry, where real-time graphics are constantly being pushed to the limits of the currently available hardware. With this latest generation of consoles, as a society we are coming closer and closer to having full photo-realism in real time, and that raises the question... at what point do we cross the "uncanny valley", and how will this affect our views of A.I. in the future?
To begin with, it helps to define what we're talking about here: What is the "uncanny valley", and what bearing does it have on sentient programs? The "Uncanny Valley" is a term coined by Masahiro Mori in his publication "The Uncanny Valley" in the journal Energy. It describes the phenomenon in which things that come closer and closer to appearing like a real human being become creepy at the point where the realism and its flaws conflict; where something looks very nearly real, but noticeably enough not so to be disturbing. We can observe this effect in art and sculpture, and even more so in films and video games. According to Mori's theory, objects in motion have a steeper slope on the uncanny valley, because motion adds another dimension of realism in which an object can fail in your mind. Games like "L.A. Noire" demonstrate this effect wonderfully, with animation that approaches hyper-realism held back by lackluster visuals that condemn it back into the unbelievable. The lesson to be learned from simulations like this: finding the limits of what's "creepy".
On the opposite end of the spectrum we have "cuteness". If we learn from B-grade movies and the current generation of video games what "creepy" is, and how to avoid the uncanny valley, then it helps to understand what its polar opposite is. In studies of "Darwinian aesthetics", it's been shown that things become more "cute" to us when they become rounder, smaller, and with a larger ratio of head size to body size: features we find in our own offspring. From an evolutionary standpoint, it makes perfect sense that such features as are found in our own infants would inspire a feeling of protectiveness and adoration. Artists and designers take advantage of this all the time, with children's programming often featuring much smoother lines and disproportionately drawn characters. Robots designed to interact with humans often use more curves and appeal to this idea of "cuteness".
So... what does any of this have to do with the ethics of creating advanced Artificial Intelligences? What does cuteness or creepiness in design have to do with the supercomputer boxes sitting in massive processing farms that power these entities? Well... it all comes back to Moore's Law. According to Moore's Law (named after Intel co-founder Gordon Moore), every two years the number of transistors on an integrated circuit doubles. With this progress we also see the price of chips fall at a similar rate, which explains how in a few short decades we've gone from massive radios for phones to mobile PCs that fit in our pockets. This suggests that the massive processing farms Google and IBM are currently investing in could conceivably end up running inside something the size of a wristwatch one day, or at the very least stream via the cloud to such devices. When it comes to the design and application of these products, we could be seriously led astray. As much as current DARPA and Boston Dynamics projects look like clunky engine parts held together with a prayer, we are living in a generation where we could see the rise of robotics in an Isaac Asimov sense... or perhaps the rise of Skynet. And therein lies the core issue: how do we control the ethics of robotics? At what point do we draw the line between program and sentience, and at what point do we stop being in control? When the designers making the hardware "bodies" that host these entities understand the factors that make us empathize with or reject an object, are we still in control of our emotions and reactions to these A.I.?
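To get a feel for how fast that doubling compounds, here is a minimal sketch of the arithmetic behind Moore's Law. The starting count (roughly the 2,300 transistors of Intel's 1971 4004 chip) and the fixed two-year doubling period are illustrative assumptions, not a claim about any real product roadmap:

```python
def transistors_after(years, start=2300, doubling_period=2):
    """Project a transistor count by doubling it every `doubling_period` years.

    `start` defaults to ~2,300, the approximate transistor count of the
    Intel 4004 (1971); both parameters are illustrative assumptions.
    """
    doublings = years // doubling_period  # whole doublings completed
    return start * 2 ** doublings

# Project forward in ten-year steps from the assumed 1971 baseline.
for years in range(0, 50, 10):
    print(f"after {years:2d} years: {transistors_after(years):,} transistors")
```

Run over four decades, the count grows by a factor of about a million, which is the intuition behind room-sized processing farms eventually shrinking toward wristwatch scale.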
As we move forward with these Artificial Intelligences, the real question we have to ask ourselves is: how would we react to a "singularity", the creation of self-aware software? Independent films such as the excellent "Sync" by Corridor Digital have attempted to pose this question in a way that engages the audience with a modern action plot, and the very last episode of their periodic release of the film as a series elegantly demonstrated the dilemma.
As human beings we attempt to find comfort in our lives through a sense of control. We believe in our free will and independence, and in this idea that we are somehow superior to all other forms of life and have command of any situation. Honestly though, this isn't always true. We are constantly evolving and expanding our knowledge and understanding of the universe, and if presented with an equivalent or superior being, I think we would feel threatened and panic. The questions we should be asking now are ones philosophers have been asking since the dawn of recorded history. "What is life?", "What is the soul?", "What is free will?", "What is learning and knowledge?", and "What does it mean to be God?". We have a responsibility to make sure we understand our place and the implications of our actions now, and how they might affect the course of the future... we already carry devices and software with us every day that can track our movements, our interests, and our patterns, and that can dissect our personalities through algorithms that derive intentions from our actions.
What would it mean if all those devices suddenly understood they existed...?
My name is Jeffrey Hepburn, and I'm a young writer, graphic design artist, and aspiring filmmaker.