Have you noticed the term “artificial intelligence” is oxymoronic? Artificial flowers are not flowers. If intelligence is artificial it is not intelligence. If something intelligent emerged from a machine, it would not be artificial. Although different from human intelligence, it would be the real thing.
Clever but Not Intelligent
First, we must distinguish cleverness from intelligence. We can develop machines that are not just clever but cleverer than we are. However, these are still computing machines: machines that help us work things out. They can be dangerous; there is nothing to stop us programming them to kill. I have every confidence the military have that under control, or will have until someone nicks one and works out how to control it.
But note that, however clever these machines may be, they are still controlled by human beings.
In science fiction the autonomous machine has a long pedigree. I suppose the earliest was the Golem, programmed by a scroll of scripture and so still subject to human control.
Isaac Asimov’s robot series is perhaps one of the more sophisticated approaches to the genre. His three laws of robotics are still influential.
So, is a fully autonomous artificial being possible? I don't doubt it is possible to build a machine that replicates autonomy. The Turing test was designed to detect this: Turing argued that if, not knowing which of two hidden correspondents is the machine, we cannot tell its responses from a human's, then we have the real deal.
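The imitation game behind the Turing test can be sketched as a simple protocol. This is a minimal, illustrative sketch only: the respondent functions are placeholder stubs (not real conversational systems), and the names are my own inventions.

```python
import random

def human_reply(question: str) -> str:
    """Placeholder stand-in for a human respondent."""
    return "I'd have to think about that."

def machine_reply(question: str) -> str:
    """Placeholder stand-in for a machine respondent.

    Returns the same reply as the human, making the two
    indistinguishable by construction for this sketch.
    """
    return "I'd have to think about that."

def imitation_game(questions) -> bool:
    """One round: the interrogator guesses which hidden respondent is the machine.

    Returns True if the guess is correct.
    """
    # Hide which respondent is which behind the anonymous labels A and B.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    transcripts = {
        label: [reply_fn(q) for q in questions]
        for label, (_, reply_fn) in zip("AB", respondents)
    }
    # The interrogator sees only the transcripts. Since the replies are
    # identical here, any guess is effectively a coin flip.
    guess = random.choice(sorted(transcripts))
    actual_machine = "A" if respondents[0][0] == "machine" else "B"
    return guess == actual_machine
```

Turing's criterion is statistical: if, over many rounds, the interrogator does no better than chance at picking out the machine, the machine passes.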
I have my doubts. Human consciousness is the product of living in community. A machine would need to live among us to develop its own personality. It is hard to imagine what true autonomy would be like without personality. Such a being would raise issues such as whether switching off the machine would be murder.
A human child's personality emerges over several years and continues to grow as they accumulate experiences throughout life. A parent, looking back, can discern the seeds of the adult in the baby. Could a machine evolve its personality faster? Maybe, but it would still need others to interact with, and this interaction would set the pace at which the machine became conscious.
Does AI Already Exist?
I doubt anyone really knows what they would need to program to make a machine autonomous. Most people think it is some function of complexity, but the Internet is complex, and there is no way it could ever become conscious.
There is a great deal of human experience that cannot be digitised; our perception of colour, for example. A machine can recognise the colour red from its wavelength but does not experience red as we do. These experiences are sometimes called qualia, and they are one thing that distinguishes conscious beings from machines. Many animals seem to experience qualia, but machines so far do not. But how would we know?
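The point about recognition versus experience can be made concrete. A machine "recognising" red amounts to a lookup over wavelength ranges; a minimal sketch follows, using rough, illustrative wavelength bands (in nanometres) rather than any official standard. Nothing in it resembles the experience of seeing red; it is bookkeeping.

```python
def colour_name(wavelength_nm: float) -> str:
    """Map a visible-light wavelength (in nanometres) to a colour label.

    The band boundaries are approximate, illustrative values.
    """
    bands = [
        (380, 450, "violet"),
        (450, 495, "blue"),
        (495, 570, "green"),
        (570, 590, "yellow"),
        (590, 620, "orange"),
        (620, 750, "red"),
    ]
    for low, high, name in bands:
        if low <= wavelength_nm < high:
            return name
    return "outside visible range"

# A machine can label 680 nm "red" without ever experiencing redness.
print(colour_name(680))
```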
If AI is possible, how do we know we don't have it already? In science fiction it is usually associated with a humanoid body, something still far from reality. Perhaps the machine needs a body of some sort to become conscious, but if AI is possible, why not in any hardware format?
If AI is possible, it may exist already in a box in someone's lab. How would they know it was autonomous? It would be terribly clever and might work out how to propagate itself, but how would we know?
Perhaps the most useful aspect of this debate is the insight it gives us into our own consciousness. What is it that makes us human and unique? Maybe we’re the product of a unique accident or perhaps there are others like us aware of their own mortality. And perhaps it is our mortality that forms our consciousness and makes us unique.
What do you think about AI? Why do you think it is, or is not, possible in principle? If it is, do you think it may already exist?