Thanks to tech gurus like Elon Musk, discussions about Artificial Intelligence are everywhere. But despite its ubiquity, very little is accurately or clearly stated about what it is. Contemporary culture has easily accepted that it is a thing and we have to deal with it. This has much more to do with the stories we tell than it does our access to reliable information about AI. As Andrew Breitbart infamously said, "politics is downstream from culture." And the AI story is far more political than most realize.
The West has been priming itself for the reality of AI since at least Fritz Lang's Metropolis. C-3PO was actually inspired by the "Robot Maria" character in that silent film. Over the ensuing decades there have been hundreds of intelligent robots across film, TV, and literature. The vast majority of fictional androids that we've become accustomed to have personalities and emotions. And not just simulated or functional ones. In Star Trek: First Contact, Data experiences sensation. In T2, the Governator finally learned why puny humans cry. In 2001 (one of the great "hard" sci-fi films), HAL makes decisions and fears death. These are all things real robots will never be able to do.
This isn't a scientific claim; it's a philosophical one. Because the AI question is partially technological but mostly philosophical. Specifically, the philosophy of mind. Sadly, scientists, neuro or otherwise, have mostly decided to ignore the philosophy of mind, or to pretend to be studying the mind by studying the brain. This is a category mistake, one of which they are blissfully unaware. Every time you hear a "scientist" wax eloquent about the soul or free will or ethics, it will almost certainly be a waste of your time. You would do better to click away from YouTube and pick up some Epictetus.
Because the truth is that an honest evaluation of AI from a philosophical perspective leads some (myself included) to believe that computers cannot have minds. Understand that this is not the claim that they don't have them now but might develop them someday. It is the claim that the sort of thing consciousness is cannot be achieved through fabrication. In other words, robots don't have souls. Unless, of course, Hogarth and his Iron Giant are right.
The Iron Soul
We mostly just accept the fictional convention that R2-D2 has a mind, the same way we accept other impossible things like FTL travel. But it's almost never discussed in or out of films, because these stories have mostly come from another cultural level: our generally unexpressed neo-Darwinian worldview. More on that later. But within this robotic mythology, The Iron Giant stands apart as a uniquely interesting film because it deals with the concept of a robot soul in a philosophically astute manner.
One of the most touching moments of the now classic animated feature is when Hogarth tries to help the Giant come to grips with death.
Hogarth: Things die. It's part of life. It's bad to kill, but it's not bad to die.
Giant: You die?
Hogarth: Well, yes…someday.
Giant: I die?
Hogarth: I don’t know. You’re made of metal. But you have feelings and you think about things. And that means you have a soul. And souls don’t die.
Surprisingly enough, this is pretty much right. Issues of immortality aside, sensations and thoughts are by definition immaterial. That is, they lack extension. They are spiritual realities. I'm not saying this because I'm religious; there are religious people who deny everything I'm arguing for here. I'm saying this because of philosophy. And while science has been able to explain how a brain works, it cannot explain the most basic thing about human experience: experience itself.
What it is like to be like something
There is nothing "it is like" to be a rock. Rocks have no first-person perspective. They have no experiences. But there is something it is like to be a cat. Cats see and perceive things. They can learn and be taught. All at a very rudimentary level, of course. But regardless, they have a perspective on things. Being a cat feels like something. That means cats have primitive minds and souls. Believe it or not, this view is not only relatively common but undergirded by the biblical concept of nephesh, the enlivening or animating spirit given by God to his creatures. Something like this has been the standard view throughout most of intellectual history.
The great philosopher Thomas Nagel wrote a famous paper on this in the 70s called "What Is It Like to Be a Bat?" A bat's perspective on the world is singular and subjective. It is unrepeatable and unique. Whatever else a mind may be, it is at the very least this first-person perspective (FPP), or a thing that has the capacity for one. I'm not saying that everything that has a perspective is a person. But any subjective perspective (and all perspectives are subjective) is something like a FPP.
And there is no viable physical explanation of this fact of conscious life. There is no physical understanding of subjectivity and "what it is like to be like something." This is what philosopher and TED Talker David Chalmers calls the hard problem of consciousness. And it isn't really a hard problem; it's an intractable one. There is no way to explain consciousness with things that are not like consciousness. This is like claiming that semantics and syntax are identical. If semantics and syntax were identical, then anyone who has read Shakespeare has understood Shakespeare. But this is obviously not the case, since the phrase "to thine own self be true" is the opening line to every pervert's autobiography. Which, ironically enough, is appropriate, since Polonius is one of Bill Shakes' great dumb-dumbs.
Syntax vs Semantics
This syntactic/semantic gap is actually why Hogarth knows that the Giant has a soul. The Giant isn't great with syntax, but he gets semantics. Semantics like death and heroism. AI can produce perfect syntax, and yet, barring a miracle, no AI will ever comprehend a word of English or any other language. This is very hard to convince people of in the age of Siri.
I hate to break it to you, but your iPhone doesn't even know you exist. Your iPhone doesn't know anything at all. It's a tool that you utilize. That's it. Member books? Member when we had to look things up in them? Books are a form of information technology. Books contain syntax that can impart semantics to us. But without human minds to comprehend the meaning conveyed by written words, they're just marks on a page. And if our world can be reduced to the physical, then all we would have left is syntax. And unless "god" or something like "god" grants a computer a soul, it's never going to get beyond syntax either.
John Searle demonstrates this with his famous Chinese room argument. He has us imagine a hypothetical person stuck in a room. This person knows only English. From here on we will simplify the original example and say this guy has a device that can take a piece of paper with questions written in Chinese and answer those questions also in Chinese. So he receives sheets of paper with questions in Chinese and plugs them into his “answering device.” The device pops out another piece of paper with Chinese writing on it, the answers to the original questions. He goes and knocks on the door to let “them” know the answering is done. The door quickly opens and a hand grabs the new sheet taking it away.
This is basically how Google Translate works. You plug something in and something else pops out. Like the monolingual English speaker in the Chinese room, a program like Google Translate doesn't actually know any Chinese. It decodes what is given to it and then re-encodes it. Computers don't speak or know any languages; they are merely told how to arrange them. They can utilize syntax but not semantics. So the Iron Giant isn't merely a computer, because he clearly has access to semantics. He has a soul.
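The room itself can be sketched in a few lines of code. This is a deliberately crude toy, not how Google Translate actually works (modern systems use statistical models, not rule books), and the phrases in the lookup table are hypothetical stand-ins. The point it illustrates is Searle's: the program matches character shapes against a table and emits whatever the table says, without attaching any meaning to either side.

```python
# A minimal sketch of Searle's Chinese Room as pure symbol manipulation.
# The "rule book" pairs Chinese questions with canned Chinese answers.
# Nothing here knows Chinese; the program only compares character sequences.

RULE_BOOK = {
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
    "你会说中文吗？": "会。",          # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    """Look up a canned answer for a question written in Chinese.

    This is syntax only: the string is matched shape-for-shape against
    the rule book, and a default symbol string is returned otherwise.
    """
    return RULE_BOOK.get(question, "我不明白。")  # "I don't understand."

print(chinese_room("你会说中文吗？"))  # fluent-looking output, zero comprehension
```

From the outside, the room's answers can look competent; from the inside, there is only table lookup. Scaling the table up (or replacing it with learned statistics) changes the sophistication of the syntax, not the absence of semantics.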
Darwinism logically necessitates that AI be conscious
But the reason these truths are kept locked away in an ivory tower isn't due to some conspiracy. It's because our contemporary Western worldview necessitates that consciousness be a physical by-product of evolution. This became very clear during a Q&A at a Talks at Google event where Searle was lecturing.
Questioner: You seem to take it as an article of faith that we're conscious, that your dog is conscious, and that that consciousness comes from biological material the likes of which we can't really understand. But forgive me for saying this, but that makes you sound like an intelligent design theorist who says that because evolution and everything in this creative universe that exists is so complex, it couldn't have evolved from inert material. So somewhere between an amoeba and your dog there must not be consciousness, so I'm not sure where you would draw that line. If consciousness is emergent in human beings, or even in your dog at some point in the evolutionary scale, why couldn't it emerge from a computational system that's sufficiently distributed, networked, able to perform many calculations, and hooked into biological systems?
Searle: About could it emerge: miracles are always possible, ya know…the mechanisms by which consciousness is created in the brain are quite specific, and remember, this is the key point, any system that creates consciousness has to duplicate those powers…
This is both true and not true. Consciousness doesn't exist in your brain, nor is it created by your brain. That is a very specific supervenience theory of mind, and it's incoherent for a variety of reasons. One being that Jeffrey Schwartz has shown that your mind can change your brain. But Searle is correct that whatever does cause consciousness would have to be duplicated in something else in order for that thing to have consciousness. But the questioner was not deterred.
Questioner: But machines can improve themselves, and you're making the case for why an amoeba could never develop into your dog over a sufficiently long period of time.
Searle interrupts, saying, "No, I didn't," but the man finishes:
you're refuting that consciousness could emerge from a sufficiently complex computational system.
Here’s the video.
Searle's response doesn't really get at the problem. The questioner rightly pointed out what almost everyone takes for granted in the Western world: everything we are is the product of a purely physical process. Our attempts at creating new and better AI are just an extension of this physical process. If this physical process created consciousness once, then of course it could do it again, especially while consciously trying to.
But a physical process can't lead to a non-physical process. How could it? That's called "magic," and it essentially turns all the AI shops like Google and OpenAI into C.S. Lewis' N.I.C.E. (the National Institute of Co-ordinated Experiments). Why else do you think these pinheads write trash books like Homo Deus? They think that if they arrange enough computers with enough complicated neural quantum flux capacitors in the shape of Stonehenge, a soul will just emerge!
That's what happens when you replace philosophy with the dogmatism of neo-Darwinism. If the Iron Giant exists somewhere, he was made by "god," not Darwin. Or maybe both…