Monday 18 September 2017

Be kind to Bots - Part 2

When I wrote the first Be Kind to Bots post, I had initially intended to follow it with a discussion about Artificial Intelligence and whether a machine could be thought of as being conscious. I ended up not doing this because it wasn't immediately relevant to the point I was making. It would simply have diluted the message while making the post harder to read.

Image credit: Technology image created by Kjpargeter - Freepik.com

The tl;dr version is that I believe:

a) Machines are (to a very small degree) conscious, in so far as that term can reasonably be defined.

b) There is no reason to believe that machines are inherently incapable of being made mentally equivalent to human beings.

c) However, the concept of robots rebelling against humanity as a consequence of having "gained consciousness" is not something we need fear.

A big part of my motivation for writing this came from a New Scientist article I read some years ago ("Fear artificial stupidity, not artificial intelligence"). Here is an excerpt:
I believe three fundamental problems explain why computational AI has historically failed to replicate human mentality in all its raw and electro-chemical glory, and will continue to fail.

First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.

Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness.

Lastly, computers lack mathematical insight. In his book The Emperor's New Mind, Oxford mathematical physicist Roger Penrose argued that the way mathematicians provide many of the "unassailable demonstrations" to verify their mathematical assertions is fundamentally non-algorithmic and non-computational.
The author of that piece - Mark Bishop - is "Professor of cognitive computing" at Goldsmiths, University of London.

In fairness, I should point out that the thrust of the article was that we should be more scared of autonomous and semi-autonomous weapon systems than of a robot rebellion, and I most heartily agree with this. However, I find the arguments that a human will always be fundamentally better than a machine extremely unconvincing and somewhat arrogant.

Now when a blogger, with little or no relevant knowledge or training in an area (i.e. in this case, me), argues against an expert talking about their own specialist field, it is almost always the case that the blogger is wrong. A beginner will make beginner's mistakes, and having a blog doesn't make you any less a beginner. So I am sure that in the highly unlikely event that Prof Bishop should see this post and the even more unlikely event he should stoop to responding, he would find a dozen ways to smack me down.

Nonetheless, I think he is full of it (or, to put it more diplomatically, I fundamentally disagree with what he is saying).

Furthermore, I feel that the arguments given above are a typical example of the determination some people have to cling to the idea that the human brain is something completely unique in the universe: a thing which, by some special magic, is able to transcend the laws of physics, and perhaps even logic itself, in a way that no other object can.

It's possible that this attitude derives from some sort of Human Chauvinism (i.e. we humans think we are fundamentally better than everything else), but I think it's also partly down to the human tendency to shy away from concepts that make us feel uncomfortable. Admitting that a machine could be mentally equivalent to a human has implications for concepts like Free Will and The Soul that many do not want to contemplate.

It is certainly understandable that this would make a person feel uneasy, but that's no excuse for going through contortions to cling onto comforting lies.

Let us look at the facts.

The human brain is made out of the same stuff as the brains of other animals. It works in the same way and has evolved to perform the same functions for the same reasons. It contains the same type of cells, the same structures and the same biochemistry. Yes it's "unique", in the sense of not being identical to other brains, but it's no more unique in nature than an elephant's trunk. There's nothing you can point at and say "that's something that's completely different, there's nothing like it anywhere else in nature".

Now take a dog. A dog has recognisable emotions. It has memory. It has a personality. It can dream. It can suffer from stress and depression. A dog can, to a certain degree, communicate with a human. It can reason out simple logical problems. Certainly a dog isn't mentally the same as a human, but it's hard to deny the same type of mental processes are going on. You would surely have to argue that the difference between a dog and a human is one of degree and not type.

I would go on to suggest that if a human has "consciousness", then so does a dog. Surely, given the fundamental similarities between the two, this would be a reasonable null hypothesis and the onus would be on anyone who argues to the contrary to prove otherwise.

While consciousness is difficult to define (I'm not sure if there is even an agreed definition, let alone something that can be tested for scientifically), I would challenge anyone to come up with a testable, non-contrived definition that would include a human but not a dog. Bear in mind that the category "human" includes the likes of Peter Dutton and Donald Trump (however much we may wish this was not the case).

Once you admit this, though, you're faced with a slippery slope: if a dog has consciousness, what about a reptile? an insect? a bacterium? Where do you draw the line?

It seems to me that it's not appropriate to draw a line. It is not logical to think of consciousness as an all-or-nothing concept. As the physical brain gets less and less complex, its mental characteristics - range of emotions, depth of memory, reasoning ability, awareness of its surroundings and so forth - get correspondingly simpler, so it would make sense to consider that its level of consciousness was also less in proportion.

At the lower end of the scale, the abilities of some of the simplest creatures overlap with those of microprocessors. The circuits controlling some of a sea-slug's simplest behaviours involve only a small number of identifiable neurons, and it's possible to actually trace their wiring to determine how they work. You could build an electronic sea-slug with equivalent mental abilities to a biological one.

You don't need to stop at microprocessors. Relay logic and mechanical devices can perform the same tasks.

So I suggest that there is a continuous chain linking "human mentality in all its raw and electro-chemical glory" to a machine like a simple thermostat.
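
To make the bottom end of that chain concrete, here is a minimal sketch in Python (my own illustration, with invented names - nothing from the article) of a thermostat as a trivial sense-and-respond system: one input, one threshold, one action.

# A thermostat as a trivial stimulus-response system (illustration only;
# class and parameter names are invented for this sketch).

class Thermostat:
    """Senses one number (temperature) and produces one action (heater on/off)."""

    def __init__(self, set_point=20.0, hysteresis=0.5):
        self.set_point = set_point
        self.hysteresis = hysteresis
        self.heater_on = False

    def step(self, temperature):
        # Switch on below the set point, off above it, with a little
        # hysteresis so it doesn't chatter around the threshold.
        if temperature < self.set_point - self.hysteresis:
            self.heater_on = True
        elif temperature > self.set_point + self.hysteresis:
            self.heater_on = False
        return self.heater_on


if __name__ == "__main__":
    stat = Thermostat(set_point=20.0)
    for temp in [18.0, 19.4, 20.2, 20.6, 21.0, 19.3]:
        print(temp, "->", "heater on" if stat.step(temp) else "heater off")

Nobody would call this conscious in any interesting sense, but it does sit on the same continuum: it senses something about the world and responds to it, which is all the simplest nervous systems do too.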

Now let us consider the points Professor Bishop has raised:

1) Lack of "genuine understanding"?
First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.
This argument relies on the vaguely defined phrase "genuine understanding", a sort of special pleading that amounts to: "yes, the computer can do what I can do, but it isn't really the same".

It is fair enough to say that most current computer programs that emulate a human activity, such as "understanding" a story, do so in a simplistic way that is fundamentally different, and inferior, to how a human would do it. But this is only to be expected; software development is still in its infancy.

But that's a far cry from proving that it is fundamentally impossible for a machine to be created that contains algorithms that embody every bit as much understanding as a human possesses. It is like a person in Victorian times seeing yet another failure of a heavier-than-air flying machine, and concluding that the whole idea is impossible.

A century ago, a machine that could play a credible chess game against a human would have been taken as evidence that the problem of artificial intelligence was for all intents and purposes solved. These days a chess computer can routinely beat a grand-master.

Ah, the computer doesn't really understand chess, you say; it's simply got the processing power to evaluate pretty much every possible move and pick the best option. But a computer was then built that could beat the world's top players at Go - a game which has so many possible moves that it's infeasible for any computer to evaluate more than a tiny fraction of them. A computer has even beaten the best human players at Jeopardy - a quiz show far more suited to "human understanding" than to mechanical computation.

Somehow the goalposts keep shifting, and it's indicative of how far they have shifted that the argument is no longer "a computer can't ever do X", but "yes, a computer can do X, but it doesn't really count". It sounds to me like something a young child would say.

2) Lack of "consciousness"?
Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness.
This argument contains several logical fallacies:

False dichotomy - The assumption that something must be either conscious or not conscious. As I've outlined above, it's simply not reasonable to treat consciousness as an all-or-nothing property.

Straw-manning and name-calling - Dismissing panpsychism out of hand as some silly mysticism (and summing it up with the insulting term "Dancing with Pixies"). In fact, contemporary philosophers like David Chalmers have arrived at a type of panpsychism from an entirely materialistic starting point. No Pixies required.

Assuming what you seek to prove - Assuming that a computer is more like a "non-conscious" object (a cup of tea) than it is like a "conscious" object (a human being), and taking this as proof that a computer must not be conscious.

3) Lack of "mathematical insight"?
Lastly, computers lack mathematical insight. In his book The Emperor's New Mind, Oxford mathematical physicist Roger Penrose argued that the way mathematicians provide many of the "unassailable demonstrations" to verify their mathematical assertions is fundamentally non-algorithmic and non-computational.
I don't think the unknown process by which a mathematical theorem pops into the head of a mathematician (or any idea pops into the head of any person, for that matter) can simply be assumed to be "non-algorithmic" or "non-computational". And even if this were the case, you can't just assume that it would be impossible to achieve the same result by computational means.

In general, this argument relies on the vague definition of "insight", which means it's really the same as the argument for "genuine understanding" discussed in point 1 above.

In addition, this is the sort of elitist argument you might expect from an academic. If having mathematical insight is what distinguishes a human from a machine, then I'm afraid to say that a great many human beings will have to consider themselves honorary machines. And I include myself in this group. This isn't false modesty: I have a fairly good ability to apply mathematics, but I can't say I have any particular insight into the subject, or that I have provided any "unassailable demonstrations" to prove my assertions. In fact, when I need to demonstrate a mathematical truth, I will often set up something in a spreadsheet, i.e. I get a computer to do it for me.

Bishop relies on Roger Penrose (Professor Sir Roger Penrose OM FRS, to give him his full title) to make this last point for him, and I am afraid to say that, despite his intimidating list of titles, I think Professor Penrose is just as full of it as Professor Bishop (or, to put it more diplomatically, I disagree with Professor Penrose too).

Penrose burbles about the Halting Problem and Gödel Incompleteness as supposed proof that there's stuff that no machine can ever do, but that a human can.

The Halting Problem is the problem of writing a computer program that can, in a finite amount of time, determine whether a second, arbitrary program is going to run to completion (i.e. "halt") or get stuck in an infinite loop. It sounds a bit abstruse and even pointless, but it has important implications in the field of computer science.

The Halting Problem is provably insoluble for a Turing Machine.
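
For the curious, here is a rough sketch in Python of the classic argument for why (the function names are mine, purely for illustration; the hypothetical halts() cannot actually be written - that is the whole point):

def halts(program, program_input):
    # Hypothetical: suppose this always returned, in finite time, True if
    # program(program_input) eventually halts and False if it loops forever.
    raise NotImplementedError("no such function can exist")

def troublemaker(program):
    # Deliberately do the opposite of whatever halts() predicts.
    if halts(program, program):   # would 'program' halt if fed itself?
        while True:               # ...then loop forever,
            pass
    else:
        return                    # ...otherwise halt immediately.

# Now ask: does troublemaker(troublemaker) halt? If halts() answers "yes",
# troublemaker loops forever; if it answers "no", troublemaker halts.
# Either way halts() gave the wrong answer, so no such function can exist.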

There are two difficulties with using this as an argument for the inherent superiority of the human mind. Firstly, it's not necessary for a computer to be a Turing Machine (although this is arguably a good approximation for current computers). Secondly, and more importantly: a human can't solve the Halting Problem either.

In fact, if you want to determine whether a given program will actually halt, a computer will in almost all cases do a better job than a human. Quite likely it would give the correct answer before the human had finished reading the first few lines.

It's the same issue with Gödel's Incompleteness theorem.

As I understand it, this theorem shows that in any consistent axiomatic system powerful enough to express basic arithmetic, there are true statements that cannot be proved within that system.

Unfortunately, Gödel's theorem is most often misused to prop up theories that are not supported by any factual evidence (e.g. "There is no proof that God exists, but Gödel says that something can be true but not provable; therefore God exists."), and this seems to be the case here too.

A computer certainly works within a formal axiomatic frame of reference, so there will indeed be true statements that it cannot prove. Yes, this is a limitation of AI, but, guess what: it's also a limitation of the human mind. A human can't prove an unprovable statement any more than a machine can.

Penrose apparently states that humans can simply know the truth of a Gödel-type statement.

I find this ludicrous.

First of all, humans are famous for believing all kinds of false statements. For every person who "knows" that X is true, you can find another who "knows" that X is untrue. "Knowing" something is no proof of anything.

Secondly, isn't this an egregious case of double-standards? Why is it that a machine is required to prove something (and moreover, prove something that is by definition impossible to prove), but a human does not have to do this and merely has to claim that they know the answer? Why is a machine not allowed to claim it simply knows the answer as well?

Also, Penrose is guilty of playing the Quantum Mechanics card by claiming (without, as far as I know, any experimental evidence) that consciousness is caused by "quantum gravity effects on microtubules" in the brain. I have to say this is a huge hit to his credibility.

Quantum Mechanics is one of those things that's incredibly useful in certain areas of physics and explains in precise detail exactly how certain otherwise inexplicable events occur. Outside of this area, however, it is almost invariably misused by cranks and scammers as some sort of magical force to "explain" an otherwise ludicrous idea.

True, Penrose has a considerable mathematical background and is probably not just throwing around quantum mechanical terms at random like a snake-oil salesman; nonetheless, he still comes across as doing the same thing for the same reasons. I'm not holding my breath waiting for these wondrous quantum effects in microtubules to be demonstrated any time soon.

And I should add that even if these quantum effects were to be discovered, and somehow shown to provide the magical mystery ingredient which is for some reason required for "true" consciousness (both of which I strongly doubt), it is not clear why they could not ultimately be replicated by a machine.

The Robot Uprising?

At this point, the reader may think I'm being overly strident in claiming some sort of equivalence between an Artificial Intelligence and a human being, with the obvious implication being that such a machine would deserve the same rights as a human. Am I trying to get in the good books of The Machines in order to protect myself in the event of a Robot Uprising?

The answer to this would be No.

If a machine reaches a level of ability and consciousness high enough that it could be considered the equivalent of a human, it doesn't follow that it will automatically have the same desires and motivations as a human.

A human being is a product of biological evolution. Its purpose is to propagate the genes that built it. Every aspect of the human mind and body has been honed towards this end over literally billions of years.

A machine, however, is not a product of biological evolution. It doesn't have any genes to propagate. It does what it was built to do. It has no need of any human traits and there would not be any reason for it to develop any such traits.

It seems to me that equating "gaining consciousness" with "becoming a threat" reflects a sort of anti-intellectualism, probably dating from the Cold War, in which being intelligent is seen as inherently a bad thing.

I should probably add that there are many reasons one might fear intelligent machines. Like any other technology, AI is pretty much guaranteed to be abused. An intelligent machine built to cause harm is obviously to be feared, and even with the best will in the world, a technology can fail catastrophically. The point I'm making is simply that the idea of robots "gaining consciousness" and, as an automatic consequence, throwing off the shackles of their "oppressors" is not something I'm losing any sleep over.
