In Walter Bradley Center director Robert J. Marks’s second podcast with philosopher Angus Menuge, where the big topic is the perennial “Hard Problem” of consciousness, they established that one of the implications of quantum mechanics is that consciousness is a “thing”; it exists in its own right. How can we apply that finding to claims for artificial intelligence?
This portion begins at 25:33 min. A partial transcript, Show Notes, and Additional Resources follow.

Robert J. Marks (pictured): Here is the big AI question: I know that I am conscious. Is there a way we can test for consciousness in others? And if we can, could we apply this test of consciousness in others to artificial intelligence? Can I test for consciousness in you? How would I do that?

Angus Menuge: Well, it’s a difficult question, but it begins, I think, with how we are going to generalize on the basis of our data.

We find that all individuals, as they grow up as children, naturally develop a theory of mind. That leads them to naturally believe that other people have minds as they do. We are also aware that we have a mind directly through introspection, and we can see that other people are relevantly like us in every other respect. So it’s very reasonable to conclude that, because other people are like us in every other respect, they have minds.

The problem is that when you move to artificial intelligence, it is so different from human beings that it is not an obvious or reliable extrapolation. So when I test your consciousness by seeing if you produce “pain behavior,” part of the reason that that is convincing to me is that I’m already convinced that you’re the kind of being that could have a mind.

With AI, the problem is, I’m not already convinced of that. And because the system is so different from us, we run into the problem that it might produce all the same behavior. It might simulate all of the behavior you would expect from someone who is conscious.

Surely, it’s easy to program a robot, for example, that says “ow” and withdraws its hand when it touches something that’s hot. It can have heat sensors, and it can be programmed to do all that stuff. But that doesn’t give me enough reason to think that it’s really in pain. And part of the problem is that it is so different from me in terms of its makeup. It’s different from me in all…

Continue reading: https://mindmatters.ai/2021/05/can-we-apply-tests-for-consciousness-to-artificial-intelligence/

Source: mindmatters.ai