Neuroscientist Ryota Kanai, founder and CEO of Tokyo-based startup Araya, aims to “understand the computational basis of consciousness and to create conscious AI.” He isn’t sure, he says, if we want AI to be conscious. But, technically, he doesn’t see it as an insurmountable problem:
If we can’t figure out why AIs do what they do, why don’t we ask them? We can endow them with metacognition—an introspective ability to report their internal mental states. Such an ability is one of the main functions of consciousness. It is what neuroscientists look for when they test whether humans or animals have conscious awareness. For instance, a basic form of metacognition, confidence, scales with the clarity of conscious experience. When our brain processes information without our noticing, we feel uncertain about that information, whereas when we are conscious of a stimulus, the experience is accompanied by high confidence: “I definitely saw red!” …
If we consider introspection and imagination as two of the ingredients of consciousness, perhaps even the main ones, it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine. We want our machines to explain how and why they do what they do. Building those machines will exercise our own imagination. It will be the ultimate test of the counterfactual power of consciousness.
Ryota Kanai, “Do you want AI to be conscious?” at Nautilus (June 9, 2021)
Between the two statements quoted above, Kanai describes his team’s various efforts to make machines think like people, on the assumption that the basis of consciousness is computational.
It all seems confused. First, “metacognition” means “thinking about what we are thinking.” To do that, we must actually be thinking, not merely computing. Developing a machine that can think, as opposed to merely compute, would seem like a necessary first step. Anything like consciousness (which includes metacognition) lies well beyond that.
Also, what does it mean to say, “it is inevitable that we eventually conjure up a conscious AI, because those functions are so clearly useful to any machine”? Immortality seems “so clearly useful” to human beings too. Is it inevitable that we will conjure it up?
The essay is a classic of promissory thinking: past successes are taken to predict future successes. But not so fast. Everything has limits. It is easy to make great strides when we…
Continue reading: https://mindmatters.ai/2021/06/neuroscientist-conscious-ai-is-not-an-insurmountable-problem/