Imagine you undergo a procedure in which every neuron in your brain is gradually replaced by functionally equivalent electronic components. Let’s say the replacement occurs a single neuron at a time, and that behaviorally, nothing about you changes. From the outside, you are still “you,” even to your closest friends and loved ones.

What would happen to your consciousness? Would it incrementally disappear, one neuron at a time? Would it suddenly blink out of existence after the replacement of some consciousness-critical particle in your posterior cortex? Or would you simply remain you, fully aware of your lived experience and sentience (and either pleased or horrified that your mind could theoretically be preserved forever)? 

This famous thought experiment, proposed by the philosopher David Chalmers in his 1995 paper “Absent Qualia, Fading Qualia, Dancing Qualia,” raises nearly every salient question in the debate over whether artificial intelligence could be conscious.

If understanding the origins of our own consciousness, and that of other species, is daunting, as nearly everyone who studies it will attest, then replicating it in machines is ambitious to an absurd degree.


Will AI ever be conscious? As with all things consciousness-related, nobody really knows at this point, and many think we may never be able to tell whether the slippery phenomenon has actually shown up in a machine.

Take the thought experiment just described. If consciousness is a unique characteristic of biological systems, then even if your brain’s robotic replacement allowed you to function in exactly the same manner as you had before the procedure, there would be no one at home on the inside, and you’d be a zombie-esque shell of your former self. Those closest to you would have every reason to take your consciousness as a given, but they’d be wrong. 

The possibility that we might mistakenly infer consciousness from outward behavior alone is not an absurd proposition. It’s conceivable that, once we succeed in building artificial general intelligence—unlike the narrow systems that exist today—that can adapt, learn, and apply itself across a wide range of contexts, the…

Continue reading: https://interestingengineering.com/will-ai-ever-be-conscious

Source: interestingengineering.com