Biology and Large Language Models
We were promised a conscious being, but what we got instead is a semantic calculator — and honestly, that's good enough for me.
Not being conscious doesn't mean LLMs can't create value. It just means they're tools. And in many practical applications, a tool is exactly what we need.
From my perspective, the current debate is too heavily influenced by media-driven sci-fi expectations and abstract philosophical questions. Instead, we should be focusing on how this technology solves specific, real-world problems in various industries.
Some AI industry leaders claim we're on the brink of developing algorithms that think smarter and faster than humans. But according to a recent paper by researchers at Anthropic, current AI systems are not capable of understanding their own “thought processes.” In other words, they are far from anything we could reasonably call conscious.
If you're curious, here's a great breakdown of the paper: watch the video explanation.