Part 5 of the Intro to Artificial General Intelligence series
“The AI quest for artificial minds has transformed the mystery of consciousness into philosophy with a deadline.”
– Dr. Max Tegmark, MIT Professor
Hello again! It has been a while since I last posted, but I hope to pick up speed and release these posts at least every other week. So, with that in mind, welcome to Part 5 of this series exploring artificial general intelligence (AGI)! If you missed the first four parts, check them out here, starting with Part 1. This week we will look at consciousness: that elusive, misused, and confusing term that tends to provoke existential crises…
Specifically, this week we are going to discuss the following questions:
- What is consciousness?
- Is consciousness related to thinking about thinking?
- What is sentience, and how is it different from consciousness?
- Are current AIs conscious?
Next week, in Part 6, we will begin our quest to elucidate practical ways in which an AGI (or proto-AGI) could be created, dealing mainly with the incredibly fascinating world of cognitive architectures.
A new post in the series will come out every week or two (I hope), and feel free to email (email@example.com) with questions, comments, or clarifications. Enjoy!
Disclaimer: There are doubtless people far more qualified to speak deeply on the issues and topics I will cover, from emotions to neuromorphic computing. To those people: this series is meant to be an introduction to those topics, so apologies if things are omitted or condensed for the sake of brevity. I really just wanted this series, and the class that inspired it, to serve as a survey of relevant AGI topics that usually are not taught when one learns about ML or AI. Also note: this series is in no way affiliated with MIT or its brand; I just like writing on the weekends about things that interest me, and MIT does not officially endorse the opinions stated here.
Spoiler Alert: No one knows.
“I exist, that is all, and I find it nauseating.”
― Jean-Paul Sartre, Being and Nothingness
I really do think we are living in a golden age of computational neuroscience. New, powerful models of vision, understanding, attention, and language are emerging at breakneck speed, and as they become more human-like, the question of “just how human-like are they?” inevitably emerges. Notice this is a rather loaded question, and…