How do infants learn to speak 🗣️?; Harvard 🏛️ proposes a new mathematical framework for biologically 🧠 inspired Reinforcement Learning; Modelling neuronal avalanches 🏂

Why should you care about Neuroscience?

Neuroscience is at the root of today's artificial intelligence 🧠🤖. Reading and keeping up with new insights from neuroscience will not only make you a better “Artificial Intelligence” person 😎, but also a finer creator of neural network architectures 👩‍💻!

August brings three complicated papers, which I have tried to make simple and digestible. I guess many of you have wondered how infants put all the sounds together and learn a language 🗣️ Rohrlich and O’Reilly, from the University of California, Davis, tackle this question by developing fully biologically inspired neural networks that reproduce experimental findings on infants first reported back in 1996! The second paper comes from Harvard researchers. It’s a mathematical one, a fantastic read, in which Reinforcement Learning is reviewed in terms of reward prediction error and grounded in biological assumptions: 🔝. The third paper is one I loved, as it’s related to statistical physics — my hidden passion. The authors show how a simple Ising model, driven slightly out of equilibrium, can model phase transitions in the brain, mimicking antithetic dynamics such as neuronal avalanches 🔥 and neuronal oscillations 📶. Enjoy 🙂
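If “reward prediction error” sounds abstract: at its simplest it is the temporal-difference delta rule, the mismatch between the reward you got and the reward you expected. Here is a minimal, illustrative TD(0) sketch on a made-up three-state chain — hypothetical states, rewards and parameters, not the actual framework from the Harvard paper:

```python
# Minimal TD(0) sketch of learning from reward prediction errors.
# Toy 3-state chain with a single reward at the end; purely illustrative.

alpha, gamma = 0.1, 0.9        # learning rate, discount factor
V = {0: 0.0, 1: 0.0, 2: 0.0}   # value estimate for each state
# One episode as (state, reward received, next state) transitions:
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]

for _ in range(500):           # replay the same episode many times
    for s, r, s_next in episode:
        v_next = V[s_next] if s_next is not None else 0.0
        delta = r + gamma * v_next - V[s]   # reward prediction error
        V[s] += alpha * delta               # nudge estimate toward target

# Values converge to the reward discounted back along the chain:
# V[2] -> 1.0, V[1] -> 0.9, V[0] -> 0.81
```

The `delta` term is the quantity usually mapped onto dopamine signalling in the biological story: positive when things go better than predicted, negative when they go worse.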

John Rohrlich, Randall C. O’Reilly, Paper

How can we, as infants, learn languages from pure raw sensory experience? The more you think about it, the more complicated it sounds: how does an infant’s brain manage to pick out and arrange sounds, and transform them, within a few months, into a real language? This problem is known in neuroscience as statistical learning, and it has been the centre of years and years of research. Indeed, decades of studies have been devoted to building up the predictive learning approach, a mechanism that could be the driving process behind statistical learning. In this paper, the authors build an artificial neural network to provide insights into statistical learning, mimicking corticothalamic circuitry, neocortical interconnections and thalamic nuclei. The model sheds light on the speech segmentation paradigm in infants. In a nutshell, infants predict the next in-word syllable in speech — viewed as raw auditory experience — and from there they start…
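A classic way to make this concrete (in the spirit of the 1990s infant experiments, not of the authors’ biologically detailed network) is to track transition probabilities between syllables: within a word they are high, across word boundaries they drop, so dips in predictability mark where words end. A tiny sketch with a made-up syllable stream:

```python
from collections import Counter

# Toy syllable stream built from two hypothetical "words"
# (bi-da-ku and pa-do-ti) repeated in varying order.
stream = ("bi da ku pa do ti pa do ti bi da ku "
          "bi da ku pa do ti").split()

pairs = Counter(zip(stream, stream[1:]))   # counts of adjacent syllable pairs
firsts = Counter(stream[:-1])              # counts of each leading syllable

def transition_prob(a, b):
    # P(b | a): how often syllable a is immediately followed by b
    return pairs[(a, b)] / firsts[a]

print(transition_prob("bi", "da"))  # within-word: always followed, so 1.0
print(transition_prob("ku", "pa"))  # word boundary: less predictable, < 1.0
```

A learner segmenting at low-probability transitions would cut the stream exactly at the boundaries between the two “words” — which is, roughly, the statistical regularity the infants in those experiments appear to exploit.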

Continue reading: