In this podcast, Carlos R. Ponce, M.D., Ph.D., Assistant Professor in the Department of Neuroscience at Washington University School of Medicine, discusses visual consciousness.
How do brain regions communicate during motion processing and visual object recognition? Using a combination of reversible deactivation, sophisticated microstimulation techniques, and advanced modeling, Dr. Ponce and his team seek to understand the brain processes, cells, and signals that create visual consciousness. Cells in the occipital lobe respond to basic features such as lines and dots; these cells 'talk' to other cells that respond to more complicated features such as corners and curvature, which in turn communicate with still other cells tuned to increasingly complex properties such as texture. Step by step, this hierarchy leads to the final goal: conscious understanding of what is seen.
Dr. Ponce traces the path a visual signal takes, from light first striking the retina, to the cells of the primary visual cortex, and onward to the temporal cortex, where visual comprehension is achieved. He talks about AI, explaining how artificial neural networks receive information and respond to complex shapes, and he shares his view that the human brain can itself be seen as a form of neural network. The Harvard Ph.D. provides an overview of the current state of machine learning and discusses machine vision, offering examples of early experimentation in the field.
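The layering principle Dr. Ponce describes, in which simple edge-responsive cells feed cells tuned to conjunctions such as corners, is the same idea that underlies artificial neural networks. The following is a minimal, hypothetical Python/NumPy sketch of that idea only; the filters, the toy image, and the function names are illustrative assumptions, not anything taken from the episode or from Dr. Ponce's actual models. A first "layer" of oriented filters stands in for primary visual cortex cells, and a second layer combines their rectified outputs into a simple corner detector.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D filtering (the 'convolution' of deep-learning libraries), done with explicit loops."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# Layer 1: orientation-selective filters, a rough stand-in for cells
# that respond to simple features such as lines and edges.
vertical_edge = np.array([[-1, 0, 1],
                          [-1, 0, 1],
                          [-1, 0, 1]], dtype=float)
horizontal_edge = vertical_edge.T  # same filter rotated 90 degrees

# Layer 2: combine rectified layer-1 outputs, a stand-in for cells that
# respond to conjunctions such as corners (both orientations present).
def corner_response(image):
    v = np.maximum(conv2d(image, vertical_edge), 0)    # rectify, like a firing rate
    h = np.maximum(conv2d(image, horizontal_edge), 0)
    return v * h  # high only where vertical and horizontal edge energy co-occur

# Toy image: a bright square on a dark background.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0

print("max layer-1 (edge) response:", conv2d(img, vertical_edge).max())
print("max layer-2 (corner) response:", corner_response(img).max())
```

Running this on the toy square shows the second layer responding only near the square's corners, where both oriented filters are active at once, while straight edges excite only the first layer. Stacking more such layers is, loosely, how both the visual hierarchy and modern machine-vision networks build up responses to increasingly complex shapes.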