Our sense of sight mimics the mechanism of a camera – or perhaps it’s the other way around. But that’s only half the story. Light reflected by the dog’s face enters through the lens and excites sensors on the retina – the photographic film of the eye. That, I’m afraid, is where the analogy ends. The camera doesn’t interpret the image for us. For that we fall back on the eyes – or rather, the brain behind the eyes – which is exactly what you are doing right now if you are looking at the picture.
What happens inside the brain is nothing short of a miracle. Setting aside the mechanisms we have yet to fathom, even what we do understand involves extremely complex processing. The retinal map, in the form of electrical signals, is processed in many layers. The first layer deciphers the edges of the dog’s face and passes that outcome on to the next layer for somewhat more detailed processing, and so it continues until we know it’s a dog’s face. But the brain needs to compare the outcome with something it knows, and that is taken care of by our past experiences: we have seen many dogs before. This, most books say, sums up how we experience seeing. The image-analysis part of artificial intelligence is based on this kind of processing, and the ‘comparison with past experiences’ part is mimicked in deep learning. The recent success of AI would then suggest that this ‘outside in’ mechanism really reflects how the conscious brain works. Right? Wrong – according to Anil Seth.
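For the curious, that first ‘edge-deciphering’ layer can be sketched in a few lines of code. The sketch below is illustrative only: it convolves a toy image (an assumption, not anything from the text) with a standard Sobel kernel, the kind of edge detector that both early visual neurons and the first layer of a convolutional network loosely resemble.

```python
import numpy as np

def convolve2d(image, kernel):
    """Minimal 'valid'-mode 2D convolution: slide the kernel over the
    image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Sobel kernel: responds strongly to vertical dark-to-bright transitions.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: dark left half, bright right half -- one vertical edge.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

edges = convolve2d(image, sobel_x)
# The response is large only where the window straddles the edge,
# and zero in the flat regions on either side.
```

Later layers would combine such edge responses into curves, textures, and eventually something the network can match against ‘dog faces it has seen before’.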
In his book Being You, Seth argues that it all starts with predictions based on our experiences; sensory signals merely assist with error correction. In the end, what we see is our best guess after all the corrections. It is ‘inside out’ processing, not ‘outside in’, that gives us our conscious experience. The world our conscious mind sees is indeed a hallucination, albeit a controlled one. That also explains why we sometimes see only what we want to see (the dog’s face in this picture, for instance). The extra focus added by the camera loosely mimics the ‘precision weighting’ aspect of the brain’s mechanism.
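Seth’s ‘inside out’ idea has a simple mathematical skeleton that is worth seeing once. The sketch below is my own single-variable illustration, not anything from the book: the brain holds a prediction, receives an observation, and corrects the prediction by an amount weighted by how precise (reliable) each signal is – the standard Bayesian update. All the numbers are made up for illustration.

```python
def perceive(prediction, prior_precision, observation, sensory_precision):
    """Combine a prior prediction with a sensory observation.

    Precision is the inverse of variance: the more precise signal gets
    more weight. This is the textbook Bayesian (Kalman-style) update
    for a single variable.
    """
    gain = sensory_precision / (prior_precision + sensory_precision)
    prediction_error = observation - prediction
    best_guess = prediction + gain * prediction_error
    return best_guess

# Strong prior, noisy senses: the percept stays near the prediction --
# we 'see what we expect to see' (result is 0.9, close to the prior of 1.0).
confident_prior = perceive(prediction=1.0, prior_precision=9.0,
                           observation=0.0, sensory_precision=1.0)

# Attention boosts sensory precision ('precision weighting'): the same
# input now drags the percept close to the observation (about 0.1).
attended_input = perceive(prediction=1.0, prior_precision=1.0,
                          observation=0.0, sensory_precision=9.0)
```

The second call is the coded analogue of the camera’s extra focus: turning up the precision of one signal changes which ‘best guess’ wins.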