Sunday, June 3, 2018

Vision, Accuracy, and the Right Brain


In a TED talk I can't seem to forget, Iain McGilchrist, in RSA Animate fashion, walks through the two sides of ourselves: the two sides of our brains, the two hemispheres. I don't think I need to explain the Left Brain - Right Brain distinction, given its general popularity. I'll only say that it's not an absolute thing; it's a heuristic for understanding how our complicated heads run all that bugged-out code up there.

The part of McGilchrist's talk I can't forget is his suggestion that humans have been heading in the Left direction for as far back as we can remember, but that it may soon be time for the pendulum to swing the other way. It's hard to believe. We can't have megacities without the Left Brain. No spaceships, no biodomes, no nanofabricated body modifications.

Or can we? If you read about contemporary advances in artificial intelligence, you might be thinking that it's quite possible.

Much of what's new in this field (actually old ideas running on new hardware) is based on a pretty revolutionary paradigm. I'm referring to deep learning neural nets and, more generally, the idea of machine learning. The term describes an approach to computing that uses a kind of brute force instead of a superintelligent program. It's like the Wisdom of the Crowds plus computers: instead of one thousand guesses, there are 100 trillion guesses. (I say 'a kind of brute force' because in other contexts, some would call the traditional method brute-force computation.)
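To make the many-guesses idea concrete, here's a minimal sketch in Python (my choice of language, nothing from the field's actual toolkits): the classic Monte Carlo estimate of pi. There's no cleverness anywhere in it, just a pile of random guesses that averages out to an answer.

```python
import random

def estimate_pi(num_guesses: int) -> float:
    """Approximate pi by throwing random darts at the unit square
    and counting how many land inside the quarter circle."""
    inside = sum(
        1
        for _ in range(num_guesses)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_guesses

# More guesses buy a better answer, but never an exact one.
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10} guesses -> {estimate_pi(n):.5f}")
```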

So the answer these systems come up with isn't exact; it's approximate. But when you have to query petabytes of data, you can no longer expect an exact answer.
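A sketch of that trade-off, again hedged: the dataset below is made up, but it stands in for something far too big to scan record by record. Instead of computing the exact mean, you peek at a small random sample and settle for close enough.

```python
import random

random.seed(42)

# Hypothetical dataset standing in for something too big to scan exactly.
population = [random.gauss(50.0, 10.0) for _ in range(1_000_000)]

# Exact answer: touch every record (ruinously expensive at petabyte scale).
exact_mean = sum(population) / len(population)

# Approximate answer: a 1% random sample gets within a rounding error.
sample = random.sample(population, k=10_000)
approx_mean = sum(sample) / len(sample)

print(f"exact:  {exact_mean:.3f}")
print(f"approx: {approx_mean:.3f}")
```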

Check out, for example, this new thing where scientists Frankenstein a neural net and a cell phone together into a makeshift microscope as powerful as one in a high-grade laboratory. We no longer need to record every last pixel to get a clear picture of what we're looking at. Instead we teach an algorithm to see, and it does something that is, in theory, similar to what our brains do when we see. We make stuff up, we fill in the gaps, we approximate. (In actuality, the algorithm is taught not how to see, but how to learn to see.)
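The real system is a trained deep convolutional network, and I don't have its details; what follows is a drastically simplified, hypothetical stand-in for the fill-in-the-gaps idea, with blurry 1-D signals playing the role of cell-phone images and a learned linear filter playing the role of the neural net.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_signal(n=64):
    """A toy 'image': a smooth 1-D signal built from a few sine waves."""
    t = np.linspace(0, 1, n)
    return sum(np.sin(2 * np.pi * f * t) for f in rng.uniform(1, 5, size=3))

def blur(x):
    """Simulate a cheap lens by smearing neighboring pixels together."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

# Training pairs: (blurry 5-pixel neighborhood) -> (true sharp center pixel).
X, y = [], []
for _ in range(200):
    sharp = make_signal()
    blurry = blur(sharp)
    for i in range(2, len(sharp) - 2):
        X.append(blurry[i - 2 : i + 3])
        y.append(sharp[i])
X, y = np.array(X), np.array(y)

# "Learning to see": a least-squares fit of a tiny deblurring filter.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

# The learned filter now guesses detail the blurry input never recorded.
sharp = make_signal()
blurry = blur(sharp)
i = 30
print(f"blurry pixel:     {blurry[i]:+.4f}")
print(f"reconstructed:    {blurry[i - 2 : i + 3] @ weights:+.4f}")
print(f"true sharp pixel: {sharp[i]:+.4f}")
```

The point isn't the toy math; it's that the "microscope" ends up sharper than its optics, because the missing detail was filled in by training rather than measured.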

We live in a post-truth world, right? Facts don't matter; belief matters. Being right is not as important as convincing people that you're right. But that's a tangent. We also live in a world of Big Data, so big we really don't know what to do with it all. We simply can't make computers powerful enough to handle all of it exactly. But we have this new approach that can scale up. It approximates, which is not what we're used to, but it does reach the scale of Big Data.

Have we reached an inflection point where our technology is starting to act more like our brain (the whole brain, not just the left side)? Is this where we find out, hundreds of years after the Enlightenment and the Scientific Revolution, that absolute certainty is not the ultimate goal of every knowledge-gathering endeavor? Just speculating here, but it sure seems like we're headed for a future that looks more like a wet biological mess than a crystallized spreadsheet.

Notes

"Deep learning transforms smartphone microscopes into laboratory-grade devices", phys.org, May 2018

Iain McGilchrist, "The Divided Brain", 2011 (a TED talk based on his book)

Bicameralism: Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind, 1976
