this is weird, am i understanding this wrong?
you just have to upgrade the synapses so they respond to matches by Euclidean distance?
that's a simple idea, and it gets you more stable object detection, unsupervised?
it's still not a huge gain, because Euclidean distance is so slow to compute. If a synapse had to compute it when it trained, and every time it activated (well, it trains when it activates), you're going to be waiting a long time; it's way off realtime, that's all. It's terrible memory-wise too, because it's as if the synapse touches an all-to-all connected dendrite for every touch...
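To make the cost argument concrete, here is a minimal sketch (my own illustration, not from the paper being discussed) of what per-synapse Euclidean-distance matching would mean: every activation compares a full stored template against the input, so S synapses over d-dimensional patches pay O(S * d) work per event and O(S * d) memory, which is the "all-to-all dendrite per synapse" blow-up described above.

```python
import numpy as np

def synapse_activates(stored_patch, input_patch, threshold=0.5):
    """Hypothetical synapse: fires when the input patch lies within a
    Euclidean-distance threshold of its stored template."""
    dist = np.linalg.norm(np.asarray(stored_patch) - np.asarray(input_patch))
    return dist < threshold

# Cost illustration: S synapses, each holding a d-dimensional template,
# do O(S * d) distance work per input event and hold O(S * d) memory.
rng = np.random.default_rng(0)
d = 32 * 32                      # e.g. a 32x32 image, flattened
S = 10_000                       # number of synapses (arbitrary)
templates = rng.random((S, d))   # each synapse stores a full template
x = rng.random(d)                # one incoming input event

dists = np.linalg.norm(templates - x, axis=1)  # S * d multiply-adds
active = dists < dists.mean()                  # some threshold rule
```

With a simple weighted sum a synapse costs one multiply per event; here it costs d, which is where the "way off realtime" worry comes from.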
if this were the case, it must have been known already way back in the 50s, because it's such a simple idea; we just still don't have the computation power, and custom hardware would need connections too intricate to be feasible at all.
well, how hard is it to make a mechanical brain? It's harder than a computer, not really cell-wise, it's just that the dendrites and synapses are impossibly intricate to design.
whoops, I got it wrong: there's no distance relationship past the first level. But the strange thing is, he does it on the last level? To what gain?? Associated pixels are now not related at all, unless they are?
otherwise why did he do it?
the mystery is what makes programming so thrilling!
I didn't get to the end. When I realised he had limited his work to 32 by 32 pixel images, I lost interest.
He does talk about "activity in the final layer", maybe that's the key.
it could scale to higher res with linear growth in cost, but the slope is probably steep.