I think it is Semir Zeki who mentions, in The Neurology of Ambiguity, a certain Law of Constancy - which states that the brain will always look for constant things.
I remember stating in one of my blog posts that, on the contrary, I think the brain is actually constantly searching for different things, and that I had a hunch that both phenomena were true and represented two faces of the same coin.
We cannot understand the world without projecting already known things onto it, no matter how different the world is from our internal projection. We could argue that the process of understanding per se is an effort to match the objects of our present to analysed objects of our past, or objects that we don't understand to objects that we think we understand better, or know more about.
We seem to constantly understand things referentially - that is to say, we always understand things through other things we have understood before, or in their relationship (similarity, difference, type of interaction) with other things about which we already know something. The ultimate thing that we relate everything to is ourselves, which is why, I guess, the I has such an important internal status. (Of course, we can even deconstruct the I - our knowledge about anything is not perfect, and I am not sure there can be such a thing as perfect knowledge.)
This entire corpus of knowledge that we carry with us involves the meanings we create about things. Is this web of relationships an accident, or is there something in our neural networks that predisposes us to learning in this way?
If the law of constancy is true, we constantly scan the world for things that are the same as what we know, because those things already have a meaning for us. We can use them as points of reference for the things that are different, and processing what is similar faster means we already acquire a large amount of information and know what to focus on next.
The differences require more processing power.
I remember reading for the first time about the theory of expectancy, which states that we always formulate predictions about the future. I think we do the same when we encounter things that are different from what we know - that is, we venture guesses based on our experience with things that are similar to the new ones.
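To make the idea a bit more concrete, here is a toy sketch of that two-speed process - entirely my own illustration, not anything taken from Zeki or from expectancy theory, with made-up prototypes, features and threshold. An incoming object is first matched against what is already known; if it is close enough, the stored meaning is reused cheaply, and if not, the nearest known thing becomes a ventured guess while the object is flagged for more processing.

```python
import numpy as np

# Hypothetical stored knowledge: a few prototype feature vectors and the
# "meaning" (label) attached to each. The vectors and the threshold are
# arbitrary illustrations, not real perceptual features.
PROTOTYPES = {
    "cup":   np.array([0.9, 0.1, 0.3]),
    "chair": np.array([0.2, 0.8, 0.7]),
    "tool":  np.array([0.5, 0.4, 0.9]),
}
FAMILIARITY_THRESHOLD = 0.25  # arbitrary cut-off between "constant" and "new"

def perceive(features):
    # Look for the constant first: find the closest known thing.
    name, prototype = min(PROTOTYPES.items(),
                          key=lambda item: np.linalg.norm(item[1] - features))
    distance = np.linalg.norm(prototype - features)
    if distance <= FAMILIARITY_THRESHOLD:
        # Fast path: close enough to something known, so its stored
        # meaning can be reused with little extra work.
        return {"guess": name, "needs_more_processing": False}
    # Slow path: the object is different. Keep the nearest category as a
    # ventured guess, but flag the object for deeper analysis.
    return {"guess": name, "needs_more_processing": True}

print(perceive(np.array([0.85, 0.15, 0.3])))   # close to "cup" -> fast path
print(perceive(np.array([0.10, 0.10, 0.10]))) # unlike anything -> slow path
```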
I think there are at least three different strengths a system could exhibit in attempting to make predictions - or venture guesses (I prefer the "venture guesses" form when it comes to encountering new objects, as I think the purpose of the process is not to predict the new object completely, but to narrow down, if possible, its functional category):
- having a database of knowledge that is close to the new object (in terms of meaning, functionality, etc.)
- having the ability to follow long-term change in objects, families of objects, differences among elements of the same category, and object evolution - and having, as a consequence, a better ability to mentally manipulate transformations of objects and thus venture more appropriate guesses as to what an object might have been related to in the first place, or what it might become or spawn in the future
- having a generally higher-than-average exposure to novel objects, and thus being better at categorising and manipulating novel objects (storing them in a partially determined category, or creating a momentary category) until they are better categorised.
It would be interesting to build systems that instantiate those different evolutionary advantages, and see which one does best, or if they fare similarly, although I suppose the parameters for past experience would take some time to define.
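Here is a rough sketch of what such a comparison could look like - a toy simulation of my own, not a serious model: three agents categorise novel objects by looking up the nearest remembered exemplar, and they differ only in how their past experience was gathered (dense and close, spread along the trajectory of change, or broad and scattered). All the numbers - the 2-D feature space, the drift, the sample sizes - are invented stand-ins for exactly the "past experience" parameters that would take time to define.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2
CATEGORIES = 4

def make_category_centres():
    # Four "families" of objects, each a cluster in a 2-D feature space.
    return rng.normal(0.0, 5.0, size=(CATEGORIES, DIM))

def sample(centres, cat, n, drift=0.0):
    # Draw n objects of a category; drift simulates object evolution over time.
    shift = drift * np.ones(DIM)
    return centres[cat] + shift + rng.normal(0.0, 1.0, size=(n, DIM))

def venture_guess(memory_x, memory_y, query):
    # Label the query with the category of the most similar remembered object.
    distances = np.linalg.norm(memory_x - query, axis=1)
    return memory_y[np.argmin(distances)]

def evaluate(memory_x, memory_y, centres, drift, trials=200):
    correct = 0
    for _ in range(trials):
        cat = rng.integers(CATEGORIES)
        query = sample(centres, cat, 1, drift=drift)[0]
        correct += int(venture_guess(memory_x, memory_y, query) == cat)
    return correct / trials

centres = make_category_centres()
DRIFT = 2.0  # how far the "novel" present has moved away from the remembered past

# Agent 1: close knowledge -- many exemplars, all from the undrifted past.
x1 = np.vstack([sample(centres, c, 60) for c in range(CATEGORIES)])
y1 = np.repeat(np.arange(CATEGORIES), 60)

# Agent 2: follows change -- fewer exemplars per point in time, but sampled
# along the drift trajectory (past, intermediate, extrapolated).
x2 = np.vstack([sample(centres, c, 20, drift=d)
                for c in range(CATEGORIES) for d in (0.0, DRIFT / 2, DRIFT)])
y2 = np.repeat(np.arange(CATEGORIES), 60)

# Agent 3: broad exposure -- the same amount of experience, but scattered
# far more widely around the familiar clusters.
x3 = np.vstack([sample(centres, c, 60) + rng.normal(0.0, 3.0, size=(60, DIM))
                for c in range(CATEGORIES)])
y3 = np.repeat(np.arange(CATEGORIES), 60)

for name, (mx, my) in {"close knowledge": (x1, y1),
                       "follows change": (x2, y2),
                       "broad exposure": (x3, y3)}.items():
    print(f"{name}: accuracy on novel objects = {evaluate(mx, my, centres, DRIFT):.2f}")
```

Which agent comes out on top depends entirely on how the drift and the memories are parameterised - which is, I suppose, precisely the point of the question.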
If you could choose which system to be, what to have as an advantage, or if you could simply have a cognitive upgrade that instantiated one and only one of the advantages above, which one would you choose?
I will think of my answer and let you know.