I was reading Semir Zeki's article on the neurobiology of ambiguity and couldn't help but jump at this paragraph, thinking of how it could relate to the frame problem:
"The primary law dictating what the brain does to the signals that it receives is the law of constancy. This law is rooted in the fact that the brain is only interested in the constant, essential and non-changing properties of objects, surfaces, situations and much else besides, when the information reaching it is never constant from moment to moment. Thus the imperative for the brain is to eliminate all that is unnecessary for it in its role of identifying objects and situations according to their essential and constant features."
The frame problem is, in simple terms, the difficulty AI scientists encounter when they have to define for an artificial system all the things that a human takes for granted - the common knowledge, the "known". The problem grows when the same AI system also has to be taught how to detect change. If the AI agent - let's call it robot Tod for simplicity - doesn't know what has changed, how can it understand what in a situation is dangerous or relevant to it?
Dangerous and relevant are especially connected here, when one wants to program, evolve or otherwise teach robot Tod to avoid unpredictable dangers. Otherwise one busy programmer (or better yet, an army of them) would have to write specific code for Tod to survive every potential circumstance it might encounter. And whenever the circumstances changed, even slightly, Tod would be so confused that the programmer(s) would need to go back to their keyboards for more work.
Therefore one would prefer Tod to be lovely and independent and to understand danger, threat, maybe even possibility. For that, Tod would need attention and would need to know what to focus it on. The frame problem is such that Tod has to recompute and reassess every object in its environment to see whether anything has changed before it can make a decision about anything.
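To make the predicament concrete, here is a minimal sketch (in Python, with invented names - NaiveTod, WorldObject and everything else below are purely hypothetical) of what a naive Tod would have to do: re-examine every property of every object before every single decision, with no notion of which of those properties deserve attention at all.

```python
# A toy illustration (hypothetical names throughout) of why naive change
# detection is crippling: before every decision, Tod re-checks every
# property of every object, because nothing tells him what is safe to ignore.

from dataclasses import dataclass, field

@dataclass
class WorldObject:
    name: str
    properties: dict = field(default_factory=dict)  # e.g. {"colour": "grey", "position": (2, 3)}

class NaiveTod:
    def __init__(self, world: list[WorldObject]):
        # Tod's "knowledge" is a full snapshot of every property of every object.
        self.snapshot = {obj.name: dict(obj.properties) for obj in world}

    def detect_changes(self, world: list[WorldObject]) -> list[str]:
        """Re-examine *everything* before acting: O(objects x properties) per decision."""
        changes = []
        for obj in world:
            remembered = self.snapshot.get(obj.name, {})
            for key, value in obj.properties.items():
                if remembered.get(key) != value:
                    changes.append(f"{obj.name}.{key} changed to {value!r}")
            self.snapshot[obj.name] = dict(obj.properties)
        return changes

# Tod has no notion of relevance: a cushion that moved one centimetre and a
# fire that just started are both merely "changes", and anything his
# programmers never enumerated as a property does not exist for him at all.
```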
Before proving any type of intelligence, robot Tod has to know what to be intelligent about.
Which brings me back to the point. When Zeki formulated the human brain's interest in what is constant, I thought, "but no, the human brain is interested in what's different; that's what's brilliant about the human brain - it doesn't need to focus on the things that are constant, it seems to instinctively and (almost) instantly compute what is different and focus attention resources on it!" And then the complementarity of the two statements hit me: the complementarity of Zeki's claim that the brain is interested in finding constants (and apparently wants to ignore differences) with the brain's obvious ability to find differences, to detect change and focus on it.
The brain's preference for constancy might very well be what allows it to detect change after all! If we didn't have such an ability for abstraction and for building categories, we could never dedicate spare resources to the things that look different.
We store a plethora of characteristics about phones, computer screens and bookcases, and that is precisely what makes us insensitive to seeing yet another one of them. If something belongs to a category we have encountered before, unless we have a particular interest in analysing that type of object, we won't really see it at all. The brain will consider it irrelevant and won't bother to call in attentional resources to analyse it. The corresponding neural networks for those categories will stay at rest. But when a friend pulls out a phone with a big yellow dot on the screen, we will stare at it in surprise, while our brain keeps computing, alarmed, comparing it with the "normal" phones encountered in our past.
So emitting constant expectations (at various levels of abstraction) and focusing on an element of our environment only when it defies those expectations is part of our brain's normal activity.
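Here is a rough sketch of that "expect constancy, attend only to violations" idea, kept deliberately toy-sized: the category prototypes, feature names and surprise threshold below are my own inventions for illustration, not anything taken from Zeki's article.

```python
# A rough sketch of "expect constancy, attend only to violations".
# Category prototypes and the surprise threshold are invented for illustration.

CATEGORY_PROTOTYPES = {
    # Each category stores only a handful of essential, expected features.
    "phone": {"shape": "slab", "screen": "uniform", "size": "hand-sized"},
    "bookcase": {"shape": "shelves", "contents": "books", "size": "large"},
}

SURPRISE_THRESHOLD = 1  # how many violated expectations it takes to grab attention

def attend(category: str, observed: dict) -> bool:
    """Return True only if the observation defies the stored expectations."""
    expected = CATEGORY_PROTOTYPES.get(category)
    if expected is None:
        return True  # no prior category at all: novel, so attend
    violations = sum(1 for k, v in expected.items() if observed.get(k, v) != v)
    return violations >= SURPRISE_THRESHOLD

# An ordinary phone is silently ignored; the phone with a big yellow dot
# on its screen violates the "uniform screen" expectation and wins attention.
print(attend("phone", {"shape": "slab", "screen": "uniform", "size": "hand-sized"}))         # False
print(attend("phone", {"shape": "slab", "screen": "big yellow dot", "size": "hand-sized"}))  # True
```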
This is where another important point arises, one on which Zeki's article and general views can throw some light: how much knowledge is enough for the brain to stop focusing on an object?
Many AI experiments have failed because they tried to endow their creations with what one could call high-resolution representations of objects.
Think of a cat! The representation that first springs to mind is not going to be that high-res, unless you have a cat that is special to you or you are purposefully trying to defy this thought experiment. Of course you could create a higher-resolution representation of a cat, but even then, it would take a lot of focus to build it in your "inner mind" and I doubt it would reach picture resolution. If it did, and you can hold it in your mind for long, you might want to get into the graphic arts :D If you are into the graphic arts and it didn't, well, that is why we need art and why we search for perfection (read: richness of information + aesthetics).
The point of this is that although we can create in real life marvellous objects with impressive graphical qualities and breathtaking resolution, in our minds we really store only the most important features of objects (which are hard to program into robot Tod). Upon recall (unless you stubbornly tried to imagine as much detail as possible about your cat) we tend to retrieve just those important features. More than that, each sensory modality seems to store different features about the same objects: think about stroking a cat for a moment without imagining what the cat actually looks like. It's a distinct furry feeling, and you might imagine the actual size of the cat (which is a tactile-detectable feature). You might be tricked into caressing a different, similarly furry animal and thinking it is a cat, until you detect a different body size (that is how we create metaphors). Each sense has its limits, and visual input is definitely the highest-resolution one for us humans: finding something that looks like a cat but isn't one is much harder, which is what makes vision such a great tool.
It isn't that we just compare the image of a cat to an internal representation; through vision we also have access to information about its movement, its behaviour, and so on. Better yet, most of us are lucky enough to explore the world with all of our senses at the same time. So finding something that looks like a cat and smells like one while purring at the same time is close to impossible. And if it did happen, we might even be entitled to loosen the boundaries of our Cat category to include the unexpected object. Because, for our senses, it would be relevant to classify it as a cat, or as something very similar to one, with the only difference that *you_fill_in* (you would have to study why it isn't completely one; or maybe you shouldn't, if it's not relevant for your survival or curiosity, i.e. you can live without it - survival again).
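Here is a small sketch of how such low-resolution, per-modality feature sets might be compared, and of how a category boundary could be "loosened" when almost everything matches. The feature lists and the 0.8 threshold are assumptions made up for the example, not a claim about how the brain actually does it.

```python
# A small sketch of per-modality, low-resolution category features, and of
# "loosening" a category when almost all of them match. Feature names and
# the 0.8 threshold are assumptions for illustration only.

CAT_FEATURES = {
    "vision": {"fur", "four legs", "tail", "cat-sized"},
    "touch": {"furry", "cat-sized"},
    "sound": {"purring"},
    "smell": {"cat smell"},
}

def match_score(observed: dict[str, set]) -> float:
    """Fraction of stored features matched, counted only over the modalities actually used."""
    stored = [(m, f) for m, feats in CAT_FEATURES.items() if m in observed for f in feats]
    if not stored:
        return 0.0
    hits = sum(1 for m, f in stored if f in observed[m])
    return hits / len(stored)

def classify(observed: dict[str, set]) -> str:
    score = match_score(observed)
    if score == 1.0:
        return "cat"
    if score >= 0.8:
        # Close enough: loosen the Cat category rather than reject the object outright.
        return "cat-like: loosen the Cat category to include it"
    return "not a cat"

# Touch alone is easy to fool: any furry, cat-sized animal matches every tactile feature.
print(classify({"touch": {"furry", "cat-sized"}}))

# Add vision and sound: everything fits except the body size under your hand,
# so the category boundary stretches rather than snapping shut.
print(classify({"vision": {"fur", "four legs", "tail", "cat-sized"},
                "touch": {"furry"},
                "sound": {"purring"}}))
```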
It's not the first time I've thought that understanding how we humans form our mobile, imperfect but extremely reliable categories would move AI forward a notch or two. If only robot Tod could recognise cats that easily, and ignore them, so as to pay attention to the threats or changes in the room!
The secret must lie somewhere in our ability for abstraction, in paying attention to something only if it stands in vast contradiction with what we expected of it, and in encoding just enough features to differentiate between things without making representations hard to load and compare.
Maybe in a future post I will explain how I imagine that happening in AI, along with some of my experiments in modelling something in that direction with neural networks.