Monday 22 November 2010

Prof. Zeki's Neurology of Ambiguity and the Frame Problem

I am reading Semir Zeki's article on the neurobiology of ambiguity and couldn't help but jump at this paragraph, thinking of how it could relate to the frame problem:
"The primary law dictating what the brain does to the signals that it receives is the law of constancy. This law is rooted in the fact that the brain is only interested in the constant, essential and non-changing properties of objects, surfaces, situations and much else besides, when the information reaching it is never constant from moment to moment. Thus the imperative for the brain is to eliminate all that is unnecessary for it in its role of identifying objects and situations according to their essential and constant features."
The frame problem is, in simple terms, the difficulty that AI scientists encounter when having to define for an artificial system all the things that a human takes for granted - the common knowledge, the "known". The problem expands when having to teach the same AI system how to detect change. If the AI agent - let's call it robot Tod for simplicity - doesn't know what has changed, how can it understand what in a situation is dangerous or relevant to it?
Dangerous and relevant are especially connected here, when one wants to program, evolve or otherwise teach robot Tod to avoid unpredictable dangers. Otherwise one busy programmer (or better yet, an army of them) would have to write all the specific code for Tod to survive every potential circumstance it might encounter. And when the circumstances changed, even slightly, Tod would be so confused that the programmer(s) would need to go back to their keyboards for more work.
Therefore one would prefer Tod to be lovely and independent and to understand danger, threat, maybe even possibility. For that, Tod would need to have attention and to know what to focus it on. The frame problem is such that Tod needs to compute and reassess all the objects in its environment to see if anything has changed before it can make a decision about anything.
Before proving any type of intelligence, robot Tod has to know what to be intelligent about.
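To make this a bit more concrete, here is a minimal toy sketch (in Python; the names like World, Obj and assess are made up for illustration, not any real robotics API) of what the naive answer to the frame problem looks like: Tod re-examines every object before every decision, whether or not anything about it has changed.

```python
# A toy illustration of the naive approach to the frame problem:
# robot Tod re-checks every object before every single decision.
# All names (World, Obj, assess, decide) are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    features: dict = field(default_factory=dict)  # e.g. {"colour": "grey", "shape": "phone"}

@dataclass
class World:
    objects: list

def assess(obj: Obj) -> bool:
    """Expensive full analysis of a single object; True if it looks dangerous."""
    return obj.features.get("on_fire", False) or obj.features.get("sharp", False)

def decide(world: World) -> str:
    # The naive loop: every object gets fully reassessed before every decision,
    # so the cost grows with the size of the world even when nothing has changed.
    dangers = [o.name for o in world.objects if assess(o)]
    return f"avoid {dangers}" if dangers else "carry on"

room = World(objects=[Obj("phone"), Obj("bookcase"), Obj("kettle", {"on_fire": True})])
print(decide(room))  # -> avoid ['kettle']
```

The cost of every single decision grows with the size of the room, which is exactly why one would rather have Tod notice change instead of recomputing everything.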
Which brings me back to the point. When Zeki formulated the human brain's interest in what is constant, I thought, "But no, the human brain is interested in what's different; that's what's brilliant about the human brain. It doesn't need to focus on the things that are constant, it seems to instinctively and (almost) instantly compute what is different and focus attention resources on it!" And then the complementarity of the two sentences hit me: the complementarity of Zeki's statement that the brain is interested in finding constants (and its apparent desire to ignore differences) with the brain's obvious ability to find differences, to detect change and focus on it.
The brain's preference for constancy might very well be what allows it to detect change after all! If we didn't have such an ability for abstraction and for building categories, we could never dedicate spare resources to the things that look different.
We store such a plethora of characteristics about phones, computer screens and bookcases that we become insensitive to seeing any one of them. If something belongs to a category we have encountered before, unless we have a particular interest in analysing that type of object, we won't see it at all. The brain will consider it irrelevant and won't bother to call up attentional resources to analyse it. The corresponding neural networks for those categories will be at rest. But when our friend manipulates a phone with a big yellow dot on the screen, we will look at it in surprise, while our brain keeps computing, comparing it, alarmed, with the "normal" phones encountered in our past.
So generating constant expectations (at various levels of abstraction) and only focusing on an element of our environment when it defies those expectations is part of our brain's normal activity.
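If one wanted to caricature that strategy in code, it might look something like the sketch below - a hand-wavy illustration with invented prototypes and a toy notion of "surprise", certainly not a claim about how the brain implements it. Stored category expectations are checked against what is observed, and attention is only recruited when an expectation is violated.

```python
# A sketch of the expectation-driven strategy described above.
# The prototypes and the idea of listing "violations" are illustrative assumptions.

PROTOTYPES = {
    "phone":    {"shape": "slab", "screen": "dark_or_lit", "yellow_dot": False},
    "bookcase": {"shape": "shelves", "contains": "books", "yellow_dot": False},
}

def surprise(category: str, observed: dict) -> list:
    """Return the observed features that defy the stored expectation."""
    expected = PROTOTYPES.get(category, {})
    return [k for k, v in observed.items() if k in expected and expected[k] != v]

def attend(category: str, observed: dict) -> str:
    violations = surprise(category, observed)
    if not violations:
        # Matches expectations: the object stays "invisible", no attention is spent.
        return f"{category}: ignored"
    # Only expectation violations recruit attentional resources.
    return f"{category}: ATTEND to {violations}"

print(attend("phone", {"shape": "slab", "yellow_dot": False}))  # phone: ignored
print(attend("phone", {"shape": "slab", "yellow_dot": True}))   # phone: ATTEND to ['yellow_dot']
```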
This is where another important point arises, one on which Zeki's article and general views can throw some light: how much knowledge is enough for the brain to stop focusing on an object?
Many AI experiments have failed because they tried to endow their creations with what one could call high-resolution representations of objects.
Think of a cat! The representation that first springs to mind is not going to be that high-res, unless you have a cat that is special to you or you are purposefully trying to defy this thought experiment. Of course you could create a higher-resolution representation of a cat, but even then, it would take a lot of focus to build it in your "inner mind" and I doubt it would reach picture resolution. If it did, and you can keep it in your mind for long, you might want to get into the graphic arts :D If you are into the graphic arts and it didn't, that is why we need art and why we search for perfection (read: richness of information + aesthetics).
The point of this is that although we can create in real life marvellous objects with impressive graphical qualities and breathtaking resolution, in our minds we really store only the most important features of objects (which are hard to program into robot Tod). Upon recall (unless you really got stubborn about imagining as much detail as possible on your cat) we tend to retrieve just those important features. More than that, each sensory modality seems to store different features about the same objects: think about stroking a cat for a moment without imagining what the cat actually looks like. It's a distinct furry feeling, and you might imagine the actual size of the cat (which is a tactile-detectable feature). You might be tricked into caressing a different, similarly furry animal and thinking it is a cat, until you detect a different body size. (That is how we create metaphors.) Each sensory modality has its limits, and visual input is definitely the most high-res for us humans. Finding something that looks like a cat but isn't one is much harder, which is what makes vision such a great tool.
It isn't that we just compare the image of a cat to an internal representation; through vision we also have access to information about its movement, its behaviour, and so on. Better yet, most of us are lucky enough to explore the world with all of our senses at the same time. So finding something that looks like a cat and smells like one while at the same time purring is close to impossible. And if it did happen, we might even be entitled to loosen the boundaries of our Cat category to include the unexpected object. Because, for our senses, it would be relevant to classify it as a cat, or as something very similar to one, with the only difference that *you_fill_in* (you would have to study why it isn't completely one; or maybe you shouldn't, if it's not relevant to your survival or curiosity, i.e. you can live without it - survival again).
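A toy sketch of that idea - coarse, per-modality feature sets, a made-up similarity score and an invented threshold for when to loosen the category boundary - might look like this:

```python
# Toy multi-modal category matching with a loosening boundary.
# The feature sets, the similarity score and the thresholds are all invented
# for illustration, not a model of the actual Cat category in anyone's head.

CAT = {
    "vision": {"furry", "four_legs", "whiskers"},
    "touch":  {"soft_fur", "cat_sized"},
    "sound":  {"purring"},
}

def similarity(memory: dict, observed: dict) -> float:
    """Fraction of stored features confirmed, counted only over the senses actually used."""
    stored = [f for modality, feats in memory.items() if modality in observed for f in feats]
    if not stored:
        return 0.0
    seen = set().union(*observed.values())
    return sum(f in seen for f in stored) / len(stored)

def classify(observed: dict, threshold: float = 0.8) -> str:
    score = similarity(CAT, observed)
    if score >= threshold:
        return f"cat (score {score:.2f})"
    if score >= 0.5:
        # Close but not quite: a candidate for loosening the category boundary,
        # or for further study if that matters to survival or curiosity.
        return f"cat-like, inspect further (score {score:.2f})"
    return f"not a cat (score {score:.2f})"

# Touch alone is easily fooled by any similarly furry, cat-sized animal...
print(classify({"touch": {"soft_fur", "cat_sized"}}))   # cat (score 1.00)
# ...until a different body size gives it away...
print(classify({"touch": {"soft_fur", "dog_sized"}}))   # cat-like, inspect further (score 0.50)
# ...while vision + touch + sound together are nearly impossible to fool.
print(classify({"vision": {"furry", "four_legs", "whiskers"},
                "touch": {"soft_fur", "cat_sized"},
                "sound": {"purring"}}))                  # cat (score 1.00)
```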
It's not the first time I've thought that understanding how we humans form our mobile, imperfect but extremely reliable categories would enable us to move AI forward a notch or two. If only robot Tod could recognise cats that easily, and ignore them, so as to pay attention to the threats or changes in the room!
The secret must lie somewhere in our ability for abstraction, in paying attention to something only if it stands in vast contradiction with what we expected of it, and in encoding just enough features to differentiate between things without making representations hard to load and compare.
Maybe in a future post I will explain how I imagine that happening in AI, and describe some of my experiments with modelling something in that direction through neural networks.

Saturday 20 November 2010

Depression with Dorothy Rowe - Preamble (I)

I just found Dorothy Rowe's book - Depression: The Way Out of Your Prison - in Oxfam :).
From the preface, Rowe seems to build on the hypothesis that, at least in some people, depression is more of a cognitive malfunction than an illness (or perhaps both), one which appears as a reaction to realising a big discrepancy between the life one leads and the life one thought one was supposed to be living.
This statement has interesting implications for both Cognitive Behavioural Therapy and AI, which I hope to have the pleasure of writing about in a future post.
In anticipation of having the time to read the book, here are some of my thoughts on the matter:
I think depression has a lot to do with what one can anticipate in one's own future. I've seen people go through huge amounts of pain and still remain optimistic as long as there was something they could do about their life, to steer it in the right direction. But depression is just hopelessness, the anticipation of perpetual punishment, of discord between what life throws at you and what you are. And I doubt anybody can live easily through perpetual hopelessness.
Which sheds light on an interesting effect of our cognitive ability to constantly anticipate our future, to wonder about the meaning of our lives, to try to gain unity between our goals, what we think we are, and our external manifestations.
The mechanism we use to imagine the rights and wrongs that can happen in our life if we do one thing or another is a state-of-the-art cognitive tool. It helps us plan a life path, or solution paths, in an uncertain world. But that same mechanism might be our downfall when all we can anticipate is pain.
With classical piano as a craft that needed lots of practice time, I did not subscribe in my teenage years to a hedonistic view of the world. But then again, hedonism is slippery: it is whatever gives us pleasure, and one can find pleasure in an ascetic or warrior-like lifestyle. One can definitely find pleasure in creating and accomplishing things, despite the effort it takes.
The truth is that we need rewards in our present and future (and perhaps in our past, as proof that good things can happen to us). We cannot function without motivation, no matter how internalised this motivation is. And the pleasure of self-expression, of doing things of interest to one's self, of being at least partially the self (or living the life) that you imagined you would or should be - these are very internalised types of reward.
One could speculate that there is not much in terms of external reward that can equate to these internal ones.
The nature of reward is rather controversial in my opinion, as one can only achieve pleasure in one's own brain. Thus it feels as though one subjectively (but not necessarily consciously) decides whether or not to enjoy or feel pleasure from something. That gives us some sort of upper hand over our own pleasure, but it points at something more significant too: we cannot fool our own brain into feeling pleasure if we don't stick by our values, ideals, desires, etc.
This is why I think drug use for pleasure can only provide fleeting and elusive glimpses of pleasure to its pursuers, and not the fully fledged satisfaction one wants to achieve. In terms of informational functionality, pleasure seems to mark something worth seeking. If the majority of one's deciding functions are not happy with one's actions and life, pleasure - which is there as an internal reward for doing the right things - seems to elude the seeker.

Friday 19 November 2010

The search for a Phenomenological flavour

There is talk of a God spot in the brain, and of areas which, when stimulated, provoke feelings of a "sensed presence" or of "being one with the universe".
This raises a philosophical question - do we want religion for how it feels? For its phenomenology? Does religion come with a type of aesthetic ideal about how one should feel about one's world, self and others? Does it necessarily come as a phenomenological search for meaning, protection or integration in a beautiful structure (i.e. an ideal society in which others behave towards you, and you towards them, according to a moral code)?
And if it does, could we really blame it? Aren't love and all the odes to it the search for a particular phenomenological locus? Isn't the search for our vocation the search for the things that provoke in us the phenomenological taste of passion and devotion, of work that is poignant, that suits us and that we suit?
In the end, it's all about how we feel about things. The rewards that can be administered to us (and we do search for them) are often internal - phenomenological flavours of doing the right thing, of reaching that state of being in control, or in pure flow, or elated, or devoted, or transcendental, or *add your own* feeling.
But don't get me wrong, this is no simple or self-indulgent matter, as trying to make oneself feel right about something might be the hardest thing one ever attempts. Depending on what our exigencies and expectations are about - on what we think we should feel when we have the things we want, when our life is going the right way - we might actually be setting ourselves up for quite a complicated challenge.
Trying to create the circumstances that fit one's phenomenological desires is a long process that oscillates between trying to make one's life fit one's phenomenological tastes, and refining those tastes through further exposure to art and ideas, or bringing them "down to earth" through further exposure to real life and what we can really expect of it. There are also those people who ignore what they actually want, but they don't count here, as they are not playing.
How real are our goals to us, our phenomenological needs? Are they our creation? Are they important enough for us to strive to get them? Or are they just an aesthetic way of looking at the world - of what the world (mostly internal) could be like? Is asking that from the reality of life wasting resources that could be directed to more realistic needs?
Anyone who has seen relatively accomplished and well-off people suffering from depression or schizophrenia can attest to the fact that we are not that much without our phenomenology. So we do need to take care of it and understand it, as it is intrinsic to having a personality. Having a personality means having a view of the world, preferences, and all sorts of fancy ideas about how things feel and how we would like them to feel. But is that enough to solicit (of ourselves) a life of endeavour spent trying to please one's particular phenomenological aesthetic sense?
People get lost in their own phenomenology all the time, and it's not always because they are not taking care of it; sometimes it's because they build up entire cathedrals of phenomenological expectation. And let's face it, not everybody is a Gaudi who can afford the internal or external resources to build those cathedrals (which is not what the positivist mass market would have you think, since supposedly you are to believe that you can have, build or be anything you like according to that phenomenological flavour - something just as absurd as thinking you have endless resources, but, let's face it, very tasty). Therefore it's probably wise to regularly revise how big and fanciful a cathedral you actually have the resources to build. It's also wise to remember that we build our phenomenological preferences out of bits and pieces that we pick up from the outside, as well as things that we process internally. So one needn't be insulted upon discovering that a particular piece one thought quite intimate is also present in the construction of a neighbour one doesn't particularly like, nor should one feel very lonely upon discovering that one has different pieces than one's friends have.
After all, we live in times in which it is very trendy to be an individual, a personally constructed and opted-for Self (giving quite a different flavour to the concept of "a self-made man" - but we live in a different phenomenological time than the one in which that concept was first coined, proof of which being that some might now frown upon the gender choice in it).
As this search for phenomenological flavours is trendy and in the spotlight, on a par with being aware of what one's search is all about, maybe you should ask yourself today: why am I doing the things that I'm doing - what is it that I am trying to feel? And maybe even more: are my phenomenological expectations healthy for me?