Wednesday 22 December 2010

So why exactly is Consciousness more astonishing than the existence of zombies?

Listening to and reading all the consciousness talk lately, one can't help but wonder - would some cognitive neurologists and philosophers have preferred us all to be zombies? Do they really find it easier to explain, or more likely, for us to be zombies rather than entities possessing consciousness?
That seems strange, considering the characteristics of our nervous system.
It's common sense that we are able to classify things and observe them. One of the most fundamental characteristics of human cognition is our ability to abstract away from examples and put together specific features that appear more or less in all those examples.
So why is it so hard to believe that we would abstract ourselves and talk about our own processes as if we were discussing an object? There is no endless loop there, just our illusion that there might be. Of course we can turn the light upon the interpreter of things, and then upon the one that interpreted that. But that is merely repetitive action, which loses meaning and content after several levels of abstraction (or perhaps of informational resolution).
We are not infinite loops, nor can we analyse or turn the light upon some of the processes that go on in our head, the ones that actually do most of the low-level analysing.
Of course we can pay attention and duplicate an abstraction - even if it was the discourse we were just making about some object - and analyse it afterwards as if it were an object in itself. But that doesn't mean that we are still inside that abstraction. We are always the analyser, and what we analyse might be what we've been or what we said a moment ago - a trace of our own activity.
The fact that, a second ago, we were the person we are now analysing might feel a bit as if we are everywhere - but we must remember that our entire self is an abstraction, based on a (hopefully) unitary system with many parts. That various parts can observe other parts is natural. As soon as we observe them we might think we are not them, or that they belong to us but that we are mainly the observer, not the observed.
This slippery path is what has been tormenting philosophers of mind for ages, mainly because they prefer to forget that the I is an abstraction, and that fundamentally we can only identify with one I at a time - because that is what we defined "I" to be: a unity that contains the most pertinent characteristics of ourselves.
I for one can't see how it would seem more likely that a system with the neural complexity of humans would decide not to have a peek inside its own skull, not to observe its own activities - after all, we are always with ourselves; it's rather normal that we notice what we are up to, both in our heads and in the physical world.
I think what philosophers are currently confusing with the problem of consciousness is the problem of creating living systems. We would like our AI conscious, yet for that I think AI would first have to have the properties that most living systems have - including a need for self-preservation, an ability to defend itself, its own goals, and a general capacity and desire for survival.
So I think that the main question is what is the difference between living systems and non-living ones, and only after that how much and what type of neural complexity a system needs to acquire to manifest consciousness.


Plus, if zombies really are more likely to exist than consciousness, why haven't we found any so far, while we keep finding humans instead, who stubbornly insist on following their goals? :P

Dualistic Reflections

I was thinking today about how dualists used to think of the universe as split into two very different substances, matter and mental stuff - and, as I usually do when I find a position quite untenable, I tried to imagine how I could argue for dualism. In fact I tried to imagine how I could argue for the mental essence being a fundamentally different substance, and why.

Here it goes:

Let’s take the symbol for the number 3. Is 3 physical? Well, yes and no. Yes, because if you just imagined a written number 3, you definitely imagined something that had a physical representation.

But there is no 3 in the 3. There is nothing to prove its 3-ness, except what it means for us.

So yes, 3 has a physical basis – but as a concept it is mostly in our head. As a concept, 3 is not contained in its symbol, it’s just suggested. So where is 3? 3 is in our head – 3 is mental stuff!

But surely – you will say – in someone’s head, 3 has a physical basis as well! So 3 is as physical as it gets.

Well, yes and no. 3 might be symbolised in our head in the same way in which it is symbolised on paper: there is this phenomenal image that we have of three – or mental representation, if you prefer. And it has, of course, neural correlates that activate to bring about the concept or image of the number 3 in our minds. But there is nothing to say that the neural correlates of our representation contain the concept of 3 any more than the trace of pen on paper does.

Thinking like that, you might say, only keeps moving the Interpreter deeper and deeper within, until it becomes unapproachable and irreducible to matter.

But I think it’s simpler than that. I think meaning is learned.

What do I think 3-ness is, then? I think 3-ness is spread around the network, and it takes a specific pattern of network activation to experience it.

I think concepts like one or two (or maybe even none) are the hardest to learn, after which we keep going by adding one to what we already have, having learned the concept of more.

I think, in fact, that one is all around us, in the plethora of unique objects that we encounter. I think the revolution starts with two. And I have a particular experience in mind, that of encountering two similar objects. What experiences exactly does one need in order to understand two? Two things of the same kind in our visual field, perhaps repeated a number of times – perhaps one’s own hands – can get one thinking. A collection of these experiences must be necessary in order to abstract away the quality of two-ness.

It is my starting hunch that it helps if we understand and can abstract away two-ness from two things of the same kind before having to apply the concept of two to different objects. Because the latter already involves counting those objects by their belonging to a specific category, so it involves one extra level of abstraction – you must understand “toys” as a category before you can go on and count the toys.

Anyway, back to three. I think most people would agree that some of the basic properties of intelligence are abstraction and synthesis, to which I would probably add filling in and removal.

What three is all about as a concept, to start with, is abstraction. In fact many of the things that we discuss are one or multiple levels of abstraction from their reality counterparts.

One cannot encounter 3 in the nature that surrounds humans. One can encounter 3 objects, but 3-ness is an abstraction, a case of our mother or father (and later our nursery teachers) having presented to our initially untrained mind’s eye enough examples of 3-ness for it to stick in our head.

Why is holding abstract concepts so useful? Abstraction in general is useful for information processing. As we don’t hold reality in our heads, but abstract concepts about it, we do need reality to perform actions on, and to feed our concepts further. However, some of us thrive on just playing around with that mental stuff.

Hitting the point of this entire post here: are mental things different enough to be considered a different substance from matter? Mental things are, in a certain sense, not matter – not the matter that they represent, that is. They are abstractions from that matter. That doesn’t mean they are not jotted down in neural tissue. It mainly means that the organisation of that type of matter has particular properties – of letting a concept emerge out of similar (actually encountered) cases. One could call it an ability to extract features, but I like staying away from that, as it implies somehow that the features were already there, preformed, waiting to be extracted. Out of that similarity emerges a concept which can be stored and projected further onto different objects. This acts in a very creative way, as on the mental drawing board many cases can be instantiated without them actually being experienced in real life. Or they can be instantiated on a material reflection of the mental drawing board – sand on a beach, pen and paper, an LCD screen.
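To make that last mechanism a bit more concrete, here is a deliberately tiny sketch in Python. Everything in it (the features, the numbers, the averaging rule) is my own toy assumption, chosen because cat-ness is easier to put into made-up features than 3-ness; the point is only the mechanism of a concept emerging out of similar, actually encountered cases and then being projected onto new objects.

```python
# A deliberately tiny sketch of "a concept emerging out of similar,
# actually encountered cases". The features, numbers and the averaging
# rule are toy assumptions; the mechanism is the point.

encountered_cases = [
    {"furry": 1.0, "purrs": 0.9, "size": 0.3},   # the neighbour's cat
    {"furry": 0.9, "purrs": 1.0, "size": 0.2},   # a kitten
    {"furry": 1.0, "purrs": 0.8, "size": 0.4},   # a fat tomcat
]

def emerge_concept(cases):
    """Whatever is common across the cases survives the averaging;
    whatever is incidental washes out. Nothing is 'extracted' from a
    single case - the regularity only exists across them."""
    keys = cases[0].keys()
    return {k: sum(c[k] for c in cases) / len(cases) for k in keys}

concept = emerge_concept(encountered_cases)

def project(concept, new_object, tolerance=0.35):
    """Project the stored concept onto a new object never seen before."""
    return all(abs(concept[k] - new_object.get(k, 0.0)) <= tolerance
               for k in concept)

print(concept)
print(project(concept, {"furry": 0.95, "purrs": 0.7, "size": 0.25}))  # a new cat
print(project(concept, {"furry": 0.0, "purrs": 0.0, "size": 0.9}))    # a coffee table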

Are these mental properties in any way transcendent of material ones? One can see why one could regard them as such. Abstraction can be seen as a transcendence of many cases. Though it’s worth noting again that an abstraction is better than the objects it abstracts only in terms of being easier to process. An abstraction doesn’t contain the objects it refers to. So in a sense it doesn’t actually contain a material object.

One could push it even further and say that “I”, the sense of being someone, is actually a very useful high-level abstraction that represents the totality of problems a system might encounter and brings together the most important information the system needs to deal with.

As long as we are unitary systems, it is to be expected that we will represent this unity somehow internally – it is, after all, a logical unit. The fact that this unity is phenomenally experienced as an “I” and a presence in this “I” is probably an abstraction masterpiece - but about consciousness and zombies in a different post.

Monday 20 December 2010

On venturing guesses

I think it's Semir Zeki who mentions in The Neurology of Ambiguity a certain Law of Constancy - which states that the brain will always look for constant things.
I remember stating in one of my blogs that, in my opinion, on the contrary, the brain is actually constantly searching for different things. And that I had a hunch that both phenomena were true, and represented two faces of the same coin.
We cannot understand the world without projecting already known things on it, no matter how different the world is from our internal projection. We could argue that the process of understanding per se is an effort to match the objects of our present to analysed objects of our past, or objects that we don't understand to objects that we think we understand better, or know more about.
We seem to constantly understand things referentially - that is to say, we always understand things through other things we understood before, or through their relationships (similarity, difference, type of interaction) with other things that we know other things about. The ultimate thing that we relate everything to is ourselves, which is, I guess, why the I has such an important internal status. (Of course we can even deconstruct the I - our knowledge is not perfect about anything, and I am not sure that there can be such a thing as perfect knowledge.)
This entire corpus of knowledge that we carry with us involves the meaning that we create about things. Is this web-like structure of relationships an accident, or is there something in our neural networks that predisposes us to learning in this way?
If the law of constancy is true, we constantly look out into the world to find things that are the same as what we know, because those things already have a meaning for us. We can use them as points of reference for the things that are different, and processing what is similar faster means already acquiring a large amount of info and knowing what to focus on next.
The differences require more processing power.

I remember reading for the first time about the theory of expectancy which states that we always formulate predictions about the future. I think we do the same when we encounter things that are different to what we know - that is we venture guesses, based on our experience with things that are similar to the new ones.
I think there are at least three different strengths that a system could display in attempting to make predictions - or venture guesses (I prefer the "venture guesses" form when it comes to encountering new objects, as I think the purpose of the process is not to predict the new object completely, but to narrow down, if possible, its functionality category):
- having a database of knowledge that is close to the new object (in terms of meaning, functionality, etc)
- having the ability to follow long-term change in objects, families of objects, differences among elements of the same category, object evolution - and having, as a consequence, a better ability to mentally manipulate transformations of objects and thus venture more appropriate guesses as to what an object might have been related to in the first place, or what it might become or spawn in the future
- having a generally higher-than-average exposure to novel objects, thus being generally better at categorising and manipulating (storing in a partially determined category, or creating a momentary category) novel objects until they are better categorised.

It would be interesting to build systems that instantiate those different evolutionary advantages, and see which one does best, or if they fare similarly, although I suppose the parameters for past experience would take some time to define.
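To give a flavour of what such an experiment might look like, here is a minimal sketch of advantages one and three from the list above (a close knowledge base versus broad exposure), scored on how well they narrow down the functionality category of novel objects. The feature vectors, category names and nearest-neighbour guess are all invented for illustration; advantage two (tracking long-term change) would need a richer setup.

```python
import random

random.seed(0)

# Hypothetical toy world: objects are feature vectors, each belonging to
# one "functionality category". A guess is a nearest-neighbour match
# against whatever the system has seen before.

CATEGORY_CENTRES = {
    "container": [1.0, 0.0, 0.0],
    "tool":      [0.0, 1.0, 0.0],
    "shelter":   [0.0, 0.0, 1.0],
}

def make_object(category, noise=0.3):
    centre = CATEGORY_CENTRES[category]
    return [c + random.uniform(-noise, noise) for c in centre], category

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def venture_guess(memory, new_features):
    """Guess the functionality category of a novel object by finding the
    most similar remembered object (not a full prediction, just a
    narrowing-down of the category)."""
    best = min(memory, key=lambda m: distance(m[0], new_features))
    return best[1]

# System A: a small but close knowledge base (advantage one) - it has only
# ever seen containers.
system_a = [make_object("container") for _ in range(5)]

# System C: broad exposure to novel objects (advantage three), spread
# across all categories.
system_c = [make_object(c) for c in CATEGORY_CENTRES for _ in range(5)]

# Test both on novel containers and on novel shelters.
tests = [make_object("container") for _ in range(20)] + \
        [make_object("shelter") for _ in range(20)]

for name, memory in [("A (close knowledge)", system_a), ("C (broad exposure)", system_c)]:
    correct = sum(venture_guess(memory, f) == cat for f, cat in tests)
    print(name, correct, "/", len(tests))
```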

If you could choose which system to be, what to have as an advantage, or you simply could have a cognitive upgrade that would instantiate one and only one of the advantages above, which one would you choose?
I will think of my answer and let you know.

Saturday 4 December 2010

Detached of one's Self (I)

We like to think that memory is our ultimate store - be it consciously or unconsciously activated. And when it comes to a self, we like to think we own this self, we have it mostly under control, it is us, it can never go away or do mad things. So how can things like "losing one's self", "self-detached", "my younger self", "my bad self" happen? More, how can one feel locked outside of one's self, so detached that one can't possibly imagine how to get back in?
Sometimes a part of us might be severed from us to the point that we cannot even imagine how that part used to be us, how we used to behave when it was active. We might remember our younger selves and think of them as impersonators. We might have a recollection of what that self did, but we might not identify with it at all - be unable to imagine ourselves having those impulses or making the decisions of doing those things.
There is proof of this happening to people in relation to their bodies. Oliver Sacks describes in A Leg to Stand On how the experience of having a paralysed leg involved losing an entire arsenal of phenomenology related to having and using his leg in the past:
"The leg has vanished, taking its "place" with it. Thus there seemed no possibility of recovering it - and this irrespective of the pathology involved. Could memory help, where looking forward could not? No! The leg had vanished, taking its "past" away from it! I could no longer remember having a leg. I could no longer remember how I have ever walked and climbed. I felt inconceivably cut off from the person who had walked and run and climbed just five days before. There was only a "formal" continuity between us. "
I think of phenomenology as a kind of user interface through which one controls, in a user-friendly manner, all the muscle spindles, neuron action potentials and other things one has conscious access to. If such a detachment, such a cut, can occur between the "main" self and one's physical body, including one's entire phenomenology pertaining to it, maybe a similar mechanism applies to the cases when one feels cut off from psychological parts of one's self.
We build and change our persona throughout our life. The characteristics pertaining to our self must be encoded in a variety of neural networks. So what keeps the entire machinery of the self together? Is there an index-like network? A loop that activates the most important personal characteristics when booting up the system (when waking up)? How do we know when we have lost something from the chain of networks? Do we even notice a difference?
There must be incremental differences, as well as definite important moments which imply us making a choice about our life, which reflects back on a choice of whom we will be next.
We tend to identify with who we are in the present - the neural networks connected to our current workspace. We mostly have memories about who we've been and what that felt like. We try to keep in line, to achieve continuity with certain aspects of ourselves (be true to ourselves), and run away like mad from others that we don't like or consider "an experimental mistake".
One could argue that we get to know whom we are and whom we like being through interaction with our environment, through instantiating various aspects of ourselves.
An informational overload would happen if we had more than a certain number of characteristics readily available, preloaded in our accessible personality space. At the same time, we base our social interactions on people taking responsibility for who they are, and trying to keep a personality as coherent as possible.
Of course, personalities are not always that coherent. The good news is that we have some knowledge about where personality might be neurologically influenced in one's brain. But more on that in a future post.

Monday 22 November 2010

Prof. Zeki's Neurology of Ambiguity and the Frame Problem

I am reading Semir Zeki's article on the neurology of ambiguity and couldn't help but jump at this paragraph, thinking of the way it could relate to the frame problem:
"The primary law dictating what the brain does to the signals that it receives is the law of constancy. This law is rooted in the fact that the brain is only interested in the constant, essential and non-changing properties of objects, surfaces, situations and much else besides, when the information reaching it is never constant from moment to moment. Thus the imperative for the brain is to eliminate all that is unnecessary for it in its role of identifying objects and situations according to their essential and constant features."
The frame problem is, in simple terms, the difficulty that AI scientists encounter when having to define for an artificial system all the things that a human takes for granted - the common knowledge, the "known". The problem expands when having to teach the same AI system how to detect change. If the AI agent - let's call it robot Tod for simplicity - doesn't know what changed, how can it understand what is dangerous or relevant to it in a situation?
Dangerous and relevant are especially connected here, when one wants to program, evolve or otherwise teach robot Tod to avoid unpredictable dangers. Otherwise one busy programmer (or better yet, an army of them) would have to write all the specific code for Tod to survive, for every potential circumstance it might encounter. And when the circumstances changed, even slightly, Tod would be so confused that the programmer(s) would need to go back to their keyboards for more work.
Therefore one would prefer Tod to be lovely and independent and to understand danger, threat, maybe even possibility. For which Tod would need to have attention and to know what to focus it on. The frame problem is such that Tod needs to compute and reassess all the objects in its environment to see if anything has changed, before being able to make decisions about anything.
Before proving any type of intelligence, robot Tod has to know what to be intelligent about.
Which brings me back to the point. When Zeki formulated the human brain's interest in what is constant, I thought "but no, the human brain is interested in what's different, that's what's brilliant about the human brain, it doesn't need to focus on the things that are constant, it seems to instinctively and (almost) instantly compute what is different and focus attentional resources on it!" And then the complementarity of the two statements hit me: the complementarity of Zeki's statement that the brain is interested in finding constants (and its apparent desire to ignore differences), with the brain's obvious ability to find differences, to detect change and focus on it.
The brain's preference for constancy might very well be what allows it to detect change, after all! If we didn't have such an ability for abstraction and for building categories, we could never dedicate spare resources to things that look different.
We store such a plethora of characteristics about phones, computer screens and bookcases that it makes us insensitive to seeing yet another one of them. If something belongs to a category we have encountered before, unless we have a particular interest in analysing that type of object, we won't see it at all. The brain will consider it irrelevant and won't bother to call attentional resources to analyse it. The corresponding neural networks for those categories will be at rest. But when our friend handles a phone with a big yellow dot on the screen, we will look at it surprised, while our brain keeps on computing, comparing it, alarmed, with the "normal" phones encountered in our past.
So emitting constant expectations (at various levels of abstraction) and only focusing on an element of our environment when it defies our expectations is part of our brain's normal activity.
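A minimal sketch of what that could look like for robot Tod, assuming made-up categories, features and a threshold of my own choosing: store only a coarse prototype per category, emit a constant expectation for every object in the scene, and spend attention only where the expectation is defied.

```python
# Expectation-mismatch attention for a hypothetical robot Tod.
# All categories, features and thresholds here are invented for illustration.

PROTOTYPES = {
    "phone":    {"shape": "slab", "screen": "plain", "size": "hand"},
    "bookcase": {"shape": "box",  "screen": None,    "size": "large"},
}

ATTENTION_THRESHOLD = 1  # how many defied features before Tod looks up

def mismatch(prototype, observed):
    """Count features of the observed object that defy the expectation."""
    return sum(1 for k, v in prototype.items() if observed.get(k) != v)

def attend(scene):
    for obj in scene:
        proto = PROTOTYPES.get(obj["category"], {})
        surprise = mismatch(proto, obj)
        if surprise >= ATTENTION_THRESHOLD:
            print("attend to", obj["category"], "- surprise:", surprise)
        else:
            print("ignore", obj["category"])

scene = [
    {"category": "phone", "shape": "slab", "screen": "plain", "size": "hand"},
    {"category": "phone", "shape": "slab", "screen": "big yellow dot", "size": "hand"},
    {"category": "bookcase", "shape": "box", "screen": None, "size": "large"},
]
attend(scene)
```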
This is where another important point arises - on which Zeki's article and general views can throw some light. What is enough knowledge for the brain to stop focusing on an object?
Many AI experiments have failed because they tried to endow their creations with what one could call high-resolution representations of objects.
Think of a cat! The representation that first springs to mind is not gonna be that high-res, unless you have a cat that is special to you or you are purposefully trying to defy this thought experiment. Of course you could create a higher-resolution representation of a cat, but even then, it would take lots of focus to build it in your "inner mind" and I doubt it would reach picture resolution. If it did, and you can keep it in your mind for long, you might want to get into the graphical arts :D If you are into the graphical arts and it didn't, that is why we need art and why we search for perfection (read: richness of information + aesthetics).
The point of this is that although we can create, in real life, marvellous objects with impressive graphical qualities and breath-taking resolution, in our minds we really store the most important features of objects (which is hard to program into robot Tod). Upon recall (unless you really got stubborn about trying to imagine as much detail as possible on your cat) we tend to retrieve just those important features. More than that, each sensory modality seems to store different features about the same objects: think about stroking a cat for a moment without imagining what the cat actually looks like. It's a distinct furry feeling, and you might imagine the actual size of the cat (which is a tactile-detectable feature). You might be tricked into caressing a different, similarly furry animal and thinking it is a cat, until you detect a different body size. (That is how we create metaphors.) Each sensory modality has its limits, and visual input is definitely the most high-res for us humans. Finding something that looks like a cat but isn't one is much harder, which is what makes vision such a great tool.
It isn't that we just compare the image of a cat to an internal representation; through vision we also have access to information about its movement, its behaviour, and so on. Better yet, most of us are lucky enough to explore the world with all of our senses at the same time. So finding something that looks like a cat and smells like one while purring at the same time is close to impossible. And if it did happen, we might even be entitled to loosen up the boundaries of our Cat category to include the unexpected object. Because, for our senses, it would be relevant to classify it as a cat, or as something that is very similar to one, with the only difference that *you_fill_in* (you would have to study why it isn't completely one; or maybe you shouldn't, if it's not relevant for your survival or curiosity - i.e. you can live without it - survival again).
It's not the first time I have thought that understanding how we humans form our mobile, imperfect but extremely reliable categories would enable us to move AI forward a notch or two. If only robot Tod could recognise cats that easily, and ignore them, to pay attention to the threats or changes in the room!
The secret must lie somewhere in our ability for abstraction, for only paying attention to something if it proves to be in vast contradiction with what we expected of it. And in encoding just enough features to differentiate between things, without making it hard to upload and compare representations.
Maybe in one of the future blogs I will explain how I imagine that to happen in AI, and some of my experiments with modelling something in that direction through neural networks.

Saturday 20 November 2010

Depression with Dorothy Rowe - Preamble (I)

I just found Dorothy Rowe's book - Depression: The Way Out of Your Prison - in Oxfam :).
From the preface, Rowe seems to build on the hypothesis that depression is, in some people, more of a cognitive malfunction than an illness (or both), which appears as a reaction to one realising a big discrepancy between the life one leads and the life one thought one was supposed to be living.
This statement has interesting implications for both Cognitive Behavioural Therapy and AI, which I hope to have the pleasure of writing about in a future post.
In anticipation of having the time to read the book, here are some of my thoughts on the matter:
I think depression has a lot to do with what one can anticipate in one's own future. I've seen people go through huge amounts of pain and still remain optimistic as long as there was something they could do about their life, to steer it in the right direction. But depression is just hopelessness, the anticipation of perpetual punishment or discord (I wrote "dissaccord" first and had to search for the English term) between what life throws at you and what you are. And I doubt anybody can live easily through perpetual hopelessness.
Which sheds light on an interesting effect of our cognitive ability to anticipate our future constantly, to wonder about the meaning of our lives, to try and gain unity between our goals, what we think we are, and our external manifestations.
The mechanism that we use to imagine the rights and wrongs that can happen in our life if we do one thing or another is a state-of-the-art cognitive tool. It helps us try to plan a life path, or solution paths, in an uncertain world. But the same mechanism might be our downfall when all we can anticipate is pain.
With classical piano as a craft that required lots of practice time, I did not support a hedonistic view of the world in my teenage years. But then again, hedonism is slippery: it is whatever gives us pleasure, and one can find pleasure in an ascetic or warrior-like lifestyle. One can definitely find pleasure in creating and accomplishing things, despite the effort that takes.
The truth is that we need rewards in our present and future (and perhaps in our past, as proof that good things can happen to us). We cannot function without motivation, no matter how internalised this motivation is. And the pleasure of self-expression, of doing things of interest to one's self, and of being at least partially the self (or living the life) that you imagined you would/should be, are very internalised types of reward.
One could speculate that there is not much in terms of external reward that can equate to these internal ones.
The nature of reward is rather controversial in my opinion, as one can only achieve pleasure in one's brain. Thus it feels as if one subjectively (but not necessarily consciously) decides whether to enjoy or feel pleasure from something or not. That gives us some sort of upper hand on our own pleasure, but it points at something more significant too. We cannot fool our own brain into having pleasure if we don't stick by our values, ideals, desires, etc.
This is why I think drug use for pleasure purposes can only provide fleeting and elusive glimpses of pleasure to its pursuers, and not the fully fledged satisfaction that one wants to achieve. In terms of informational functionality, pleasure seems to mark something worth seeking. If the majority of one's deciding functions are not happy with one's actions and life, pleasure - which is there as an internal reward for doing the right things - seems to elude the seeker.

Friday 19 November 2010

The search for a Phenomenological flavour

There is talk about a God spot in the brain, and areas which, when stimulated, provoke feelings of a "sensed presence" or of "being one with the universe".
This raises a philosophical question - do we want religion for how it feels? For its phenomenology? Does religion come with a type of aesthetic ideal about how one should feel about one's world, self and others? Does it come, necessarily, as a phenomenological search for meaning, protection or integration into a beautiful structure (i.e. an ideal society in which the others behave towards you, and you towards them, according to a moral code)?
And if it does, could we really blame it? Isn't love, and all the odes to it, the search for a particular phenomenological locus? Isn't the search for our vocation the search for the things that provoke in us the phenomenological taste of passion and devotion, of work that is poignant, that suits us and that we suit?
In the end, it's all about how we feel about things. The rewards that can be administered to us (and we do search for them) are many times internal - phenomenological flavours of doing the right thing, of reaching that type of being in control or in pure flow or elated or devoted or transcendental or *add your own* feeling.
But don't get me wrong, this is no simple or self-indulgent matter, as trying to make one's self feel right about something might be the hardest thing one ever attempts. Depending on what our exigencies and expectations are about, on what we think we should feel when we have the things that we want, when our life is going the right way, we might actually set ourselves up for quite a complicated challenge.
Trying to create the circumstances that fit one's phenomenological desires is a long process that oscillates between trying to make one's life fit one's phenomenological tastes, and refining those tastes through further exposure to art and ideas, or bringing those tastes "down to earth" through further exposure to real life and what we can really expect of it. There are also those people who ignore what they actually want, but they don't count here, as they are not playing.
How real are our goals to us, our phenomenological needs? Are they our creation? Are they important enough for us to strive to get them? Or are they just an aesthetic way of looking at the world - of what the world (mostly internal) could be like? Is asking that from the reality of life wasting resources that could be directed to more realistic needs?
Someone who has seen relatively accomplished and well-off people suffering from depression or schizophrenia can attest to the fact that we are not much without our phenomenology. So we do need to take care of it and understand it, as it is intrinsic to having a personality. Having a personality means having a view on the world, preferences, and all sorts of fancy ideas about how things feel and how we would like them to feel. But is that enough to solicit (of ourselves) a life of endeavour in trying to please one's particular phenomenological aesthetic sense?
People get lost in their own phenomenology all the time, and it's not always because they are not taking care of it; sometimes it's because they build up entire cathedrals of phenomenological expectation. And let's face it, not everybody is a Gaudí who can afford the internal or external resources to build those cathedrals (which is not what positivist mass-marketing would have you think, as supposedly you are to believe that you can have, build or be anything you like according to that phenomenological flavour - something just as absurd as thinking you have endless resources, but, let's face it, very tasty). Therefore it's probably wise to regularly revise how big and fanciful a cathedral you actually have the resources to build. It's also wise to remember that we build our phenomenological preferences out of bits and pieces that we pick up from the outside, as well as things that we process internally. So one doesn't need to feel insulted on discovering that a particular piece one thought quite intimate is present in the construction of a neighbour one doesn't particularly like, nor should one feel very lonely on discovering that one has different pieces than one's friends have.
After all, we live in times in which it is very trendy to be an individual, a personally constructed and opted-for Self (giving quite a different flavour to the concept of "a self-made man" - but we live in a different phenomenological time from the one in which that concept was first coined, proof of which being the fact that some might now frown upon the gender choice in that concept).
As this search for phenomenological flavours is trendy and in the spotlight, on a par with being aware of what one's search is all about, maybe you should ask yourself today: why am I doing the things that I'm doing - what is it that I am trying to feel? And maybe even more: are my phenomenological expectations healthy for me?

Thursday 6 May 2010

The cognitive power of satisfaction (I)

I have been, since childhood, obsessed with the power of satisfaction. Very worried that something in me might be satisfied with less than the things I really wanted. I think I was considering, even then, how being satisfied with something can make you stop there, repeat that cycle, stay there, build your home and habits, spend your time around the things that satisfy you.
Recently I realised how that related to some problems in cog sci.
We know that the limbic system adds emotional value to our experience. We also know that during the cognitive processing of certain stimuli, like the visual perception of an object, the brain sends signals back and forth to the limbic system. Why that happens we are not sure, but Ramachandran is one of those who think it might have something to do with our inability to easily get rid of an image, of an object, of a way of looking at a visual picture once we have discerned a way of looking at it that makes sense to us (that has coherence). Some of this applies to double images, in which one is prompted by some cues, groups the image into a certain structure, and then has to make a conscious effort to shake off the object thus perceived in order to pick up the cues and do the groupings for the other object.
If the constant appeal that processing makes to the limbic system is anything like obtaining a marker of satisfaction, or of added interest (or arousal, or another type of emotional value) for a particular found solution or possibility of coherence - allowing it to jump the queue, get our attention and be studied in further depth - we might build a theory of how priority is set in the brain and how that functions, and also of how we do some of our problem-searching.
We will discuss priority first. The problem of priority is relevant to some very disputed and as yet unsolved basic issues, like the frame problem.
In a world in which we knew how our brain prioritises information, we might not need to specify for an AI everything that remains unchanged, or make it consider all the elements of the environment. We might instead have danger cues in the agent's experience, sensation gathering data that is compared against those cues, and a priority system that only alerts the system when something significantly similar to a danger cue happens and needs the attention and decision-to-action of the higher cognitive functions.
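A toy sketch of that priority system, with cues, features and thresholds that are purely illustrative assumptions: incoming sensations are compared against stored danger cues, and only a sufficiently similar match interrupts whatever the higher cognitive functions were doing.

```python
# Toy priority system: compare sensations against stored danger cues and
# only interrupt when the similarity is high enough. All values invented.

DANGER_CUES = [
    {"loud": 1.0, "sudden": 1.0, "looming": 0.0},   # e.g. a bang
    {"loud": 0.2, "sudden": 0.5, "looming": 1.0},   # e.g. something approaching fast
]

ALERT_THRESHOLD = 0.8

def similarity(cue, sensation):
    """Crude overlap score between a stored cue and a current sensation."""
    return sum(min(cue[k], sensation.get(k, 0.0)) for k in cue) / sum(cue.values())

def priority_check(sensation):
    score = max(similarity(cue, sensation) for cue in DANGER_CUES)
    if score >= ALERT_THRESHOLD:
        return "ALERT - hand over to attention and decision-to-action"
    return "carry on with current task"

print(priority_check({"loud": 0.1, "sudden": 0.1, "looming": 0.0}))  # background hum
print(priority_check({"loud": 0.9, "sudden": 1.0, "looming": 0.1}))  # a door slams
```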
We all know the sensation of being suddenly alerted by something in the environment, without knowing exactly what prompted us. We know how it feels when something is wrong or different, although we need our perceptual abilities to take more time and search through the available stimuli until we get a more detailed report on what prompted our attention. But the feeling of arousal gets priority - we are stopped in our task and look around, and a little later it strikes us what is different, relevant, or dangerous in that picture.
The limbic system thus seems to work like a full-brake, full-ignition system. It can radically hijack our attention from the task at hand. And that ability of ours has negative consequences for people who live in very stressful environments. It is known that children and teenagers who live in anxiety-filled or unstable environments can't focus on learning and have much more difficulty achieving at school. One could say they have to put much more effort into making their constantly slipping attention focus on something that, for the brain, probably doesn't seem a priority at the time (considering the other things happening around them). Also, one knows how much calm is needed in order to absorb information, or to think clearly.
So our limbic alarm system can work against us - against our power of reasoning, against getting the necessary information, learning about the situation or about new things that could be applied to the situation at hand, and thus acting in a fully informed manner.
Which creates a paradox. It means that the alert system can put us in the position of focusing on the right thing, or give us the extra edge of motivation (adrenaline?) that we needed to achieve a task now, and faster than normal. But we need our calm to gather information, to learn (memorise, compare new info to old info and build new categories). So how do we get calm? Can a limbic response only hijack? Does it take an absence of emotional colouring for us to be able to process cognitively at full efficiency in some ways?
Or is calm itself an emotional colouring?

Time to wrap our thoughts again, in a tighter coat. Back to where we started: the limbic response that we feared was satisfaction. For me, satisfaction used to be a more powerful type of hijack, if it happened at the wrong moment, for the wrong reasons.
Because you wouldn't know any better - knowing is feeling, in some ways - and your feelings might prevail, and what you verbally knew might have no strength to fight them, or to take priority over them.
So how do we know if it works how it's supposed to? How do we know we are "right" to be satisfied when we are, and alerted when we are?
The cost of being alerted much more often than the danger warrants is small, compared with the cost of not being alerted when one has to be.
What about the costs of being satisfied?

If Ramachandran's hypothesis is true... one might imagine a neural network whose weights are updated, increased or decreased, through the strength of the limbic response. Thus, before any cues are picked up or any groupings achieved, features that match each other can attract a limbic boost so as to be further considered as the bases of a higher-order grouping. This might be repeated a number of times before the highest order of grouping is achieved. The cues themselves are only the first stage of processing, when some features have already fitted together neatly into mid-level structures like edges, vertices and contours. The highest order of grouping means getting a meaningful object, and having enough information about the area to know there isn't any other object around, or not needing more information (in the case of the pictures, one can assume no other objects exist when one has already found one covering a significant amount of the space).
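A small sketch of the hypothesis as I read it (the network, the boost values and the update rule are all my own assumptions for illustration, not Ramachandran's model): weights between feature units are strengthened in proportion to the limbic boost attached to a candidate grouping, so groupings that felt coherent last time win the queue faster next time.

```python
# Limbic-modulated weight update between feature units (toy assumptions).

N = 6                                  # feature units
w = [[0.0] * N for _ in range(N)]      # pairwise grouping weights
LEARNING_RATE = 0.1

def limbic_boost(grouping, coherence):
    """Reward signal: higher when the candidate grouping feels coherent."""
    return coherence * len(grouping)

def reinforce(grouping, coherence):
    boost = limbic_boost(grouping, coherence)
    for i in grouping:
        for j in grouping:
            if i != j:
                # features active together in a satisfying grouping
                # become more likely to be grouped again next time
                w[i][j] += LEARNING_RATE * boost

# First interpretation of an ambiguous figure: features 0-2 cohere well.
reinforce([0, 1, 2], coherence=0.9)
# Rival interpretation uses features 2-4 but feels less coherent.
reinforce([2, 3, 4], coherence=0.3)

print("0-1 link:", round(w[0][1], 3))  # strongly reinforced
print("3-4 link:", round(w[3][4], 3))  # weakly reinforced - harder to "see"
```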

The trick with double images is that they both cover the same space, they just use different cues, or the same cues to have different functions in the various unitary structures.

The reinforcement through limbic response, and the persistence of the image reinforced might explain other things as well.
Let's also consider what that means from a thought-processing perspective. It means that our meanings are in our heads. Not that they can't be outside as well; they just need to be built in our heads in order to exist for us. We could potentially lose them, if they are not reinforced properly. Some might get reinforced but not necessarily be useful in future processes of categorisation and interpretation of events.
That is one of the reasons why we have psychotherapies - for people to get rid of some of their own ways of interpreting things, of associating present things with bad meanings from their past. Learning to see new images, new structures of meaning in our life is sometimes a very important lesson, the thing that motivates us to walk further.

Saturday 20 February 2010

The Cartesian Divide(I)

Descartes used to think that the body works like a machine and is controlled by the soul, which is a non-material entity. This move towards comparing the human body to a mechanism couldn't have been possible without the emergence and flourishing of automaton-building. We can compare Descartes' leap of thought in making that comparison to the leap of thought that laid the basis of cognitive science: comparing the human mind to a machine.

So, what is the difference between the two moves? Descartes only had automatons that could mimic movement and behaviour, so he could only assimilate the human body to the concept of a mechanism (which can be considered the basis of the robotics field). Modern cognitive science had the entire realm of information processing by machines to assimilate the human mind to. We could actually say that both analogies belong to the same kind of thinking, that they are two complementary steps in the same direction.

But as Descartes didn't have anything to stand in for "mind", or anything more scientifically minded to compare it with, he assimilated it to a non-material substance, different from the mechanistic body. And that mainly because the human mind doesn't come in an external form and is hard to measure, although its results and actions can be clearly seen - and in fact a body without a mind wouldn't make much sense to us, not in a human world anyway. So it's obvious why Descartes and his contemporaries would regard this "essence" as having no weight, no particular form, no colour - i.e. as being spiritual in nature.

A little aside on the way we form concepts here, which will become more and more obviously important...

We tend to grasp things through our perception of them. We form concepts considering their qualities and the categories they belong or are related to, and the history of humanity is perhaps, in a way, a history of evolving concepts. If you lived in an era where most of the concepts you acquired were based on the representation of, or comparison to, the objects existing then, your knowledge base would be somewhat limited by that. So imagine you lived in the 17th century, and you had a substance - let's call it mind - that you couldn't see, or measure form-wise or colour-wise, that you couldn't observe with your own senses, although you could see its manifestations through indirect means: people acting in different ways and being "possessed" by different "spirits" (passions, moods), some being more apt and resourceful and creative in various fields of endeavour than others, people coming out of the blue with unexpected actions (planned somewhere inside themselves in an invisible, perhaps threatening, if not merely surprising way). If you saw all of this, and you didn't know what to call it, wouldn't you call it a "non-material" substance, just because you couldn't reduce it to something you could perceive with your own senses?

I probably would. And it makes more sense to think of the development of concepts through a historical perspective. Also, I think analogy, metaphor and symbolism, far from being useful only for poetry or the various arts, are also the tools that have helped us make progress in science.

We tend to get a different view of a concept by comparing it to another one - for Descartes, the way he could embed information from the concept of automatons in his thinking about the human body clearly enriched the latter concept, and it enriched it for us as well. (Perhaps I'll post soon on concept shaping and enrichment.)

Now, in modern times, we have a beautiful comparison to make - the one to the personal computer. The existence of software would perhaps have helped Descartes' contemporaries, who found it hard to imagine an intelligent thing that would have no weight or shape, yet would not be supernatural. Software is clearly not supernatural or even spiritual (if you have ever heard a stuck programmer swearing - otherwise an analytical and calm, therefore quite civil, species), although it is all about organising information and using different commands or information tools to act upon it. Quite a resourceful analogy for CogSci to draw upon.

Today most of us assume a physicalist position – that means we think intelligence, personality and soul are physically grounded, and we don’t believe in substance dualism anymore – meaning we don’t think there is a physical substance making up our body and a spiritual type of other substance making up the essence of life (that latter belief is called vitalism).

However, there is still a big debate about the fact that we might still function in Cartesian terms, and think about the mind as a different thing from the body. I will try to explain why that is not only unavoidable in some ways, but perfectly natural if we think about how our culture has evolved, and how concepts are born. (to be continued....)

Thursday 18 February 2010

Multiagent systems - do we know the full story?(II)

I want to emphasise that, in the real world, there is no simple agent-environment play, because the agent is not born knowing his needs. With the complexity of the human psyche, the human agent might learn to ignore his needs, and sometimes ignore the wrong ones, putting up with pressure from the environment and not really satisfying himself.
I also think that defining this additional space of constant negotiation with the internal self and its needs is important, as it would make the interaction with the environment more realistic, if properly modelled.
We assume normal agents have goals, but how they synthesise goals from the plethora of needs to be satisfied hasn't been defined yet. In fact the term goal is not a primitive term, but a constructed one. We have needs, we see circumstances around us, we classify them in order of their possible outcomes and of the needs they can satisfy, and then we define a goal - which is an image, a hybrid between our need and the circumstance that we think is going to satisfy it best.
As an example, we are motivated to go to work because we need to eat and have a roof over our heads, and (but not necessarily) because we need to feel useful and valuable and to achieve something. Quite a number of our needs might be tied up in our going to work, but going to work is a circumstance which we chose in order to satisfy those needs. The same circumstance can act as both a satisfier and a frustrating agent - I might be very satisfied with the pay of my job, so that could cover my needs of feeding myself and keeping a roof over my head quite nicely, but I can feel that, because of having to satisfy those needs, I occupy my time with a job that doesn't let me follow my own direction, use my talents, develop my strengths, and focus on my real interests. This is more like the human complexity we encounter than the agent's simplified world, and not only because we made the environment more complex, but also because we made the internal scenery of the agent more refined and complex.
The algorithm for choosing which needs to satisfy with maximum utility could therefore be a very complex one, considering the number of choices we can make in our very resourceful environments, and the fact that we can't just compute outcomes for a specific set of choices without taking into consideration that the environment itself might change. And we are not even discussing dramatic changes. Small changes can make quite a difference.
Let's say you are working in that utterly boring job to get the money to put yourself through uni, but what you can really save up every month is about 10% of your earnings. You also know how much uni is going to cost you, and that it's going to take you a couple of years to save up for it. In this case, being frustrated and having to take lots of days off work, or simply spending more on things that you like, that make you feel that you still like yourself enough to get the real things that you want, might work against you, and you might end up not putting that money aside, although you are still putting yourself through the frustration of the job. What happens here? The need to do what you want to do is overwhelming you, and without being aware of it you are satisfying it in a different way. You promised yourself to stay in that frustrating situation so that you could give yourself the expected reward, but the circumstance frustrates you more than you thought it would, and you are really not making much progress towards your goals at all.
That is all because we humans don't have perfect self-control, so we always need to make assumptions about how many internal resources we still have - and that is not only how much energy we have to work before being hungry again, or how many hours before needing to sleep; it has to do with many other psychological components that are much harder to compute.
So how do we orient ourselves in this fuzzy environment? At this point, the internal environment seems to be creating many more problems for us than the external one, which at least is out there - we can look at it, measure it, it seems more objective.
Well, we learn. We learn ourselves in time, and we are even taught about ourselves by learning about other people's experience of dealing with themselves. That is perhaps why we see this blossoming literature on self-help. Science has dealt a lot with our problems of getting resources out of the environment, so we are quite enlightened and can think in non-primitive, non-religious terms about getting our food, building our shelters, keeping ourselves safe from bad weather. But in terms of our dealings with our own selves, we can be quite primitive.
That is why we still look with admiration at many forms of ancient spirituality - they tell us a lot about how other people have dealt with their own selves, and I think not only psychology, but cog sci as well, should be about that. Because we don't only have to deal with our beliefs, emotions and needs, but also with our performance, with internal ways of mobilising our resources, our creativity, our intelligence, of understanding our own thought processes. That is why cog sci is only at its beginning: all these realms are not yet explored in a scientific way.
If the shaman used to be the institution (grin) assigned to weather control in the past, and we find that funny now, our grandkids might find it funny that we used to assign self-knowledge and self-discipline to various kinds of spiritual and religious movements. The cognitive scientist and the psychologist might then be the people for the job of helping you explore the resources of your own intelligence and personality.
Many things could be said about human goals, and perhaps I will say more in a different post. What I want to do now is link all those things to the way an agent has its first experiences, and how that could be modelled more realistically.
We ourselves come into this world with certain needs, which might be a reflection of our physicality or of our personality (which of course can be physically grounded, but has been studied in conceptual, abstract terms, so it makes more sense to us to talk about it in those terms anyway - more on the Cartesian divide in a different post).
So why should agents be different? I think it's ok to pre-program needs in our agents. I just don't think it's ok to program goals.
I think a high dose of realism would be added if an agent made up its own mind about its goals, and I will soon put together a little programming example of that. The freedom to decide on your own goals requires, though, a better interface for interacting with your environment, and for understanding (gosh, I've used the big U word) which objects of that environment can help or hinder your needs and survival.
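Until that example is ready, here is a first, minimal sketch of the idea (my own illustration, with every name and number made up): the agent is given needs, not goals; goals emerge as need-circumstance pairs once the agent has explored the environment and learned, roughly, how well each circumstance satisfies each need.

```python
# An agent with pre-programmed needs that synthesises its own goals.
# All names, numbers and the environment are illustrative assumptions.

from collections import defaultdict
import random

random.seed(2)

NEEDS = {"food": 0.9, "shelter": 0.6, "self_expression": 0.4}  # urgency

# The "true" satisfaction values live in the environment; the agent does
# not know them and must estimate them through experience.
ENVIRONMENT = {
    "office_job":   {"food": 0.8, "shelter": 0.8, "self_expression": 0.1},
    "street_music": {"food": 0.2, "shelter": 0.1, "self_expression": 0.9},
}

estimates = defaultdict(lambda: defaultdict(float))

def explore(trials=30):
    """Try circumstances at random and update satisfaction estimates."""
    for _ in range(trials):
        circ = random.choice(list(ENVIRONMENT))
        for need, true_value in ENVIRONMENT[circ].items():
            observed = true_value + random.uniform(-0.1, 0.1)
            # running average toward what was actually experienced
            estimates[circ][need] += 0.2 * (observed - estimates[circ][need])

def synthesise_goal():
    """A goal = the most urgent need paired with the circumstance the
    agent currently believes satisfies it best."""
    need = max(NEEDS, key=NEEDS.get)
    circ = max(estimates, key=lambda c: estimates[c][need])
    return need, circ

explore()
print("goal:", synthesise_goal())
```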

Wednesday 17 February 2010

Multiagent systems - do we know the full story?(I)


Multiagent systems are (usually) virtual worlds in which agents (i.e. entities capable of independent, autonomous action) pursue goals, interact with each other, cooperate, defect, and communicate. A very interesting applied field of game theory and social theories, they are supposed to bring us more than relevant conclusions about how agents can best apply strategies to accomplish their goals (mainly a game-theory interest), or whether they would be better off communicating and cooperating, or defecting and being deceptive (a social-theory interest).
Nor are they only interesting as a way to develop smarter and more realistic future behaviour for AI agents in games (hope you were not thinking WoW).

Their main interesting feature, from my point of view at least, is their ability to negotiate on our behalf. Having my software agent, representing my financial interests, with my financial history, negotiate with my bank's agent for a low-interest mortgage sounds pretty amazing to me. Also, I could definitely benefit from an agent that would gather information about news, books and forms of entertainment that match my interests and the possibilities of my budget, cooperating, negotiating and sometimes just saying NO to all the advertising agents of the companies that might try to sell me those services in London. What about an agent that would measure my stress levels, go "out there", into the virtual wilderness, organise a surprisingly refreshing day, and then treat me to the programme of what I have to do for that day that very morning? Well, maybe I'm thinking Data :)


But these little things would apply to business as much as to entertainment. What about an agent carrying my CV, applying to all the jobs that would potentially interest me, analysing the wits of the other agents applying, and coming up with a better strategy/cover letter? I would definitely not mind never again completing another 8-page application form that is just a bit different from the others, so I can't use a template, for the rest of my life :). Would that be bad for HR jobs? Well, they can stay human, I don't really mind.


But taking a look at how agents are now, and comparing them with human agency, some differences strike me. I'm not saying that these differences have to be bridged before uses can be collected from the field, but I must make a little comparison, and see what's missing, what is not human in the agents' world, how more levels of realism could be implemented, and whether they would help at all and have practical applications.
First of all, in a real world, like mine and yours, the system isn't just a collection of states, and we don't even know our goals for sure.


If I were an agent that simulated human experience better, I would be born without much knowledge of what systems and states are, and would start exploring the world on my own. I would move, because I can, and because that would be part of self-exploration. I would feel the demands that my body makes of me, and try to satisfy them. As an agent, I would not only negotiate the satisfaction of my goals with the external environment, but I would also negotiate the best route to take with myself. Sometimes, I might impose restrictions on myself in order to satisfy requests from my environment.


I would definitely not live in a space where all my goals and their utilities are defined. After all, one of the tricky parts of achieving happiness is sometimes not my ability to pursue goals, nor my ability to make them come true in the environment, but my ability to know, like a mighty precog, which one of those choices is more in tune with my needs of the present and future.


So, as far as applications go, I can see the agent as occupying the middle box in a three-fold space, made up of: himself - the reasonably smart, trying-to-get-it-right, cute hero; his needs - which he has to take into consideration and know quite a bit about, and which might be nasty and rebellious when they are not satisfied; and the indifferent environment.
As for the interaction with other agents and their agenda, we haven't even started yet :D.