Saturday 20 February 2010

The Cartesian Divide (I)

Descartes thought that the body works like a machine and is controlled by the soul, a non-material entity. This move towards comparing the human body to a mechanism would hardly have been possible without the emergence and flourishing of automaton-building. We can compare Descartes' leap of thought in making that comparison to the leap of thought that laid the basis of cognitive science: comparing the human mind to a machine.

So, what is the difference between the two moves? Descartes only had automatons that could mimic movement and behaviour, so he could only assimilate the human body to the concept of a mechanism (which can be considered the basis for the robotics field). Modern cognitive science had the entire realm of information processing by machines to assimilate the human mind to. We could actually say that both analogies belong to the same kind of thinking, that they are two complementary steps in the same direction.

But as Descartes didn't have anything to stand in for "mind", or anything more scientifically minded to compare it with, he assimilated it to a non-material substance, different from the mechanistic body. And that is mainly because the human mind doesn't come in an external form and is hard to measure, although its results and actions can be clearly seen; and in fact a body without a mind wouldn't make much sense to us, not in a human world anyway. So it's obvious why Descartes and his contemporaries would regard this "essence" as having no weight, no particular form, no colour - i.e. being spiritual in nature.

A little aside here on the way we form concepts, which will become more and more obviously important...

We tend to grasp things through our perception of them. We form concepts by considering their qualities and the categories they belong or relate to, and the history of humanity is perhaps, in a way, a history of evolving concepts. If you lived in an era where most of the concepts you acquired were based on representations of, or comparisons to, the objects existing then, your knowledge base would be somewhat limited by that. So imagine you lived in the 17th century, and there was a substance, let's call it mind, that you couldn't see, couldn't measure in form or colour, couldn't observe with your own senses at all, although you could see its manifestations through indirect means - people acting in different ways and being "possessed" by different "spirits" (passions, moods), some being more apt, resourceful and creative in various fields of endeavour than others, people coming out of the blue with unexpected actions (planned somewhere inside themselves in an invisible, perhaps threatening, or at least surprising way). If you saw all of this and didn't know what to call it, wouldn't you call it a "non-material" substance, just because you can't reduce it to something you can perceive with your own senses?

I probably would. And it makes more sense to think of the development of concepts from a historical perspective. Also, I think analogy, metaphor and symbolism, far from being useful only for poetry or the arts, are also tools that have helped us make progress in science.

We tend to get a different view of a concept by comparing it to another one - the way Descartes could embed information from the concept of the automaton in his thinking about the human body clearly enriched the latter concept, and it enriched it for us as well (perhaps I'll post soon on concept shaping and enrichment).

Now, in modern times, we have a beautiful comparison to make - the one to the personal computer. The existence of software would perhaps have helped Descartes' contemporaries, who found it hard to imagine an intelligent thing that would have no weight or shape, yet would not be supernatural. Software is clearly not supernatural or even spiritual (if you have ever heard a stuck programmer swearing - otherwise an analytical and calm, therefore quite civil, species), although it is all about organising information and using different commands or information tools to act upon it. Quite a resourceful analogy for CogSci to draw upon.

Today most of us assume a physicalist position - that means we think intelligence, personality and soul are physically grounded, and we don't believe in substance dualism anymore - meaning we don't believe in a physical substance making up our body and a spiritual substance of another kind making up the essence of life (that latter belief is called vitalism).

However, there is still a big debate about whether we still function in Cartesian terms and think about the mind as a different thing from the body. I will try to explain why that is not only unavoidable in some ways, but perfectly natural if we think about how our culture has evolved and how concepts are born. (to be continued....)

Thursday 18 February 2010

Multiagent systems - do we know the full story? (II)

I want to emphasize that, in a real world, there is no simple agent-environment play, because the agent is not born knowing his needs. With the complexity of the human psyche, the human agent might learn to ignore his needs, and sometimes ignore the wrong ones, putting up with pressure from the environment and not really satisfying himself.
I also think that defining this additional space of constant negotiation with the internal self and its needs is important, as it would make the interaction with the environment more realistic, if it were properly modeled.
We assume normal agents have goals, but how they synthesize goals from the plethora of needs to be satisfied hasn't been defined yet. In fact the term goal is not a primitive term, but a constructed one. We have needs, we see circumstances around us, we rank them by their possible outcomes and by the needs they can satisfy, and then we define a goal - which is an image, a hybrid between our need and the circumstance that we think is going to satisfy it best.
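To make that idea a bit more concrete, here is a minimal sketch of what "constructing" a goal could look like in code. The names (Need, Circumstance, synthesize_goal) and the scoring rule are purely my own illustrative assumptions, not an existing agent framework:

```python
from dataclasses import dataclass

@dataclass
class Need:
    name: str
    urgency: float                 # 0..1, how pressing the need feels right now

@dataclass
class Circumstance:
    name: str
    expected_satisfaction: dict    # need name -> how well this circumstance satisfies it (0..1)

@dataclass
class Goal:
    need: Need
    circumstance: Circumstance
    score: float

def synthesize_goal(needs, circumstances):
    """A goal is not primitive: it is a pairing of a need with the
    circumstance we currently believe will satisfy it best."""
    best = None
    for need in needs:
        for circ in circumstances:
            score = need.urgency * circ.expected_satisfaction.get(need.name, 0.0)
            if best is None or score > best.score:
                best = Goal(need, circ, score)
    return best

needs = [Need("food", 0.8), Need("meaning", 0.6)]
circumstances = [
    Circumstance("go to work", {"food": 0.9, "meaning": 0.3}),
    Circumstance("paint all day", {"food": 0.1, "meaning": 0.9}),
]
print(synthesize_goal(needs, circumstances))   # -> the "go to work" / "food" pairing
```

The point of the toy scoring rule is only that the goal falls out of the pairing; nothing in the sketch is a claim about how the weighting should actually be done.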
As an example, we are motivated to go to work because we need to eat and have a roof over our heads, and (but not necessarily) because we need to feel useful and valuable and to achieve something. Quite a number of our needs might be tied up in our going to work, but going to work is a circumstance we chose in order to satisfy those needs. The same circumstance can act both as a satisfier and as a frustrating agent - I might be very satisfied with the pay of my job, so that covers my need to feed myself and keep a roof over my head quite nicely, but I can feel that, because of having to satisfy those needs, I occupy my time with a job that doesn't let me follow my own direction, use my talents, develop my strengths, and focus on my real interests. This is more like the human complexity we encounter than the agent's simplified world - not because we made the environment more complex, but because we made the internal scenery of the agent more refined and complex.
The algorithm for choosing which needs to satisfy with maximum utility could therefore be a very complex one, considering the number of choices we can make in our very resourceful environments, and the fact that we can't just compute outcomes for a fixed set of choices without taking into account that the environment itself might change. And we are not even discussing dramatic changes. Small changes can make quite a difference.
Let's say you are working in that utterly boring job to get the money to put yourself through uni, but what you can really save up every month is about 10% of your earnings. You also know how much uni is gonna cost you, and that it's gonna take you a couple of years to save up for it. In this case, being frustrated and having to take lots of days off work, or simply spending more on things you like - things that make you feel you still like yourself enough to get the real things you want - might work against you, and you might end up not putting aside that money, although you are still putting yourself through the frustration of the job. What happens here? The need to do what you want to do is overwhelming you, and without being aware of it you are satisfying it in a different way. You promised yourself to stay in that frustrating situation so that you could give yourself the expected reward, but the circumstance frustrates you more than you anticipated, and you are really not making much progress towards your goals at all.
That is all because we humans don't have perfect self-control, so we always need to make assumptions about how many internal resources we still have - and that is not only how much energy we have left before being hungry again, or how many hours before needing to sleep; it has to do with many other psychological components that are so much harder to compute.
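One way to caricature that in code is to discount the value of each plan by a guess at how likely we are to actually stick with it, given our uncertain internal resources. All of the names and numbers below are invented for illustration; this is a sketch of the idea, not a claim about how any real system computes it:

```python
import random

def expected_payoffs(plans, willpower_estimate, trials=10_000):
    """plans: name -> (payoff if completed, daily willpower drain, number of days).
    The payoff is only collected if willpower doesn't run out first, and the
    estimate of our own willpower is itself noisy - we never know it exactly."""
    results = {}
    for name, (payoff, drain, days) in plans.items():
        completed = 0
        for _ in range(trials):
            willpower = random.gauss(willpower_estimate, 0.15)  # uncertain self-knowledge
            if willpower - drain * days > 0:
                completed += 1
        results[name] = payoff * completed / trials
    return results

plans = {
    "boring job, save for uni": (100.0, 0.02, 30),   # big payoff, steady drain
    "easier job, save less":    (60.0,  0.01, 30),   # smaller payoff, easier to sustain
}
print(expected_payoffs(plans, willpower_estimate=0.7))
```

The interesting output is that the "best" plan on paper loses part of its lead once imperfect self-control is priced in.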
So how do we orient ourselves in this fuzzy environment? At this point, the internal environment seems to be creating far more problems for us than the external one, which at least is out there - we can look at it, measure it, it seems more objective.
Well, we learn. We learn ourselves in time, and we are even taught about ourselves by learning about other people's experience of dealing with themselves. That is perhaps why we see this blossoming literature on self-help. Science has dealt a lot with our problems of getting resources out of the environment, so we are quite enlightened and can think in non-primitive, non-religious terms about getting our food, building our shelters, keeping ourselves safe from bad weather. But in terms of our dealings with our own selves, we can be quite primitive.
That is why we still look with admiration at many forms of ancient spirituality - they tell us a lot about how other people have dealt with their own selves, and I think not only psychology but also cog sci should be about that. Because we don't only have to deal with our beliefs, emotions and needs, but also with our performance, with internal ways of mobilising our resources, our creativity, our intelligence, of understanding our own thought processes. That is why cog sci is only at its beginning: all these realms are still unexplored, at least in any scientific way.
If the shaman used to be the institution (grin) in charge of weather control, and we find that funny now, our grandkids might find it funny that in the past we used to assign self-knowledge and self-discipline to different kinds of spiritual and religious movements. The cognitive scientist and the psychologist might then be the people for the job of helping you explore the resources of your own intelligence and personality.
Many things could be said about human goals, and perhaps I will say more in a different post. What I want to do now is link all those things to the way an agent has its first experiences, and how that could be modeled more realistically.
We ourselves come into this world with certain needs, which might be a reflection of our physicality or of our personality (which of course can be physically grounded, but has been studied in abstract conceptual terms, so it makes more sense to us to talk about it in those terms anyway - more on the Cartesian divide in a different post).
So why should agents be different? I think it's ok to pre-program needs in our agents. I just don't think it's ok to program goals.
I think a high dose of realism would be added if an agent made up his own mind about his goals, and I will soon put together a little programming example of that. The freedom to decide on your own goals does require, though, a better interface for interacting with your environment, and for understanding (gosh, I've used the big U word) what objects of that environment can help or hinder your needs and survival.
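Until I get to that example, here is the rough shape of it as I imagine it: only the needs are given to the agent, and the goals are made up at run time from whatever it currently perceives. Everything here (the class names, the perceive/decide split, the toy world) is a placeholder of my own, not code from any library:

```python
class Agent:
    def __init__(self, needs):
        # needs are pre-programmed: name -> urgency (0..1); goals are not
        self.needs = dict(needs)
        self.goal = None

    def perceive(self, environment):
        """Look around and get a (crude, possibly wrong) estimate of how much
        each visible object could satisfy each need."""
        return {obj: environment.affordances(obj) for obj in environment.visible()}

    def decide(self, environment):
        """No goals are programmed in: the agent constructs one on the fly
        from its needs and whatever it currently perceives."""
        beliefs = self.perceive(environment)
        best, best_score = None, 0.0
        for obj, satisfies in beliefs.items():
            for need, urgency in self.needs.items():
                score = urgency * satisfies.get(need, 0.0)
                if score > best_score:
                    best, best_score = (need, obj), score
        self.goal = best
        return self.goal

class ToyWorld:
    def visible(self):
        return ["fridge", "bed", "desk"]
    def affordances(self, obj):
        return {"fridge": {"food": 0.9},
                "bed":    {"rest": 0.8},
                "desk":   {"meaning": 0.5}}[obj]

agent = Agent({"food": 0.4, "rest": 0.7, "meaning": 0.6})
print(agent.decide(ToyWorld()))   # -> ('rest', 'bed'): a goal the agent made up itself
```

The design choice I care about is simply that `Agent.__init__` takes needs and nothing else; whatever ends up in `self.goal` is the agent's own construction.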

Wednesday 17 February 2010

Multiagent systems - do we know the full story? (I)


Multiagent systems are (usually) virtual worlds in which agents (i.e. entities capable of independent, autonomous action) pursue goals, interact with each other, cooperate, defect, communicate. A very interesting applied field for game theory and social theories, they are supposed to bring us more than relevant conclusions about how agents can best apply strategies to accomplish their goals (mainly a game-theory interest), or about whether they would be better off communicating and cooperating, or defecting and being deceptive (a social-theory interest).
Nor are they only interesting as a way to develop smarter and more realistic behaviour for future AI agents in games (hope you were not thinking WoW).

Their main interesting feature, from my point of view at least, is their ability to negotiate on our behalf. Having my software agent, representing my financial interests and carrying my financial history, negotiate with my bank's agent for a low-interest mortgage sounds pretty amazing to me. I could also definitely benefit from an agent that would gather information about news, books and forms of entertainment that match my interests and the possibilities of my budget, cooperating, negotiating and sometimes just saying NO to all the advertising agents of the companies that might try to sell me those services in London. What about an agent that would measure my stress levels, go "out there" into the virtual wilderness, organise a surprisingly refreshing day, and then treat me to the programme for that day on that very morning? Well, maybe I'm thinking Data :)


But these little thingies would apply to business as much as to entertainment. What about an agent carrying my CV, applying to all the jobs that might potentially interest me, analysing the wits of the other agents applying, and coming up with a better strategy/cover letter? I would definitely not mind never again completing another 8-page application form that is just different enough from the others that I can't use a template :). Would that be bad for HR jobs? Well, those can stay human, I don't really mind.


But taking a look at how agents are now, and comparing them with human agency, some differences strike me. I'm not saying these differences have to be bridged before the field can yield useful applications, but I want to make a few comparisons and see what's missing, what is not human in the agents' world, how more levels of realism could be implemented, and whether they would help at all and have practical applications.
First of all, in a real world, like mine and yours, the system isn't just a collection of states, and we don't even know our goals for sure.


If I were an agent that simulated the human experience better, I would be born without much knowledge of what systems and states are, and would start exploring the world on my own. I would move, because I can, and because that would be part of exploring myself. I would feel the demands that my body makes of me, and try to satisfy them. As an agent, I would not only negotiate the satisfaction of my goals with the external environment, but I would also negotiate the best route to take with myself. Sometimes, I might impose restrictions on myself in order to satisfy requests of my environment.


I would definitely not live in a space where all my goals and their utilities are defined. After all, one of the tricky parts of achieving happiness is sometimes not my ability to pursue goals, or my ability to make them come true in the environment, but my ability to know, like a mighty precog, which one of those choices is more in tune with my needs of the present and the future.


So, as far as applications go, I can see the agent as occupying the middle box in a three-part space, made of: himself - the reasonably smart, trying-to-get-it-right, cute hero; his needs - which he has to take into consideration and know quite a bit about, and which might turn nasty and rebellious when they are not satisfied; and the indifferent environment. A little sketch of that structure is below.
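The sketch, for what it's worth: the three parts as three classes, with the agent in the middle. The names and numbers are my own placeholders for the structure, not a proposal for an implementation:

```python
class Needs:
    """The inner party: it has to be known about, and it turns nasty when ignored."""
    def __init__(self, urgencies):
        self.urgencies = dict(urgencies)
    def grumble(self):
        # unattended needs slowly get louder
        for name in self.urgencies:
            self.urgencies[name] = min(1.0, self.urgencies[name] + 0.1)

class Environment:
    """The indifferent outside world: it offers things but doesn't care."""
    def offers(self):
        return {"work": {"food": 0.9}, "walk": {"rest": 0.6}}

class Agent:
    """The middle box: negotiates between its needs and the environment."""
    def __init__(self, needs, environment):
        self.needs, self.environment = needs, environment
    def step(self):
        offers = self.environment.offers()
        choice = max(offers, key=lambda o: sum(
            self.needs.urgencies.get(n, 0.0) * v for n, v in offers[o].items()))
        self.needs.grumble()
        return choice

agent = Agent(Needs({"food": 0.5, "rest": 0.3}), Environment())
print(agent.step())   # -> 'work'
```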
As for the interaction with other agents and their agenda, we haven't even started yet :D.