
Thursday, 18 February 2010

Multiagent systems - do we know the full story? (II)

I want to emphasize that, in the real world, there is no simple agent-environment play, because the agent is not born knowing his needs. With the complexity of the human psyche, the human agent might learn to ignore his needs, and sometimes ignore the wrong ones, putting up with pressure from the environment without really satisfying himself.
I also think it is important to define this additional space of constant negotiation with the internal self and its needs, because, properly modeled, it makes the interaction with the environment more realistic.
We assume normal agents have goals, but how they synthesize goals from the plethora of needs to be satisfied hasn't been defined yet. In fact, the term goal is not a primitive term but a constructed one. We have needs, we see circumstances around us, we classify them by their possible outcomes and the needs they can satisfy, and then we define a goal, which is an image, a hybrid between our need and the circumstance that we think is going to satisfy it best.
As an example, we are motivated to go to work because we need to eat and have a roof over our heads, and (but not necessarily) because we need to feel useful and valuable and achieve something. Quite a number of our needs might be tied up in our going to work, but going to work is a circumstance which we chose in order to satisfy those needs. The same circumstance can act as both a satisfier and a frustrator: I might be very satisfied with the pay of my job, so it covers my needs to feed myself and keep a roof over my head quite nicely, but I can feel that, because I have to satisfy those needs, I occupy my time with a job that doesn't let me follow my own path, use my talents, develop my strengths, and focus on my real interests. This is more like the human complexity we encounter than the agent's simplified world, and not only because we made the environment more complex, but also because we made the internal scenery of the agent more refined and complex.
The algorithm for choosing which needs to satisfy with maximum utility could therefore be a very complex one, considering the number of choices we can make in our very resourceful environments, and the fact that we can't just compute outcomes for a specific set of choices without taking into account that the environment itself might change. And we are not even discussing dramatic changes. Small changes can make quite a difference.
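The construction I'm describing, needs weighted by urgency, circumstances scored by their effects, and a goal as the best need-circumstance hybrid, can be sketched in a few lines. This is a toy model; every name and number below is invented for illustration, and a real agent would of course have to learn these effect tables rather than read them off.

```python
# A toy model of goal construction; all names and numbers are invented.
# Needs carry an urgency; each circumstance has an effect on each need
# (positive = satisfier, negative = frustrator); the "goal" is the hybrid:
# the circumstance whose net effect on the needs is best.

needs = {"food": 0.9, "shelter": 0.8, "self_realisation": 0.6}  # urgencies

circumstances = {
    "office_job": {"food": 0.8, "shelter": 0.7, "self_realisation": -0.5},
    "freelance":  {"food": 0.4, "shelter": 0.3, "self_realisation": 0.7},
}

def utility(circumstance):
    """Net expected satisfaction, weighted by how urgent each need is."""
    effects = circumstances[circumstance]
    return sum(needs[n] * effects.get(n, 0.0) for n in needs)

# the constructed goal: not a primitive, but derived from needs + circumstances
goal = max(circumstances, key=utility)  # here: "freelance" (1.02 vs 0.98)
```

Note how the same circumstance (the office job) is both a satisfier and a frustrator, exactly the dual role described above; and if the environment changed any of the effect tables, the goal would have to be re-derived, which is where the real complexity starts.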
Let's say you are working in that utterly boring job to get the money to put yourself through uni, but all you can really save each month is about 10% of your earnings. You also know how much uni is gonna cost you, and that it's gonna take you a couple of years to save up for it. In that case, being frustrated and taking lots of days off work, or simply spending more on things you like, things that make you feel you still like yourself enough to deserve the real things you want, might work against you: you might end up not putting that money aside, even though you are still putting yourself through the frustration of the job. What happens here? The need to do what you want to do is overwhelming you, and without being aware of it you are satisfying it in a different way. You promised yourself to stay in that frustrating situation so that you could give yourself the expected reward, but the circumstance frustrates you more than you thought it would, and you are really not making much progress towards your goals at all.
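With some hypothetical numbers (nothing in the scenario fixes them), the arithmetic of that trap is easy to sketch: frustration spending comes straight out of the 10% you meant to save, so a modest leak doubles the wait, and a bigger one stalls it forever.

```python
import math

# Hypothetical numbers for the scenario above: save 10% of earnings each
# month, minus whatever extra you spend consoling yourself about the job.

def months_to_save(target, monthly_pay, save_rate=0.10, frustration_spend=0.0):
    saved_per_month = monthly_pay * save_rate - frustration_spend
    if saved_per_month <= 0:
        return None  # you never get there: the need wins every month
    return math.ceil(target / saved_per_month)

# e.g. a 12,000 tuition bill on 2,000/month:
months_to_save(12000, 2000)                         # 60 months: five years
months_to_save(12000, 2000, frustration_spend=100)  # 120 months: a decade
months_to_save(12000, 2000, frustration_spend=200)  # None: nothing saved
```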
That is all because we humans don't have perfect self-control, so we always need to make assumptions about how many internal resources we have left. And that is not just how much energy we have before being hungry again, or how many hours before we need to sleep; it involves many other psychological components that are much harder to compute.
So how do we orient ourselves in this fuzzy environment? At this point, the internal environment seems to be creating many more problems for us than the external one, which at least is out there: we can look at it, measure it, it seems more objective.
Well, we learn. We learn ourselves over time, and we are even taught about ourselves by learning about other people's experience of dealing with themselves. That is perhaps why we see this blossoming literature of self-help. Science has dealt a lot with our problems of getting resources out of the environment, so we are quite enlightened and can think in non-primitive, non-religious terms about getting our food, building our shelters, keeping ourselves safe from bad weather. But in our dealings with our own selves, we can still be quite primitive.
That is why we still look admiringly at many forms of ancient spirituality: they tell us a lot about how other people have dealt with their own selves, and I think not only psychology but cog sci should be about that as well. Because we don't only have to deal with our beliefs, emotions and needs, but also with our performance, with internal ways of mobilising our resources, our creativity, our intelligence, of understanding our own thought processes. That is why cog sci is only at its beginning: all these realms are still unexplored, at least in a scientific way.
If the shaman used to be the institution (grin) assigned to weather control, and we find that funny now, our grandkids might find it funny that we used to assign self-knowledge and self-discipline to various kinds of spiritual and religious movements. The cognitive scientist and the psychologist might then be the people for the job of helping you explore the resources of your own intelligence and personality.
Many things could be said about human goals, and perhaps I will say more in a different post. What I want to do now is link all of this to the way an agent has its first experiences, and how that could be modeled more realistically.
We ourselves come into this world with certain needs, which might be a reflection of our physicality or of our personality (which of course can be physically grounded, but has been studied in abstract conceptual terms, so it makes more sense to us to talk about it in those terms anyway; more on the Cartesian divide in a different post).
So why should agents be different? I think it's ok to pre-program needs in our agents. I just don't think it's ok to program goals.
I think a high dose of realism would be added if an agent made up its own mind about its goals, and I will soon put together a little programming example of that. The freedom to decide on your own goals, though, requires a better interface for interacting with your environment, and understanding (gosh, I've used the big U word) which objects in that environment can help or hinder your needs and survival.
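Until that longer example arrives, here is a minimal sketch of the idea: needs are pre-programmed, goals are not. The agent is born knowing nothing about the environment, explores it, remembers what each object does to each need, and only then constructs a goal. All names, numbers, and the (very crude) learning here are invented for illustration.

```python
import random

# A minimal sketch: needs are pre-programmed, goals are constructed.
# The agent starts with no knowledge of the environment, explores it,
# remembers what each object does to each need, and only then forms a goal.

random.seed(0)  # make the toy run repeatable

class Agent:
    def __init__(self, needs):
        self.needs = dict(needs)   # need -> urgency, pre-programmed
        self.learned = {}          # object -> observed effects on needs
        self.goal = None           # never pre-set: built from experience

    def explore(self, environment, steps=50):
        for _ in range(steps):
            obj = random.choice(list(environment))   # wander at random
            self.learned[obj] = dict(environment[obj])  # remember what it did

    def form_goal(self):
        # the goal is the known object whose effects best serve the needs
        def score(obj):
            return sum(self.needs.get(n, 0) * e
                       for n, e in self.learned[obj].items())
        self.goal = max(self.learned, key=score)
        return self.goal

environment = {
    "berry_bush": {"hunger": 0.9, "safety": -0.1},  # feeds you, slight risk
    "cave":       {"safety": 0.8},                  # safe, but no food
}
agent = Agent({"hunger": 0.7, "safety": 0.5})
agent.explore(environment)
agent.form_goal()   # with these urgencies, the berry bush wins
```

The point of the sketch is only the division of labour: the designer supplies urgencies, the world supplies consequences, and the goal falls out of their encounter rather than being written into the agent.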

Wednesday, 17 February 2010

Multiagent systems - do we know the full story? (I)


Multiagent systems are (usually) virtual worlds in which agents (i.e. entities capable of independent, autonomous action) pursue goals, interact with each other, cooperate, defect, communicate. A very interesting application field for game theory and social theories, they are supposed to bring us more than just conclusions about how agents can best apply strategies to accomplish their goals (mainly a game-theory interest), or whether they would be better off communicating and cooperating, or defecting and being deceptive (a social-theory interest).
Nor are they only interesting as a way to develop smarter, more realistic behaviour for future AI agents in games (hope u were not thinking WoW).

Their most interesting feature, from my point of view at least, is their ability to negotiate on our behalf. Having my software agent, representing my financial interests, with my financial history, negotiate with my bank's agent for a low-interest mortgage sounds pretty amazing to me. I could also definitely benefit from an agent that would gather information about news, books and forms of entertainment that match my interests and the possibilities of my budget, cooperating, negotiating, and sometimes just saying NO to all the advertising agents of the companies trying to sell me those services in London. What about an agent that would measure my stress levels, go "out there" into the virtual wilderness and organise a surprisingly refreshing day, then treat me to the programme for that day that very morning? Well, maybe I'm thinking Data :)


But these little thingies would apply to business as much as to entertainment. What about an agent carrying my CV, applying to all the jobs that might interest me, analysing the wits of the other agents applying, and coming up with a better strategy/cover letter? I would definitely not mind never again completing another 8-page application form that is just different enough from the others that I can't use a template :). Would that be bad for HR jobs? Well, they can stay human, I don't really mind.


But looking at how agents are now and comparing them with human agency, some differences strike me. I'm not saying these differences have to be bridged before the field can be put to use, but I must make a little comparison, and see what's missing, what is not human in the agent's world, how more levels of realism could be implemented, and whether they would help at all and have practical applications.
First of all, in a real world, like mine and yours, the system isn't just a collection of states, and we don't even know our goals for sure.


If I were an agent that simulated human experience better, I would be born without much knowledge of what systems and states are, and would start exploring the world on my own. I would move, because I can, and because that would be part of exploring myself. I would feel the demands my body makes of me, and try to satisfy them. As an agent, I would not only negotiate the satisfaction of my goals with the external environment, but would also negotiate the best route to take with myself. Sometimes I might impose restrictions on myself in order to satisfy requests from my environment.


I would definitely not live in a space where all my goals and their utilities are defined up front. After all, one of the tricky parts of achieving happiness is sometimes not my ability to pursue goals, or my ability to make them come true in the environment, but my ability to know, like a mighty precog, which of those choices is more in tune with my needs, present and future.


So, as far as applications go, I can see the agent occupying the middle box in a three-fold space made up of himself (the reasonably smart, trying-to-get-it-right, cute hero), his needs (which he has to take into consideration and know quite a bit about, and which might turn nasty and rebellious when they are not satisfied), and the indifferent environment.
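That three-fold space could be sketched roughly as three layers: an internal layer of needs whose urgency creeps up on its own, an external layer that offers actions and doesn't care who takes them, and the agent in the middle reading one and acting on the other. Everything below is a hypothetical toy, not a proposal for an architecture.

```python
# A toy rendering of the three-fold space; all names and numbers invented.

class Needs:
    """Internal layer: urgency drifts upward on its own until satisfied."""
    def __init__(self, levels):
        self.levels = dict(levels)          # need -> urgency in [0, 1]
    def decay(self):
        for n in self.levels:
            self.levels[n] = min(1.0, self.levels[n] + 0.1)  # rebellion builds
    def satisfy(self, effects):
        for n, relief in effects.items():
            if n in self.levels:
                self.levels[n] = max(0.0, self.levels[n] - relief)

class Environment:
    """External layer: offers actions, indifferent to who takes them."""
    def __init__(self, actions):
        self.actions = actions              # action -> {need: relief}
    def act(self, action):
        return self.actions[action]

class Agent:
    """The middle box: reads its needs, picks the most relieving action."""
    def __init__(self, needs, env):
        self.needs, self.env = needs, env
    def step(self):
        self.needs.decay()
        def relief(action):
            return sum(self.needs.levels.get(n, 0) * r
                       for n, r in self.env.actions[action].items())
        action = max(self.env.actions, key=relief)
        self.needs.satisfy(self.env.act(action))
        return action
```

A run then alternates between the needs pushing and the agent choosing: with hunger at 0.9 and rest at 0.2, and an environment offering "eat" and "sleep", the agent eats first, and only turns to sleep once hunger has been driven down.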
As for the interaction with other agents and their agenda, we haven't even started yet :D.