Wednesday 25 April 2012

The thought machine


Computers made possible the information-processing paradigm, which has profound implications for both cognitive science and artificial intelligence. Information processing seems to have played for cognition the role that the microscope played for biology.
For a long time I thought that the stumbling block which makes people consider intelligent machines not truly cognitive was not consciousness, but intentionality. Machines don't have their own goals; they can't choose for themselves to take a path other than the one the programmer has endowed them with, nor can they come up with problem-solving skills and methods that haven't been put there by a human.
Processing information in the service of a goal seems to be the defining quality of living systems, one that brings about cognitive behavior and processes we don't see in machines.
Yet something is missing from this picture. Sometimes we forget that our organs keep functioning automatically, simply by being alive; being alive means they keep attempting to fulfill their function. So the function of our brain might be to process information, with a goal or without one. It might be in its programming to do so, regardless of whether its owner has that as a goal.
This is where curiosity, as a human/animal trait, comes into play. Many humans go out of their way to obtain information which couldn't possibly affect their survival. Yet the drive to acquire more information, to compare it with what came before, to patch the pieces of information together, can feel as important for these people as eating (or more so, at least until they get really hungry).
So it might be that the information-processing machine is simply in overdrive for some people. That it takes over. And of course there are survival advantages in that (if you are in the right job to manifest that talent, and if you can make a difference to the community through what you are processing).
Perhaps we should also ask ourselves why it is that our information-processing machine seems to always be on. Do we really need to do that much information processing? Is that an inherent part of the human condition, that we would have needed to develop this constant stream of thought/awareness?
Of course we feel that our thought machine sleeps at times - we zone out and can't think of the things we would like to think of, can't solve the things that we would like to solve.
Sometimes our minds seem blank and sometimes our ability to focus is slippery. Our machine can't seem to come to terms with what we want to direct it towards, and escapes our goal, wallowing instead in its own purposes and its own play at information processing. It keeps asking us questions we don't want to, or don't have the time to, answer then and there, instead of helping us solve what we put in our schedule as a priority. It keeps focusing on things that seem unrelated to our task, albeit interesting for the context of our lives or for the way we understand the world and universe and things.
That is one of the reasons for which some people use diaries. It is not because they need to constantly recount what they did that day, but because they need to give their information-processing machine a place to express its concerns, and to process all those things sitting in the back of the head that would otherwise stand in the way of normal productivity. And there are questions in our heads that we can't escape, that dominate our mental landscape in such a powerful way, that the old advice applies: "the only way to escape a temptation/desire is to fulfill it".
In this context, the only way to get over the preoccupation that our mind has with certain subjects is to give it free rein to think about them, at will and in a focused manner, so that we can then direct it ourselves once it has settled, having found the answers it needed, or having come closer to them.
The mind-body system might not be a dualistic machine, but our own mind can be one at times, and we can notice and reflect on the difference between the things that we want to think about and the things that seem to "think us".
Our information-processing machine is not a completely tamed one, and perhaps it shouldn't be, as it knows our deep goals, interests and questions better than our daily task planner ever could.

Back to the intentional/non-intentional debate: I think that our information-processing device just plays at times. It is a ludic device that takes pleasure (and rewards are significant for motivating behavior) from being active and trying to think about and "solve" the world around us.
This can easily be seen in perceptual illusions, where we can't help but resolve certain patterns into some known, common pattern. Or we can experience it when we look, with no purpose, at some patterns on the wall while thinking of something else, and realize after a few minutes that our mind has kept trying to arrange those patterns into various configurations.
The major implication of that thought is that we are more machine-like and less intentional than we thought. And that many times it is the machine that drives us forward, not the intention.
So if information processing is a function, and our brain can't help but do it, where does that leave the debate on the importance of intentionality as a main feature differentiating human cognition (our benchmark) from machine cognition?
First of all, having this ludic information-processing machine is not easy to replicate in itself. I don't know of any artificial cognitive machine that just keeps on adding random observations and drawing random conclusions and inferences out of them. Are there rules or general guiding principles which apply to our mental play? To be found out. (I sketch below, after these points, a toy version of what such a machine might look like.)
Second, our intentionality - our goals - might predispose us to gather information about how to solve them even during our ludic time. We might be collecting information on the weird way the patterns are distributed on the wall and associating it, through some far analogy, with one of the important problems we are struggling with (and which is goal-related) - in fact my research focuses on this process.
Third, there is a definite degree of independence, free will and randomness (read uniqueness if you like) that comes with this ludic information-processing machine, and it probably defines the human cognitive experience. To be constantly and only goal-directed is, after all, what we imagine machines to be. (Very funny that we think that, while we also think that having goals and free will is what makes us human.)
Fourth, our experience in the world and in our thoughts is sometimes what makes new goals emerge. Which is not an experience replicated by a machine yet. And I don't mean refining goals purposefully - like in the case of creating subgoals in ACT-R. I mean simply thinking about some random things, then suddenly deciding some possibility is attractive enough to be worth investigating; or experiencing things and then deciding to make something our goal just because we like it (or without particular knowledge of what cognitive goal that investigation might serve).
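To make the first point above a bit more concrete, here is a minimal toy sketch, in Python, of the kind of "ludic" machine I have in mind: one that keeps noticing arbitrary observations and drawing random associations between them, with no external goal driving it. Everything in it (the class name, the made-up "observations") is hypothetical and purely illustrative; it is not a description of any existing cognitive architecture.

import itertools
import random

class LudicMachine:
    """A toy 'ludic' information processor: it notices things and links them, goal-free."""

    def __init__(self, seed_observations):
        self.observations = list(seed_observations)
        self.associations = []              # pairs it happened to link together
        self._counter = itertools.count()

    def notice(self):
        # Add a new arbitrary "observation" (here just a labelled token).
        self.observations.append("pattern-%d" % next(self._counter))

    def play(self):
        # Pick two observations at random and record a link between them.
        if len(self.observations) >= 2:
            a, b = random.sample(self.observations, 2)
            self.associations.append((a, b))

    def step(self):
        # The machine never asks whether any of this serves a goal; it just runs.
        if random.random() < 0.5:
            self.notice()
        else:
            self.play()

machine = LudicMachine(["wall pattern", "hunger", "yesterday's conversation"])
for _ in range(20):
    machine.step()
print(machine.associations[:5])

Of course this captures none of the interesting part - why certain associations feel worth keeping, or how play ever connects back to our real problems - but it shows how little is needed for a process to keep going without a goal.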
Therefore I don't see the idea that our brain is an information-processing machine which does just that all day long, because it likes to and because that is its function, as something that makes intentionality less important. Perhaps it just bridges a little more of the gap towards our minds being mechanisms that have their own function and that drive us through the expression of this function, rather than the other way around (us setting goals and constantly directing our machine forward).
What would a neural implementation or model of either intentionality or ludic information processing look like? That is what I would really like to see.