Activity theory, a framework developed by Alexey Leont'ev, presupposes that through engaging with their environment, humans create production tools that are exteriorized forms of mental processes. In my previous blog post I defined cognitivism as a tool which abstracts away information-processing itself (Leont'ev would be proud).
The question is: although cognitivism seems reasonable for allopoietic systems, whose goal is the production and transformation of information, can it still be reasonable for autopoietic systems, where information is not predefined as a valuable product, but acquires its value through the valence it has for the system itself? Put more simply, a computer is an allopoietic system: it is created to store, process, retrieve, and present information to its user. But the information is not the computer itself. In a human mind, there is no automatic reward for acquiring random information; we have to decide for ourselves what information is important enough to be worth attention and resources, and that information changes the structure of our future enquiries. The context of various pieces of information might also change the meaning of the information itself, and it is meaning that we are interested in, not information production and manipulation. Information production and manipulation is just a second-order goal, subordinate to understanding, defending ourselves, and ultimately surviving and accomplishing other goals. Acquiring information can be a purpose in itself for curious natures, but even then the curiosity of the system is biased in directions specific to it, and the nature of the receiving neural networks, their previous informational content and preoccupations, will bias the meaning that is gathered in abstract pursuits.
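To make the contrast a bit more tangible, here is a deliberately simplified sketch in Python. All the names in it (allopoietic_store, AutopoieticAgent, valence, attend) are my own invented illustrations, not any established model: the allopoietic store keeps whatever it is given, while the autopoietic-style agent weighs each item by its valence for the system's own concerns, and what it keeps reshapes what it attends to next.

```python
# Toy contrast (names, topics and thresholds invented for illustration):
# an allopoietic store keeps everything; an autopoietic-style agent
# keeps only what has valence for it, and its stored meanings bias
# its future enquiries.

def allopoietic_store(stream):
    return list(stream)  # a computer's job: keep and serve everything

class AutopoieticAgent:
    def __init__(self, interests):
        self.interests = set(interests)  # system-specific bias
        self.memory = []

    def valence(self, item):
        # Value is not in the item itself but in its relation to the system.
        topic, _detail = item
        return 1.0 if topic in self.interests else 0.1

    def attend(self, stream):
        for item in stream:
            if self.valence(item) > 0.5:
                self.memory.append(item)
                # What is stored biases future enquiries:
                self.interests.add(item[0])

stream = [("food", "ripe"), ("noise", "hum"), ("predator", "near")]
agent = AutopoieticAgent(interests={"food", "predator"})
agent.attend(stream)
print(agent.memory)  # only what mattered to this particular system
```

The point of the sketch is simply that two such agents fed the identical stream end up with different memories, because value lives in the relation between information and system, not in the information itself.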
Still, if we abstracted away the tool of information processing in cognitivism, there is a good chance that this is indeed what we do in our heads, even if it isn't our primary goal. And if information processing is partly what we do, we must ask ourselves what impact the context and meaning of the information we process has on the processing itself. Its impact might change a few details of the information-processing activity, but it might also move our cognitive house into a completely different realm of existence. It might mean that we fail to understand the most important things, while understanding some details that are not as relevant for knowledge formation. Until we determine exactly how much the meaning and context of the information presented to us matter for information processing, we can't claim to know much about how cognitive processes work in a human brain.
If we are to talk about partial truths of human cognition, there is also connectionism. What I have always enjoyed in connectionism is its focus on emergence. Connectionism says: yes, there is information processing, but what emerges beyond the activity of the small units that do this processing transcends, in some way, the processing itself. If connectionism were to study more closely how the temporality, or succession, of the information that runs through a system influences the result of the processing, the philosopher of mind in me would be interested to see the results. We have yet to build more complex connectionist architectures that monitor context and are not there just to fulfil a predetermined, simple computational task.
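The temporality point can be shown with a minimal sketch (a toy of my own making, not a model from the literature): in a recurrent unit, the hidden state carries a trace of everything seen so far, so the same pieces of information presented in a different order produce a different final state.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent unit: fixed random weights, tanh activation.
# Because the hidden state accumulates a trace of past inputs,
# the *order* of the inputs changes the result of the processing.
W_in = rng.normal(size=(4, 3))   # input -> hidden
W_rec = rng.normal(size=(4, 4))  # hidden -> hidden

def process(sequence):
    h = np.zeros(4)  # hidden state starts empty
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

a, b, c = rng.normal(size=(3, 3))
print(process([a, b, c]))  # same pieces of information...
print(process([c, b, a]))  # ...different order, different final state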
In short, if connectionism were to become less narrowly computational and more oriented towards monitoring systems and the encoding and interaction of larger spaces of knowledge, I think we might find results that are more realistic and in tune with human cognition.
That is to say, I don't think we need to model human cognition perfectly in our artificial neural networks, but we do need to model more complex neural networks and interactions between them, and switch our focus to that level of processing.
Copyright - Ana-Maria Olteteanu 2011
This is presuming that the only information is that which is consciously processed, which is a failure of understanding. Another difficulty in modeling complex neural networks is that artificial networks do not process subliminally.
Interesting point. I think there is both unconscious and conscious meaning. In fact, psychology might quite often have as its purpose to dig out the meanings and values we store unconsciously and present them to the light of consciousness, thus offering us the opportunity to reflect on them, change them, and basically make decisions and exercise that pain of a gift we have: free will!
As for neural networks not processing subliminally: all their processing is unconscious. Subliminality implies that there is something that escapes consciousness, but we can't even talk about achieving consciousness in neural networks yet. They just model lower-level cognitive processing. I think consciousness can only emerge as a property of a system, which is why I am interested in future developments of neural networks into systems or cognitive architectures.
You raised interesting questions, thank you for posting!
Thought you might find this interesting: http://scienceblog.com/45099/scientists-afflict-computers-with-schizophrenia-to-better-understand-the-human-brain/