Cognitive Science, Psychology, Artificial Intelligence, Aesthetics, Philosophy of Mind, Musical expression, and the way they mix in my mind
Sunday, 2 January 2011
What happened to our cognitivism? (II)
The question is: although cognitivism seems reasonable for allopoietic systems, whose goal is the production and transformation of information, can it still be reasonable for autopoietic systems, where information is not predefined as a valuable product, but gets its value through the valence it has for the system itself? Put more simply, a computer is an allopoietic system: it is created to store, process, retrieve and present information to its user. But the information means nothing to the computer itself. In a human mind, there is no automatic bonus for acquiring random information; we have to decide for ourselves what information is important enough to be worth attention and resources, and that information changes the structure of our future enquiries. The context of various pieces of information might also change the meaning of the information itself - and it is meaning that we are interested in, not information production and manipulation. Information production and manipulation is only a second-order goal, subordinate to understanding, defending ourselves and ultimately surviving and accomplishing other goals. Acquiring information can be a purpose in itself for curious natures, but even then the curiosity of the system is biased in directions specific to it, and the nature of the receiving neural networks - their previous informational content and preoccupations - will bias the meaning gathered in abstract pursuits.
Still, even if information processing is a tool we abstracted away from ourselves, there is a good chance that it is what we do in our heads, even if it isn't our primary goal. And if information processing is partially what we do, we must ask what impact the context and meaning of the information we process has on the processing itself. That impact might change a few details in the information-processing activity, but it might also move our cognitive house into a completely different realm of existence. It might mean that we fail to understand the most important things, while understanding some details that are not as relevant for knowledge formation. Until we determine exactly how much the meaning and context of the information presented to us matter for its processing, we cannot claim to know much about how cognitive processes work in a human brain.
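For the sake of concreteness, here is a toy sketch of the gating idea from the two paragraphs above - the name ValenceGatedAgent, the dimensions and the update rule are purely my own illustration, not a model of autopoiesis. Items whose valence for the system is too low are never processed at all, and every item that is processed shifts the system's interests, biasing its future enquiries:

```python
import numpy as np

rng = np.random.default_rng(0)

class ValenceGatedAgent:
    """Toy agent: an item is only processed if its valence for the agent
    crosses a threshold, and every processed item shifts the agent's
    interests, biasing which items will seem relevant next."""

    def __init__(self, dim=8, threshold=0.3, learning_rate=0.2):
        interests = rng.normal(size=dim)
        self.interests = interests / np.linalg.norm(interests)
        self.threshold = threshold
        self.lr = learning_rate

    def valence(self, item):
        # relevance = similarity between the item and current interests
        return float(self.interests @ item)

    def receive(self, item):
        item = item / np.linalg.norm(item)
        v = self.valence(item)
        if v < self.threshold:
            return None          # irrelevant: no attention, no processing
        # processing the item reshapes the direction of future enquiries
        mixed = (1 - self.lr) * self.interests + self.lr * item
        self.interests = mixed / np.linalg.norm(mixed)
        return v

agent = ValenceGatedAgent()
for _ in range(5):
    print(agent.receive(rng.normal(size=8)))   # None for ignored items
```

Most random items print None: they fail to excite interest, so no resources are spent on them, while the few that do get processed change what the agent will care about next.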
If we are to talk about partial truths of human cognition, there is also connectionism. What I have always enjoyed in connectionism is its focus on emergence. Connectionism says: yes, there is information processing, but what emerges from the activity of the small units doing that processing in some way transcends it. If connectionism were to study more how the temporality, or succession, of the information that runs through a system influences the result of the processing, the philosopher of mind in me would be interested to see the results. We have yet to build more complex connectionist architectures that monitor context and are not there just to fulfil a predetermined, simple computational task.
In short, if connectionism became less computational and more oriented towards monitoring systems and the encoding and interaction of larger spaces of knowledge, I think we might find results that are more realistic and in tune with human cognition.
That is to say, I don't think we need to model human cognition perfectly in our artificial neural networks; rather, we need to model more complex neural networks and the interactions between them, and switch our focus to that level of processing.
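As a small illustration of the temporality point - a sketch under my own arbitrary assumptions about sizes and weights, not a proposal for such an architecture - here is a single recurrent layer in which the state left behind by earlier items is the context in which later items are processed, so the same pieces of information presented in a different order leave the system in a different state:

```python
import numpy as np

rng = np.random.default_rng(1)

dim_in, dim_h = 3, 5
W_in = rng.normal(scale=0.6, size=(dim_h, dim_in))    # input weights
W_rec = rng.normal(scale=0.6, size=(dim_h, dim_h))    # recurrent weights

def run(sequence):
    """Feed items through one recurrent layer; the state left by each
    item is the context in which the next one is processed."""
    h = np.zeros(dim_h)
    for x in sequence:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

a, b, c = (rng.normal(size=dim_in) for _ in range(3))
print(run([a, b, c]))    # the same three items...
print(run([c, b, a]))    # ...in a different order leave a different state
```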
Copyright - Ana-Maria Olteteanu 2011
What happened to our cognitivism? (I)
It seems to me that in the future, postcognitivism might look back at older disciplines like semiotics and hermeneutics, or not-so-old ones like communication studies, and borrow some of their metaphors and objects of study in order to fully mature.
What are the crevices and to-be-patched zones of cognitivism?
First of all, argue post-cognitivists, there is context to be taken into account in any information-processing matter. We cannot speak of information processing without considering the effect that the information has on the system, as the system might not encode information at all, but meaning. The exact difference between humans and machines, no matter how performant the machines are at information processing, lies here: information makes a difference to humans, but not to machines. If information is irrelevant to a human being, it fails to excite interest and therefore to gather the attention and other resources needed for it to be processed.
We should look at information processing the right way around: we created information-processing machines by abstracting away a feature that the human mind has, much as we created robotic arms by abstracting away from the mechanics and functionality of the human arm. That doesn't mean robotic arms can explain all the functions of the human arm.
Still, it's easier to analyse the things and features that we have abstracted away, by the simple fact that we can look at them, observe them and talk about them with greater ease, as we are detached from them. Sometimes these objects are a useful source of backward reflection and analogy; however, we should avoid falling into the trap of considering them fully explanatory of all human thinking.
In human thinking, information processing is called upon mostly because the information is relevant in some way. There is a lot of previous information in the system, which colours the new processing, and even the questions that are asked and the direction of the processing itself. And even partial results might provoke massive changes in the brain's state, as a result of the meaning that possible interpretations might have for the system.
Then there is the inherent problem of the basic cell that information processing is performed upon: the representation. Representations, unlike cells in biology, are a theoretical construct, and a slippery one at that. They take the role of more complex symbols upon which actions are to be performed. But unlike computer memory, human memory is not that keen on making very detailed representations, carrying them around and manipulating them. Rather, the human brain is particularly good at carrying around smaller cognitive loads and using them in a flexible manner, more appropriate to the online demands placed upon a cognitive processing system.
Representationalism and the idea that there is such a thing as representation came to the fore to complete the analogy of information processing - which must happen on something.
The question of imperfect representation calls to the fore the reality of what human representation might be like. If it is based on meaning, experience and the continuous reconstruction of perception according to possible or previously encountered species of meaning and experience, then representation becomes a heavily biased tool.
Considering what has been said before, it is striking that cognitivism wants to talk about perfect cognition in a sense that is not necessarily true of humans, but might only be true of machines.
So if we considered the principles of cognitivism as part of an attempt to abstract away perfect cognitive principles and perhaps instantiate them in thinking machines, cognitivism would seem to hit the mark better than it does at explaining human cognition to us.
But it gets better, in the sense that our perfect view of cognition as information processing might not be that efficient after all.
A detailed description of that in the next post.
Copyright - Ana-Maria Olteteanu 2011
Wednesday, 22 December 2010
So why exactly is Consciousness more astonishing than the existence of zombies?
It's strange that it should be, considering the characteristics of our nervous system.
It's common sense that we are able to classify things and observe them. One of the most fundamental characteristics of human cognition is our ability to abstract away from examples and to bring together the specific features that appear, more or less, in all those examples.
So why is it so hard to believe that we would abstract ourselves and talk about our own processes as if we were discussing an object? There is no endless loop there, just our illusion that there might be. Of course we can turn the light upon the interpreter of things, and then upon the one that interpreted that. But that is merely a repetitive action which loses meaning and content after several levels of abstraction (or perhaps of informational resolution).
We are not infinite loops, nor can we analyse or turn the light upon some of the processes that go on in our heads - the ones that actually do most of the low-level analysing.
Of course we can pay attention to and duplicate an abstraction - even if it was the discourse we were just making regarding some object - and analyse it afterwards as if it were an object in itself. But that doesn't mean that we are still in that abstraction. We are the analyser, always, and what we analyse might be what we were or what we said a moment ago - a trace of our own activity.
The fact that a second ago we were the person we are now analysing might feel a bit as if we are everywhere - but we must remember that our entire self is an abstraction, based on a (hopefully) unitary system with many parts. That various parts can observe other parts is natural. As soon as we observe them, we might think we are not them, or that they belong to us but that we are mainly the observer, not the observed.
This slippery path is what has been tormenting philosophers of mind for ages, mainly because they prefer to forget that the I is an abstraction, and that fundamentally we can only identify with one I at a time - mainly because that is what we defined "I" to be: a unity that contains the most pertinent characteristics of ourselves.
I for one can't see how it would seem more likely that a system with the neural complexity of humans would decide not to have a peek inside its own skull, not to observe its own activities - after all, we are always with ourselves; it is rather normal that we notice what we are up to, both in our heads and in the physical world.
I think what philosophers currently confuse with the problem of consciousness is the problem of creating living systems. We would like our AI to be conscious, yet for that I think an AI would first have to have the properties that most living systems have - including a need for self-preservation, the ability to defend itself, goals of its own, and a general capacity and desire for survival.
So I think the main question is what the difference is between living systems and non-living ones, and only after that how much neural complexity, and of what type, a system needs to acquire in order to manifest consciousness.
Plus, if zombies are really more likely to exist than consciousness, why haven't we found any so far, while we keep finding humans instead, who stubbornly insist on following their goals? :P
Dualistic Reflections
I was thinking today about how dualists used to conceive of the universe as split into two very different substances, matter and mental stuff - and, as I usually do when I find a position quite untenable, I tried to imagine how I could argue for dualism. In fact, I tried to imagine how I could argue for the mental essence being a fundamentally different substance, and why.
Here it goes:
Let’s take the symbol for the number 3. Is 3 physical? Well, yes and no. Yes, because if you just imagined a written number 3, you definitely imagined something that had a physical representation.
But there is no 3 in the 3. There is nothing to prove its 3-ness, except what it means to us.
So yes, 3 has a physical basis – but as a concept it is mostly in our heads. As a concept, 3 is not contained in its symbol; it’s just suggested. So where is 3? 3 is in our heads – 3 is mental stuff!
But surely – you will say – in someone’s head, 3 has a physical basis as well! So 3 is as physical as it gets.
Well, yes and no. 3 might be symbolised in our head in the same way in which it is symbolised on paper: there is this phenomenal image that we have of three – or mental representation, if you prefer. And it has, of course, neural correlates that activate to bring the concept or image of the number 3 into our minds. But there is nothing to say that the neural correlates of our representation contain the concept of 3 any more than the trace of the pen on the paper does.
Thinking like that, you might say, one only keeps moving the interpreter of symbols deeper and deeper within, until it becomes unapproachable and irreducible to matter.
But I think it’s simpler than that. I think meaning is learned.
What do I think 3-ness is, then? I think 3-ness is spread around the network, and it takes a specific network activation to experience it.
I think concepts like one and two (or maybe even none) are the hardest to learn; after that we keep going by adding one to what we already have, having learned the concept of more.
I think in fact that one is all around us, in the plethora of unique objects that we encounter. I think the revolution starts with two. And I have a particular experience in mind: that of encountering two similar objects. What experiences exactly does one need in order to understand two? Two things of the same kind in our visual field, perhaps repeated a number of times – perhaps one’s own hands – can set one thinking. A collection of such experiences must be necessary in order to abstract away the quality of two-ness.
It is my start-level hunch that it helps if we understand and can abstract away two-ness from two things of the same kind before having to apply the concept of two to different objects. The latter already requires counting the belonging of those objects to a specific category, so it involves one more level of abstraction – you must understand “toys” as a category before you can go on and count the toys.
Anyway, back to three. I think most people would agree that some of the basic properties of intelligence are abstraction and synthesis, to which I would probably add filling in and removal.
What three is all about as a concept, to start with, is abstraction. In fact, many of the things that we discuss are one or multiple levels of abstraction away from their real-world counterparts.
One cannot encounter 3 in the nature that surrounds humans. One can encounter 3 objects, but 3-ness is an abstraction – a case of our mother or father (and later our nursery teachers) having presented to our initially untrained mind’s eye enough examples of 3-ness for it to stick in our heads.
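To make the example-driven story concrete, here is a minimal sketch (mine, not a claim about brains) of one way 3-ness could live in a pattern of activation rather than in any single symbol: a prototype is abstracted from many encountered scenes of three objects, and a new scene made of never-before-seen objects is then judged by its nearest prototype. The scene encoding, the dimensions and the prototype scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def scene(n_objects, dim=6):
    """A 'scene' is the pooled activation of n object detectors; every
    object gets a random appearance, so no single feature means 'three'."""
    return rng.normal(loc=1.0, scale=0.3, size=(n_objects, dim)).sum(axis=0)

# abstract a prototype for each cardinality from many encountered examples
prototypes = {n: np.mean([scene(n) for _ in range(200)], axis=0)
              for n in (1, 2, 3, 4)}

# a new scene of three never-seen objects, judged by its nearest prototype
probe = scene(3)
judged = min(prototypes, key=lambda n: np.linalg.norm(probe - prototypes[n]))
print(judged)   # 3, although the objects themselves are novel
```

Here 3-ness is nowhere in the probe itself; it is only in the relation between the probe and what the system has already abstracted from its history of examples.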
Why is holding abstract concepts so useful? Abstraction in general is useful for information processing. As we don’t hold reality in our heads, but abstract concepts about it, we do need reality to perform actions on, and to feed our concepts further. However, some of us thrive on just playing around with that mental stuff.
Hitting the point of this entire post here: are mental things different enough to be considered a different substance from matter? Mental things are, in a certain sense, not matter – not the matter that they represent, that is. They are abstractions from that matter. That doesn’t mean they are not jotted down in neural tissue. It mainly means that the organisation of that type of matter has particular properties – those of allowing a concept to emerge out of similar (actually encountered) cases. One could call it an ability to extract features, but I like staying away from that phrase, as it somehow implies that the features were there, preformed, waiting to be extracted. Out of that similarity emerges a concept which can be stored and projected further onto different objects. This acts in a very creative way, as on the mental drawing board many cases can be instantiated without being actually experienced in real life. Or they can be instantiated on a material reflection of the mental drawing board – sand on a beach, pen and paper, an LCD screen.
Are these mental properties in any way transcendent of material ones? One can see why one might regard them as such. Abstraction can be seen as transcendence of many cases. Though it’s worth noting again that an abstraction is better than the objects it abstracts over only in the sense of being easier to process. An abstraction doesn’t contain the objects it refers to. So in a sense it doesn’t actually contain any material object.
One could push it even further and say that “I”, the sense of being someone, is actually a very useful high-level abstraction: one that represents the totality of problems a system might encounter and brings together the most important information the system needs to deal with.
As long as we are unitary systems, it is to be expected that we will represent this unity somehow internally – it is, after all, a logical unit. The fact that this unity is phenomenally experienced as an “I”, and as a presence within this “I”, is probably an abstraction masterpiece – but more about consciousness and zombies in a different post.