Wednesday 22 December 2010

So why exactly is Consciousness more astonishing than the existence of zombies?

Listening to and reading about all the consciousness talk lately, one can't help but wonder - would some cognitive neurologists and philosophers have preferred us all to be zombies? Do they really find it easier to explain, or more likely, for us to be zombies rather than entities possessing consciousness?
That would be strange, considering the characteristics of our nervous system.
It's common sense that we are able to classify things and observe them. One of the most fundamental characteristics of human cognition is our ability to abstract away from examples and pull together the specific features that appear, more or less, in all those examples.
So why is it so hard to believe that we would abstract ourselves and talk about our own processes as if we were discussing an object? There is no endless loop there, just our illusion that there might be. Of course we can turn the light upon the interpreter of things, and then upon the one that interpreted that. But that is merely repetitive action, which loses meaning and content after several levels of abstraction (or perhaps of informational resolution).
We are not infinite loops, nor can we analyse, or turn the light upon, some of the processes that go on in our heads - the ones that actually do most of the low-level analysing.
Of course we can pay attention and duplicate an abstraction - even if it was the discourse we were just making about some object - and analyse it afterwards as if it were an object in itself. But that doesn't mean we are still in that abstraction. We are the analyser, always, and what we analyse might be what we've been or what we said a moment ago - a trace of our own activity.
The fact that, a second ago, we were the very person we are now analysing might feel a bit as if we are everywhere - but we must remember that our entire self is an abstraction, based on a (hopefully) unitary system with many parts. That various parts can observe other parts is natural. As soon as we observe them we might think we are not them, or that they belong to us but we are mainly the observer, not the observed.
This slippery path is what has been tormenting philosophers of mind for ages, mainly because they prefer to forget that the I is an abstraction, and that fundamentally we can only identify with one I at a time - precisely because that is what we defined "I" to be: a unity that contains the most pertinent characteristics of ourselves.
I for one can't see how it would seem more likely that a system with the neural complexity of humans would decide not to have a peek inside its own skull, not to observe its own activities - after all, we are always with ourselves, so it's rather normal that we notice what we are up to, both in our heads and in the physical world.
I think what philosophers currently conflate with the problem of consciousness is the problem of creating living systems. We would like our AI to be conscious, yet for that I think AI would first have to have the properties that most living systems have - including a need for self-preservation, an ability to defend itself, its own goals, and a general capacity and desire for survival.
So I think the main question is what the difference is between living systems and non-living ones, and only after that, how much and what type of neural complexity a system needs to acquire in order to manifest consciousness.


Plus, if zombies are really more likely to exist than consciousness, why haven't we found any so far, while we keep finding humans instead, who stubbornly insist on following their goals? :P

Dualistic Reflections

I was thinking today about how dualists used to conceive of the universe as split into two very different substances, matter and mental stuff - and, as I usually do when I find a position quite untenable, I tried to imagine how I could argue for dualism. In fact I tried to imagine how I could argue that the mental essence is a fundamentally different substance, and why.

Here it goes:

Let’s take the symbol for the number 3. Is 3 physical? Well, yes and no. Yes, because if you just imagined a written number 3, you definitely imagined something that had a physical representation.

But there is no 3 in the 3. There is nothing to prove its 3-ness, except what it means for us.

So yes, 3 has a physical basis – but as a concept it is mostly in our head. As a concept, 3 is not contained in its symbol; it’s just suggested. So where is 3? 3 is in our head – 3 is mental stuff!

But surely – you will say – in someone’s head, 3 has a physical basis as well! So 3 is as physical as it gets.

Well, yes and no. 3 might be symbolised in our head in the same way in which it is symbolised on paper: there is this phenomenal image that we have of three – or mental representation, if you prefer. And it has, of course, neural correlates that activate to bring about the concept or image of the number 3 in our minds. But there is nothing to say that the neural correlates of our representation contain the concept of 3 any more than the trace of the pen on the paper does.

Thinking like that, you might say, only keeps moving the Interpreter of subjects deeper and deeper within, until it becomes unapproachable and irreducible to matter.

But I think it’s simpler than that. I think meaning is learned.

What do I think 3-ness is, then? I think 3-ness is spread around the network, and it takes a specific network activation to experience it.

I think concepts like one or two (or maybe even none) are the hardest to learn, after which we keep going by adding one to what we already have, having learned the concept of more.

I think, in fact, that one is all around us, in the plethora of unique objects that we encounter. I think the revolution starts with two. And I have a particular experience in mind: that of encountering two similar objects. What experiences exactly does one need in order to understand two? Two things of the same kind in our visual field, perhaps repeated a number of times – perhaps one’s own hands – can set one thinking. A collection of these experiences must be necessary in order to abstract away the quality of two-ness.

It is my starting hunch that it helps if we understand and can abstract away two-ness from two things of the same kind before having to apply the concept of two to different objects. That already requires counting those objects as belonging to a specific category, so it involves an extra level of abstraction – you must understand “toys” as a category before you can go on and count the toys.
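To make the idea of abstracting two-ness from repeated examples a bit more concrete, here is a minimal sketch in Python. Everything in it – the scene encoding, the number of object kinds, the learning rule – is my own toy assumption, not a model of how children actually learn; it only illustrates that a simple learner, shown enough labelled examples, ends up with "a pair is present" spread across its weights rather than stored in any single place.

```python
# Toy sketch: learning two-ness from repeated exposure to labelled scenes.
# All parameters here are arbitrary illustrative choices.
import random

KINDS = 6  # toy object kinds: hands, cups, shoes, ...
random.seed(0)

def random_scene():
    """A scene: how many objects of each kind are visible (0 to 3)."""
    return [random.randint(0, 3) for _ in range(KINDS)]

def has_a_pair(scene):
    """The target concept: some kind of object appears exactly twice."""
    return any(count == 2 for count in scene)

def features(scene):
    """One-hot 'count == c' indicators per kind; the learner must discover
    which indicators matter, across every kind at once."""
    return [1.0 if scene[k] == c else 0.0 for k in range(KINDS) for c in range(4)]

weights = [0.0] * (KINDS * 4)
bias = 0.0
for _ in range(20000):  # repeated exposure to labelled examples
    scene = random_scene()
    target = 1.0 if has_a_pair(scene) else 0.0
    x = features(scene)
    predicted = 1.0 if bias + sum(w * xi for w, xi in zip(weights, x)) > 0 else 0.0
    error = target - predicted  # classic perceptron update
    bias += 0.1 * error
    weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]

# the abstraction transfers to scenes never seen during learning
tests = [random_scene() for _ in range(1000)]
correct = sum(
    (bias + sum(w * xi for w, xi in zip(weights, features(s))) > 0) == has_a_pair(s)
    for s in tests
)
print(f"accuracy on unseen scenes: {correct / len(tests):.2%}")
```

The learned concept lives in the whole weight vector: no single weight is "the pair detector", and the detector keeps working for the other object kinds even if one weight is disturbed – which is roughly what I mean by two-ness (or 3-ness) being spread around the network.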

Anyway, back to three. I think most people would agree that some of the basic properties of intelligence are abstraction and synthesis, to which I would probably add filling in and removal.

What three is all about as a concept, to start with, is abstraction. In fact, many of the things that we discuss are one or more levels of abstraction away from their real-world counterparts.

One cannot encounter 3 in the nature that surrounds humans. One can encounter 3 objects, but 3-ness is an abstraction – a case of our mother or father (and later our nursery teachers) having presented to our initially untrained mind’s eye enough examples of 3-ness for it to stick in our heads.

Why is holding abstract concepts so useful? Abstraction in general is useful for information processing. As we don’t hold reality in our heads, but abstract concepts about it, we do need reality to perform actions on, and to feed our concepts further. However, some of us thrive on just playing around with that mental stuff.

Hitting the point of this entire post here: are mental things different enough to be considered a different substance from matter? Mental things are, in a certain sense, not matter – not the matter that they represent, that is. They are abstractions from that matter. That doesn’t mean they are not jotted down in neural tissue. It mainly means that the organisation of that type of matter has particular properties – of making a concept emerge out of similar (actually encountered) cases. One could call it an ability to extract features, but I like staying away from that phrase, as it somehow implies that the features were there, preformed, waiting to be extracted. Out of that similarity emerges a concept which can be stored and projected further onto different objects. This acts in a very creative way, as on the mental drawing board many cases can be instantiated without actually being tried out in real life. Or they can be instantiated on a material reflection of the mental drawing board – sand on a beach, pen and paper, an LCD screen.

Are these mental properties in any way transcendent of material ones? One can see why one could regard them as such. Abstraction can be seen as transcendence from many cases. Though it’s worth noting again that an abstraction is better than the objects it abstracts only in terms of being easier to process. An abstraction doesn’t contain the objects it refers to. So in a sense it doesn’t actually contain any material object.

One could push it even further and say that “I”, the sense of being someone, is actually a very useful high-level abstraction – one that represents the totality of problems a system might encounter and brings together the most important information the system needs to deal with them.

As long as we are unitary systems, it is to be expected that we will represent this unity somehow internally – it is, after all, a logical unit. The fact that this unity is phenomenally experienced as an “I”, and as a presence in this “I”, is probably an abstraction masterpiece – but more about consciousness and zombies in a different post.

Monday 20 December 2010

On venturing guesses

I think it's Semir Zeki who mentions in The Neurology of Ambiguity a certain Law of Constancy - which states that the brain will always look for constant things.
I remember stating in one of my blog posts that, in my opinion, the brain is, on the contrary, constantly searching for different things - and that I had a hunch that both phenomena were true, and represented two faces of the same coin.
We cannot understand the world without projecting already known things onto it, no matter how different the world is from our internal projection. We could argue that the process of understanding per se is an effort to match the objects of our present to analysed objects of our past, or objects that we don't understand to objects that we think we understand better, or know more about.
We seem to constantly understand things referentially - that is to say, we always understand things through other things we understood before, or through their relationships (similarity, difference, type of interaction) with other things that we know something about. The ultimate thing that we relate everything to is ourselves, which is, I guess, why the I has such an important internal status. (Of course, we can even deconstruct the I - our knowledge is not perfect about anything, and I am not sure there can be such a thing as perfect knowledge.)
This entire corpus of knowledge that we carry with us involves the meaning that we create about things. Is this web-of-relationships structure an accident, or is there something in our neural networks that predisposes us to learning in this way?
If the law of constancy is true, we constantly look out into the world to find things that are the same as what we know, because those things already have a meaning for us. We can use them as points of reference for the things that are different, and processing what is similar faster means already acquiring a large amount of information and knowing what to focus on next.
The differences require more processing power.
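One way to picture the two faces of the coin computationally - and this is my gloss, not anything from Zeki - is a predictive-coding-style toy: the agent keeps a running prediction of its input, treats whatever matches the prediction as nearly free to process, and spends its effort only on the residual differences, which is also where its focus goes next. Everything in the sketch (the thresholds, the learning rate, the eight-element "world") is an arbitrary illustration.

```python
# Toy sketch of constancy vs. difference: matches are cheap, surprises
# get the expensive processing and capture attention.
prediction = [0.5] * 8        # the agent's current model of the world
LEARNING_RATE = 0.3
SURPRISE_THRESHOLD = 0.2

def observe(world):
    """Process one input frame; spend effort only where it differs."""
    residuals = [abs(w - p) for w, p in zip(world, prediction)]
    focus = [i for i, r in enumerate(residuals) if r > SURPRISE_THRESHOLD]
    # cheap path: everything below threshold is already 'understood';
    # expensive path: update the model only where reality disagreed
    for i in focus:
        prediction[i] += LEARNING_RATE * (world[i] - prediction[i])
    return focus

stable_world = [0.9, 0.1, 0.5, 0.5, 0.2, 0.8, 0.5, 0.5]
for step in range(12):
    world = list(stable_world)
    if step == 6:
        world[3] = 0.0        # something changes: it should grab the focus
    focus = observe(world)
    print(f"step {step:2d}: attended positions {focus}")
```

The point of the toy is only that "looking for constancy" and "searching for difference" are the same loop seen from two sides: the stored constancies are exactly what make the differences cheap to find.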

I remember reading for the first time about expectancy theory, which states that we always formulate predictions about the future. I think we do the same when we encounter things that are different from what we know - that is, we venture guesses, based on our experience with things that are similar to the new ones.
I think there are at least three different strengths that a system could show in attempting to make predictions - or to venture guesses (I prefer the "venture guesses" form when it comes to encountering new objects, as I think the purpose of the process is not to predict the new object completely, but to narrow down, if possible, its functionality category):
- having a database of knowledge that is close to the new object (in terms of meaning, functionality, etc)
- having the ability to follow long-term change in objects, families of objects, differences among elements of the same category, object evolution - and, as a consequence, a better ability to mentally manipulate transformations of objects and thus venture more appropriate guesses as to what an object might have been related to in the first place, or what it might become or spawn in the future
- having a generally higher-than-average exposure to novel objects, thus being better at categorising and manipulating novel objects (storing them in a partially determined category, or creating a momentary category) until they are better categorised.

It would be interesting to build systems that instantiate these different evolutionary advantages and see which one does best, or whether they fare similarly - although I suppose the parameters for past experience would take some time to define. A toy version of that experiment is sketched below.
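Here is a minimal sketch of what such a comparison could look like. Every design choice in it is an assumption made purely for illustration: objects are points in a small feature space, categories drift over time, and each agent ventures a guess about a novel object by recalling its nearest stored exemplar.

```python
# Toy comparison of the three guess-venturing advantages listed above.
import math
import random

random.seed(2)
DIM = 4

def noisy(center, spread=0.3):
    """A concrete object: a category centre plus some individual variation."""
    return [c + random.gauss(0, spread) for c in center]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# three categories; the 'current' world has drifted from the old one
old_centers = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(3)]
drift = [[random.gauss(0, 0.4) for _ in range(DIM)] for _ in range(3)]
new_centers = [[c + d for c, d in zip(cen, dr)] for cen, dr in zip(old_centers, drift)]

def make_agent_close():
    # (a) a deep database close to the test domain: many current exemplars
    return [(noisy(new_centers[k]), k) for k in range(3) for _ in range(30)]

def make_agent_transform():
    # (b) tracks change: old exemplars extrapolated along the drift
    # (idealised: assumes the drift was inferred perfectly)
    return [([c + d for c, d in zip(noisy(old_centers[k]), drift[k])], k)
            for k in range(3) for _ in range(30)]

def make_agent_broad():
    # (c) broad exposure: fewer exemplars per category, but from both eras
    return ([(noisy(old_centers[k]), k) for k in range(3) for _ in range(8)]
            + [(noisy(new_centers[k]), k) for k in range(3) for _ in range(8)])

def guess(memory, obj):
    """Venture a guess: the category of the nearest remembered exemplar."""
    return min(memory, key=lambda m: dist(m[0], obj))[1]

agents = {"close database": make_agent_close(),
          "transform tracker": make_agent_transform(),
          "broad exposure": make_agent_broad()}
tests = [(noisy(new_centers[k]), k) for k in range(3) for _ in range(200)]
for name, memory in agents.items():
    acc = sum(guess(memory, obj) == k for obj, k in tests) / len(tests)
    print(f"{name:18s}: {acc:.2%} correct guesses on novel objects")
```

As the comments note, the transform tracker is idealised here - it is granted a perfect estimate of the drift - so the interesting experiments would be the degraded versions: noisy drift estimates, fewer exemplars, stronger drift.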

If you could choose which system to be, or what to have as an advantage - or if you could simply have a cognitive upgrade that would instantiate one, and only one, of the advantages above - which one would you choose?
I will think of my answer and let you know.

Saturday 4 December 2010

Detached of one's Self (I)

We like to think that memory is our ultimate store - be it consciously or unconsciously activated. And when it comes to a self, we like to think we own this self, that we have it mostly under control, that it is us, that it can never go away or do mad things. So how can things like "losing one's self", "self-detached", "my younger self", "my bad self" happen? More to the point, how can one feel locked outside of one's self, so detached that one can't possibly imagine how to get back in?
Sometimes a part of us might be severed from us to the point that we cannot even imagine how that part used to be us, how we used to behave when it was active. We might remember our younger selves and think of them as impersonators. We might have a recollection of what that self did, but not identify with it at all - be unable to imagine ourselves having those impulses or making the decisions to do those things.
There is evidence of this happening to people in relation to their bodies. Oliver Sacks describes in A Leg to Stand On how the experience of having a paralysed leg involved losing an entire arsenal of phenomenology relating to having and using his leg in the past:
"The leg has vanished, taking its "place" with it. Thus there seemed no possibility of recovering it - and this irrespective of the pathology involved. Could memory help, where looking forward could not? No! The leg had vanished, taking its "past" away from it! I could no longer remember having a leg. I could no longer remember how I have ever walked and climbed. I felt inconceivably cut off from the person who had walked and run and climbed just five days before. There was only a "formal" continuity between us. "
I think of phenomenology as a kind of user interface through which one controls, in a user-friendly manner, all the muscle spindles, neuron action potentials and other things one has conscious access to. If such a detachment, such a cut, can occur between the "main" self and one's physical body, including one's entire phenomenology pertaining to it, maybe a similar mechanism applies to the cases in which one feels cut off from psychological parts of one's self.
We build and change our persona throughout our lives. The characteristics pertaining to our self must be encoded in a variety of neural networks. So what keeps the entire machinery of the self together? Is there an index-like network? A loop that activates the most important personal characteristics when booting up the system (when waking up)? How do we know when we have lost something from the chain of networks? Do we even notice a difference?
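To make the index-network metaphor a bit more tangible, here is a deliberately crude sketch - pure metaphor in code, not a neural model; all the names and the "intact"/"severed" states are invented for illustration. Its one interesting behaviour is the last case: a loss is only noticed from inside if the index entry pointing to the lost part survives.

```python
# Metaphor in code: a self-index pointing at trait networks, walked at boot-up.
trait_networks = {
    "language": "intact",
    "body schema - left leg": "intact",
    "musical memory": "intact",
}

self_index = ["language", "body schema - left leg", "musical memory"]

def boot_up():
    """Morning routine: reactivate every trait the index still points to."""
    active, missing = [], []
    for trait in self_index:
        if trait_networks.get(trait) == "intact":
            active.append(trait)
        else:
            missing.append(trait)  # noticed only because the index survived
    return active, missing

# damage the underlying network, as with Sacks's leg, but keep the index entry
trait_networks["body schema - left leg"] = "severed"
active, missing = boot_up()
print("woke up with:", active)
print("noticed as lost:", missing)

# if the index entry itself is lost too, the loss is invisible from inside
self_index.remove("body schema - left leg")
active, missing = boot_up()
print("after index loss, missing list:", missing)  # empty: no felt absence
```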
There must be incremental differences, as well as definite important moments which involve us making a choice about our life - a choice that reflects back on who we will be next.
We tend to identify with who we are in the present - the neural networks connected to our current workspace. We mostly have memories about who we've been and what that felt like. We try to keep in line with, and achieve a continuity with, certain aspects of ourselves (be true to ourselves), and run away like mad from others that we don't like or consider "an experimental mistake".
One could argue that we get to know who we are and who we like being through interaction with our environment, through instantiating various aspects of ourselves.
An informational overload would happen if we had more than a certain number of characteristics readily available, preloaded in our accessible personality space. At the same time, we base our social interactions on people taking responsibility for who they are, and trying to keep a personality as coherent as possible.
Of course, personalities are not always that coherent. The good news is that we have some knowledge about where personality might be neurologically influenced in the brain. But more on that in a future post.