Wednesday, 22 December 2010

So why exactly is Consciousness more astonishing than the existence of zombies?

Listening to and reading about all the consciousness talk lately, one can't help but wonder: would some cognitive neuroscientists and philosophers have preferred us all to be zombies? Do they really find it easier to explain, or more likely, that we are zombies rather than entities possessing consciousness?
That's strange, considering the characteristics of our nervous system.
It's common sense that we are able to classify things and observe them. One of the most fundamental characteristics of human cognition is our ability to abstract away from particular examples and gather the specific features that appear, more or less, in all of those examples.
So why is it so hard to believe that we would abstract ourselves and talk about our own processes as if we were discussing an object? There is no endless loop there, just our illusion that there might be. Of course we can turn the light upon the interpreter of things, and then upon the one that interpreted that. But that is merely a repetitive action which loses meaning and content after several levels of abstraction (or perhaps of informational resolution).
We are not infinite loops, nor can we analyse or turn the light upon some of the processes that go on in our heads - the ones that actually do most of the low-level analysing.
Of course we can pay attention and duplicate an abstraction - even if it was the discourse we were just making about some object - and analyse it afterwards as if it were an object in itself. But that doesn't mean that we are still in that abstraction. We are always the analyser, and what we analyse might be what we were, or what we said, a moment ago - a trace of our own activity.
The fact that, a second ago, we were the very person we are now analysing might feel a bit as if we are everywhere - but we must remember that our entire self is an abstraction, built on a (hopefully) unitary system with many parts. That various parts can observe other parts is natural. As soon as we observe them we might think we are not them, or that they belong to us but we are mainly the observer, not the observed.
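To make that concrete, here is a toy sketch in Python (every name in it is mine, purely illustrative - an analogy, not a model of the brain): "reflection" operates on a trace of past activity rather than on the live analyser, and each level of abstraction keeps less content than the one below it, so the apparent regress settles into sameness instead of looping forever.

    def observe(thing):
        # Level-0 activity: produce a description of some object.
        return "description of " + thing

    def reflect(trace):
        # Turn the light on a previous trace: summarise it one level up.
        # A crude stand-in for abstraction: keep only the first few words,
        # so content is lost at every level (lower informational resolution).
        return "summary of: " + " ".join(trace.split()[:3])

    trace = observe("a tree")
    for level in range(1, 6):
        trace = reflect(trace)
        print("level", level, ":", trace)
    # By the third level every summary is identical - nothing new is gained,
    # which is where the supposed "infinite loop" actually bottoms out.

Running it, the summaries converge to a fixed point by the third level: turning the light on the interpreter again and again stops yielding anything new.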
This slippery path is what has been tormenting philosophers of mind for ages, largely because they prefer to forget that the I is an abstraction, and that fundamentally we can only identify with one I at a time - mainly because that is what we defined "I" to be: a unity that contains the most pertinent characteristics of ourselves.
I for one can't see why it would seem more likely that a system with the neural complexity of humans would decide not to have a peek inside its own skull, not to observe its own activities - after all, we are always with ourselves; it's rather normal that we notice what we are up to, both in our heads and in the physical world.
I think what philosophers at times confuse with the problem of consciousness is the problem of creating living systems. We would like our AI to be conscious, yet for that I think an AI would first have to possess the properties that most living systems have - including a need for self-preservation, the ability to defend itself, goals of its own, and a general capacity and desire for survival.
So I think the main question is what distinguishes living systems from non-living ones, and only after that, how much and what type of neural complexity a system needs to acquire in order to manifest consciousness.


Plus, if zombies are really more likely to exist than conscious beings, why haven't we found any so far - and why do we keep finding humans instead, who stubbornly insist on following their own goals? :P
