Archive for N. Katherine Hayles

The Unbearable Lightness of Being Blue

Posted in Divine Science on November 29, 2008 by Bonni Rambatan

Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech

A computer simulation of the upper layer of a rat brain neocortical column

OK, so perhaps this should have been the first post of this category, since it is arguably the top project at the intersection of cognitive science and artificial intelligence. Yes, by “Blue” I am referring to Henry Markram’s Blue Brain project, started back in 2005. What we encounter in the Blue Brain project is nothing less than the possibility of a simulated “consciousness” and other complexities of the brain (although only at the level of a rat’s at the moment, and at 1/10th of its speed), approached from the other side of computation, namely by building individual neurons instead of taking the more common top-down approach of grammatical computing — the revolution of third-wave cybernetics, as Katherine Hayles would have put it.
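To make the contrast with grammatical computing concrete, here is a minimal sketch in Python — purely illustrative, and nothing like the Blue Brain’s own models, which as I understand it are far more detailed biophysical simulations of reconstructed neurons running on a Blue Gene supercomputer. The point is simply that no rule in the code specifies what the network “does”; only individual neurons and synapses are specified, and the population activity emerges from their interaction.

import random

class LIFNeuron:
    """A toy leaky integrate-and-fire neuron: integrate input, leak, fire on threshold."""
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after a spike
            return True            # the neuron fires
        return False

# Wire up a small random network: weights[i][j] is the synapse from neuron i to neuron j.
N = 50
neurons = [LIFNeuron() for _ in range(N)]
weights = [[random.uniform(0.0, 0.3) if random.random() < 0.1 else 0.0
            for _ in range(N)] for _ in range(N)]

spikes = [False] * N
for t in range(100):
    # External drive plus input from whichever neurons spiked on the previous step.
    currents = [random.uniform(0.0, 0.2) +
                sum(weights[i][j] for i in range(N) if spikes[i])
                for j in range(N)]
    spikes = [neurons[j].step(currents[j]) for j in range(N)]
    print(t, sum(spikes))  # population activity is emergent, never written down as a rule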

Even the strong AI skeptic John Searle commented in one of his lectures [citation needed] that, even though the mind is not software and the brain is not a supercomputer, a computer may be constructed in such a way as to simulate the brain. He flatly denies the grammatical approach to consciousness with his famous Chinese Room argument, but he never denied semantic approaches, only stating that such an approach is not yet possible. But with the Blue Brain — although this project leans towards cognitive science rather than AI — such an approach is, at least, on the horizon of possibility.

You can read more extensively about the Blue Brain and its progress here and here, but now let us continue on to our analysis. What we have here with the Blue Brain is nothing less than a science simulating the full complexity of a mind, right down to (or, rather, up from) its neuronal basis. How do we integrate this into our cognitive mapping? What we need to understand first and foremost is that this project requires an understanding that 1) the mind emerges from the whole, not from any one of its parts, and is therefore 2) not localizable to any kernel whatsoever.

This, of course, is what Slavoj Žižek, in his The Parallax View, called “the unbearable lightness of being no one”. Are we not, today, with cognitive science, confronted with the fact, already acknowledged by Hume, that the subject does not exist? In today’s context, however deep we pry open the skull and dig into the brain, we find nobody home, no kernel of the soul, no subject. This is the paradox of the 21st-century narrative of the subject.

What the Blue Brain project provides is nothing less than a common ground on which to think about silicon-based versus carbon-based life — whereas before, we saw carbon-based life as evolving, as beings whose consciousness comes later, while silicon-based life consisted of beings programmed through “consciousness” (a grammatical understanding of the relations of objects, etc. — which is where I suspect the true “uncanny valley” lies). Artificial Life brought a change by introducing chaos and emergence into the fray, but did not necessarily look into complex nervous systems. If and when the Blue Brain project succeeds, what we will have is nothing less than a complete brain simulation of a species, a silicon-based brain, “comparable to the Human Genome Project,” as Markram put it in the link above.

While I remain agnostic about the Moravecian idea of downloading minds into computers (and a total atheist apropos the idea that the subject would remain the same), I do believe that the Blue Brain project and its completion will require us to rethink our subjectivity and humanity as a whole. Silicon-based and carbon-based life will share a common grounding, and so many new spaces of cognitive science will open up, as well as new spaces of transhumanity and ubiquitous technology. All of us will have to confront not only the fact that there is nobody home, but also that home is temporary, shredding every last bit of our humanistic grounding — the unbearable lightness of being Blue.

If (and possibly when) the Brain is implemented in a body, things will go much further. Thoughts? Feel free to comment away!

Chinese Room and the Cogito

Posted in Pure Theory on November 26, 2008 by Bonni Rambatan

The Chinese Room

Cogito ergo sum is perhaps the most abused three-word phrase in our contemporary intellectual sphere, so much so that most of us no longer bother to read further on the subject and on what Descartes really meant. “I think therefore I am” has been recycled over and over by changing the verb into every other activity, from shopping to tweeting. All of these have, of course, one underlying assumption: a false reading of the Cartesian subject as a substantial subject. Truth be told, the mind-body split did not come from Descartes at all — the idea has obviously been around since the pre-Socratic era (why else would we have the narratives of heaven and hell?). The true Cartesian revolution is in fact the opposite one: that of a totally desubstantialized subject.

This does not mean, again, a subject desubstantialized of a body and becoming a free-flowing mind, a (mis)reading found everywhere today in the intellectual sphere, and especially in the area of third-wave cybernetic research. Among the fiercest proponents of this version of Descartes is none other than John Searle, the author of the famous Chinese Room argument. Unbeknownst to Searle, however, the Chinese Room argument is, in fact, at one point, an ultimately Cartesian paradox.

What does the res cogitans, the thinking substance, mean, then, if not the common misreading of it as a declaration of a subject of pure thought? Here, it is crucial to look at the kind of thinking under which the cogito was first formulated. The thinking that brought about the cogito is none other than pure doubt — the doubting of reality and of my existence within it. This doubt is irreducible, so much so that, in what may pass as a rather desperate move, the doubt itself becomes the only positive proof of the thing that I doubt — I exist only insofar as I doubt my existence. Rather than a substance of pure thought (“that can be downloaded into computers”, as Hans Moravec put it, etc.), Cartesian subjectivity is a void of pure self-doubt.

(It is of course true that, in Descartes, there is ultimately a mind-body duality: the subject does not depend on the world, res extensa, to truly exist. This is, however, not because they are two separate substances, but because the former is a precondition of the latter; because the cogito is a prior void around which res extensa can only emerge as such.)

Does John Searle not reproduce exactly the same motif in his Chinese Room argument, except that instead of doubting the true existence of his reality, he doubts the cognition of computer programs? The famous Cartesian doubt, “What if God is a liar?”, is here replaced by Searle’s “What if I ultimately do not understand the symbols with which I communicate, but only know their grammar perfectly?” Of course, the paths they take in the end differ: if Descartes were a Searlean, he would have claimed that he cannot prove his own existence; if Searle were a Cartesian, he would have acknowledged that it is not possible to know grammar without knowing semantics, for ultimately meaning is generated from structure, as the Structuralists already had it.
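Searle’s thought experiment can in fact be written down in a few lines of code. The sketch below (in Python, with hypothetical placeholder symbols, since the content of the rule book does not matter) shows the whole mechanism: the room replies “fluently” by lookup alone, and the entire question is whether anything in it understands.

# A minimal sketch of the Chinese Room: the "room" matches incoming symbol
# strings against a rule book and returns the prescribed reply. The rules are
# hypothetical placeholders; nothing in the lookup ever touches meaning, only
# the shapes of the symbols.
RULE_BOOK = {
    "符甲 符乙": "符丙 符丁",
    "符丙": "符甲 符甲",
}

def chinese_room(incoming: str) -> str:
    """Return a well-formed reply through pure symbol manipulation."""
    return RULE_BOOK.get(incoming, "符丁")  # default token when no rule matches

print(chinese_room("符甲 符乙"))  # fluent output, with no understanding anywhere inside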

A great answer to the Chinese Room argument, and so far the best, I think, is the systems reply, which claims that it is the whole room, rather than the person, that understands, because cognition cannot be localized to a single source. This would be the true Cartesian revolution — that cognition is separate from any particular subject — and the true Lacanian experience of the subject as barred. Searle rejected this argument by saying that even if the entire room were located inside the brain, that would not make the subject understand any more than he does now, despite his being able to communicate — which, of course, presupposes an ideal subject that “truly understands.”

Here, Daniel Dennett’s reply is worth noting: Dennett claims that if such non-thinking-but-nevertheless-communicating subjects existed, they would be the ones surviving natural selection, hence we would all be “zombies”. Does this not perfectly mimic the humanists’ fear of the post-structuralist alienation of the subject from language? Dennett, perhaps rather unfortunately, then goes on to say that the Chinese Room is impossible because we are not zombies — which, again, presupposes an ideal, non-alienated subject.

Distributed cognition is where the barred subject takes its place in contemporary cybernetics, and this is, contrary to popular belief, ultimately a Cartesian move that fully separates cognition from its local basis, just as the mind is separated from its carbon basis. It turns out that Descartes was not only the first Lacanian, as Žižek put it, but also the first third-wave posthumanist. It is thus a sad fact that leaders in the field of cybernetics overlook this and, on both sides of the argument, tend to return to Aristotelian ideals, to illusions of wholeness.

Cultured Meat and Totem Culture

Posted in Divine Science on November 17, 2008 by Bonni Rambatan

In Vitro Meat (c) DC Spensley/H+ Magazine

Let us now go on to discuss further the issue of how to deal with life (in accordance with this Cat Bag post). It is interesting today to see the debate surrounding cultured meat: meat grown in labs, without any animal being sacrificed. The idea is of course to care more for the animals (which is why PETA would give $1 million to whoever first comes up with a successful way to cultivate the meat), to consume less energy, and to pollute less by decreasing the number of slaughterhouses… basically following the fashionable standard of the environmentalist use of science.

It is incredibly hard to miss the Žižekian logic of decaffeinated culture at work here: is not the meat without sacrifice the example of decaffeinated consumption par excellence? But now let us take a moment and look further into the response of society surrounding this very topic: do a quick search for “cultured meat” on the internet, and you will see that most people reject the idea. Why is this? Are we not supposed to celebrate the progressive development of this decaf ideology with joy? In the case of cultured meat, however, even the famed transhumanist RU Sirius commented, “Yuck!”

The answer is not that hard to find: people still find it strange and uncanny to eat meat that never came from a living animal. Why? Here we can clearly see the symbolic ideological dimension of the purely biological everyday act of eating, one that Freud explicated in his Totem and Taboo. In eating meat, are we not also eating the other species’ death? The death of the sacrificed animal is more of a symbolic necessity than an unavoidable fact. This is the reason we have all those kinds of sacrifice rituals and forbidden meals.

What is very interesting, of course, is how this primitive logic of totemic rituals still turns out to play a large role in an age when we are supposed to no longer believe in anything anymore. What is the state of affairs of totem and interspecies relations in the world today? Clearly, we are stuck between two conditions: novel technologies that give us capacities which, just a little over a hundred years ago, only God himself would have had — the “divinity of science” that goes with the rapid advancements of quantum physics, bioengineering, and neuroscience — and ancient symbolic necessities, the totems and taboos of our primitive ancestors.

In the end, perhaps Paul Virilio was right: we are caught between contradictory dromologies, the ecstatic high speed of cyberspace and the slowness of human minds. Or perhaps Hayles and Haraway were right that this is not a deadlock after all, and what we need is a new formulation of subjectivity itself. Or perhaps all of them are correct in a way, and we need to see — to put it in Kierkegaard’s terms — the primitive totem-and-taboo subject as this new posthuman subjectivity in-becoming, rather than as its enemy.

What about you? Would you eat meat grown in labs? More ideologies at work you find? Feel free to comment away!

Welcome to a Posthuman Democracy!

Posted in Political Focus on November 6, 2008 by Bonni Rambatan


Obama's victory

The start of this month has been a tense one. As the outcome of the 2008 US Election has finally been announced, I am proud to say that I am happy for my American readers that they got a new, decent president in whom they can all entrust their hopes. With all the tension relieved, The Posthuman Marxist will now resume its blogging with more critical articles for you to read! And what better way to celebrate the upcoming new administration than with a critical analysis of what this spectacular election has been?

What especially interests me in all this glorious spectacle of an election is how tech-savvy the Democrats have been in conducting their campaign. I have been following Mashable’s take on this — they have been covering the issue since way back in February 2007, and here’s their quick recap. What we have today in our politics, especially with Obama, is a head-on collision between the realm of high politics and direct online life. Needless to say, this is the first time such politics has been conducted, and the interesting question would thus be: why has our politics evolved in such a way?

This is obviously not such a hard question. Is it not only natural for politics to move towards a more popular, transparent, and democratic approach in its conduct? And does the internet not indeed provide such a platform? Furthermore, it is of course very much in line with the appeal of the Democrats to use media that are close to the hearts of the younger generations, so all this has been natural. Then it is perhaps better to reformulate the question: why is technology seen as a more democratic means?

We have come a long way from our technophobic past. “Media brainwashing” is a phrase we no longer hear quite as often today as in the past. After all, we have Web 2.0, with all its connectedness and writeability. What is interesting, however, in this “digital democracy” (for lack of a better phrase) is how very much outsourced things are (obviously, Obama does not handle all those Web 2.0 profiles himself; and I very much doubt that it was he who personally clicked “follow” on my Twitter profile). It is not surprising, then, to hear all the buzz about wiki governance and Google transparency. Anderkoo has an excellent take on this matter, which is worth a serious read.

With Obama, the democratic decentralization of politics today, it seems, does not only involve the standard notion of giving power to the people. Already, we are seeing how the job is given to intelligent machines — albeit just in the form of computer code running on Web 2.0 platforms. This tech-savvy campaign is very well aware that the question today is not merely to decentralize power but to decentralize cognition itself, i.e. to conduct better politics not only in terms of creating a more equal humanist society, but also in terms of creating a more intelligent posthumanist environment in which alone a better democracy becomes possible.

Automated Web technology and machine intelligence are now democratic means that we trust, rather than postmodern artifacts of great anti-humanist suspicion, because, recalling the famous Haylesian argument, we have already become posthuman.

Regarding the development of our posthuman future, Obama, at least so far, is taking great steps. Will he continue this tech-savvy grassroots platform? We can only hope. And what will we make of this new conduct? Will it indeed bring better democracy? Will it bring about more trust in intelligent machines? What political subjects will our society turn out to be, when the environment itself becomes politically aware in the near-future age of ambient intelligence?

Okami: Divine Subjects and Image-Instruments

Posted in Pop Culture on September 9, 2008 by Bonni Rambatan

Okami

OK, most of you probably know that I’m a hardcore Wii fan by now, so I’m guessing it’s about time I put the gaming geek in me to work elucidating my Lacanian new media theories. One of the core concepts of my thesis is a new subjective experience I call the divine subject. This notion of a transcendent subjective experience made possible by technology has its roots in cybernetics and the Macy conferences of the 20th century, as Katherine Hayles explained in her 1999 book How We Became Posthuman. The postmodern notion of contingent bodies and posthuman transformations is not, as many may argue, a rejection of the liberal humanist subject of the Enlightenment (“ethics is deconstructed with biotechnology,” “it’s all about the dehumanizing market,” etc.), but instead, as John Searle is well aware, a faithful move to take the Cartesian subject one step further: a desperate move to preserve the cogito under postmodernity — precisely because thought is the only viable experience, we need no more bodies!

What better piece of literature to illustrate the divinity of the subject than Okami, a game in which you actually take control of God herself? This game, in which your avatar is the Japanese sun God, Okami Amaterasu (taking the form of the white wolf avatar, Shiranui), has as its core element the ability for players to create objects in the scene by painting on it directly — you create suns by painting circles in the sky, stars by painting dots, you cut enemies by painting slashes, etc. This is a perfect example of what Lev Manovich calls the transformation from images to image-instruments. With the advent of the computer age, signifiers now have a double role: not only part of the sign, but also something to be acted upon, a portal to another dimension.

What to say of today’s world of signs? It is no longer the Baudrillardian object-dominated world of simulacra in which subjects are fashionably dead, but a world in which the simulacrum is an extension of subjective experience. The correct way to read the popular postmodern dystopia in which even our bodies are nothing but simulacra is not that we are dead, reduced to mere Foucauldian sand imprints, but the opposite: every simulacrum may be our body. (Is this not the ultimate dream of ubiquitous computing, ambient intelligence, etc.?) With Žižek, the (Cartesian) subject is not dead, but preserved through its reflexivity.

Okami perfectly illustrates my thesis: with signifiers evolving beyond their original purpose to include a role as portals of action, and with subjects depending more and more on avatars (the contingent simulacral body, explained further in my theories of the psychoanalytic monitor phase) both for social interaction and for individual enjoyment, it is only prudent to note the possibility that an evolution is going on in the dialectics of the Symbolic and Imaginary orders. Does the divine paintbrush of Okami not show that the Imaginary self may very well lie outside the visible biological self?

The Uncanny Valley of Non-Feminine

Posted in Posthuman Perversion on August 23, 2008 by Bonni Rambatan

Slavoj Žižek once mentioned that the true horror of confronting one’s doppelgänger (Edgar Allan Poe’s William Wilson, etc.) is the horror of knowing that one may actually exist out there [citation needed]. This can be understood in the Humean sense, he said, in that what the subject knows of himself is that he does not exist except as suppositions of the Other, an empty hole in the topology of social reality, Žižek’s “empty cogito“. Does the same not hold true for society at large when confronted with the prospect of its clones in the form of humanoid artificial intelligence subjects? The true horror of humanoid robots, Masahiro Mori’s uncanny valley, resides in the realization that we may actually exist fully objectively. It is the horror of the disappearance of lack, the horror of realizing that we may not have a lack after all.

As Katherine Hayles (1999:30) noted, although for Lacan language is not a code, for computers language is perfectly a code. Computer language recognizes symbols purely through computational models with a one-to-one correspondence between the signifier and the signified. Here, lack has no place. Thus, the cogito of the cyborg is purely a cogito of existence, not an empty one. The uncanny valley is thus the condition in which we have to confront this horror of excessive non-humanity — a robot purely in the form of a human being, a machine existing only qua social human being.

If there exist subjects for whom language is a code, we are in very deep trouble. One very certain trouble we run directly into is Lacanian sexuation: a female AI — the gynoid — is not a barred Other, for we understand her mechanisms perfectly. If “the” woman does not exist, “the” gynoid exists qua computerized cognition. (One may even go as far as to say that cloning and/or neurobiology will eventually make it possible to produce “the” woman.) In computer code, we no longer have the cognitive functions of the x which “does not cease not to write itself”, nor the x which “ceases not to write itself” — all conditions must be preprogrammed as functions of if x then y. Coding is an act of masculine writing. The cyborg is free of castration yet is not feminine as such, for it is a phallic subject and subject to mutation — a non-feminine. The paradox of mutation is of course the fact that although it has the function of castration, it in fact gives the subject a probability of spawning a greater phallus than the one he has just lost. The uncanny valley is the horrific gap of the experience of the non-feminine.
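To put this point about coding in the most literal terms, here is a minimal, purely illustrative sketch in Python (the entries are hypothetical): in language-as-code, every signifier is bound to exactly one signified, every behaviour is an explicit if x then y, and whatever has not been preprogrammed simply does not exist for the machine — there is no remainder, no lack.

# A toy lexicon in which signifier and signified stand in one-to-one
# correspondence, as Hayles says they do for computers. The entries are
# hypothetical; the point is that nothing unwritten can ever be read.
lexicon = {
    "tree": "OBJECT_0001",
    "woman": "OBJECT_0002",
}

def signified_of(signifier: str) -> str:
    if signifier in lexicon:       # if x then y: the only mode the code knows
        return lexicon[signifier]
    raise KeyError(f"{signifier!r} was never written, so it cannot be read")

print(signified_of("tree"))  # always OBJECT_0001, with no remainder and no lack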

All this is not science fiction. There has recently been research on the engendering of the Semantic Web — just one among the many strands of gender research in new media. It is notions like these that further signify how new code languages (artificial intelligence, the semantic web, etc.), fundamentally different from human language and cognition, will undoubtedly trouble the sexuation of contemporary society. As Foucault was already well aware, sex is subject to historical change — only this time, the change may be that much more fundamental.

Psychoanalysis, All too Human

Posted in Pure Theory on August 14, 2008 by Bonni Rambatan

Family failing of philosophers: All philosophers have the common failing of starting out from man as he is now and thinking they can reach their goal through an analysis of him. They involuntarily think of ‘man’ as an aeterna veritas, as something that remains constant in the midst of all flux, as a sure measure of things. Everything the philosopher has declared about man is, however, at bottom no more than a testimony as to the man of a very limited period of time. Lack of historical sense is the family failing of all philosophers.

Friedrich Nietzsche, Human, All too Human

We all know that historical consciousness begins at the start of the 19th century with Hegel, before being strongly re-emphasized by Marx and Nietzsche in the late 19th century. But Hegel never got around to reading On the Origin of Species, which was published 28 years after his death. The stupid, straightforward question I would like to pose is thus embarrassingly simple: what would Hegel have thought if he had had a chance to read it? What if he had had books by Dawkins beside his collection of Thucydides lying on his bookshelf? Would he perhaps have extrapolated his Phenomenology of Spirit into a Phenomenology of Species?

When did the self begin? When did the unconscious begin? When did the Freudian triad and the Lacanian Borromean rings begin? Psychoanalysis is bound to be hit by these questions, as new research shows mirror recognition in animals and robots, and various perverse acts among animals (homosexuality, oral sex, prostitution, even necrophilia), besides older findings of animal dreams and stress disorders. While I am old enough not to entertain the possibility of psychoanalytic therapy for a poop-eating dog or your future Roomba, psychoanalysis for me has much more to do than individual therapy and, as such, is the basis of my philosophical ontology.

A xenolinguistic psychoanalysis is thus only prudent, considering that we are currently in the midst of news about all the achievements of bioengineering and cybernetics (the Haylesian posthuman, the Mitchellian age of biocybernetic reproduction). Lacan was certainly aware of a historical sense of man, as were his intellectual contemporaries — but he was not, I would claim, at the very least, entertaining the possibility of a serious interrogation of Love and Sex with Robots. A recurring subject in my research is thus an evolutionary Lacanian psychoanalysis, by way of deeper linguistic and communicational research into new realms — new media, psychedelics, animal communication, artificial intelligence, virtual sex….

As with Hegel and Nietzsche, we cannot forget that man has become. If we are to make a comprehensive cognitive mapping, we should avoid falling into the trap of non-historicity, or, in this posthuman age, the trap of non-evolutionarity. Modifying Nietzsche, I would claim that lack of evolutionary sense is the family failing of all psychoanalysts.