Archive for Slavoj Žižek

Facebook between Anti-Semitism and Breastfeeding

Posted in Postmodern 2.0 with tags , , , , , , on May 11, 2009 by Bonni Rambatan

The Real of Facebook


Recently Mike Arrington of TechCrunch posted a polemic against Facebook’s policies. It turns out that Facebook does not ban denials of the Holocaust, yet it bans pictures of breastfeeding women. This predictably disturbs many people, and has led to paranoiac suggestions that Facebook is actually anti-Semitic, anti-feminist, and so on.

Especially interesting here is the juxtaposition of the two taboos, positioning them from the start as a case of misplaced censorship — one thing that should be censored but isn’t, and another that should not be censored but is. This proves more problematic the closer one looks: people would of course still get angry if Facebook censored neither, and they would probably be content if it censored both. The problematic only arises when there is a displacement of censorship, as is the case at the moment.

Which leads us to a more interesting question: what do Jews and tits have in common? As Žižek noted, the Jew is the anti-Semite’s embodiment of the malignant object-cause of desire, while the breast is, according to Lacan, one of the forms in which the object-cause of desire appears. The appearance of anti-Semitic comments in Facebook discussions brings this object-cause of desire into obscene light, while breastfeeding, supposedly through its context, arguably does the opposite: it desexualizes the breast into a family-friendly object.

The excuse for the latter’s censorship is of course the usual one: “We know there is nothing sexual about breastfeeding, but nonetheless people have warped fetishes,” while the excuse for having no ban on Holocaust denial is, presumably, that all opinions are allowed and must be respected. Of course, the hate speech shown in TechCrunch’s screen-caps is already against Facebook’s TOS (which obviously leads to the rash conclusion that everyone who denies the Holocaust is an anti-Semite), but it seems that the core problem is not so much the hate speech as the space for denying the Holocaust at all.

It is here (and not only on 4chan!) that one encounters the Real of the Internet. We start off wanting to promote “safe content” and end up censoring arguably trivial things such as breastfeeding, which recalls the proverbial paranoia that nothing is safe on the Internet (what about pictures of feet and socks, or of children at all — should Facebook allow them, when people can just as easily take them off of Facebook with a few clicks and use them as material for warped public masturbation on another site?). The obverse is also true: we start off wanting to encourage discussions from all perspectives, and end up encountering the hard limit of our so-called postmodern tolerance (every historical account is relative and its truth questionable, except the Holocaust, the truth of which must be maintained at all costs to save us from a resurgence of anti-Semitism!).

You'd think it was really easy to moderate a group

Mike Arrington ends his article with a comment, “Yes, Facebook, this is the side of the line you’ve chosen to stand on,” and posted an obscene image of child victims. I would say that Facebook, being the “sixth largest country,” is fated to continue to find itself in dangerous situations on the other side of the line (remember Facebook’s privacy polemic several months back?) — why? Because Facebook is becoming more and more like a government rather than a system (like Wikipedia, Twitter, or 4chan, which are really more public places than a governed home). And the obscene image, what is it but a symptom dedicated to an innocent Other’s gaze for which the truth of the Holocaust must be maintained?

Accommodating society with its own postmodern paranoia and micropractical ethics is a tough, if not impossible, job. There is a line, a primordial split constitutive of society, which looks different from different sides. In some ways 4chan (especially /b/) is luckier, since it embodies nothing but this split itself. Facebook tried to be careful as it always does, but it looks like it has once again received its own message back in inverted, true form.

Marblecake, Also the Game

Posted in Postmodern 2.0 with tags , , , , , , , on April 29, 2009 by Bonni Rambatan

The Message

That was the message encoded into TIME Magazine’s 100 most influential people of 2009 by none other than Anonymous. These 4chan netizens jumped into action and hacked the poll, not only making sure that moot, the founder of 4chan, topped the list, but also carefully arranging the order of the top 21 winners so that their initials would read “mARBLECAKE ALSO THE GAME”. TIME has already made the list official — an epic win for moot and Anonymous — while completely denying the hack. You can read the details of the hack here.

Here is an excerpt from Mashable regarding this piece of news:

The Internet has different rules. The folks at Time just learned about it in a very amusing way, as their third annual poll for the world’s most influential person was topped by moot A.K.A. Christopher Poole, founder of the legendary memebreeding forum 4chan. And, though the results of the poll are obviously skewed, the list is now official nonetheless.

Remember, it’s not Barack Obama, not Oprah Winfrey, not Pope Benedict XVI, but moot. He received 16,794,368 votes and an average influence rating of 90 (out of a possible 100).

The Internet does indeed play by a very different set of rules. Who is moot? I am not asking who he really is in real life, his personal history, and so on, but what the existence of this 21-year-old founder of 4chan, now this year’s most influential person on Earth, can tell us about the culture we are living in.

moot has been quoted as saying, “My personal private life is very separate from my internet life. There’s a firewall in between.” It is very interesting to note that moot did not use the phrase “real life” to denote his “personal private life”. That alone, I think, is really telling — clearly moot, like many of us, has an online life more real than his private life.

4chan has been described as the “Wild West” of the Internet. The rawest of lands and coarsest of media, home to Internet vigilantes as well as to the most homophobic, misogynist, and racist users, often with amazing hacking skills, 4chan represents the face of posthuman subculture today. And I am not even trying to romanticize it the way we often romanticize a Wild West — go to 4chan today, to /b/, the random bulletin board, and you will see what I mean, if you haven’t already. There is nothing romantic about it, save perhaps the total assault it mounts on culture on a daily basis.

The growth of a subculture, a raw resistance of any kind at all, always confronts us with the Real of the antagonisms of the culture we are living in. 4chan is the antithesis of Facebook, and moot is the antithesis of Mark Zuckerberg. When we discuss the Internet, we often forget today about 4chan, about the nameless, faceless commons that is Anonymous, about the glorified Master signifier that is moot. When we discuss cultures, we often forget about their breeding ground, the Wild West, the graffiti-covered brick walls of the anonymous crowd.

The TIME 100 hack tells us a lot about the times we are living in. In a way, we can even say that TIME ultimately got what it wanted — it decided to run an online poll, and online, moot is indeed the most influential person. Invisible he may be, but one only needs to count the memes penetrating our Internet lives today, from Rickrolls to LOLcats to “epic” phrases — indeed, 4chan can take any meaningless thing and imbue it with the object-cause of desire, because they themselves embody the faceless gaze of the commons — to see how true this is. Our times are heading in that critical direction again: where culture can no longer be dictated, where minds can no longer be censored, and where a handful of people can turn a historic media event upside down.

Slavoj Žižek says that in order for our times to grow, “maybe we just need a different chicken [fetish object]”. The problem so far with democracy is that “we know the system is corrupt, but does the system know it is corrupt?” — in other words, we keep participating because it works, even if we do not believe in it. I think the recent message-encoded TIME 100 hack proves otherwise — the system itself knows that it is corrupt, and it is only the big media companies that are losing and continue to pretend everything is OK. And fortunately, the technological apparatuses that be no longer serve them but the faceless crowd.

I’m going to have a marblecake and celebrate.

mARBLECAKE (not mine)

On The Idea of Communism

Posted in Political Focus with tags , , , , , , , , on April 1, 2009 by Bonni Rambatan

Hello hello, TPM readers! Thank you for staying faithful even in these times when I am blogging much less than usual — two weeks of unexplained absence, without a drop in the reader count! Thank you for standing by! Well, I have been working on several projects, and am also writing my thesis, but here I am :-)

To start the month, why don’t we review a bit of what happened in March, an event that started on the appropriately dangerous Friday the 13th and ended on the following Sunday. I am talking, of course, about the On the Idea of Communism conference, hosted by Slavoj Žižek at Birkbeck College, which included names like Alain Badiou, Terry Eagleton, Peter Hallward, Michael Hardt, Antonio Negri, Jacques Rancière, Judith Balso, Bruno Bosteels, Alessandro Russo, Alberto Toscano, and Gianni Vattimo. Jean-Luc Nancy, I think, was supposed to be there but could not attend due to visa problems (which reminds me of my own case last year).

I would have loved for this to be an actual report, but I did not attend, so for conference notes I refer you to Andrew Osborne’s post here. I also watched several videos on YouTube, one of which I linked above.

I just want to comment on this conference. First of all, it was a really exciting conference, and it could hardly have been better timed. We are living in times when people have less and less faith in both world politics and the economy. It is true that people in many places, including my own country, still irrationally fear communism (the most popular response in my country being that communism is forbidden by religion — LOL?), but this should nonetheless be conceived as the perfect time to think. Žižek suggested we take Lenin as an example: in the harsh times of 1915, he retreated to Switzerland to read Hegel.

About the times we are facing today, Alain Badiou puts it very nicely. I quote from Osborne’s blog:

Today we are nearer the 19th century than the 20th century with the arrival of utterly cynical capitalism. We are witnessing the return of all sorts of 19th century phenomena such as pirate nationalisations, nihilistic despair and the servility of intellectuals.

Badiou then of course goes on in his usual manner to speak of the need for a strong subjectivity to change the coordinates of possibilities in order to create the Event, the rupture in existence through which we can militantly assert a new truth. This is important, and it is stressed again by Žižek in the conclusion: a change is not a change in actuality but a change in possibilities. Thus, our task is to think the possibility of possibilities, to do the impossible — not the usual Kantian “we must, because we can,” but the Badiouvian “we must, because it is impossible.”

I also love what Michael Hardt and Antonio Negri had to say, although their ideas are nothing new if you have read their work Empire. How can I not love it, when the entire notion is so similar to the original theme of this blog (I say original, because lately it has become more and more Lacanian than Marxist, I know) — that is, one that interrogates cognitive capital, digital property, and the commons in this day and age of biocybernetic reproduction. Copyright conflicts are the new terrain of the struggle over the commons — now you know why I love calling myself a pirate.

Antonio Negri stressed another important aspect of communism, one I put into three tweets. It is a point already made by Tronti and Lenin, as @semioticmonkey corrected me. Indeed, communism is opposed to socialism in the same way that psychoanalysis is opposed to ego psychology. There is no equal State, just as there is no healthy ego. Communists must organize the decline of the State, just as psychoanalysts must undermine, rather than sustain, the efficacy of the ego. Both communism and psychoanalysis must act with an ethics of the Real and acknowledge the redundancy of the agent.

But all in all, a question remains. Is communism a program, a movement to restore politics and its efficiency, faithful to a continuous revolution — do we need to organize a continuous decline of the State in order to change our possibilities, as Žižek would argue? Or is it merely a philosophical Idea, so that what we need now are militant communists, not communism per se, enacting to the fullest the ethics of the tragic hero, the ethics of the Real, in order to produce an Event, as Badiou maintained?

The Twitter Hysteria

Posted in Postmodern 2.0 with tags , , , , , , , on March 14, 2009 by Bonni Rambatan
Are all Twitter users insecure like her?


Twittering stems from a lack of identity. It’s a constant update of who you are, what you are, where you are. Nobody would Twitter if they had a strong sense of identity.

Angry? Here’s another one:

Using Twitter suggests a level of insecurity whereby, unless people recognise you, you cease to exist. It may stave off insecurity in the short term, but it won’t cure it.

Curious what it’s all about? Here is the full article for you to read.

Annoying as those statements may be, we should not get caught up in our emotions and simply dismiss them as having no degree of truth whatsoever — although we must admit that when they claim tweeples never ask, “What do you think of Descartes’s second treatise?”, you really know they got things wrong.

After all, some of us do ask questions like that on Twitter — and start terrific discussions while we’re at it. Don’t believe me? Try following these people. There are lots more, but I linked only the ones who happened to debate most recently the precise issue brought up by the article. Sure, some of them may tweet about mundane daily things (if you want no mundane daily things, only philosophical content and computer stuff, I have a Twitter friend on that extreme end — perhaps a few others?). But really, the reason I followed them was not to fabricate an imaginary connection with the person (in the loose, non-Lacanian sense of the term), but because we spark interesting discussions. And even where we don’t, I still follow people like Guy Kawasaki, not because I think they’re such great guys but because they post links to interesting articles.

Tempted to continue my rant


I’m tempted to continue my rant, but let’s get serious. Just sign up for Twitter if you haven’t, follow the people I linked to, and you can see right away that Oliver James the clinical psychologist and David Lewis the cognitive neuropsychologist may be less intelligent than the people they talk negatively about.

But as I said, what they say deserves a closer look. It’s not pure bullshit. We do have people on Twitter who go into emo rant mode 90% of the time, saying how worthless their lives are (no, I won’t link, if only so that you vain followers paranoiacally think it’s you). It’s obvious that they get no better by doing so.

Jacques Lacan said that the art object occupies the place of the analyst. By this he means it occupies the place of the object a, though not necessarily that of the analytical discourse. So too with the Internet, and Twitter in particular — here is ultimate proof of that. Why Twitter in particular? Because of its space of speech, of course — an illusion of connection, if you want to call it that, since it does belong to the Imaginary register; this is especially true on Twitter, where people do not listen to you but nonetheless hear you. I told you our unfortunate friends got some things right!

What things right? That connection on Twitter serves as an object-cause of desire. They are wrong, however, in assuming that this object-cause of desire must be located alongside the subject, producing a hysterical discourse with symptoms such as those James and Lewis mention (insecurity, lack of identity, etc.). As I tweeted, the problem is: do you let it speak the truth, or are you too busy trying to speak that object little a yourself?

Slavoj Žižek once said that the Internet merely confirms how virtual our lives had already been. What a beautiful way to put it. If it is true that nobody would Twitter if they had a strong sense of identity, we should then ask: what would they do instead? For we have always been living a virtual life.

It’s not about Twitter, after all. Twitter just makes it more visible. We have always been attracted to connection. We have always been attracted to those who hear us without having to really listen to or know us, those who see us on the streets from the corner of their eyes, those who peek at our sexual lives. We have always been fascinated by them, as we are fascinated by art. That’s what Twitter is all about; that’s what the Social Web is all about. We love those things, those object-causes of desire. Consuming them in no way suddenly makes us insecure hysterics.

Just in case your friend's on Twitter

Your Mind is Now Undead!

Posted in Divine Science with tags , , , , , , , , , , on December 19, 2008 by Bonni Rambatan
Teh ceiling cat is in ur machine, reading ur mind...


Less than a week ago, researchers in Japan confirmed a way to extract images directly from brains. Yes, you read that correctly; in a nutshell: by hooking you up to this machine, everyone can now see what you are thinking, because it will be shown on a monitor. I had a reply in my Twitter stream when I tweeted about it, and although I have not yet seen that movie, it is nonetheless very easy to imagine this invention being taken straight out of science fiction. (Being the shameless otaku that I am, the personal memory this news recalled is none other than Japan’s anime ambassador, Doraemon.)
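As a toy illustration of the principle at work here (my own sketch, not the Japanese team’s actual method, which decodes fMRI voxel patterns with far more sophisticated models): if each measured “voxel” responds roughly linearly to the image being viewed, a decoder mapping brain responses back to pixels can be learned from example pairs by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_voxels, n_train = 64, 200, 500  # toy sizes: 8x8 binary images

# Unknown "encoding": each voxel responds linearly to the image, plus noise.
encoding = rng.normal(size=(n_voxels, n_pixels))

train_images = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
train_responses = (train_images @ encoding.T
                   + 0.1 * rng.normal(size=(n_train, n_voxels)))

# Learn the decoding map (voxels -> pixels) by least squares.
decoder, *_ = np.linalg.lstsq(train_responses, train_images, rcond=None)

# "Read" a new image off the simulated brain responses.
test_image = rng.integers(0, 2, size=n_pixels).astype(float)
test_response = test_image @ encoding.T
reconstruction = test_response @ decoder

print(np.mean(reconstruction.round() == test_image))  # fraction of pixels recovered
```

With more voxels than pixels and enough training pairs, the rounded reconstruction recovers the test image almost perfectly; the real scientific difficulty lies in the fact that actual brains are neither linear nor noiseless.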

I often have people asking me what I think of the newest mind-blowing inventions the world has to offer (which is one of the reasons this blog was created). Perhaps surprisingly to some, I never throw out the horrible paranoiac scenarios of nightmarish dystopia that people commonly take to be “critical” reviews of a technology. While I do acknowledge the potential new narratives of paranoia such technologies — and especially mind-reading technologies — will engender, I like to look at technology the way I look at bodies, Lacanian style — i.e., as the false representative, the lacking signifier, of the subject.

Being able to record one’s thoughts as an image on a computer screen is one of the basic tenets of posthuman fluidity. After all, if video games can read your mind, why shouldn’t the computer be able to see it?

Here, however, I have a very basic question: will our mind, after being replicated on a computer screen, remain our mind? Will my mind not, rather, take the position of an “undead” mind, a mind both mine and not mine at the same time, giving me an uncanny experience akin to listening to a recording of my own voice, a voice both mine and not mine at once? In the domain of the voice, we have horror movies like The Exorcist, in which a ghostly intrusion is symbolized by the changing of the voice. Similarly, we have scenarios like the Imperius Curse in Harry Potter, in which a Death Eater’s intrusion is symbolized by the changing of a victim’s mind.

What this implies, however, is a much more radical thesis: today, with neuroscience and other mind-reading technologies, the mind reveals its inherent split: my mind is not my mind. (Or, to put it in Hegelian tautology-as-contradiction: my mind is my mind.) It is no longer the age-old “Cartesian” split between the mind and the body — we are now forced to realize that even without the body, the mind is already inherently split from within. Yes, we can extract minds, read them, project them onto screens, record them and store them, build them from individual neurons, etc., but the fact remains that there is an irreducible kernel behind its presence, its irreducible (misrecognizing) reflexivity. After all is said and done, we still have a gaping void in the middle of the thinking mind, its “true” presence (compared to the “undead” simulation of the projections on the screen, which is not fully our mind, etc.), what Žižek calls “the unbearable lightness of being no one”.

It is here that we may come up with another definition of the posthuman subject: the posthuman subject is the subject whose mind is undead, a subject whose externalized mind as such loses its phenomenological vigor of living presence and turns into a zombie.

As an additional note, it is fun to imagine the birth of “mind art” in the future with this technology — far from needing any motor skills, the artist would only use sheer concentration to create stunning artworks. Like, you know, porn.

Now, replace the snowman with a nude chick.

The snowman is actually a nude chick.

What is a Good Twitter Neighbor?

Posted in Postmodern 2.0 with tags , , , , , , , on December 11, 2008 by Bonni Rambatan
The Twitter village


As the Web becomes more and more social, and more and more people write about how Twitter is a village, we are bound to confront the radical dimension of social interaction, the neighbor in its most elementary form: digital denizens of cyberspace who have “something in them more than themselves” — those whose dimension of enjoyment we can neither grasp nor fathom.

In the physical realm, we have all the elementary registers in which we talk about a neighbor: the way they laugh too loudly, the way they count their money, their strange accent, the bad smell of their food, their disgusting table manners, etc. — all of which allude to an irreducible kernel of an Other who enjoys differently from us. This dimension is, of course, the object a, the Lacanian object-cause of desire, which arouses spectral apparitions and is the cause of all our prejudices toward and hatred of Otherness.

It is interesting to see how this dimension of the neighbor persists even without real contact (the examples above are all little habits that can be seen, smelled, or heard — all requiring physical proximity). Are not the current trends of categorizing obnoxious people on the Web the ultimate proof that we are still very much prejudiced? OK, it may be a matter of fact that trolls and Grammar Nazis are frustrating idiots without proper knowledge of the big Other of the Internet, but upon reading things like this Top 10 List of People to Unfollow on Twitter or this list of 8 Most Obnoxious Internet Commenters, it becomes clear how social antagonisms in the Social Web are beginning to take shape.

My point is of course not the standard postmodern multiculturalist (“defender”) one that demands more equality for these different types, and so on. (Again, it is funny to notice how demands for more equality in the physical world are supplemented — and, likely, can only work as such — by the proliferation of online prejudices.) What I would like to call into question is the basic underlying understanding of what being a good citizen means.

There are exceptions, but a strong pattern is emerging: we tend to find obnoxious those who identify too closely with their beliefs and activities, be it sports, (cynical) politics, plain hobbies, or even attending a conference. If, in the physical world, to use Žižek’s formulation, the neighbor is the one who smells (which is why deodorants are increasingly popular, etc.), then in life online, the neighbor is essentially the one who believes.

As the Internet becomes more and more social, the big Other of networked computer systems is born. And this cyborg big Other is the virtual entity for whom we must maintain a safe distance from our own beliefs and passions, the digital symbolic for which we have to maintain the appearances of disbelief by tweeting our more “human” side instead (what we had for lunch, our travels, our day job, etc.). As always, there are inherent rules we must understand to fit into an online community: the obligations behind choices (we are obliged to follow our followers back) and the choices behind obligations (we can use scripts to follow people or schedule our tweets). A good cyberspace denizen is one who understands the proper mechanisms of the digital big Other.

Perhaps, even here today, Kierkegaard was right: the only good neighbor is effectively the dead neighbor — the best Twitter accounts are the automated ones that do nothing personal but give links to worthy pieces of information. The good Twitter neighbor is the impersonal cyborg neighbor, the neighbor without the kernel of unfathomable surplus-enjoyment. But then, we need enjoyment for systems to function, which is why we are all advised to have smiling photo avatars and to talk occasionally about the kids and dinner — the legitimized versions of the object a, the proper ways to enjoy, with all their encoded ideologies.

Mumbai, Islamic Terrorism, and the Antagonism

Posted in Political Focus with tags , , , , , , , , , on December 6, 2008 by Bonni Rambatan
Terrorism in India


The battle between Muslim terrorists and the rest of the world is a strange thing. Even within this previous sentence, many would already disagree with how I put it (“Is it not rather the Muslims and the West, or even the West and the rest?” etc…). I live in Indonesia, the country with the largest Muslim population on Earth. Many took to the streets in protest of the Danish caricatures of Mohammed back in 2006, and many hail suicide bombers as martyrs, even teaching grade school students to raise their fists and scream “Allah is the greatest” at the sites of their burials… Suffice it to say that I am very well positioned within the sphere of the current “war”.

A very curious thing for me is how each side views the war. On one side, it is seen as a war between civilization and an uncivilized Other, a war of universal human rights against those intent on disrupting them. Many Muslims take this side, claiming that the terrorists are not real Muslims, etc… For the other side, however, this is — to use their diction — a battle of ideologies. It surprises me how insistent so many Muslim communities are that the entire notion of universal human rights, etc. is just another “Western capitalist” way of oppressing Islam, and that the real war is between capitalism and Muslim ideology.

I do not think, of course, that they are completely wrong. What should be borne in mind is rather this inherent split — a Real antagonism, as Žižek, but also Laclau and Mouffe, would have put it (“society doesn’t exist”, etc.). There is no neutral, “objective” position from which to see the war, since every position is already part of the struggle.

Thus, to fully grasp the Mumbai incidents and the ongoing post-9/11 global war, we should first and foremost understand that this war is itself structured around a traumatic kernel, a Real qua impossible intersection between the dominant humanist Western paradigm and the Muslim one. Every attempt to shut down the war based on certain values is already violent and doomed to fail from its very utterance — there can be no agreement between the two points of view. Habermasian ideal communication is itself a fantasy.

How, then, do we confront and handle this war? As a good Lacanian, my political standard would be that of an ethical act — a politics that traverses the fantasy of any possible mediation between impossible points. The first thing to realize before attempting any solution is that there is no objective point of agreement. Would, then, a full-frontal war be more effective? I would claim it would be in vain, since here, perhaps more than ever, we are dealing with specters: the more we annihilate the physical enemy, the more paranoid we become that it grows stronger, that there remains an impossible kernel we can never destroy (for one side, a worldwide capitalist conspiracy against the Muslims; for the other, clandestine madrasahs that train endless suicide-bomber recruits, etc.)…

Awareness as a Decaffeinated Political Act

Posted in Political Focus with tags , , , on December 3, 2008 by Bonni Rambatan
AIDS awareness


Within the past couple of years, the notion of what we can do to help society, of what a political act means, has changed significantly. Just a couple of days ago we had World AIDS Day, the international day of HIV awareness. One thing that interests me is how “awareness” has been elevated into one of the primary good things we can do, politically and socially. Campaigns today are marketed more to “raise awareness” than to, say, effect real political change. There is always a necessity today to let everyone “take part” by doing very small things (“if you have one minute”, etc.) — perhaps to make us believe that we live in a democracy while keeping the violent kernels of real politics untouched and, often, unquestioned.

It is of course true that ignorance, lack of information, and plain stupid unawareness are terrible things, and I am by no means in favor of perpetuating them. But it is also crucial to be critical of these injunctions and to question the horizons of understanding that underlie their motives. To put it shortly: what do we mean when we say that we are aware? Aware of what, exactly?

Of course, some awareness will lead to good, significant action: on the recent World AIDS Day, people who raised awareness of course hoped that more people would visit the free HIV-testing centers, and so on. This kind of action needs a special day, and a special event, to raise awareness. That disclaimer aside, I still claim that the standard protest — that awareness is not enough, that we need more action — is out of place.

Let us take, for example, the problem of climate change. Do we really know how to handle it properly? What will raising awareness of climate change do? The nightmare is not that nobody is aware of anything and we suddenly plunge into a global disaster. The true nightmare for me is that everybody becomes aware of it, stops using plastic bags, etc., and the disaster happens nonetheless — because the big players knew nothing (or were not serious, due to corruption, “corporatism”, etc.) about how to handle it properly. Likewise, the true nightmare of AIDS is not a nightmare of negligence (although, of course, it is always horrific to see people dying of AIDS just because they did not get tested early enough); it is that even when everybody is aware of it, there remains an entire continent with an epidemic we cannot cure, even with Bono’s RED, the ONE campaign, etc.

Is this not the same injunction we find behind all the Facebook Causes? We effectively join to let people (actually, the Lacanian big Other) know that we are aware of a certain cause, while the implicit promise is, of course, that we will not have to do anything real. Awareness (and joining Facebook Causes) is a political act without real politics — we get the credit for it, but we don’t have to do anything tiring, dangerous, or dirty — a decaffeinated political act, as Žižek would put it.

So, how can we really help? Again a Žižekian thesis: perhaps the best way to effect change is not to take any action at all. Perhaps the best way to start a real change is to expose our true predicament: everyone is already ambiently aware of all the problems in the world, but nonetheless we know nothing of what exactly we are to do. On World AIDS Day, despite all the pretty ribbons we wear, we still face the elementary question of whether stopping the AIDS epidemic in Africa is possible at all under our current state of rampant global capitalism.

We do not need more awareness; we need more questions that ask, "What are we actually aware of?", questions that force all of us to stop and think, and think hard, about the state of things. Perhaps the best thing to do, rather, is to stop all the action and calls for awareness and expose ourselves to the sheer vacuity and cluelessness of our age, despite all the noisily marketed awareness and little local actions.

The Unbearable Lightness of Being Blue

Posted in Divine Science with tags , , , , , , , , , on November 29, 2008 by Bonni Rambatan

Courtesy of Dr. Pablo de Heras Ciechomski/Visualbiotech

A computer simulation of the upper layer of a rat brain neocortical column

OK, so perhaps this should have been the first post of this category, since it is arguably the top project at the intersection of cognitive science and artificial intelligence. Yes, by "Blue" I am referring to Henry Markram's Blue Brain project, started back in 2005. What we encounter in the Blue Brain project is nothing less than the possibility of a simulated "consciousness" and other complexities of the brain (although only at the level of a rat's at the moment, and at 1/10th of its speed), approached from the other side of computation, namely by modeling individual neurons instead of the more common top-down approach of grammatical computing: the revolution of third-wave cybernetics, as Katherine Hayles would have put it.
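To make the contrast concrete: in the bottom-up approach, one does not program rules for "thought" at all; one simulates individual neurons and lets behavior emerge from their interactions. Blue Brain uses vastly more detailed biophysical (Hodgkin-Huxley style) neuron models running on a supercomputer; the toy leaky integrate-and-fire neuron below is only my own illustrative sketch, not the project's actual code.

```python
class LIFNeuron:
    """Toy leaky integrate-and-fire neuron (dimensionless units)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential at which the neuron fires
        self.leak = leak            # fraction of potential retained per step
        self.potential = 0.0

    def step(self, input_current):
        """Advance one time step; return True if the neuron spikes."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return True
        return False


# Drive one neuron with a constant input: spiking emerges from the
# dynamics, not from any explicit rule saying "fire every N steps".
neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(20)]
print(sum(spikes), "spikes in 20 steps")
```

The point of the sketch is only this: nowhere in the code is "intelligence" or "understanding" represented symbolically; whatever the network does is a property of the whole, which is exactly the claim analyzed below.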

Even the strong AI skeptic John Searle commented in one of his lectures [citation needed] that, even though the mind is not software and the brain is not a supercomputer, a computer may be constructed so as to simulate the brain. He denies by all means the grammatical approach to consciousness with his famous Chinese Room argument, but he never denied semantic approaches, stating only that such an approach is not yet possible. But with the Blue Brain, although this project leans towards cognitive science rather than AI, such an approach is, at least, on the horizon of possibility.

You can read more extensively about the Blue Brain and its progress here and here, but now let us continue on to our analysis. What we have here with the Blue Brain is nothing less than a science simulating the full complexity of a mind, right down to (or, rather, up from) its neuronal basis. How do we integrate this into our cognitive mapping? What we need to understand first and foremost is how this project requires an understanding that 1) the mind emerges from the whole, not from any one of its parts, and is therefore 2) non-localizable to any kernel whatsoever.

This, of course, is what Slavoj Žižek, in his The Parallax View, called "the unbearable lightness of being no one". Are we not, today, with cognitive science, confronted with the fact, already acknowledged by Hume, that the subject does not exist? In today's context, however deep we pry open the skull and dig into the brain, we find nobody home, no kernel of the soul, no subject. This is the paradox of the 21st-century narrative of the subject.

What the Blue Brain project provides is nothing less than a common ground on which to think about silicon-based versus carbon-based life. Whereas before, we saw carbon-based life as evolving, as beings whose consciousness comes later, silicon-based life consisted of beings programmed through "consciousness" (a grammatical understanding of the relations of objects, etc., which is where I suspect the true "uncanny valley" lies). Artificial Life introduced a change by bringing chaos and emergence into the fray, but did not necessarily look into complex nervous systems. If and when the Blue Brain project succeeds, what we will have is nothing less than a complete brain simulation of a species, a silicon-based brain, "comparable to the Human Genome Project," as Markram put it in the link above.

While I remain agnostic about the Moravecian idea of downloading minds into computers (and totally an atheist apropos the idea that the subject would remain the same), I do believe that the Blue Brain project and its completion will require us to rethink our subjectivity and humanity as a whole. Silicon-based and carbon-based life will have a common grounding, and many new spaces of cognitive science will open up, as well as new spaces of transhumanity and ubiquitous technology. All of us will have to confront not only the fact that there is nobody home, but also that home itself is temporary, shredding every last bit of our humanistic grounding: the unbearable lightness of being Blue.

If (and possibly when) the Brain is implemented in a body, then things will go much further. Thoughts? Feel free to comment away!

Chinese Room and the Cogito

Posted in Pure Theory with tags , , , , , , , , , , , , on November 26, 2008 by Bonni Rambatan
Chinese Room

The Chinese Room

Cogito ergo sum is perhaps the most abused three-word phrase in our contemporary intellectual sphere, so much so that most of us no longer bother to read further on the subject and what Descartes really meant. "I think therefore I am" has been recycled over and over by changing the verb into every other activity, from shopping to tweeting. All these recyclings share one underlying assumption, a false reading of the Cartesian subject as a substantial subject. Truth be told, the mind-body split did not come from Descartes at all; the idea has obviously been around since the pre-Socratic era (why else would we have the narratives of heaven and hell?). The true Cartesian revolution is in fact the opposite one: that of a totally desubstantialized subject.

This does not mean, again, a subject desubstantialized of a body and becoming a free-flowing mind, a (mis)reading found everywhere today in the intellectual sphere, and especially in the area of third-wave cybernetics research. Among the fiercest proponents of this version of Descartes is none other than John Searle, author of the famous Chinese Room argument. Unbeknownst to Searle, however, the Chinese Room argument is, in fact, at one point, an ultimately Cartesian paradox.

What, then, does the res cogitans, the thinking substance, mean, if not the common misreading of it as a declaration of a subject of pure thought? Here, it is crucial to look at the kind of thinking under which the cogito was first formulated. The thinking that brought forth the cogito is none other than pure doubt: the doubting of reality and of my existence within it. This doubt is irreducible, so much so that, in what may pass as a rather desperate move, the doubt itself becomes the only positive proof of the thing that I doubt; I exist only insofar as I doubt my existence. Rather than a substance of pure thought ("that can be downloaded into computers", as Hans Moravec put it, etc.), Cartesian subjectivity is a void of pure self-doubt.

(It is of course true that, in Descartes, there is ultimately a mind-body duality: the subject does not depend on the world, res extensa, to truly exist. This is, however, not because they are two separate substances, but because the former is a precondition of the latter; because the cogito is a prior void around which res extensa can only emerge as such.)

Does John Searle not reproduce exactly the same motif in his Chinese Room argument, except that instead of doubting the true existence of his reality, he doubts the cognition of computer programs? The famous Cartesian doubt, "What if God is a liar?", is here replaced by Searle's "What if I ultimately do not understand the symbols with which I communicate, but only know their perfect grammar?" Of course, the paths they take in the end differ: if Descartes were a Searlean, he would have claimed that he cannot prove his own existence; if Searle were a Cartesian, he would have acknowledged that it is not possible to know grammar without knowing semantics, for ultimately meaning is generated from structure, as the Structuralists already had it.
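For readers unfamiliar with the thought experiment, the room's operation can be sketched in a few lines: the operator applies purely formal lookup rules to symbols whose meaning he never accesses. The rulebook entries below are my own invented miniature; in Searle's scenario the rulebook would cover every possible Chinese input.

```python
# A minimal sketch of the Chinese Room: conversation by rule lookup,
# with no access to what the symbols mean. The entries are invented
# for illustration only.
RULEBOOK = {
    "你好": "你好！",            # the operator need not know this means "hello"
    "你懂中文吗？": "当然懂。",   # ...nor that this reply claims understanding
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rulebook dictates for the input symbols."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # default: "please say it again"

print(chinese_room("你好"))  # a syntactically correct reply, zero semantics
```

The sketch makes Searle's worry tangible: the function produces well-formed replies while "understanding" appears nowhere in it, which is precisely what the systems reply discussed below contests at the level of the whole room.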

A great answer to the Chinese Room argument, and so far the best one, I think, is the systems reply, which claims that it is the whole room, rather than the person, that understands, because cognition cannot be localized to a single source. This would be the true Cartesian revolution, that cognition is separate from any particular subject, and the true Lacanian experience of the subject as barred. Searle rejected this argument by saying that even if the entire room were located inside the brain, that would not make the subject understand any more than he does, despite his being able to communicate; this, of course, presupposes an ideal subject that "truly understands."

Here, Daniel Dennett's reply is worth noting: Dennett claims that if such non-thinking-but-nevertheless-communicating subjects existed, they would be the ones surviving natural selection, and hence we would all be "zombies". Does this not perfectly mimic the humanists' fear of the post-structuralist alienation of the subject from language? Dennett, perhaps rather unfortunately, then goes on to say that the Chinese Room is impossible because we are not zombies, which, again, presupposes an ideal, non-alienated subject.

Distributed cognition is where the barred subject takes its place in contemporary cybernetics, and this is, contrary to popular belief, an ultimately Cartesian move that fully separates cognition from its local basis, just as the mind is separated from its carbon basis. It turns out that Descartes was not only the first Lacanian, as Žižek put it, but also the first third-wave posthumanist. It is a sad fact, then, that leaders in the field of cybernetics overlook this and, on both sides of the argument, tend to return to Aristotelian ideals, to illusions of wholeness.