Archive for Jacques Lacan

Facebook between Anti-Semitism and Breastfeeding

Posted in Postmodern 2.0 with tags , , , , , , on May 11, 2009 by Bonni Rambatan

The Real of Facebook

Recently Mike Arrington of TechCrunch posted a polemic against Facebook’s policies. It turns out that Facebook does not ban Holocaust denial, yet it bans pictures of breastfeeding women. This has predictably disturbed many people, and has led to paranoiac suggestions that Facebook is actually anti-Semitic, anti-feminist, and so on.

Especially interesting here is the juxtaposition of the two taboos, positioning them from the start as a case of misplaced censorship — one thing that should be censored but isn’t, and another that should not be but is. This proves more problematic the closer one looks at it: of course, people would still get angry if Facebook did not censor breastfeeding either (neither censored), while they would probably be content if Facebook censored both. The problem only arises when there is a displacement of censorship, as is the case at the moment.

Which leads us to a more interesting perspective: what do Jews and tits have in common? As Žižek noted, Jews are the anti-Semites’ embodiment of the malignant object-cause of desire, while breasts are of course one of the forms in which the object-cause of desire appears, according to Lacan. The appearance of anti-Semitic comments in Facebook discussions brings this object-cause of desire into obscene light, while breastfeeding, supposedly through its context, arguably does the opposite: it desexualizes the breasts into a non-sexual, family-friendly object.

The excuse for the latter’s censorship is of course the usual one: “We know there is nothing sexual about breastfeeding, but nonetheless people have warped fetishes,” while the excuse for not banning Holocaust denial is, presumably, that all opinions are allowed and must be respected. Of course, the hate speech shown in TechCrunch’s screen-caps is already against Facebook’s TOS (which obviously leads to the rash conclusion that everyone who denies the Holocaust is an anti-Semite), but it seems that the core problem is not so much the hate speech as the very space for denying the Holocaust at all.

It is here (and not only on 4chan!) that one encounters the Real of the Internet. We start off wanting to promote “safe content” and end up censoring arguably trivial things such as breastfeeding, which recalls the proverbial paranoia that nothing is safe on the Internet (what about pictures of feet and socks, or of children at all — should Facebook allow them, when people can just as easily take them off of Facebook with a few simple clicks and use them as a means of warped public masturbation on another site?). The obverse is also true: we start off wanting to encourage discussions from all perspectives, and end up encountering the hard limit of our so-called postmodern tolerance (every historical account is relative and its truth is questionable, except the Holocaust, the truth of which must be maintained at all costs to keep ourselves from a resurgence of anti-Semitism!).

You'd think it was really easy to moderate a group

Mike Arrington ends his article with the comment, “Yes, Facebook, this is the side of the line you’ve chosen to stand on,” and posts an obscene image of child victims. I would say that Facebook, being the “sixth largest country,” is fated to keep finding itself in dangerous situations on the other side of the line (remember Facebook’s privacy polemic several months back?) — why? Because Facebook is becoming more and more like a government rather than a system (like Wikipedia, Twitter, or 4chan, which are really more public places than governed homes). And the obscene image, what is it but a symptom dedicated to an innocent Other’s gaze, for which the truth of the Holocaust must be maintained?

Accommodating a society with its own postmodern paranoia and micropractical ethics is a tough, if not impossible, job. There is a line, a primordial split constitutive of society, which looks different from different sides. In some ways 4chan (especially /b/) is luckier, since it embodies nothing but this split itself. Facebook tried to be careful, as it always does, but it looks like once again it got its own message back in its inverted, true form.

On The Idea of Communism

Posted in Political Focus with tags , , , , , , , , on April 1, 2009 by Bonni Rambatan

Hello hello, TPM readers! Thank you for being faithful even in these times when I am blogging much less than usual — two weeks of unexplained absence, without a drop in the reader count! Thank you for standing by! Well, I have been doing several projects, and am also writing my thesis, but here I am :-)

To start the month, why don’t we review a bit of what happened in March — an event that started on the appropriately dangerous Friday the 13th and ended on the following Sunday. I am talking, of course, about the On The Idea of Communism conference, hosted by Slavoj Žižek at Birkbeck College, which included names like Alain Badiou, Terry Eagleton, Peter Hallward, Michael Hardt, Antonio Negri, Jacques Ranciere, Judith Balso, Bruno Bosteels, Alessandro Russo, Alberto Toscano and Gianni Vattimo. Jean-Luc Nancy, I think, was supposed to be there but could not attend due to visa problems (which reminds me of my own case last year).

I would have loved it if I had actually attended and this were an actual report, but I didn’t, so for conference notes I would refer you to Andrew Osborne’s post here. I watched several videos on YouTube as well, one of which I linked above.

I just want to comment on this conference. First of all, it was a really exciting conference and perhaps could not have had better timing. We are living in times in which people have less and less faith in both world politics and the economy. It is true that people in many places, including my own country, still irrationally fear communism (the most popular response in my country being that communism is forbidden by religion — LOL?), but this should nonetheless be conceived as the perfect time to think. Žižek suggested we take Lenin as an example: in the harsh times of 1915, he retreated to Switzerland to read Hegel.

Alain Badiou puts the times we are facing today very nicely. I quote from Osborne’s blog:

Today we are nearer the 19th century than the 20th century, with the arrival of utterly cynical capitalism. We are witnessing the return of all sorts of 19th century phenomena such as pirate nationalisations, nihilistic despair and the servility of intellectuals.

Badiou then of course goes on, in his usual manner, to mention the need for a strong subjectivity to change the coordinates of possibilities in order to create the Event, the rupture in existence through which we can militantly assert a new truth. This is important, and it is stressed again by Žižek in the conclusion: a change is not a change in actuality but a change in possibilities. Thus, our task is to think the possibility of possibilities, to do the impossible — not the usual Kantian “we must, because we can,” but the Badiouvian “we must, because it is impossible.”

I also love what Michael Hardt and Antonio Negri have to say, although the ideas they mention are nothing new if you have read their work, Empire. How can I not love it when the entire notion is similar to the original theme of this blog (I say original, because lately it has become more and more Lacanian than Marxist, I know), that is, one that interrogates the notion of cognitive capital, digital property, and the commons in this day and age of biocybernetic reproduction. Copyright conflicts are the new terrain of the struggle of the commons — now you know why I love calling myself a pirate.

Antonio Negri stressed another important point about communism, one I relayed in three tweets. It is a point already made by Tronti and Lenin, as @semioticmonkey corrected me. Indeed, communism is opposed to socialism in the same way that psychoanalysis is opposed to ego psychology. There is no equal State, just as there is no healthy ego. Communists must organize the decline of the State, as psychoanalysts must refuse to sustain the efficacy of the ego. Both communism and psychoanalysis must act with an ethics of the Real and acknowledge the redundancy of the agent.

But all in all, in the end, we still have a question. Is communism a program, a movement to bring back politics and its efficiency, faithful to a continuous revolution — do we need to organize a continuous decline of the State in order to change our possibilities, as Žižek would argue? Or is it merely a philosophical idea, and what we need now are militant communists, not communism per se, acting to the fullest extent on the ethics of the tragic hero, the ethics of the Real, in order to produce an Event, as Badiou maintained?

The Twitter Hysteria

Posted in Postmodern 2.0 with tags , , , , , , , on March 14, 2009 by Bonni Rambatan
Are all Twitter users insecure like her?

Twittering stems from a lack of identity. It’s a constant update of who you are, what you are, where you are. Nobody would Twitter if they had a strong sense of identity.

Angry? Here’s another one:

Using Twitter suggests a level of insecurity whereby, unless people recognise you, you cease to exist. It may stave off insecurity in the short term, but it won’t cure it.

Curious what it’s all about? Here is the full article for you to read.

Annoying as those statements may be, we should not get caught up in our emotions and simply dismiss them as having no degree of truth whatsoever — although, we must admit, when they claim that tweeples do not say “What do you think of Descartes’s second treatise?” you really know they got things wrong.

After all, some of us do ask questions like that on Twitter — and start a terrific discussion while we’re at it. Don’t believe me? Try following these people. There are lots more, but I just linked the ones that happened to debate most recently on the precise issue brought up by the article. Sure, some of them may tweet about mundane daily things (if you don’t want mundane daily things and only philosophical content and computer stuff, I have a Twitter friend on that extreme end — perhaps a few others?). But really, the reason I followed them was not that I wanted to fabricate an imaginary connection with the person (in the loose, non-Lacanian sense of the term), but that we spark interesting discussions. And even when we don’t, I still follow people like Guy Kawasaki — not because I think they’re such great guys, but because they post links to interesting articles.

Tempted to continue my rant

I’m tempted to continue my rant, but let’s get serious. Just sign up to Twitter if you haven’t, follow the people I linked to, and you can see right away that Oliver James the clinical psychologist and David Lewis the cognitive neuropsychologist may be less intelligent than the people they talk negatively about.

But as I said, what they say deserves a closer look. It’s not pure bullshit. We do have people on Twitter who go into emo rant mode 90% of the time, saying how worthless their lives are (no, I won’t link, if only so that you vain followers can paranoiacally think it’s you). It’s obvious that they get no better by doing so.

Jacques Lacan said that the art object occupies the place of the analyst. By this he means it occupies the position of the object a, though not necessarily the discourse of the analyst. So too with the Internet, and Twitter in particular — here is ultimate proof of that. Why Twitter in particular? Because of the space of speech, of course — an illusion of connection, if you want to call it that, since it does belong to the Imaginary register, which is especially true on Twitter, where people don’t listen to you but nonetheless hear you. I told you our unfortunate friends got some things right!

What things right? That connection on Twitter serves as an object-cause of desire. They are wrong, however, in saying that this object-cause of desire must be located along with the subject, producing a hysterical discourse with symptoms such as those James and Lewis mention (insecurity, lack of identity, etc.). As I tweeted, the problem is: do you let it speak the truth, or are you too busy trying to speak that object little a?

Slavoj Žižek once said that the Internet merely confirms how virtual our lives had already been. What a beautiful way to put it. If nobody would Twitter if they had a strong sense of identity, we should then ask: what would they do instead? For we have always been living a virtual life.

It’s not about Twitter, after all. Twitter just makes it more visible. We have always been attracted to connection. We have always been attracted to those who hear us without having to really listen to us or know us, those who see us on the streets from the corner of their eyes, those who peek at our sexual lives. We have always been fascinated by them, as we are fascinated by art. That’s what Twitter is all about; that’s what the Social Web is all about. We love those things, those object-causes of desire. Consuming them in no way makes us insecure hysterics all of a sudden.

Just in case your friend's on Twitter

Social Networks and Mind Evolution

Posted in Postmodern 2.0 with tags , , , , , , , on March 7, 2009 by Bonni Rambatan

Real conversation in real time may eventually give way to these sanitised and easier screen dialogues, in much the same way as killing, skinning and butchering an animal to eat has been replaced by the convenience of packages of meat on the supermarket shelf. Perhaps future generations will recoil with similar horror at the messiness, unpredictability and immediate personal involvement of a three-dimensional, real-time interaction.

 

Photograph by Chris Jackson/Getty Images, from The Guardian

Several weeks ago there was a post in The Guardian UK titled “Facebook and Bebo risk ‘infantilising’ the human mind”. Gosh, I thought, not another one of those technophobic critics again! If the name of Susan Greenfield had not been mentioned right at the beginning, I probably would have just read a few lines and closed my browser’s tab. Luckily Greenfield was mentioned, and when she is, I know it’s going to be neuroscience, and it’s going to be not so much technophobic as exciting.

And I was right. Sure enough, there were technophobic tones here and there, but tones of fascination with the brain were more prevalent. It was particularly the paragraph quoted above that caught my attention. Also, the conclusion of the article reads as follows:

But Greenfield warned: “It is hard to see how living this way on a daily basis will not result in brains, or rather minds, different from those of previous generations. We know that the human brain is exquisitely sensitive to the outside world.”

This is an exciting fact. A critical theorist shouldn’t be tempted to look at this merely in the vein of some cheap “postmodern” attitude of criticizing the hegemony of mathematical neuroscience for repressing the contingency of the discourse of knowledge and thereby oppressing certain minority discourses about the brain and the spirit and such and such. Rather, one should ask what truth lies behind these claims and how they will effectively play out.

As for myself, I am highly pleased that we have such articles. At least I know that I am not the only person to make the claim that Web 2.0 is changing our minds much more radically than we may think. Of course I was never the only one, but it is disconcerting to note that most of those who think so belong to the school that calls itself “New Age”, with which I never want to be associated.

Greenfield does it in terms of neuroscientific psychology. Her points, I think, are correct. I also think they deserve much more elaboration from a proper psychoanalytic point of view. After all, psychoanalysis proper is psychology, so does it not make much more sense for them to work together in criticism? Perhaps the now fashionable divergence of the fields (one toward neuroscience, the other toward a “materialist-transcendental” Deleuzian approach) should not be embraced so dearly, after all.

So what is my Lacanian take on this issue of what I call, for lack of a better name, a “mind evolution”? Social networks and Web 2.0, even computer logic in general, make a world of difference in the subject’s relation to the big Other, the socio-Symbolic order, as well as in his relation to his own object-production. In the social Web, we have fluid identities (we consciously construct online identities), confusion of time (we can undo many things we did not want, we can think about what we want to say in a chat before hitting Enter), and virtually eternal memory (objects of our production never disappear and can be reproduced endlessly), to name a few that I think are the major ones. Is this not proof that the basis of language and society itself (the notion of forgetting, of property, of not-knowing, of spatial interaction, of temporality, etc.) is changing?

My point in this post has been but one: something is happening in the human mind, and questions are popping up about this change. Analysis should not be left to neuroscientists alone, but taken up by critical theorists as well. We are in dire need of a coherent cognitive mapping, one with which I believe psychoanalysis proper will help greatly.

The White Bentley Chase Did Not Happen

Posted in Postmodern 2.0 with tags , , , , , , , , , on March 2, 2009 by Bonni Rambatan

Such was Jason Quackenbush’s response #5 to the polemic that has been going on in the blogosphere about the live-tweeting of a certain suicide last month. Yes, I know this is very much a late response, but you will forgive me, for I was isolated from the blogosphere in the past two months. I learned of the story just a while ago from my good friend A.V. Flox, who blogged about it herself. Five days later, Amber Rhea responded rather harshly to A.V.’s post, saying that accusations that technology is awful have been going around for ages, in virtually every era.

Tatchapon Lertwirojkul's Simulacra, which has nothing to do with our post

I agree with them. Yes, all of them. Well, I’m a Lacanian, and Lacanians like to bring seemingly contradictory things together and show how they are two sides of the same coin. This is what I’m going to do here.

But before that, a quick recap of what happened. On the eve of February 10th, there was a white Bentley chase in LA that lasted for three hours before the driver pulled over in front of a Toyota dealership and finally killed himself. Let us quote A.V., who tells it in a much more eloquent manner than I ever could:

The white Bentley stopped in front of a Toyota dealership near Universal City after a three hour chase on Hollywood Freeway and Interstates 5, 10 and 405. The stand-off began at around 11:00PM PST, with hundreds tuning in to the FOX11, ABC7 and KCAL9 live feeds online.

Before long, Twitter streams were on fire with commentary from people around the world about what was happening. People watching gave in to speculation about the identity of the driver, debating whether it was hip hop singer Chris Brown — charged earlier with assault, allegedly against his girlfriend, the singer Rihanna — or rapper DJ Khaled, as well as the reason for his fleeing.

As time passed with no action, the public became more and more irate. Jokes followed, including the creation of the fake account @WhiteBentley, which ran a stream of comments as though he was the driver inside the car.

The jokes soon turned sinister, with many expressing someone should just shoot the driver down and save the LAPD thousands, and still others suggesting the driver end his life to avoid repercussions of the extended chase. Then, after news reports began coming in that the driver might indeed have shot himself and the ABC7 cameras zoomed out to avoid exposing the public to a gruesome scene, the disappointment was almost unanimous.

“They aren’t going to zoom in and show us the possible brains, bullshit!” a chilling tweet read.

The driver and law enforcement personnel involved were no longer human to those of us watching. Moving around inside our computer screens, they had become characters in a play put on for our entertainment.

Fascinating. Of course, I could not agree more: people inside the computer screen have become characters in a play put on for our entertainment. Let’s get back to Jason Quackenbush. The same idea, of course, underlies his post. He mentions Baudrillard, whom he likes the more “the older he gets”, and evokes Baudrillard’s famous statement that the Gulf War did not happen, applying the same idea to the suicide tragedy.

I am not a Baudrillardian. I like Baudrillard, but to me his ideas are a little simplistic, and I could never be convinced of his idea of a postmodern rupture after which all things implode into simulacra. I do not like the technophobic tone, often with hints of nostalgia for the past, detectable in his works, as in those of most postmodern philosophers, including, today, Paul Virilio. And this is where I agree with Amber Rhea. “What’s the current monster of the week?” she said. “The formula seems to be: pick something relatively new and use it as a scapegoat; wring hands; bemoan the direction society is heading (downward, one presumes); repeat in 2-3 months.”

In fact Amber made an excellent point: we can always go further back in time to find this monster. As far back, I would say, as the development of language and tools themselves, the very things that make us what we are today instead of cavemen. You see, mankind is a creature that is fundamentally alienated, separated from reality. Deal with it. To bemoan technology is in effect to bemoan language itself. When I say that the white Bentley chase did not happen, it is not because Twitter has created a Baudrillardian rupture of reality, but because nothing really happened. We live in a Symbolic universe, the universe of technology and language, mediated by it, and things happen, be it with drama and empathy or with sheer coldness and chilling morbid jokes, in none other than our imaginations. We always connect with other people through Imaginary relations.

That should not, however, be an argument to dismiss the live-tweeted suicide as just another day at the office. One cannot deny that it was a horrible event, and one that could only happen after the invention of Twitter. Technology does change us, in major ways, and we cannot deny that. Does Twitter kill your soul? Perhaps. But let us not forget that the history of technology is a history of human souls being killed over and over and over again since the beginning of time. It is also a history of their rebirth, of new modes of Being, as Heidegger put it.

Ultimately, the question of the inherent good or evil within technology is a personal wager. We are never sure that technology will bring us good. But let us not die in postmodern simulacra. Let us be good Badiouvians and realize the militant nature of truth and the good. I’m rooting for Twitter all the way. Kill our souls, if only to make us grow.

Your Mind is Now Undead!

Posted in Divine Science with tags , , , , , , , , , , on December 19, 2008 by Bonni Rambatan
Teh ceiling cat is in ur machine, reading ur mind...

Less than a week ago, researchers in Japan confirmed a way to extract images directly from brains. Yes, you read that correctly; in a nutshell: by hooking you up to this machine, everyone can now see what you are thinking, because it will be shown on a monitor. I had this reply in my Twitter stream when I tweeted about it, and although I have not yet seen that movie, it is nonetheless very easy to imagine this invention being taken right out of a science fiction gig. (Being the shameless otaku that I am, the personal memory this news recalled for me is none other than Japan’s anime ambassador, Doraemon.)

I often have people asking me what I think of the newest mind-blowing inventions the world has to offer (which is one of the reasons this blog was created). Perhaps surprisingly to some, I never throw out the horrible paranoiac scenarios of nightmarish dystopias that people commonly take as “critical” reviews of a certain technology. While I do acknowledge the potential new narratives of paranoia such technologies — and especially mind-reading technologies — will engender, I like to look at technology the way I look at bodies, Lacanian style — i.e., as the false representative, the lacking signifier of the subject.

Being able to record one’s thoughts as an image on a computer screen is one of the basic tenets of posthuman fluidity. After all, if video games can read your mind, why shouldn’t the computer be able to see your mind?

Here, however, I have a very basic question: will our mind, after being replicated into a computer screen, remain our mind? Will my mind not, rather, take the position of an “undead” mind, a mind that is both mine and not mine at the same time, giving me the uncanny experience akin to listening to a recording of my own voice, a voice both mine and not mine at the same time? In the domain of the voice, we have horror movies like The Exorcist, in which a ghostly intrusion is symbolized by the changing of the voice. Similarly, we also have scenarios like the Imperius Curse in Harry Potter, in which a Death Eater intrusion is symbolized by the changing of a victim’s mind.

What this implies, however, is a much more radical thesis: today, with neuroscience and other mind-reading technologies, the mind reveals its inherent split: my mind is not my mind. (Or, to put it in Hegelian tautology-as-contradiction: my mind is my mind.) It is no longer the age-old “Cartesian” split between the mind and the body — we are now forced to realize that even without the body, the mind is already inherently split from within. Yes, we can extract minds, read them, project them onto screens, record them and store them, build them from individual neurons, etc., but the fact remains that there is an irreducible kernel behind its presence, its irreducible (misrecognizing) reflexivity. After all is said and done, we still have a gaping void in the middle of the thinking mind, its “true” presence (compared to the “undead” simulation of the projections on the screen, which is not fully our mind, etc.), what Žižek calls “the unbearable lightness of being no one”.

It is here that we may come up with another definition of the posthuman subject: the posthuman subject is the subject whose mind is undead, a subject whose externalized mind as such loses its phenomenological vigor of living presence and turns into a zombie.

As an additional note, it is fun to imagine the birth of “mind art” in the future with this technology — far from needing any motoric skills, the artist would only utilize his sharp concentration to create stunning artworks. Like, you know, porn.

Now, replace the snowman with a nude chick.

Why We All Hate Comic Sans

Posted in Pop Culture with tags , , , , , , , , , , on December 14, 2008 by Bonni Rambatan
Mmm, Comic Sans...

It is a very interesting fact that a single font can create phenomena of such extent, spurring its own hate groups on one side (mostly designers) while being loved on the other (mostly by amateurs). What is it with Comic Sans? I am of course not asking about typography history or other things that make man’s love-hate relationship with the font contingent on historical events, as many would. Instead, a much more interesting question would be: is there something inside the font itself that makes it possess such a property?

What is typography? Here I would refer again to a Lacanian textual analysis. Is not typography that which is precisely an excess to the meaning of a word — that which remains, rather incessantly, after we get the entire meaning of the word? In this sense, typography may be considered the voice of the movable type, insofar as it is a ladder to get to meaning, but useless after we achieve meaning itself (I am here referring to the definition of voice by Mladen Dolar in his book A Voice and Nothing More).

Good typography, then, like the good art of voicing, may be compared to music, the music of written words — in Lacanian terms, its jouis-sense, enjoyment-in-meaning, enjoy-meant. Is Comic Sans, then, bad typographical music? Why so? The first things most of us associate Comic Sans with are childishness, immaturity, and non-professionalism. Comic Sans is thus like annoying children’s music (it may not be a coincidence that many people I know who loathe the font also do not have that strong an affinity with children). This can be excellent, of course, in the right context.

What is the right context? As the name suggests: comics. Following comic art theory (read with Lacan), comics depict subjects drawn simply, or with heavy shadows, to maintain the character’s subjective attachment — to put it simply, a lack, an unregulated place for the object little a, to maintain the little other. Is this not also precisely the case with Comic Sans — that there is too much room for subjective attachment due to its inherent lack in design?

In what sense can we talk about this? Let us now borrow a term from Derrida: that of undecidability. Comic Sans is precisely undecidable as to the category it tries to occupy. On one side, there are the more professional fonts: Times New Roman, Helvetica, etc. On the other, there are the obviously decorative fonts, ranging from script-like cursive fonts to Wingdings, with Jokerman and the like somewhere in the middle. Does not Comic Sans lie precisely in the middle — not as a compromise between the two, but as a kind of spectral object that leans towards both ends simultaneously, just as a good Derridean undecidable object would?

Comic Sans is thus the undecidable object of typography, an undead type. As such, there is a huge gaping void of lack, a spectral appearance of the object-cause of desire that on one hand captures the heart of sixth-grade first-time presenters, and on the other freaks professional designers out.

(But why is my title “Why We All Hate Comic Sans” if I acknowledge that some people love the font? The reason is tautological — is it not that, to be considered a “we” in the digital age today, we have to be more professional and shun Comic Sans for good? We all hate Comic Sans because the big Other does — we must hate Comic Sans.)

I’ll leave you with a video to give you more of an idea of the undecidability of our undead font. Feel free to comment your thoughts away.

Chinese Room and the Cogito

Posted in Pure Theory with tags , , , , , , , , , , , , on November 26, 2008 by Bonni Rambatan
The Chinese Room

Cogito ergo sum is perhaps the most abused three-word phrase in our contemporary intellectual sphere, so much so that most of us no longer bother to read further into the subject and into what Descartes really meant. “I think therefore I am” has been recycled over and over by swapping the verb for every other activity, from shopping to tweeting. All of these have, of course, one underlying assumption: a false reading of the Cartesian subject as a substantial subject. Truth be told, the mind-body split did not come from Descartes at all — the idea has obviously been around since the pre-Socratic era (why else would we have the narratives of heaven and hell?). The true Cartesian revolution is in fact the opposite one: that of a totally desubstantialized subject.

This does not mean, again, a subject desubstantialized of a body and becoming a free-flowing mind, a (mis)reading found everywhere today in the intellectual sphere, and especially in the area of third-wave cybernetic research. Among the fiercest proponents of this version of Descartes is none other than John Searle, the proponent of the famous Chinese Room argument. Unknowingly for Searle, however, the Chinese Room argument is, in fact, at one point, an ultimately Cartesian paradox.

What does the res cogitans, the thinking substance, mean, then, if not what the common misreading takes it for — a declaration of a subject of pure thought? Here, it is crucial to look at the kind of thinking under which the cogito was first formulated. The thinking that brought about the cogito is none other than pure doubt — the doubting of reality and of my existence within it. This doubt is irreducible, so much so that, in what may pass as a rather desperate move, the doubt itself becomes the only positive proof of the thing that I doubt — I exist only insofar as I doubt my existence. Rather than a substance of pure thought (“that can be downloaded into computers”, as Hans Moravec put it, etc.), Cartesian subjectivity is a void of pure self-doubt.

(It is of course true that, in Descartes, there is ultimately a mind-body duality: the subject does not depend on the world, res extensa, to truly exist. This is, however, not because they are two separate substances, but because the former is a precondition of the latter; because the cogito is a prior void around which res extensa can only emerge as such.)

Does John Searle not reproduce exactly the same motif in his Chinese Room argument, except that instead of doubting the true existence of his reality, he doubts the cognition of computer programs? The famous Cartesian doubt, “What if God is a liar?”, is here replaced by Searle’s “What if I ultimately do not understand the symbols with which I communicate, but only know their perfect grammar?” Of course, the paths they take in the end are different: if Descartes were a Searlean, he would have claimed that he cannot prove his own existence; if Searle were a Cartesian, he would have acknowledged that it would not be possible to know grammar without knowing semantics, for ultimately meaning is generated from structure, as the Structuralists already have it.

A great answer to the Chinese Room argument, and so far the best, I think, is the systems reply, which claims that it is the whole room, not the person, that understands, because cognition cannot be localized to a single source. This would be the true Cartesian revolution — that cognition is separate from any particular subject — and the true Lacanian experience of the subject as barred. Searle rejected this argument by saying that if the entire room were located inside the brain, that would not make the subject understand any more than he does, despite his being able to communicate — which, of course, presupposes an ideal subject that “truly understands.”

Here, Daniel Dennett’s reply is worth noting: Dennett claims that if such non-thinking-but-nevertheless-communicating subjects existed, they would be the ones surviving natural selection, hence we would all be “zombies”. Does this not perfectly mimic the humanists’ fear of the post-structuralist alienation of the subject from language? Dennett, perhaps rather unfortunately, then goes on to say that the Chinese Room is impossible because we are not zombies — which, again, presupposes an ideal, non-alienated subject.

Distributed cognition is where the barred subject takes its place in contemporary cybernetics, and this is, contrary to popular belief, ultimately a Cartesian move that fully separates cognition from its local basis, just as the mind is separated from its carbon basis. It turns out that Descartes was not only the first Lacanian, as Žižek put it, but also the first third-wave posthumanist. It is thus a sad fact that leaders in the field of cybernetics overlook this and, on both sides of the argument, tend to return to Aristotelian ideals, to illusions of wholeness.

Cultured Meat and Totem Culture

Posted in Divine Science with tags , , , , , , , , , , , on November 17, 2008 by Bonni Rambatan
In Vitro Meat (c) DC Spensley/H+ Magazine

Let us now go on to discuss further the issue of how to deal with life (following up on this Cat Bag post). It is interesting today to see the debate surrounding cultured meat: meat grown in labs, without any animal being sacrificed. The idea, of course, is to care more for the animals (which is why PETA would give $1 million to whoever first comes up with a successful way to cultivate the meat), to consume less energy, and to cause less pollution by decreasing the number of slaughterhouses… basically following the fashionable standard of the environmentalist use of science.

It is incredibly hard to miss the Žižekian logic of decaffeinated culture at work here: is not meat without sacrifice the example of decaffeinated consumption par excellence? But now let us take a moment and look further into society’s response to this very topic: do a quick search for “cultured meat” on the internet, and you will see that most people reject the idea. Why is this? Are we not supposed to celebrate the progressive development of this decaf ideology with joy? In the case of cultured meat, however, even the famed transhumanist RU Sirius commented, “Yuck!”

The answer is not that hard to find: people still find it strange and uncanny to eat meat that was not taken from a live animal. Why? Here we can clearly see the symbolic, ideological dimension of the purely biological everyday act of eating, one that Freud explicated in his Totem and Taboo. In eating meat, are we not also eating the other species’ death? The death of the sacrificed animal is more of a symbolic necessity than an unavoidable fact. This is the reason we have all those kinds of sacrifice rituals and forbidden meals.

What is very interesting, of course, is how this primitive logic of totemic rituals still turns out to play a large role in an age in which we are supposed to no longer believe in anything. What is the state of affairs of totem and interspecies relations in the world today? Clearly, we are stuck between two conditions: novel technologies that give us capacities only God himself would have had just a little over a hundred years ago — the “divinity of science” that goes with the rapid advancements of quantum physics, bioengineering, and neuroscience — and ancient symbolic necessities, the totems and taboos of our primitive ancestors.

In the end, perhaps Paul Virilio was right: we are caught between contradicting dromologies, the ecstatic high speed of cyberspace and the slowness of human minds. Or perhaps Hayles and Haraway were right that this is not a deadlock after all, and what we need is a new formulation of subjectivity itself. Or perhaps all of them are correct in a way, and we need to see — to put it in Kierkegaard’s terms — the primitive totem-and-taboo subject as this new posthuman subjectivity in-becoming, instead of as its enemy.

What about you? Would you eat meat grown in labs? Do you find more ideologies at work? Feel free to comment away!

Cat Bags and Cyborg Significant Others

Posted in Companion Species with tags , , , , , , , , , , , on November 14, 2008 by Bonni Rambatan

The Cat Bag

What is life today? Obviously I am not talking about another kind of New Age mysticism here, but nonetheless I think this question is crucial if we are to fully grasp the notion of significant otherness in interspecies relations. If Haraway talked about cyborgs and companion species, today, with ambient intelligence and wearable computing on one hand and the increasing atomization of society on the other, we are entering more and more into a realm of cyborg companion trans-species — the land of ambient life in the glorious age of “hybrid wearables”.

I was intrigued by these photos of the “Cat Bag”, pictured above. What is so interesting is how this bag will breathe, purr, light up its eyes, radiate warmth, and even beat its heart. If the OncoMouse, the first species to be trademarked, is the prime example of the convergence of biotechnology, scientific research, and capitalist production, what is the Cat Bag if not the example par excellence of the convergence between the romantic realm of significant otherness and the realm of stupid, elementary practical usage?

How is this possible? What do we see in the potentials of technology today? From Mediamatic‘s review of the Hybrid Wearables Workshop, we can read:

I do not need my laptop to be merged with my overcoat. I do not want to receive email on a tiny screen mounted on my eyeglasses. I do not have enough attention to distribute to real and virtual life at once. Nevertheless, applications like these are some of the first which come to mind when one mentions wearable computing.

Instead, what if your shirt would hug you every now and then? What if your bag would warn you about forgetting your keys? What if your socks explained how to give a fantastic foot massage?

If you are familiar at all with Lacanian psychoanalysis, one thing is clear: not only is technology made as a means to gain the object little a from other human subjects, but technology itself is seen as possessing the object little a, as the treasure box (or hard disk?) in which the agalma is hidden — a posthuman cultural construct at its most elementary.

Animal domestication was one of the crucial steps in the development of modern man, on par with tool use itself. The relationship between the human and the nonhuman has, of course, continued to be a crucial one. And it is evolving with technology, as we can see. In psychoanalysis, already with Freud, we have theories of the totem, animal spirits, and so on. But what about the evolution of the discourse of species itself? Here, I think, the cyborg subject is not so well theorized.