Category Archives: conference

Society for Literature, Science, and the Arts 2007

Today the SLSA 2007 conference began in Portland, Maine. For perhaps the first time ever, I will attend all sessions every day of a professional meeting. I suppose I shouldn't admit it, but I make this revelation to illustrate the "high-poweredness" of this year's meeting. Every session has at least one fascinating panel and speakers whose work I know. In fact, though, I won't always be attending the famous ones.

So today I went to a panel of two speakers, Vera Bühlmann and Klaus Wassermann, both speaking about Deleuze, and they were really interesting. Vera's talk in particular was relevant to my own paper because she was talking about Sloterdijk, who underpins the foam metaphor I was trying to discuss, riffing off what Mirko presented this summer. The most important point was Sloterdijk saying that humans could become anything they could imagine in a sustainable way. That's one thing I think really attracts people to online communities of different types: the possibility of having a variant identity validated and sustained.

Anyway, more on my panel (which was the very next one) later. Now I head back to the conference.

The transformation of Literary and Media Studies

Finally I have a chance to finish my report on Katherine Hayles; I'm sure you must all be relieved, waiting with bated breath as you were. Or weren't you? Tsk tsk. Well, I will finish it for my own satisfaction then.

After laying out her argument that human-computer interaction is an example of intermediation, or emergent complexity (or at least has the potential to become so), Hayles then claimed that "consciousness is not the expression of a coherent unified self, but is the narrative that sutures together a fragmented collection of multiple agents working simultaneously." Based on my own fairly extensive research in neuroscience, I can say that this is indeed what is believed, and has been believed since around 1997 at least. She cites Daniel Dennett.

Some E-lit takes a form that enacts this view: for example, Slipping Glimpse by Stephanie Strickland, work by Cynthia Lawson, and work by Jaramillo.

Hermeneutics alone can’t do the bridgework needed for E-lit (I know of one friend who might disagree…) and this has led to a profound transformation of traditional disciplines under the pressure of electronic media. Here we ran out of time, so unfortunately rather than elaborating on the above examples or explaining further about this transformation, Hayles said only that Literary and Media studies should be part of this conversation, rather than being shunted aside by new fields or by the sciences themselves.

And lastly a plug for the new E-Lit Anthology.

Of course there were a few questions, but the most interesting was from Samuel Weber, who asked whether meaning had to equal unity: what about the seven types of ambiguity, for example, and isn't hermeneutics all about interpreting? (referring to her earlier point that meaning depends on the device doing the interpretation). Hayles answered that there has been a shift in (or an expansion of) "interpretation" from high-level cognition to machine interpretation. She then quoted Emo Philips: "I used to think the brain was the most wonderful organ, but then I thought about who was telling me this."

It was a funny response, but I don’t think Weber is so easily answered as all that, and while ultimately I could imagine possibly agreeing with Hayles, I really need to hear or read more about the latter stages of her argument.

Overall this was a good talk in the clarity of explanation and competence of its delivery, but I really wish we had gotten farther than just laying the groundwork because of course it’s her last claims about hermeneutics and the transformation of the disciplines that really need to be argued, rather than just explained. Better yet if she had a paper out somewhere that went into more detail. But so far she doesn’t, that I could find, and I checked. If anyone knows of one, please tell me!

More on Hayles

Ok, so having introduced the idea of emergent complexity and its requirements, Hayles next started talking about analogies as pattern recognition. She reminded us of Douglas Hofstadter's claim that all "cognition is recognition." If recognizing patterns is what leads to understanding analogies, well, so what? She first gave some examples of the human ability to understand analogies.

If ABC => ABD,
then WXY => ?
Of course, WXZ.

But then what would the next term be? Everyone in the audience immediately suggested WXA, because we know that typically lettering starts over (or possibly doubles, but I guess we all assumed only three-letter combinations were allowed).
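The rule the audience applied can be sketched in a few lines of code. This is my own toy sketch, not the program Hayles discussed: "replace the last letter with its successor," with Z wrapping around to A.

```python
import string

def successor(letter):
    """Next letter in the alphabet, with Z wrapping around to A."""
    alphabet = string.ascii_uppercase
    return alphabet[(alphabet.index(letter) + 1) % 26]

def apply_rule(source):
    """ABC => ABD generalized: bump the final letter to its successor."""
    return source[:-1] + successor(source[-1])

print(apply_rule("WXY"))  # WXZ
print(apply_rule("WXZ"))  # WXA -- the wraparound everyone assumed
```

The interesting part, of course, is that the wraparound at Z is exactly the kind of implicit assumption a machine has to discover rather than being told.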

The machine comes up with:

WXZ=>XY
WXZ=>WXZZ
WXY=>WXA

So it eventually reached the same solution; Hayles didn't say exactly how it "learned" or whether it was just random, and now I wish I'd asked… Then she gave the example of Morse code, which combines the digital (substituting dots and dashes for letters) with the analog (using spaces to represent pauses), so that space = time. By contrast, binary code is all digital: space (absence) must be represented by an actual term. She brought up Morse code to demonstrate that we all understand the spaces to represent spaces or pauses between words because of implicit assumptions based on homologies between Morse code and our everyday experience of speech.
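The digital/analog mix is easy to see in a tiny encoder (again, my own sketch, not one of her examples): the dot/dash substitution table is digital, while the spaces carry timing, with a longer pause marking a word boundary the way silence does in speech.

```python
# Tiny illustrative Morse table -- just enough letters for the demo.
MORSE = {"S": "...", "O": "---", "E": "."}

def encode(message):
    """Dot/dash substitution is digital; spacing is timing:
    one space between letters, three spaces between words."""
    words = message.upper().split()
    return "   ".join(" ".join(MORSE[ch] for ch in word) for word in words)

print(encode("SOS"))    # ... --- ...
print(encode("SO SO"))  # ... ---   ... ---
```

Binary code, by contrast, would have to spell out the pause itself as another symbol; here the pause just *is* elapsed time.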

The invisible assumptions we make about one medium are made visible when we change to a medium that doesn't support the homology behind the assumption.

Next she refers to Ed Fredkin’s notion that “the meaning of information is given by the process that interprets it.” By interpretation she means for example the way an MP3 player interprets digital information and interfaces with some other device to produce sound, but we can also say this about cognition. –Question, is this a metaphor, or a model? I think she means it to be a model.

We experience sensory input which activates neuronal groups that then activate maps, which activate larger cognitive structures, which eventually add up as cognition (recognize this pattern?).

If we go along with Fredkin, then we shift our emphasis from product to process (of course we did this in Comp. Theory years ago), from intentionality to consciousness, to "aboutness" as a spectrum… meaning is de-anthropomorphized. I hope I didn't miss something important in that ellipsis in my notes.

Having said all of this, humans and computers interacting meet the conditions for emergent complexity, or intermediation. Certain works of E-literature foreground this as content as well as process. The evolution of E-literature also exemplifies a dynamic heterarchy (see the outline of conditions in my previous post) as we adopt new media. So the codex book was for storage and transmission of ideas; it was a vehicle for cognitions. The computer, though, is an active cognitive agent; that is, it acts upon data and doesn't just store or transmit it.

Ok, I only have one more page of notes, but it’s the most complex, so it must wait until tomorrow. At the earliest!

I leave tomorrow and while I'm mostly packed, I may need the morning to make sure everything is organized before I leave around 9:30 to get the tram, to get the train, to be at Schiphol by 11:30 for my 2pm flight. Tot ziens! ("See you!")

A brief interruption of the timeline…

Before I finish reporting on Worm, I have to pause and catch up on some earlier stuff, before I forget everything, so this entry will be on Katherine Hayles’ keynote speech at the Re-mediating Literature conference that I covered very generally a few posts back.

Ok, she had a really clearly laid out talk that started with an explanation of emergent complexity and the conditions that produce it. Here is an overview of her points, closely paraphrased from her slides (finally, someone who did a simple PPT presentation with no bugs):

–The universe is fundamentally computational (Wolfram)
*examples: cellular automata, fractals
–Emergence: complex behaviors arising spontaneously and unpredictably from simple computational rules
*example: a cute program with 24 independent agents to which various rules can be applied (I wonder if she coded that herself?)
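For anyone who hasn't seen the cellular automata examples, here is a minimal sketch of the kind of thing Wolfram means (my own illustration, not what was on her slides): a one-dimensional automaton, Rule 30, where a trivially simple local rule produces complex, unpredictable global patterns.

```python
RULE = 30  # each 3-cell neighborhood's value (0-7) indexes a bit of this number

def step(cells):
    """Apply Rule 30 to one row of 0/1 cells (fixed 0 boundary)."""
    padded = [0] + cells + [0]
    return [(RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

row = [0] * 15 + [1] + [0] * 15  # a single live cell in the middle
for _ in range(12):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Run it and the triangle of chaotic structure that grows from one cell makes the "emergence from simple rules" point almost instantly.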

But digital mechanisms can't be the whole story; digital and analog cooperate. For example, DNA strand replication, which is digital, only creates a practical or concrete result when it is expressed through protein folding, which is an analog process. Analog is good at transferring information while digital is good at error control, and both are essential in the case of DNA.

But these two processes affect each other and we see evolving complexity across levels–“dynamic heterarchies.”

Feed-forward and feedback loops in dynamically interconnecting media. In other words, first-level primitives interact, and the results of those interactions become second-level primitives, and so on. For example, the interaction of sub-atomic particles makes atoms, the atoms interact to form molecules, and the molecules interact to form proteins. But activity on any level reaches not just the levels right above or below; it may reach across levels as well.

Another example is pregnancy: the mother is producing the fetus, but at the same time, the fetus is re-engineering the mother. This example really struck me because it's quite interesting the way developments in the fetus trigger further changes in the mother, and at the same time the fetus is reacting to changes in its environment (the mother), who mediates changes in the external environment, such as what is present in the air or the water or the food she takes in.

So intermediation has these crucial components:

–Different levels of complexity
–Different media
–Re-representation
–Heterarchical dynamics

all of which lead to emergent complexity. Damn, still 4 pages of notes left, but I will pause here. So remember, emergent complexity comes from dynamic intermediation.

New Network Theory, day 1, session 1

Here I am in the opening plenary, listening to a talk by Siva Vaidhyanathan about Google and its philosophy, and about how talk about Google is characterized by a strongly theological tone. Interesting discussion of how its philosophy and technology are entangled and don't always work well together. For example, the level of user interest strongly influences PageRank, so based on Google's search algorithms alone, terms like "holocaust" would bring up pages of holocaust-denial sites. Google engineers had to really mess with their own code in order to get around this.
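For context, the core of PageRank (the published algorithm, that is; as the talk made clear, Google's production system is heavily hand-tuned on top of it) is just a power iteration over the link graph. A toy sketch of my own:

```python
def pagerank(links, damping=0.85, iters=50):
    """links: {page: [pages it links to]}. Returns a rank per page.
    Each page repeatedly shares its rank among the pages it links to;
    the damping factor models a surfer occasionally jumping at random."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            targets = outs if outs else pages  # dangling pages share with all
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]})
for page in sorted(ranks, key=ranks.get, reverse=True):
    print(page, round(ranks[page], 3))
```

The point of the "holocaust" example is exactly that nothing in this loop knows anything about the content of a page; rank is purely a function of who links to whom.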

Then on to Google booksearch, which apparently sucks–well, I think it hardly compares to their regular search, but is it really that terrible? Maybe I just got lucky.

Google video–well, we all know the issues there, I think. Google tells you not to infringe, but they’re not responsible if you do and they aren’t going to police it (unless threatened with a lawsuit by Viacom).

Interesting influence of pragmatic and technical issues on copyright law enforcement. Search engines couldn’t function if those companies couldn’t copy the pages into their indexes without asking every time.

And then there are the privacy issues with Google Earth… Issues with the Chinese government over making dissident material available…

Clearly drifting from the “do no evil” position.

Google is seriously understudied. It’s not neutral ….not a lot of other stuff–he’s speeding up to finish….we need the synoptic rather than panoptic…

We need critical information studies–but the description is what we’d like to say about any field. While I agree with this critique of Google, I don’t know that his larger calls are actually anything new or different. Ok, we’ll see what the next speakers say.

Immediate Causes

In the last (first) post I never got round to explaining why I'm trying this again right at this moment; don't get excited though, it's not earth-shaking. I mentioned to David Silver, who runs the terrifically useful Resource Center for Cyberculture Studies, that I was attending some conferences in the Netherlands on virtual communities and on digital lit, and asked if he'd like reports for the RCCS, and he said I should just blog it. He said it in all caps. I respect his work and opinion, so I figured what the hell. I've been teetering on the edge of blogging again for months anyway.

So a brief intro on the conferences: next week I'll start with the New Network Theory conference in Amsterdam, where I'm presenting a paper. Eventually I'll put it online someplace and you all can see it, but since my school currently only allows us upload access from campus computers, and since I won't be on campus again until August, this could take a while. Or maybe I'll find some other place for it… I'm excited about it though, because a bunch of people whose work I like will be speaking, and I look forward to that. If I were at a comic book convention (you will probably read a fair amount about comic books in this blog) I would collect autographs and sketches from people whose work I like. I wonder how these speakers would react? The proceedings are going to be on CD, so there isn't really much space for cute little notes… maybe I'll suggest that for next time. Anyway, here's who I want to see: Wendy Chun, Florian Cramer, Alan Liu (all keynotes, all rather well-known) and also Ramesh Srinivasan, Ulises Ali Mejias and Mirko Schaefer (not as well known, but should be).

Then I’ll be going to Re-Mediating Literature in Utrecht where, happily, I am not presenting, so I can just enjoy myself listening to other people talk about things that interest me, including Katherine Hayles, Jan Baetens and some others. But it’s late, I’m tired and I’ll say more about this in my next post, along with more exciting news about my impending trip.