More on Hayles

Ok, so having introduced the idea of emergent complexity and its requirements, Hayles next started talking about analogies as pattern recognition. She reminded us of Douglas Hofstadter's claim that all "cognition is recognition." If recognizing patterns is what leads to understanding analogies, well, so what? She first gave some examples of the human ability to understand analogies.

If ABC => ABD,
then WXY => ?
Of course, WXZ.

But then what would the next term be? Everyone in the audience immediately suggested WXA, because we know that lettering typically starts over (or possibly doubles, but I guess we all assumed only three-letter combinations were allowed).
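
Just as an aside (mine, not Hayles's): the rule everyone in the room applied is easy to write down as code. Here's a minimal sketch in Python, assuming the rule really is "advance the last letter, wrapping Z back around to A":

```python
# Toy version of the audience's rule: "advance" the last letter of a
# string of capital letters, wrapping Z back around to A.
# My own illustration, not the program Hayles described.

def advance_last_letter(s: str) -> str:
    last = s[-1]
    wrapped = chr((ord(last) - ord("A") + 1) % 26 + ord("A"))
    return s[:-1] + wrapped

print(advance_last_letter("ABC"))  # ABD
print(advance_last_letter("WXY"))  # WXZ
print(advance_last_letter("WXZ"))  # WXA, the wrap we all assumed
```

Trivial once the wrap is hard-coded, of course, which is exactly why the machine's groping toward the same answer is the interesting part.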

The machine comes up with:

WXZ => XY
WXZ => WXZZ
WXY => WXA

So it eventually reached the same solution. Hayles didn't say exactly how the machine "learned" or whether it was just random, and now I wish I'd asked…

Then she gave an example of Morse code, which combined the digital (substituting dots and dashes for letters) with the analog (using spaces of silence to represent pauses), so that space = time. By contrast, binary code is entirely digital: space (absence) must be represented by an actual term. She brought up Morse code to demonstrate that we all understood the spaces to represent pauses between words because of implicit assumptions based on homologies between Morse code and our everyday experience of speech.

The invisible assumptions we make about one medium become visible when we switch to a medium that doesn't support the homology behind the assumption.
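
To make that contrast concrete for myself (my example, not anything from the talk), here's a minimal sketch assuming the standard Morse timing conventions (dot = 1 unit, dash = 3, with gaps of 1, 3, or 7 units of silence). In the Morse-style version the pauses are just stretches of silence whose duration carries the meaning, while in a binary-style encoding absence has to be spelled out as an explicit symbol.

```python
# Toy contrast between Morse-style and binary-style encoding.
# In Morse, the gaps (silence) mean something by how long they last:
# space = time. In binary, "absence" appears as an explicit term (0).

MORSE = {"H": "....", "I": "..", "E": ".", "T": "-"}  # tiny subset

def morse_timing(word: str) -> list:
    """Return (state, duration) pairs: 'on' for tone, 'off' for silence."""
    out = []
    for letter in word:
        for mark in MORSE[letter]:
            out.append(("on", 1 if mark == "." else 3))  # dot or dash
            out.append(("off", 1))                       # gap inside a letter
        out[-1] = ("off", 3)                             # longer gap between letters
    out[-1] = ("off", 7)                                 # longest gap ends the word
    return out

# Analog side: pauses are just longer or shorter silences.
print(morse_timing("HI"))

# Digital side: the "space" must be written down as a symbol of its own.
print("".join(format(ord(c), "08b") for c in "HI"))  # 0100100001001001
```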

Next she referred to Ed Fredkin's notion that "the meaning of information is given by the process that interprets it." By interpretation she means, for example, the way an MP3 player interprets digital information and interfaces with some other device to produce sound; but we can also say this about cognition. (Question: is this a metaphor or a model? I think she means it to be a model.)
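
A crude way to see the point (my example, not hers): hand the very same bits to different interpreting processes and you get different "meanings."

```python
# The same four bytes, read by different interpreting processes.
# A toy illustration of Fredkin's point, not anything from the talk.
import struct

data = b"MP3!"  # four arbitrary bytes

print(data.decode("ascii"))          # interpreted as text: MP3!
print(struct.unpack("<I", data)[0])  # as a little-endian unsigned integer
print(struct.unpack("<f", data)[0])  # as a 32-bit floating-point number
```

Nothing in the bytes themselves picks out which of those is "the" meaning; the interpreting process does. Whether that is a metaphor for cognition or a model of it is exactly the question.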

We experience sensory input, which activates neuronal groups, which then activate maps, which activate larger cognitive structures, which eventually add up to cognition (recognize this pattern?).

If we go along with Fredkin, then we shift our emphasis from product to process (of course we did this in Comp. Theory years ago), from intentionality to consciousness, to "aboutness" as a spectrum… meaning is de-anthropomorphized. I hope I didn't miss something important in that ellipsis in my notes.

Having said all of this: humans and computers interacting meet the conditions for emergent complexity, or intermediation. Certain works of E-literature foreground this as content as well as process. The evolution of E-literature also exemplifies a dynamic heterarchy (see the outline of conditions in my previous post) as we adopt new media. So the codex book was for storage and transmission of ideas; it was a vehicle for cognition. The computer, though, is an active cognitive agent: that is, it acts upon data and doesn't just store or transmit it.

Ok, I only have one more page of notes, but it’s the most complex, so it must wait until tomorrow. At the earliest!

I leave tomorrow and while I’m mostly packed, I may need the morning to make sure everything is organized before I leave around 9:30 to get the tram, to get the train, to be at Schiphol by 11:30 for my 2pm flight. Tot ziens!
