Author Archives: Kim

News From Comparative Media Studies at MIT

I never thought I’d see it, but Henry Jenkins is leaving Comparative Media Studies at MIT and moving to the Annenberg School at USC. The full story (or as full as will ever be made public, I’d guess) is on Henry’s blog.

I express surprise, but in another way I’m not surprised at all. The whole time I was teaching in Course 21 at MIT I observed how little the Institute as a whole seems to value the Humanities and Social Sciences. I had thought, though, that they might care more about supporting CMS, because the program has such a strong international reputation and really enriches the academic programs, the campus community, and the school’s image. But in the early 2000s I saw how much time Henry had to spend on fund-raising, and how, in spite of the MA program’s growth and the creation of the BA, no new faculty positions were created. So really, maybe the surprise is that Henry waited this long.

I would hate to see that program disappear; everything they do is so valuable in general, and I personally gained a lot from some of the programs they were running when I was there. But I wonder what it will take for MIT to stop treating the humanities as not even second-rate? I know many people want to work there; MIT has a good reputation, good students, a good location, and so on. But in fact this is at least the third member of the humanities faculty I know of who has chosen to leave in the last four years, and likely there have been others. Maybe this will be a wake-up call at last, but I doubt it.

Florian’s moment of revelation

Florian at Ars Electronica 2007.

Note: visit Ars Electronica 2007 for more info about Florian winning the Prix Ars Electronica for theory.

I was still a student in 1995, in Comparative Literature, and there was a conference in Berlin. It wasn’t really a conference so much as a public culture event, and it was called Soft-Moderna – soft modernism – and basically it was organized by people from the American Studies program at the John F. Kennedy Institute in Berlin, and it imported the whole Brown University hyperfiction discourse. So it was about literature and the internet and computing, but heavily based in the whole hypertext-hyperfiction paradigm. And bringing together Robert Coover, for example, and some German people who were doing early experiments in that field. And you could see the whole helplessness of people there, and they also operated in the new media paradigm, so they asked a couple of media journalists and media studies people to be on this panel and discuss this whole thing. And you could see this complete helplessness. And I was just this young student and I just stood up and asked critical questions. I didn’t talk so long, maybe two minutes or so, but I was really critical of what they had said. And then basically the organizer of the conference said, well, you seem to know more about that stuff than the people we had on the panel, so do you want to be in the next conference? So that was actually my first public lecture, and I was on a panel with Friedrich Kittler (!) and Andy Müller-Maguhn, the spokesperson of the Chaos Computer Club. And from there I got writing commissions and I got sucked into this whole field.

This is a short one, but next time I’ll be covering what Florian likes about the field, his concerns about the art being produced, and his own role.

I’m interested, though, to learn how closely connected new media and hyperfiction were early on, and how hyperfiction/hypertext was really one of the basic paradigms. Today in the US, hyperfiction seems like a narrow genre that a few people, like Nick Montfort, are really getting into, but at least on the conference circuit it seems to have lost its place as something basic that everyone knew about and discussed.

Florian Cramer on the problem of new media paradigms

We moved on to discuss “new media” as a discipline, and I mentioned how both Sher Doruff and Renee Turner had said that one thing that attracted them (among other characteristics) was the lack of constraints on the field, because nobody knew what was possible or not, and no one expected or required that any particular methodology be used. I’m afraid I also indulged in a mini-rant about how often I’ve seen presentations that were basically just descriptions of the speaker’s encounter with some situation involving new media, and just stopped at that–no analysis, no theory, no further data… Ahem. Anyway, Florian (as usual) had a far more thoughtful take on this issue.

I go even deeper than that and say that there is, in a lot of the so-called new media field, especially the more alternative, or activist, or off-mainstream part, a kind of naive continuity of cybernetics. What do I mean by that? Well, cybernetics in the 1950s and the 1960s was basically the idea that we operate with a notion of system-feedback-control and that these are descriptors we can commonly apply to both artificial and natural systems. So that means we can analyze a society in terms of feedback, control, or whatever. We can describe human organisms, we can describe politics, but we can also describe a machine.

And here I noted we had arrived right at Katherine Hayles! Florian agreed and continued.

Then what I see in the so-called new media field is that it works from the same paradigm, except that it doesn’t work with this classical behaviorist model, which is really about almost totalitarian control fantasies; their model is something like the rhizome. But the rhizome is just another cybernetic model, and it is based on the same idea of using that structure in order to compare the internet to human society, etc. etc. And that is something I find very questionable, and on which I also want to do more critical writing. And I think there is little reflection and little awareness of the continuity of these cybernetic paradigms. And nobody questions, for example, the notion of “system.” System is a highly speculative construct. I mean, you say we are systems, society is a system, the human body is a system, and a computer is a system. But I think this kind of rhetoric obscures and clouds more than it actually helps to analyze things, and I think we have to go beyond that. For me, really critical media studies would be to question both notions.

But I see when I say this that I’m really making enemies. And even among people with whom I wouldn’t have expected it–I thought they also came from a really critical camp. But it’s really astonishing to see how deeply these paradigms are embedded in the whole field.

So now, after these two entries, we are about 17 minutes into the interview, and I already feel like I’ve swallowed a rich media text! In the next entry we finally get to the actual reasons Florian got into this work.

The secret origins of Florian Cramer

Ok, there aren’t really any secrets but I haven’t seen any really biographical interviews with Florian anywhere else, so maybe it will be some kind of revelation. 🙂 I’ve known Florian for a long time now, about 15 years, but when working on my projects in the Netherlands, I realized we had never talked very explicitly about his own history with technology, art, culture, etc. –For reasons that will become clear, I am not using the term New Media.

In fact my first question was how Florian got involved with new media to begin with, and this led immediately to a lengthy and detailed explanation of the problems with the term from a historical perspective. I will try to encapsulate it:

  • First, new media means something totally different in the US than in Europe. Here it means digital or computer media, à la Lev Manovich, but in Europe TV and radio are often included. In fact, from a historical perspective, all media are new at some point.
  • Second, the terms medium and media are used incorrectly throughout the field. For example, if we speak of radio, one of the earliest technologies to be discussed as a medium, then technically the medium–the carrier of radio waves–is the air. This was then extended metonymically to include the waves themselves, then further expanded to include the devices, the senders, and even the receivers (that is, the people sending and receiving). The term now encompasses so much that it’s not even very useful.

I explained that while I agree with this critique, I’ve been using the term because it’s the most broadly understood way of covering the territory I mean to explore, though I am coming to believe that it’s really time to dispense with it altogether. At any rate, I reiterated my question: how did he get started?

According to Florian, he started by programming his own computers when he was 13, and in fact might be considered to have been doing the same stuff for 25 years: he used computers to generate random poetry, which he published in his own punk fanzine. 🙂 The most fascinating thing for him then was the random generator, though of course now that he’s “older and wiser” he knows that the randomness of a computer is not true randomness; it’s “pre-determined chance.” This shaped his interests: that kind of meta-reality, textuality, the emergence of code, and also the connection to society and all the arts.
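Florian’s “pre-determined chance” is what programmers call pseudorandomness: a seeded generator always produces the same “random” sequence. A minimal sketch of the idea in Python (the word list and function name are my own invention, not Florian’s actual generator):

```python
import random

# Hypothetical word list; Florian's actual vocabulary is unknown.
WORDS = ["moon", "static", "wire", "glass", "signal", "rust"]

def poem_line(seed, length=4):
    """Build a 'random' line of poetry from a fixed seed."""
    rng = random.Random(seed)  # seeding pre-determines every 'choice'
    return " ".join(rng.choice(WORDS) for _ in range(length))

# The same seed always yields the exact same line:
# chance, but pre-determined.
print(poem_line(13) == poem_line(13))  # True
```

Run it twice, or on two different machines, and seed 13 gives the identical “random” poem–which is exactly why the randomness is not true randomness.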

But back to the timeline: I asked how, at this starting point at 13, in 1982, he even had access to a computer. Through friends he started using them, especially an older friend who used computers to trigger the light show for his music–all of it programmed in BASIC.

But his interest in computers went up and down; in the very early 90s he was on the internet but found it really boring; it was all controlled by system administrators and not much was going on. Now, when he reads papers by his students that glorify the old days, he says, “oh, but you couldn’t do much then; you couldn’t run your own server or install your own software; you could only dial up the university mainframe.”

I contrasted this to Sher Doruff’s experience that people felt even a sense of wonder at being able to connect at all. But of course she is older than Florian or I and so had a different set of expectations about what might be possible. Further, and I think this is a crucial factor affecting people’s attitudes toward computers and “new media,” Florian has always been quite skeptical about the technology itself and the promise it might hold. (A skepticism I share.) As he puts it:

They’re not the perfect machines and they’re not the dream machines, and this is what also cripples the whole new media field. Basically there have been all these kinds of utopian expectations. The first machine I had was incredibly primitive; it had 1 kB of memory. But today’s machines cannot really do more. And the structure of programming is not at all different; it’s just more comfortable. The machines have become faster but they haven’t become smarter. And what also surprised me, when I came to the Netherlands, even more than in other parts of the world, is the expectation that somehow computers will become smarter or less deterministic. And you can name those expectations with certain names, such as artificial intelligence–where computers are not just stupid syntactic machines, but become semantic machines that have a true understanding. Or artificial life; that you have something like emergence, or whatever, out of computers. And the third one, I think, is new media. The whole idea, especially in the 1990s with the whole virtual reality nonsense, is that somehow, through multimedia interfaces, the machine wouldn’t be this whole command-line deterministic thing, but would become more intuitive, less deterministic… but if you’re a smart computer user you know that a mouse click is the same as typing a command. The logic remains the same.

So that is Florian’s take on new media as such, and a tiny bit about how he himself got involved. But in the next part we talked much more about the actual conditions of the field (however one names it) and about his own history, from being a graduate student in comparative literature to his current role as Director of the Media Design MA course at the Piet Zwart Institute.

datadirt

A quick note to thank Ritchie Pettauer (whom I met through Facebook!) for asking to publish my Facebook paper on his blog, datadirt. As he describes it:

the main focus is (pro)blogging, WordPress and online marketing with the occasional media theory twist. I also like to blog about music and funny stuff on the net – yup, it’s a wild mixture of highly personalized preferences; but hey, that’s why it’s called a blog and not a magazine.

I’ve been following it for a little while via Twitter and there always seems to be something fun and interesting posted over there. –A much cooler blog than this one! 😉

And Ritchie has reformatted my paper in a way that’s really easy to navigate–I’ll have to steal it someday. 😉 Lastly, upon rereading I see that in spite of my efforts, typos are plentiful in that text, and I want to make clear they are mine, not Ritchie’s.

Brenno de Winter


B. de Winter, originally uploaded by cuuixsilver

I should also note that Brenno is starting to establish quite a journalistic reputation when it comes to reporting on IT and issues of privacy, freedom of information, and related matters. For example, he has relentlessly pursued the privacy problems with the OV Chipkaart. You can see the most recent article at WebWereld–all in Dutch, though.

An interesting thing about Brenno’s work is how he manages the rhetorical frame around these issues in order to be more persuasive. Rather than using the usual hacker image and discourse–scary and paranoid, all about protecting individuals’ privacy–he talks about protecting data in more business-like terms, which are far more appealing to government and business types but in the end lead to the same desired results. An interesting example of someone co-opting corporate language and discourse, in the inverse of the way corporations often try to co-opt user discourse.

Finally, Brenno de Winter

Having got through everything I want to say for now about IR9, I hope to finish with my summer interviews–it’s not too long now until I go back to NL for more interviews, so I have to get these done!

So actually, one of the earliest interviews I did in the summer was with Brenno de Winter, whom I actually first learned of through his podcasts and website, Laura Speaks Dutch. –That’s a great resource for learning Dutch, by the way. Strangely, he turned out to be the one who had translated the instructions for how to use GPG with Mac mail, and when I realized he had done these two totally different but helpful things, I emailed to thank him. Once I learned more about his work in IT security and as an IT journalist, I decided to interview him. Also, we’ve gotten to be friends, so it was nice to finally meet in person anyway.

Brenno has a fairly classic history with technology, from the gender standpoint. Like many male geeks, he started very young and was coding before age 10. But beyond that, I’d have to say he violates most other stereotypes about male geeks or hackers. He tends to wear preppy clothes, is quite sociable, has a very positive attitude toward people at all skill levels when it comes to technology, as long as they are trying to educate themselves, and he shows no hostility at all toward girl geeks. In fact he’s very supportive.

Our conversation was not so focused, because his work is really outside the new media stuff I usually look at, but we did have a very interesting discussion of what the atmosphere was like in the open source and hacker communities and how it might have changed over time. He felt that when he first got involved, it was very community-spirited, and he even described himself as feeling tearful at some events because he was so moved by how everyone cooperated and how warmly people behaved toward each other. Over time, though, he feels this has diminished, and he gave the example of his own efforts to found a house in Amsterdam where hackers could live for free. He met with a group of them and offered to help them find funding, which he thought might be fairly easy. But because the group could not reach any agreement at all about how the whole thing might work, it just collapsed and went nowhere.

This really seemed to echo some of how William Uricchio has described his own frustrations in trying to organize new media scholars in the Netherlands for everyone’s mutual benefit. I wonder if no longer being so small and beleaguered has actually made it harder for people in these groups to unite. This is a fairly common problem when a group that has been outcast starts to gain social currency; since they no longer have to spend all their energy and resources just to survive, room opens up to argue about how to spend the “excess.” Or everyone gains a little power and security, and suddenly they have something to lose, and so they become territorial.

I guess no matter how technology changes, in some ways, people never do.

Anyway, nowadays Brenno is working on a project called Small Sister that aims to educate people about privacy issues and provide tools with which they can guard their privacy in these frightening days of increasing data retention. It’s already a cool project just in the way it collects so much useful info about protecting your privacy, but I’m looking forward to seeing what they cook up themselves. He’ll be speaking at 25C3 in December, so if you are around Berlin, go see him.

Bernhard Rieder and Algorithmic Proximity at IR9

The last talk I saw, and the last I’ll report on, was Bernhard’s, on algorithmic proximity. Bernhard started off with background on the work he and Mirko have done leading up to the hybrid foam model, but his main aim in this talk was to look at lower-level sociality, as on sites like Flickr, where most interactions are singular and connections are fleeting. He is trying to understand “socio-genesis,” the process through which these low-level communications crystallize into a real relationship.

In reality, individuals stand at varying social distances; in network theory terms, individuals are linked by paths of varying lengths, which represent the probability of association. Add to this the notion of homophily: that we tend to associate with those like ourselves. (On the Twitter channel for IR9, a number of people agreed that while this is true, we hate to admit it because it seems narrow-minded.)

Next: it is now possible to render social interactions digitally, and what does that reveal? Skipping the math… we see the importance of space somewhat reduced, and status homophily seems to be replaced by value homophily, where interest factors become more important than socio-economic factors.

Algorithmic proximity is a form of social proximity produced by rendering many factors in order to make recommendations about friends or matches. For example, on Facebook, the number of friends you have in common with someone may lead to a friend recommendation in “People You May Know.” This is most noticeable on dating sites, which aim to match people based on similarity across a range of categories; in fact, it is almost essential if one is to effectively filter through all the possible matches. Bernhard went through a few other examples–Last.fm, Flickr, and Delicious–and said a bit about how, on these sites, similar tagging practices might lead people to start following other users.
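To make the “friends in common” flavor of algorithmic proximity concrete, here is a toy sketch of my own (not Bernhard’s actual math, and the names and threshold are invented): score the overlap between two users’ friend sets with a Jaccard index, then suggest anyone who clears a threshold.

```python
def jaccard(a, b):
    """Overlap of two sets: 1.0 = identical, 0.0 = disjoint."""
    return len(a & b) / len(a | b) if a | b else 0.0

friends = {              # hypothetical friend lists
    "ana": {"bo", "cy", "di"},
    "eli": {"bo", "cy", "fay"},
    "gus": {"hal"},
}

def suggest(user, min_score=0.2):
    """Rank other users by how many friends they share with `user`."""
    scores = {
        other: jaccard(friends[user], fs)
        for other, fs in friends.items() if other != user
    }
    return sorted((o for o, s in scores.items() if s >= min_score),
                  key=lambda o: -scores[o])

# "ana" and "eli" share two friends (score 0.5), so eli is suggested
# to ana; gus shares none (score 0.0) and is filtered out.
print(suggest("ana"))  # ['eli']
```

The same scoring works for shared tags on Delicious or Flickr: swap the friend sets for tag sets and the “people you might know” list becomes a “people who tag like you” list.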

But what about serendipity? Is homophily a feature or a bug? If we only ever see people who are like us, then what? I find that a frightening prospect myself; I can think of a lot of interesting ideas and people I would hate to have missed, but if all my encounters had been based on some kind of homophily, we would never have met. A fun counter-example is the Unsuggester, a site that tells you what books you would hate based on books you like (and maybe, by extension, the people). I’m afraid I do judge people by what they read, sometimes…

I really need to get the whole paper, because I think the math would be interesting, and also because Bernhard makes very strong but closely argued points, and a lot of the details have to be left out of such a short talk. So I’ve emailed Bernhard, and if I can get more details I’ll update this entry later, because this seems important to me, though it’s a tangent to Bernhard’s work.

If I am to figure out how people connect and stay connected, I think this could be a really important piece of the puzzle. It also suggests measurable data I could look at in order to see patterns–for example, what kind of proximity, exactly, seems most important? Are there certain values or other shared characteristics that correlate more strongly with connection than others?

A really thought-provoking talk.

My Panel

I don’t want to brag… well, actually I do. The panel went very well considering how many speakers we ended up with: everyone kept to the time limit, and no one had technical problems. The talks themselves were all quite good–I think even exceptional in going beyond the anecdotal case studies we so often see in work on participation. Since we had so many speakers there was really no time for discussion; that was the one downside, but I did have some short chats with people about our panel later on, so I guess they liked it.

Here is a link to my prior post which has links to all the full papers.

I also recorded audio for the whole panel and hope to eventually make podcasts for each speaker.

Big thanks to Elfi, Anders, Christian, and Mirko. You guys rock! 🙂

Finally, I really have to thank Bernhard Rieder for his masterful work as respondent. He had quite a job, having to read all five papers and find some way of summing them all up. Thankfully I recorded that too, because Bernhard had good ideas that will inspire further development of my own, at least. –I heard the same from Elfi, in fact.

Marianne van den Boomen at IR 9.0

The next talk I saw was Marianne’s, a much more developed version of the research she presented at New Network Theory in the summer of ’07. The title this time was “E-sociability metaphors: from virtual community to social network and beyond,” and the talk looked at the evolution of metaphors used to describe social relationships on the Internet.

The most interesting point for me was the really concrete way she identified how Web 2.0 platforms, in their technical workings, might actually be described as undermining the previous kinds of online communities that were so much glorified.

As she puts it, Internet communities were once like this:

  • localized social aggregation on the Internet
  • based on shared practice, interest, or value
  • gathering at a collective place
  • having a core of recurrent active users
  • engaged in ongoing group communication
  • and so developing a common frame of reference

But, Web 2.0 technologies create this:

  • the page is dissolved as a unit of collective gathering
  • on-the-fly aggregation and reassemblage of user-enriched data
  • interacting data entities rather than interacting users
  • no common collective place of gathering
  • no ongoing debate between a recurrent group of users

At least in part, these changes occur because of technologies–scripts, usually–that allow dynamic HTML content to be generated, saving time and bandwidth by not serving static page after static page or creating whole new pages from scratch. This means that users don’t have to interact with each other or with other real people (web-mistresses, sys-admins, site owners, or whomever); instead, the system can answer most requests.

While this is true, fora still exist, and people often interact through blog comments, wall posts on Facebook, etc. But it’s probably true that the focus is no longer on centralized “gathering places.” Instead it seems more like visiting neighbours, to me. Occasionally you all get together socially, but most interactions are one-to-one. But that is often what we do in person too, isn’t it? Phone calls, meeting for coffee or lunch, sending email. Historically we might say that this is more typical, so I don’t know that we can really blame Web 2.0. On the other hand, I haven’t researched the whole history of human interaction (yet!), so maybe this is so. I should have asked about this at the talk, but I guess I can just send a message… 😉