Word Salad

Hello, my name is Jared Lee Loughner

The shooting yesterday in Arizona of Representative Gabrielle Giffords, which left the politician critically wounded and six others dead, almost instantaneously conjured into existence two bleak and mysterious new entities, both in their way artifacts of language: the shooter, Jared Lee Loughner, and the debate about the causes and implications of his lethal actions.

The 22-year-old Loughner seems an all-too-typical figure, one whom we might pity had he not so decisively removed himself from the precincts of empathy. An eccentric loner whose strange preoccupations, coupled with a tendency to rant about them in public, led to his suspension and withdrawal from the community college he was attending, he appears to have drifted into his homicidal mission with the sad inevitability of a piece of social flotsam awash on the tides of fringe discourse. He marks the return of a certain repressed specific to U.S. political culture — the lone gunman — whose particulars always differ but whose sad, destructive song remains the same. He is Lee Harvey Oswald, James Earl Ray, Mark David Chapman, Nidal Malik Hasan: a gallery of triply-named maniacs whose profound disconnection from the shared social world toggled overnight into the worst kind of celebrity.

Like all other modes of celebrity, the fame and fascination specific to assassins have accelerated and complexified in recent times with the proliferation of data streams and their accompanying commentaries. We deconstruct and reconstruct the killer’s trajectory: the hours, days, weeks, and months that led up to the rampage, the ideologies that brought the shooter to that final moment of the trigger pull. We sift through blog posts, Twitter feeds, and YouTube channels to assemble the kind of composite portrait that once would have been consigned to the scribbled diary kept under the bed, or the conspiracy wall tucked away in a smelly corner of the basement, its authoritative interpretation doled out slowly by experts rather than crowdsourced in breathless realtime. We ask ourselves what could have been done to avert the personal implosion, dismantle the ticking bomb, or, in the phrase to which 9/11 gave new weight, “connect the dots” before something awful happened (a literally unthinkable alternative, since without the awful event, the current conversation would never have started: this tape self-destructed before you heard it).

Amid the fantasy forensics and reverse-engineering of psychopathologic etiology, it’s hard to escape the sense that we’re building a new Jared Lee Loughner out of words, sticking a straw man of signifiers into a person-shaped hole. The fact that Loughner survived his own local apocalypse is probably irrelevant to this story’s emergence: killers are rarely allowed to speak for themselves from the jail cell, having ceded their rights of linguistic self-determination along with every other freedom. The kind of annihilation I’m talking about is the plangent sting of The Parallax View (Alan J. Pakula, 1974), a masterfully cynical political thriller in which “lone gunmen” are mass-produced commodities of a faceless corporation, and the journalist hero — Joe Frady (Warren Beatty) — is himself framed as an unhinged killer after the fact.

Loughner, of course, is as undeniably real as his horrible actions and their tragic impact on the families and friends of the dead and wounded. I don’t mean to turn him into a fiction, just to point out that the picture we are now building of him is, in its way, another kind of cultural narrative. Which makes it rather ironic that among Loughner’s many obsessions, from the fraudulent behavior of his college teachers to the 2012 prophecy, is a fixation on grammar as a tool of mind control: what a critical theorist might rephrase as the construction of subjectivity through language. Lacan argued that we lose ourselves in the Symbolic Order only to find ourselves there as an Object: self-identity at the cost of self-alienation. It’s a paradox in whose twisty folds Jared Lee Loughner evidently lost his soul.

What emerged from the outputs of this particular black box might have been some kind of furious crusader, but I suspect rage actually had little to do with what took place in Tucson yesterday. Scrolling through the inflectionless syllogisms in Loughner’s YouTube videos is like studying HAL’s readouts: one detects only the icy remoteness of pure logic (a logic, it should be added, devoid of sense) — a chain-link ladder of if-thens proceeding remorselessly to their deadly conclusion. I think some virus of language did finally get to Loughner; I think words ate him alive.

***

The question now occupying the public: whose words were they?

I try to keep my politics off this blog, in the sense of signaling party affiliation outright. That said, it will probably surprise no one to learn that I’m just another lefty intellectual, an ivory-tower Democrat. My first reaction to the Arizona shootings is to read them as evidence that the rhetoric of the right, in particular Sarah Palin and the Tea Party, has gone too far.

I’m obviously not alone in that assessment, but if I’m being honest, I must acknowledge that both sides (and forgive me for reducing our country’s ideological landscape to a red-blue binary) are hard at work spinning Loughner’s onslaught in favor of their own agendas. In fact, looking at today’s mediascape, I see a giant hurricane-spiral of words, a storm of accusations and recriminations played out — because I happen to be tracking this event through HTML on a laptop screen rather than in the shouting arenas of CNN and FOX News — in text. The ideas that chased each other through Loughner’s weird maze of a mind have externalized themselves on a national scale like the Id monster in Forbidden Planet.

I know I will sound like an old fogey for saying this, but I’m startled by how quickly our tilt-a-whirl news cycle and its cloud of commentary have moved from backlash to backlash-against-the-backlash. When I want the facts, I go to Google News or (Liberal that I am) the New York Times; when I want to read people hurling the textual equivalent of monkey poop at each other, I read the comments at Politico, where you can still reliably find someone calling our President “Barack Hussein Obama.” While I have nothing bad to say about the stories carried on the site, working through the comments is like taking the temperature of our zeitgeist — rectally.

Commenters labeling themselves as everything from conservative to independent and “undecided” have seized on a tweet from one of Loughner’s former classmates to the effect that he was “left wing, quite liberal.” Hence (in their view) it’s liberal rhetoric that led Loughner to start shooting on Saturday morning. Or as “NotPC” puts it:

Liberals are always threatening anyone with violence, starting with Obama. How ironic, a leftist attempted to assassinate another leftist, I guess Gifford is not leftist enough she must be taken with a bullet from glock. Even the guy who tried to crash his airplane to IRS building in Texas last year was a leftwing, Amy Bishop who started shooting other professors (she’s a big Obama supporter) is a leftist; the guy who held staff of Discovery Channel last year was also a leftist, follower of Al Gore.

What’s the matter with liberals? You’re into violence! This is what you get when you read and listen too much of Markos Moulitsas (DailyKos), Arianna Huffington (huffpo), media matter, Rachel Maddow, Olbermann, Ed Shultz, Chris Matthews, Van Jones, and the LEADER of them all fomenting violence who disagree with LIBERAL MANTRA – BARACK HUSSEIN OBAMA.

Nothing surprises me about LIBERALS, even Charles Manson is a LIBERAL!

LIBERALS ARE WHACKOS! BUNCH OF LEFTWINGNUTS – ONE FRIES SHORT OF A HAPPY MEAL!

Teaching my Conspiracy class for the first time in Fall 2009, I became very familiar with utterances like this, even (I will admit) rather charmed by them. I can defocus my eyes and gauge the shrillness of claims about, say, the faked Moon landings by the number of capital letters and exclamation points. But I find it’s harder to be amused when the angry word-art accretes so quickly around a wound still very much open — lives and loves quite literally in the balance. Is it naive of me that my mind whirls at the promptness of this inversion, the rapidity with which an act of supremely irrational violence has been repurposed into another, lexical form of ammunition?

The next few weeks will surely see many more such claims thrown around, and at the higher levels of the media’s digestive system, the production of much reasoned analysis about the language question itself. I’m sure I’ll engage with the issues raised, and eventually settle on a conclusion not all that much different from the position I already hold. But as I watch politicians work through their own chain of if-thens, framing and reframing the facts of Saturday’s carnage in hopes of advancing their own agendas (left, right, and everything in between), I believe I will have a hard time shaking a sense that we have been caught up in, rather than containing, Loughner’s particular form of madness.

Visible Evidence

From the Department of Incongruous Juxtapositions, this pair of items: on the right, the photograph of Rihanna following her beating by Chris Brown; on the left, the supposed image of Atlantis culled from Google Earth.

Let me immediately make clear that I am in no way calling into question the fact of Rihanna’s assault or equating its visual trace — its documentary and moral significance — with the seeming signs of ancient civilization read into a tracework of lines on the ocean floor. The former is evidence of a vicious crime, brutal as any forensic artifact; the latter a fanciful projection, like the crisscrossing canals that generations of hopeful astronomers imagined onto the surface of Mars.

If there is a connection between these images, it is not in their ontological status, which is as clean an opposition as one could hope for, but in their epistemological status: the way they localize larger dialectics of belief and uncertainty, demonstrating the power of freely circulating images to focus, lenslike, the structures of “knowledge” with which our culture navigates by day and sings itself to sleep at night.

In LA a legal investigation rages over who leaked the photo of Rihanna, while across the nation a million splinter investigations tease out the rights and wrongs of TMZ’s going public with it. Does one person’s privacy outweigh the galvanizing, possibly progressive impact of the crime photo’s appearance? Does the fight against domestic and gendered violence become that much more energized with each media screen to which Rihanna’s battered face travels? What happens now to Rihanna, who (as my wife points out) faces the no-win choice of disavowing what has happened and going on with her career in the doomed fashion of Whitney Houston, or speaking out and risking the simultaneous celebration and stigmatization we attach to celebrities whose real lives ground the glitter of their fame? If nothing else, Rihanna has been forcefully resignified in the public eye; whatever position she now adopts relative to the assault will bring judgment upon her — perhaps the most unfair outcome of all.

Meanwhile, the purported image of Atlantis manifests a familiar way of seeing and knowing the unseeable and unknowable. It joins, that is, a long list of media artifacts poised at the edge of the possible, tantalizing with their promise of rendering in understandable terms the tidal forces of our unruly cultural imaginary: snapshots of the Loch Ness Monster, plaster castings of Bigfoot’s big footprints, grainy images of UFOs glimpsed over backyard treelines, Abraham Zapruder’s flickering footage of JFK’s assassination, blown-up photos of the jets hitting the World Trade Center (bearing circles and arrows to indicate why the wrongly-shaped fuselages disprove the “official story”). From creatures to conspiracies, it’s a real-world game of Where’s Waldo, based on fanatically close readings of evidence produced through scientific means that pervert science’s very tenets. Fictions like Lost provide sanitized, playful versions of the same Pynchonesque vertigo: the spiraling, paranoid sense that nothing is as it seems — a point ironically proved with recourse to cameras as objective witnesses. In the case of Google’s Atlantis, that “camera” happens to be a virtual construct, an extrapolated map of the ocean floor. The submerged city itself, Google says, is an artifact of compositing and compression: the lossy posing as the lofty, a high-tech updating of lens flares, cosmic rays, and weather balloons too distant for the lens to resolve into their true natures.

In both cases, something submerged has been brought to light, touching off new rounds of old debates about what really happened, what’s really out there. With depressing speed, internet message boards filled with derisive reactions to Rihanna’s photo. “She looks better that way”; “That’s what happens when a woman won’t shut her mouth”; and perhaps most disheartening, “It’s not that bad.” A chorus of voices singing a ragged melody of sexism, racism, and simple hard-heartedness. Let’s hope we have the collective sense to respond correctly to these two images, separating tragic fact from escapist fiction.

Holograms

It’s still Jessica Yellin and you look like Jessica Yellin and we know you are Jessica Yellin. I think a lot of people are nervous out there. All right, Jessica. You were a terrific hologram.

— Wolf Blitzer, CNN

I woke this morning feeling distinctly unreal — a result of staying up late to catch every second of election coverage (though the champagne and cocktails with which my wife and I celebrated Obama’s amazing win undoubtedly played a part). But even after I checked the web to assure myself that, indeed, the outcome was not a nighttime dream but a daylight reality, I couldn’t shake the odd sense of being a projection of light myself, much like the “holograms” employed by CNN as part of their news coverage (here’s the YouTube video, for as long as it might last):

I’ve written before on the spectacular plenitude of high-definition TV cross-saturated with intensive political commentary, an almost subjectivity-annihilating information flow on the visual, auditory, and ideological registers. In the case of CNN’s new trick in the toolbox, my first reaction was to giggle; the projection of reporter Jessica Yellin into the same conversational space as Wolf Blitzer was like a weird halftime show put on by engineering students as a graduation goof. But the cable news channel seemed to mean it, by God, and I have little doubt that we’ll see more such holographic play in coverage to come, as the technology becomes cheaper and its functionality streamlined into a single switch thrown on some hidden mixing board — shades of Walter Benjamin’s observation (in “On Some Motifs in Baudelaire”) that with the match, one abrupt movement of the hand triggers a process of many steps.

Leaving aside the joking references to Star Wars (whose luminously be-scanlined projection of Princess Leia served, in 1977, to fold my preadolescent crush on Carrie Fisher into parallel fetishes for science-fiction technology and the visual-effects methods used to create them), last night’s “breakthrough” transmission of Yellin from Chicago to New York contains a subtle and disturbing undertone that should not be lost on feminist critics or theorists of simulation. This 2008 version of Alexander Graham Bell’s “Mr. Watson — Come here — I want to see you” employed as its audiovisual payload a woman’s body. It was, in this sense, just the latest retelling of the sad old story in which the female form is always-already rendered a simulacrum in the visual circuits of male desire. Yellin’s hologram, positioned in compliant stasis at the twinned focus of Blitzer’s crinkly, interrogative gaze and a floating camera that constantly reframed her phantasmic form, echoed the bodies of many a CG doll before it: those poor gynoids, from SIGGRAPH’s early Marilyn Monrobot to Shrek‘s Princess Fiona and Aki Ross in Final Fantasy: The Spirits Within, whose high-rez objectification marks the triumphal convergence of representational technology and phallic hegemony.

But beyond the obvious (and necessary) Mulveyan critique exists another interesting point. The news hologram, achieved by cybernetically tying together the behavior of two sets of cameras separated by hundreds of miles, is a remarkable example of realtime visual effects: the instantaneous compositing of spaces and bodies that once would have taken weeks or months to percolate through the production pipeline of even the best FX house. That in this case we don’t call it a visual effect, but a “news graphic” or the like, speaks more to the discursive baffles that generate such distinctions than to any genuine ontological difference. (A similar principle applies to the term “hologram”; what we’re really seeing is a sophisticated variant of chroma key, that venerable greenscreen technology by which TV forecasters are pasted onto weather maps. In this case, it’s been augmented by hyperfast, on-the-fly match-moving.) Special and visual effects are only recognized as such in narrative film and television — never in news and commercials, though that is where visual-effects R&D burns most brightly.
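To make that continuity concrete: at its core, the effect described above rests on the same matte logic as any greenscreen composite. Here is a minimal sketch of chroma keying in Python with NumPy, offered purely as my own illustration rather than anything CNN has disclosed; the function name, key color, and threshold are invented for demonstration, and the realtime match-moving that keeps the composite locked to a moving studio camera is a separate, harder problem that this sketch leaves out.

```python
# A minimal chroma-key composite: an illustrative sketch only, not CNN's
# actual pipeline. The key color and threshold are assumptions chosen
# for demonstration.
import numpy as np

def chroma_key(foreground, background, key_color=(0, 255, 0), threshold=100.0):
    """Composite `foreground` over `background` (both HxWx3 uint8 arrays),
    treating any pixel close to `key_color` as transparent screen."""
    fg = foreground.astype(np.float32)
    key = np.array(key_color, dtype=np.float32)
    # Distance of every pixel from the key color in RGB space.
    distance = np.linalg.norm(fg - key, axis=-1)
    # The matte is True where the subject is (far from green), False on screen.
    matte = (distance > threshold)[..., None]
    return np.where(matte, foreground, background)
```

In a live broadcast the same per-frame matte logic runs continuously, with the keyed subject re-rendered to match the tracked studio camera, which is where the hyperfast, on-the-fly match-moving comes in.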

As to my own hologrammatic status, I assume it will fade as the magic of this political moment sinks in. An ambiguous tradeoff: one kind of reality becoming wonderfully solid, while another — the continuing complicity between gendered power and communication / imaging technology — recedes from consciousness.

Convention in a Bubble

A quick followup to my post from two weeks ago (a seeming eternity) on my gleeful, gluttonous anticipation of the Democratic and Republican National Conventions as high-def smorgasbords for my optic nerve. I watched and listened dutifully, and now — literally, the morning after — I feel stuffed, sated, a little sick. But that’s part of the point: pain follows pleasure, hangover follows bender. Soon enough, I’ll be hungry for more: who’s with me for the debates?

Anyway, grazing through the morning’s leftovers in the form of news sites and blogs, I was startled by the beauty of this interactive feature from the New York Times, a 360-degree panorama of the RNC’s wrapup. It’s been fourteen years since Quicktime technology pervily cross-pollinated Star Trek: The Next Generation‘s central chronotope, the U.S.S. Enterprise 1701-D, in a wondrous piece of reference software called the Interactive Technical Manual. I remember being glued to the 640×480 display of my Macintosh whatever-it-was (the Quadra? the LC?), exploring the innards of the Enterprise from stem to stern through little Quicktime VR windows within which, by clicking and dragging, you could turn in a full circle, look up and down, zoom in and out. Now a more potent and less pixelated descendant of that trick has been used to capture and preserve for contemplation a bubble of spacetime from St. Paul, Minnesota, at the orgiastic instant of the balloons’ release that signaled the conclusion of the Republicans’ gathering.

Quite apart from the political aftertaste (and let’s just say that this week was like the sour medicine I had to swallow after the Democrats’ spoonful of sugar), there’s something sublime about clicking around inside the englobed map. Hard to pinpoint the precise location of my delight: is it that confetti suspended in midair, like ammo casings in The Matrix‘s bullet-time shots? The delegates’ faces, receding into the distance until they become as abstractedly innocent as a galactic starfield or a sprinkle-encrusted doughnut? Or is it the fact of navigation itself, the weirdly pleasurable contradiction between my fixed immobility at the center of this reconstructed universe and the fluid way I crane my virtual neck to peer up, down, and all around? Optical cryptids such as this confound the classical Barthesian punctum. So like and yet unlike the photographic, cinematographic, and ludic regimes that are its parents (parents probably as startled and dismayed by their own fecundity as the rapidly multiplying Palin clan), the image-machine of the Flash bubble has already anticipated the swooping search paths of my fascinated gaze and embedded them algorithmically within itself.

If I did have to choose the place I most love looking, it would be at the faces captured nearest the “camera” (here in virtualizing quotes because the bubble actually comprises several stitched-together images, undercutting any simple notion of a singular device and instant of capture). Peering down at them from what seems just a few feet away, the reporters seem poignant — again, innocent — as they stare toward center stage with an intensity that matches my own, yet remain oblivious to the panoptic monster hanging over their heads, unaware that they have been frozen in time. How this differs from the metaphysics of older photography, I can’t say; I just know that it does. Perhaps it’s the ontology of the bubble itself, at once genesis and apocalypse: an expanding shock wave, the sudden blistered outpouring of plasma that launched the universe, a grenade going off. The faces of those closest to “me” (for what am I in this system? time-traveler? avatar? ghost? god?) are reminiscent of those stopped watches recovered from Hiroshima and Nagasaki, infinitely recording the split-second at which one reality ended while another, harsher and hotter, exploded into existence.

It remains to be seen what will come of this particular Flashpoint. For the moment — a moment which will last forever — you can explore the bubble to your heart’s content.

Conventional Wisdom

Ooooh, the next two weeks have me tingling with anticipation: it’s time again for the Democratic National Convention and its bearded-Spock alternate-universe doppelganger, the Republican National Convention. I intend to watch from my cushy couch throne, which magisterially oversees a widescreen high-def window into the mass ornament of our country’s competing electoral carnivals.

Strangely, the Olympics didn’t hold me at all (beyond the short-lived controversy of their shameless simulationism), even though they served up night after night of HD spectacle. It wasn’t until I drove into the city last week to take in a Phillies game that I realized how hungry I am to immerse myself in that weird, disembodied space of the arena, where folks to the right and left of you are real enough, but rapidly fall away into a brightly-colored pointillist ocean, a rasterized mosaic that is, simply, the crowd, banked in rows that rise to the skyline, a bowl of enthusiastic spectatorial specks training their collective gaze on each other as well as inward on a central proscenium of action. At the baseball game I was in a state of happy distraction, dividing my attention among the actual business of balls, strikes, and runs; the fireworky HUDs of jumbotrons, scoreboards, and advertising banners, some of which were static billboards and others smartly marching graphics; the giant kielbasa (or “Bull Dog”) smothered with horseradish and barbecue sauce clutched in my left hand, while in my right rested a cold bottle of beer; and people, people everywhere, filling the horizon. I leaned over to my wife and said, “This is better than HD — but just barely.”

Our warring political parties’ conventions are another matter. I don’t want to be anywhere near Denver or Minneapolis/St. Paul in any physical, embodied sense. I just want to be there as a set of eyes and ears, embedded amid the speechmakers and flagwavers through orbital crosscurrents of satellite-bounced and fiber-optics-delivered information flow. I’ll watch every second, and what I don’t watch I’ll DVR, and what I don’t DVR I’ll collect later through the discursive lint filters of commentary on NPR, CNN, MSNBC, and of course Comedy Central.

The main pleasure in my virtual presence, though, will be jumping around from place to place inside the convention centers. I remember when this joyous phenomenon first hit me. It was in 1996, when Bill Clinton was running against Bob Dole, and my TV/remote setup was several iterations of Moore’s Law more primitive than what I wield now. Still, I had the major network feeds and public broadcasting, and as I flicked among CBS, NBC, ABC, and PBS (while the radio piped All Things Considered into the background), I experienced, for the first time, teleportation. Depending on which camera I was looking through, which microphone I was listening through, my virtual position jumped from point to point, now rubbing shoulders with the audience, now up on stage with the speaker, now at the back of the hall with some talking head blocking my view of the space far in the distance where I’d been an instant previously. It was not the same as Classical Hollywood bouncing me around inside a space through careful continuity editing; nor was it like sitting in front of a bank of monitors, like a mall security guard or the Architect in The Matrix Reloaded. No, this was multilocation, teletravel, a technological hopscotch in increments of a dozen, a hundred feet. I can’t wait to find out what all this will be like in the media environment of 2008.

As for the politics of it all, I’m sure I’ll be moved around just as readily by the flow of rhetoric and analysis, working an entirely different (though no less deterministic) register of ideological positioning. Film theory teaches us that perceptual pleasure, so closely allied with perceptual power, starts with the optical and aural — in a word, the graphic — and proceeds downward and outward from there, iceberg-like, into the deepest layers of self-recognition and subjectivity. I’ll work through all of that eventually — at least by November 4! In the meantime, though, the TV is warming up. And the kielbasa’s going on the grill.

Technologies of Disappearance

My title is a lift from Alan N. Shapiro’s interesting and frustrating book on Star Trek as hyperreality, but what motivates me to write today are three items bobbing around in the news: two from the world of global image culture, the other from the world of science and technology.

Like Dan North, who blogs smartly on special effects and other cool things at Spectacular Attractions, I’m not deeply into the Olympics (either as spectator or commentator), but my attention was caught by news of what took place at last week’s opening ceremonies in Beijing. In the first case, Lin Miaoke, a little girl who sang the revolutionary hymn “Ode to the Motherland,” was, it turns out, lip-synching to the voice of another child, Yang Peiyi, who was found by the Communist Party politburo to be insufficiently attractive for broadcast. And in the second case, a digitally-assisted shot of fireworks exploding in the nighttime sky was used in place of the evidently less-impressive real thing.

To expound on the Baudrillardian intricacies at play here hardly seems necessary: the two incidents were tied together instantly by the world press and packaged in headlines like “Fakery at the Olympics.” As often happens, the Mass Media Mind — churning blindly away like something from John Searle’s Chinese room thought experiment — has stumbled upon a rhetorical algorithm that tidily condenses several discourses: our simultaneous awe and dread of the powers of technological simulation; the sense that the Olympics embodies an omnivorous spectacularity threatening to consume and amplify beyond recognition all that is homely and human in scale; and good ol’ fashioned Orientalism, here resurrected as suspicion of the Chinese government’s tendency toward manipulation and disguise. (Another “happy” metaphorical alignment: the visibility-cloaking smog over Beijing, so ironically photogenic as a contrast to the crisp and colorful mass ornament of the crowded, beflagged arenas.)

If anything, this image-bite of twinned acts of deception functions, itself, as another and trickier device of substitution. Judging the chicanery, we move within what Adorno called the closed circle of ideology, smugly wielding criticism while failing to escape the orbit of readymade meanings to question more fundamental issues at stake. We enjoy, that is, our own sense of scandal, thinking it premised on a sure grasp of what is true and indexical — the real singer, the unaltered skies — and thus reinscribe a belief that the world can be easily sorted into what is real and what is fake.

Of course it’s all mediated, fake and real at the same time, calibrated as cunningly as the Olympics themselves. Real bodies on bright display in extremes of exertion unimaginable by this couch potato: the images on my high-def screen have rarely been so viscerally indexical in import, every grimace and bead of sweat a profane counterpoint to sacred ballistics of muscled motion. But I fool myself if I believe that the reality of the event is being delivered to me whole. Catching glimpses of the ongoing games as I shuffle through surrounding channels of televisual flow is like seeing a city in flickers from a speeding train: the experience julienned by commercials and camera cuts, embroidered by thickets of helpful HUD graphics and advertisers’ eager logos. Submerged along another axis entirely is the vanished reality of the athletes’ training: eternities of drilling and repetition, an endless dull disciplining at profound odds with the compacted, adrenalized, all-or-nothing showstoppers of physical prowess.

Maybe the collective fascination of the lip-synching stems from our uncomfortable awareness that we’re engaged in a vicarious kind of performance theft, sitting back and dining on the visual feast of borrowed bodily labor. And maybe the sick appeal of the CG fireworks is our guilty knowledge that human beings are functioning as special effects themselves, there to elicit oohs and ahs. All I know is that the defense offered up by the guy I heard on BBC World News this morning seems to radically miss the point. Madonna and Britney lip-synch, he said: why is this any different? As for the digital fireworks, did we really expect helicopters to fly close to the airborne pyrotechnics? The cynicism of the first position, that talent is always a manufactured artifact, is matched by the blasé assumption of the second, permuting what we might call the logic of the stuntman: if an exploit is too dangerous for a lead actor to perform, sub in a body worth a little less. In the old days, filmmakers did it with people whose names appeared only in the end credits (and then not among the cast). Nowadays, filmmakers hand the risk over to technological stand-ins. In either case, visualization has trumped representation, the map preceding the territory.

But I see I’ve fallen into the trap I outlined earlier, dressing up in windy simulationist rhetoric a more basic dismay. Simply put, I’m sad to think of Yang Peiyi’s rejection as unready for global prime time, based on a chubby face and some crooked teeth (features, let me add, now unspooling freely across the world’s screens — anyone else wondering how she’ll feel at age twenty about having been enshrined as the Ugly Duckling?). Prepping my Intro to Film course for the fall, I thought about showing Singin’ in the Rain — beneath its happy musical face a parable of insult in which pretty but untalented people hijack the vocal performances of skilled but unglamorous backstage workers. Hey, I was kind of a chubby-faced, snaggle-toothed kid too, but at least I got to sing my own part (Frank Butler) in Annie Get Your Gun.

In other disappearance news: scientists are on their way to developing invisibility. Of this I have little to say, except that I’m relieved the news is getting covered at all. There’s more than one kind of disappearance, and if attention to events at Berkeley and Beijing is reassuring in any measure, it’s in the “making visible” of cosmetic technologies that, in their amnesiac emissions and omissions, would otherwise sand off the rough, unpretty edges of the world.

Cartographers of (Fictional) Worlds, Unite!

J. K. Rowling’s appearance in a Manhattan courtroom this week to defend the fantasy backdrop of her Harry Potter novels is interesting to me for several reasons. It dovetails with a conversation I’ve been having in the Fan Culture class I’m teaching this semester, about the vast world-models that subtend many franchise fictions (e.g. the “future history” of Star Trek, the Middle-Earth setting of Lord of the Rings, the Expanded Universe of Star Wars, and so on). In his writing on subcreation, J. R. R. Tolkien calls these systematic networks of invented facts, events, characters, and languages “secondary worlds,” but more recently the phenomenon has been given other labels by media theorists: master text, hyperdiegesis. Henry Jenkins has put forth the most influential formulation with his concept of transmedia storytelling, which recasts franchise fictions like The Matrix as a kind of generative space — a langue capable of ceaseless acts of fictional parole — which can be accessed through any number of its “extensions” in disparate media.

One might say, in an excess of meta-thinking, that the notion of the storyworld itself floats suspended among these various theoretical invocations: a distributed ghost of a concept that feels increasingly “real.” As our media multiply, overlap, and converge in a spectacular mass ornament like a Busby Berkeley musical number, we witness a contrasting, even paradoxical, tendency toward stabilization, concreteness, and order in our fictional universes.

A key agency in this stabilization is the cataloging and indexing efforts of fans who keep track of sprawling storylines and giant mobs of dramatis personae, cross-referencing and codifying the rules of seriality’s endless play of meaning. Most recently, these labors have coalesced in communally-maintained databases like Lostpedia, the Battlestar Wiki, and — yes — the Harry Potter Lexicon at the heart of the injunction that Rowling is seeking. The conflict is over a proposed book project based on the online Lexicon, a fan-crafted archive of facts and lore, characters and events, that make up the Harry Potter universe. Although Rowling has been sanguine about the Lexicon till now (even admitting that she draws upon it to keep her own facts straight), the crystallization of this database into a for-profit publication has her claiming territorial privilege. Harry, Hermione, and Ron — as well as Quidditch, Dementors, and Blast-Ended Skrewts — are emphatically Rowling’s world, and we’re not quite as welcome to it as we might have thought.

At issue is whether such indexing activities are protected by the concept of transformative value: an emerging legal consensus that upholds fan-produced texts as valid and original so long as they add something new — an interpretive twist, a fresh insight — to the materials they are reworking. (For more on this movement, check out the Organization for Transformative Works.) Rowling asserts that the Harry Potter Lexicon brings nothing to her fiction that wasn’t there already; it “merely” catalogs in astonishing detail the contents of the world as she has doled them out over the course of seven novels. And on the surface, her claim would seem to be true: after all, the Lexicon is not itself a work of fiction, a new story giving a new slant on Harry and his adventures. It is, in a sense, the opposite of fiction: a documentary concordance of a made-up world that treats invention as fact. Ideologically, it inverts the very logic of make-believe, but in a different way from behind-the-scenes paratexts like author interviews or making-of featurettes on DVDs. We might call what the Lexicon and other fan archives do tertiary creation — the extraction of a firm, navigable framework from a secondary, subcreated world.

But is Rowling’s case really so straightforward? It seems to me that what’s happening is a turf battle that may be rare now, but will become increasingly common as transmedia fictions proliferate. The Lexicon, whether in print or cybertext, does compete with Rowling’s work — if we take that “work” as being primarily about building a compelling, consistent world. The Lexicon marks itself as a functionally distinct entity by disarticulating the conventional narrative pleasures offered by Rowling’s primary text: what’s stripped away is her voice, the pacing and structure of her storytelling. By the same token, however, the Lexicon produces Rowling’s world as something separate from Rowling. And for those readers for whom that world was always more compelling than the specific trajectories with which Rowling took them through it (think of the concept of the rail shooter in videogames), the Lexicon might indeed seem like a direct competitor — especially now that it has migrated into a medium, print, that was formerly Rowling’s own.

The question is: what happens to secondary worlds once they have been created? What new forms of authority and legitimacy constellate around them? It may well be the case that the singular author who “births” a world must necessarily cede ownership to the specialized masses who then come to populate it, whether by writing fanfic, building model kits and action figures, cosplaying, roleplaying, or — in the Lexicon’s case — acting as archivists and cartographers.

Before the Internet, such maps were made on paper, sold and circulated among fans. One of my areas of interest is the “blueprint culture” that arose around Star Trek and other science-fiction franchises in the 1960s and 1970s. I’ll be speaking about this topic at the Console-ing Passions conference in Santa Barbara at the end of April, but Rowling’s lawsuit provides an interesting vantage point from which to blend contemporary and historical media processes.

Finding a Transmedia “Compass”

[Image: still from The Golden Compass]

My colleague Tim Burke’s pointed rebuttal to critics of the film adaptation of The Golden Compass – who charge that the movie lacks the theological critique and intellectual heft of Philip Pullman’s source novel – caught my eye, not just because I’m a fan of the books and intend to see the movie as soon as end-of-semester chaos dies down, but because I’ve spent the last week talking about transmedia franchises with my Intro to Film class.

To recap the argument, on one side you have the complaint that, in bringing book to screen, Pullman’s central rhetorical conceit has been cruelly compromised. The adventure set forth in the three volumes of His Dark Materials (The Golden Compass, The Subtle Knife, and The Amber Spyglass) unfolds against a world that is but one of millions in a set of alternate, overlapping realities. But the protagonist Lyra’s cultural home base is fearfully repressed by religious authorities whose cosmology allows for no such “magical thinking” – and whose defense of its ideology is both savagely militaristic and a thin veil over a much larger network of conspiracy and corruption. (Really, right-wing moral guardians should not be objecting to how Pullman treats the Church, but to how he nails the current U.S. administration.) But, the charge goes, the movie has trimmed away the more controversial material, leaving nothing but a frantic romp through tableaux of special-effects-dependent set design and, in the case of Iorek Byrnison and the daemons, casting.

On the other side you’ve got positions like Tim’s, which welcome many of the excisions because they actually improve the story. As Pullman gets cranking, especially in the concluding Amber Spyglass, his narrative becomes both attenuated and obese, subjective time slowing to a crawl while mass increases to infinity like a space traveler moving near the speed of light. Personally, I was mesmerized by Spyglass’s long interlude in the Land of the Dead, which in its beautifully arid and disturbing tedium managed to remind me simultaneously of L’Avventura, Stalker, and Inland Empire. But it’s hard to disagree, especially when Tim reminds me how turgid and didactic C. S. Lewis’s The Last Battle got, that while we all like to have our intellect and imagination stirred, very few of us like to be lectured.

Me, I’ll suspend judgment on the movie until I see it – a strategy that worked well with The Mist, which I enjoyed astronomically more than Stephen King’s original novella. But I do sense in the debate around Compass’s political pruning an opportunity to air my concern with transmedia storytelling, or rather with the discursive framework that media scholars are evolving to talk about and critique transmedia “operations.”

In a nutshell, and heavily cribbed from Henry Jenkins’s Convergence Culture, storytelling on a large scale in contemporary media involves telling a tale across a number of different platforms, through different media, all of which are delegated one part of the fictional universe and its characters, but none of which contains the whole. While Star Trek and Star Wars did this starting in the 1960s and 1970s respectively, current exemplars like The Matrix bring the logic of transmediation to its full, labyrinthine flower. The three installments of the 1999-2003 trilogy are but land masses in a crowded sea of other textual windows into the Matrix “system”: videogames, websites, TV spots, and comics each play their part. Each text is an entry point to the franchise; ideally, each stands alone on its artistic merits while contributing something valuable to the whole; and the pleasurable labor of transmedia audiences is to explore, collect, decrypt, and discuss the fragments as an ongoing act of consumption that is also, of course, readership.

Admittedly, Pullman’s trilogy doesn’t lend itself perfectly to transmediation any more than The Lord of the Rings did. When you’ve got to contend with an “original,” pesky concepts like canonicity and (in)fidelity creep in. Fans will always measure the various incarnations of Harry Potter against Rowling’s books, just as J. R. R. Tolkien’s fans did with Peter Jackson’s movies. But The Matrix or Heroes or Halo, which don’t owe allegiance to anything except their own protocols of ongoing generation, are freed through a kind of authorless solipsism to expand indefinitely through “storyspace,” no version more legitimate than another. (I’m not saying those franchises are literally authorless, but that they lack a certain auratic core of singular, unrepeatable authorship: instead they are team enterprises, all the more appealing to those who wish to create more content.)

There are some neat felicities between the transmedia system’s sliding panels — each providing a partial slice of a larger world — and the cosmological superstructure of His Dark Materials. (One could even argue that franchises come with their own pretender-gods, the corporations that seek to brand each profitable reality and police its official and unofficial uses, thus contradicting the avowed openness of the system: New Line as Magisterium.) But to come back to the question with which I opened, does it matter that, in turning Golden Compass the book into Golden Compass the movie – surely the first and most crucial “budding” of a transmedia franchise — some of the text’s teeth have been pulled?

I suggest that one danger of transmedia thinking is that it abandons, or at least dilutes, the concept of adaptation – a key tool by which we trace genealogical relationships within a world of hungrily replicating media. If A is an adaptation of B, then B came first; A is a version, an approximation, of B. We assess A against B, and regardless of which comes out the victor (after all, there have been good movies made of bad books), we understand that between A and B there are tradeoffs. There have to be, in order to translate between media, where 400 pages or the premise of a TV series rarely fit into a feature-length film.

The contradiction is that, while we would not usually expect an adaptation to precisely replicate the ideological fabric of its source, and can even imagine some that consciously go against the grain of the original, transmedia models, which talk of extensions rather than adaptations, assume a much more transparent mapping of theme and content. We expect, that is, the various splinter worlds of Star Trek and The Matrix to agree, in general, on the same ideological message: the commonsense “talking points” of their particular worldviews. We may get different perspectives on the franchise diegesis, but the diegesis must necessarily remain unbroken as a backdrop – or else it stops being part of the whole, abjected into a wholly different and incompatible franchise. (There’s a reason why Darth Vader will never meet Voldemort, except in fan fiction, which is a whole ‘nother ball of transmedial wax.)

Golden Compass’s critical “neutering” in the process of its replication reminds us that different media do different things, and that this has political import. Jenkins writes that, in transmedia, each medium plays to its strengths: videogames let you interact with – or inhabit – the story’s characters, while novelizations give internal psychological detail or historical background. Comic books and artwork visualize the fiction, while model kits, costumes, and collectibles solidify and operationalize its props. Precisely because of the logic of transmedia, or distributed storytelling, we don’t expect these fragments to carry the weight of the whole. But each medium promotes through its very codes, technologies, and operations a particular set of understandings and values (a point not lost on Ian Bogost and other videogame theorists who talk about “persuasive games” and “procedural rhetoric”), hence translation always involves a kind of surgery, whether to expunge or augment.

Golden Compass may fail at the box office, which would end the Dark Materials franchise then and there (or maybe not – transmedia are as full of surprise resurrections and reboots as the stories told within them). But director/screenwriter Chris Weitz has made no secret of the fact that he sanitized the book’s theological transgressions in hopes that, having found an audience, he can go on to shoot the remaining two books more as Pullman intended. Regardless of what happens to this particular franchise, it’s our responsibility as scholars and critics – hell, as people – to be sensitive to, and wary about, the ideological filters and political compromises that fall into place, like Dust, as stories travel and multiply.

Razor’s Edge

[Image: Admiral Helena Cain]

Tonight I had the privilege of attending an advance screening of “Razor,” the Battlestar Galactica telefilm that will be broadcast on the SciFi Channel on November 24. Fresh from the experience, I want to tell you a bit about it. I’ll keep the spoilers light – that said, read on with caution, especially if, like me, you want to remain pure and unsullied prior to first exposure.

Along with several colleagues from Swarthmore College, I drove into Philadelphia a couple of hours before the 7 p.m. showing, fearing that more tickets had been issued than there were seats; this turned out not to be a problem, but it was fun nevertheless – a throwback to my teenage days in Ann Arbor when I stood in line for midnight premieres of Return of the Jedi and Indiana Jones and the Temple of Doom – to kill time with a group of friends, all of us atingle with anticipation, eyeing the strangers around us with a mingled air of social fascination (are we as nerdy as they are?) and prefab amity (hail, fellow travelers, well met!).

The event itself was interesting on several levels, some of them purely visual: we knew we’d be watching a video screener blown up onto a movie-sized screen, and true to expectation, the image had the washed-out, slightly grainy quality I’ve come to recognize since getting used to a high-def TV display. (Things overall are starting to look very good in the comfort of my living room.) There was also the odd juxtaposition of completely computer-generated science-fiction imagery in the plentiful ads for Xbox 360 titles such as Mass Effect and the new online Battlestar Galactica game (yes, more tingling at this one) with the actual show content – the space battles especially were in one sense hard to distinguish from their Xbox counterparts.

But at the same time, the entire program served as a reminder of what makes narratively-integrated visual effects sequences more compelling (in a certain sense) than their videogame equivalents. “Razor”’s battle scenes, of which there are – what’s the technical term? – puh-lenty, carry the dramatic weight of documentary footage or at least historical reenactments, by comparison to which the explosive combat of Mass Effect and the BSG game was received by audiences with the amused condescension of parents politely applauding an elementary-school play starring somebody else’s kids. Disposable entertainment, in a word, paling beside the high-stakes offering of “real” Galactica – and not just any Galactica, but the backstory of one of BSG’s most nightmarish and searing storylines, that of the “lost” Battlestar Pegasus and her ruthlessly hardline commander, Admiral Helena Cain (Michelle Forbes).

(I’ll get to the meat of the story in a moment, but one last thought on the blatantly branded evening of Microsoft-sponsored fun: does anyone really own, or use, or enjoy their Zune? The ad we watched [twice] went to great lengths to portray the Zune as better than an iPod – without ever mentioning iPods, of course – but the net effect was to remind me that a device intended to put portable personal media on a collective footing is as useless as a prehensile toe if no one around you actually owns the thing. “Welcome to the Social,” indeed.)

On to “Razor” itself. Was it any good? In my opinion, it was fantastic; it did everything I wanted it to do, including

  • Lots of space battles
  • Hard military SF action, namely a sequence highly reminiscent of the Space Marine combat staged to perfection by James Cameron in Aliens
  • A few heart-tugging moments, including several exchanges between Bill Adama (Edward James Olmos) and his son Lee (Jamie Bamber) of a type that never fails to bring tears to my eyes
  • Scary, Gigerish biomedical horror
  • Aaaaand the requisite Halloween-candy sampler of “revelations” regarding BSG’s series arc, which I won’t go into here except to note that they do advance the story, and suitably whet my appetite for season four (assuming the writers’ strike doesn’t postpone it until 2019).

A better title, then, might be “Razor: Fanservice,” for this long-awaited installment returns to the foreground many of the elements that made BSG such a potent reinvigoration of televised SF when it premiered in the U.S. at the end of 2004. Since then, Galactica has flagged in ways that I detail in an essay for an upcoming issue of Flow devoted to the series; but judging from “Razor,” showrunner Ronald D. Moore, like Heroes’s Tim Kring, has heard the fans and decided to give them what they want.

For me, the season-two Pegasus arc marked a kind of horizon of possibility for Galactica’s bold and risky game of rendering real-world political realities – namely government-sponsored torture questionably and conveniently justified by the “war on terror” – in SF metaphor. With the exception of the New Caprica arc that ended season two and began season three, the show has never since quite lived up to the queasy promise of the Pegasus storyline, in which a darkly militarized mirror-version of the valiant Galactica crew plunged itself with unapologetic resolve into Abu Ghraib-like sexual abuse and humiliation of prisoners.

What “Razor” does so engrossingly is revisit this primal scene of Galactica’s complex political remapping to both rationalize it – by giving us a few more glimpses of Admiral Cain’s pre- and post-apocalypse behavior and inner turmoil – and deepen its essential and inescapable repugnance. We’re given a framework, in other words, for the unforgivable misdeeds of Pegasus’s command structure and its obedient functionaries; the additional material both explains and underscores what went wrong and why it should never happen again.

Perhaps most strikingly, “Razor” provides a fantasy substitute for George W. Bush — a substitute who, despite her profoundly evil actions, is reassuring precisely because she seems aware of what she has wrought. In the film’s crucial scene, Cain instructs her chief torturer, Lieutenant Thorne (Fulvio Cecere), to make the interrogation of Six (Tricia Helfer) a humiliating, shameful experience. “Be creative,” Cain commands, and the fadeout that follows is more chilling than any clinically pornographic rendering of the subsequent violence could ever be. Precisely because I cannot imagine the cowardly powers-that-be, from Bush, Dick Cheney, and Alberto Gonzales on down to Lynndie England and Charles Graner, ever taking responsibility in the straightforward way that Cain does, this scene strikes me as one of the most powerful and eloquent portrayals of the contemporary U.S./Iraqi tragedy that TV has generated.

Admiral Cain is the real toad in SF’s imaginary garden. Moreover, her brief return in “Razor” suggests our ongoing need – a psychic wound in need of a good antisepsis and bandage – for a real leader, one with the courage not just to do the unthinkable on our behalf, but to embrace his role in it, and ride that particular horse all the way to his inevitable destruction and damnation.

Thoughts on the Writers’ Strike

[Image: writers’ strike picket]

The decision by the Writers Guild of America to go on strike this week, bringing production of scripted media content in the U.S. to a halt, triggered a couple of different reactions in me.

1. Thank god for the strike. I say this not because I believe in the essential rightness of unionized labor (though I do), or because I believe writers deserve far more monetary benefits from their work than they are currently getting (though I also do). No, I’m grateful for the strike because there is just too much new content out there, and with the scribes picketing, we now have a chance to recover — to catch up. The launch of the fall TV season has been stressful for me because I’m sharply aware of how many shows are vying for my attention; the good ones (Heroes, House, 30 Rock) demand a weekly commitment, but even the bad or unproven ones (Journeyman, Bionic Woman, Pushing Daisies) deserve at least a glance. And while being a media scholar has its benefits, the downside is that it casts a “work aura” over every leisure activity; it’s nearly impossible to just watch anything anymore, without some portion of my brain working busily away on ideas for essays, blog entries, or material to use in the classroom. Hooray for the stoppage, then: it means more time to catch up on the “old” content spooled up and patiently waiting on DVD, hard drive, and videotape, and more mental energy to spare on each. To live in a time of media plenitude and infinite access is great in its way. But having so much, all the time, also risks reducing the act of engaging with it to dreary automaticity — a forced march.

2. It’s fascinating to watch the differential impact of the script drought diffusing through the media ecosystem. First to go dead are the daily installments of comedy predicated on quick response to current events: nightly talk shows, The Daily Show and The Colbert Report. Next to fold will be the half-hour sitcoms and hour-long dramas currently in production. Some series, like 24, may not get their seasons off the ground at all. And somewhere far down the line, if the strike continues long enough, even the mighty buffers of Hollywood will go dry. Seeing the various media zones blink out one at a time is like watching the spread of a radioactive tracer throughout the body’s organs, reminding us not only of the organic, massively systematic and interconnectedly flowing nature of the mediascape, but of the way in which our media renew themselves at different rates, deriving their particular relevance and role in our lives by degree of “greenness” on the one hand, polish on the other.