William Cameron Menzies and the Clumsy Sublime

This is a paper I wrote for the SCMS 2011 Conference, which took place last March in New Orleans. A family emergency prevented me from attending, so my colleague and good friend Chris Dumas — who organized our panel on Kings Row (Sam Wood, 1942) — kindly gave the presentation in my absence. The essay’s full title, too long for this blog, is “‘Each of Us Live in Multiple Worlds’: William Cameron Menzies and In/Visible Production Design Between Classical and Digital Hollywood.”

 

1: A Not-So-Clumsy Sublime

As a student of special effects, the first time I saw Kings Row my attention was drawn inevitably to those points in the movie where artifice tips its hand: animated lightning bolts superimposed against a stormy sky; a miniature train passing between the camera and what appears to be rear-projected footage; and most of all the backdrops that pervade the film – painted cycloramas of rolling, pastoral hills, the roof lines of houses and mills, vast skies piled with billowing clouds.


Such moments, which are vital not just to establishing the town of Kings Row as narrative space but to mapping the idyllic and nightmarish polarities of Kings Row as cinematic experience, form an inextricable part of the film’s texture. In that they mark interventions by studio trickery, they also document the operations of classical Hollywood during a key period in the development of its illusionistic powers, when emerging articulations among shooting script, art direction, and visualization technologies – choreographed by a new managerial category, the production designer – set the industry on a path that would lead, some seventy years later, to the immersive digital worlds of contemporary blockbuster franchises.

Writing about the use of rear-projection in classical Hollywood, Laura Mulvey has coined the term clumsy sublime to refer to that weird subset of screen imagery in which a cost-saving measure – in her example, filming actors against previously-captured footage – results in a burst of visual incongruity whose “artificiality and glaring implausibility” in relation to the shots that bracket it invites a different kind of scrutiny from the spectator.[1] There is an echo here of Tom Gunning’s famous formulation of the early-cinema “attraction,” which presents itself to appreciative viewers as a startling sensorial display,[2] but Mulvey’s point is that rear projection was rarely intended to be noticed in its time; it only “seems in hindsight like an aesthetic emblem of the bygone studio era.”[3] Like the attraction, the clumsy sublime destabilizes our ontological assumptions about how the image was made (indeed, its impact stems largely from our sudden awareness that the image was manufactured in the first place). But where Gunning argues that contemporary, spectacular special effects carry on the highly self-conscious work of what he calls the “tamed attraction,” the clumsy sublime suggests a more contingent and even contentious relationship to cinema’s techniques of trompe l’oeil, in which illusions originally meant as misdirective sleight-of-hand acquire with age their own aura of movie magic.

Looking at Kings Row as a special-effects film, then, invites us to redraw the borders between visible and invisible special effects – those meant to be noticed as spectacles in themselves and those meant to pass as seamless threads in the narrative fabric – and to consider the degree to which such an apparently obvious distinction, like those that once applied to practical versus optical effects, and which now separate analog from digital modes of production, flows not from some innate property of the artifact but from the cultural and industrial discourses that frame our understanding of film artifice itself.

 

2: William Cameron Menzies and the Visualization of Kings Row

As David Bordwell observes in his blog post “One Forceful, Impressive Idea,” William Cameron Menzies was a pivotal figure in the evolution of film design.[4] After rising to prominence as an art director during the 1920s, he coordinated key sequences of Gone with the Wind (1939), where he originated the title of production designer. Menzies’s detailed breakdowns of each shot, in addition to demonstrating his particular expressionist tendencies (strong diagonals, stark lighting contrasts, forced-perspective settings, and dramatically high or low camera angles), embodied a newly integrative philosophy of composing for the frame. Just as Menzies was an interstitial figure in whom were subsumed those functions of the director and cinematographer having to do with conceiving shots and scenery in dialogue with each other, his sketches and drawings embedded within themselves multiple phases of film manufacture, designating, in addition to set design and actor blocking, “the camera’s viewpoint, the lens used, and any trick effects.”[5] In this way, the first mature storyboards blurred temporal and technological lines between practical and optical special effects, pre- and post-production, while Menzies himself complicated auteurist assumptions about cinematic authorship, leaving his distinctive signature on the movies in which he played the greatest role behind the scenes – in Bordwell’s description, “abduct[ing] these films from their named directors.”[6]

This seems to have been especially true of Sam Wood, whose three-year, five-film partnership with Menzies included Our Town (1940) and The Pride of the Yankees (1942). Kings Row, while neither as lyrical as the former nor as blunt as the latter, represents a more restrained and oblique application of Menzies’s skills, eschewing obvious flourishes in favor of a more controlled approach in which the most elaborate manipulations of time and space are snugly folded into the narrative fabric. Consider, for example, the opening moments: a horse-drawn wagon, silhouetted against a characteristically sky-dominated frame, crosses the prairie as the opening credits play. As the wagon crosses between the camera and a sign reading Kings Row, there is a cut, taking us from footage shot on location to a backlot setting. A rightward tracking shot continues the motion, bringing into view an elementary school from which children emerge, including young versions of protagonists Parris Mitchell, Drake McHugh, and Cassie Tower. The soundtrack’s singing voices hover somewhere between the diegetic and nondiegetic, paralleled by Erich Wolfgang Korngold’s score, evoking the happy play of children while foreshadowing the psychoanalytic themes of the rest of the film.

The efficient encapsulation of plot information, so typical of classical Hollywood narration, is here conveyed through what is essentially a virtual shot stitched together from “real” and “artificial” elements, prefiguring the digitally-assisted establishing shots now commonplace in cinema.

An even more complex assemblage occurs later in the film, as Parris departs to begin his studies abroad. From a long shot of Drake, Parris, and Randy Monaghan on the platform, we cut to a different angle on the same scene, the image degraded and grainy in a way that suggests second-generation footage. Echoing the earlier left-to-right motion of the wagon, a train sweeps into the frame, its miniature status given away by the lack of focus on the foreground element. As the train slides past, a carefully-timed wipe shifts us back to a medium closeup of Randy and Drake. A shot-reverse-shot series shows Parris waving goodbye as the train carries him around the bend, the painted backdrop of the mill in the distance.

Elegant for their era, both of these brief passages presumably went unnoticed by their initial audiences, but with the passage of time their sleight-of-hand has become more evident, constituting new nodes of fascination in a film text that is also – like all movies, but especially those that depend on special effects – indexical evidence of its own manufacture.

Perhaps the most eloquent of Menzies’s contributions to Kings Row are the cycloramas that pepper the film, lending it a painterly, faintly uncanny air. This feeling is present in the town’s train yard as well as its flowery fields, framing the actors in front of them in a theatrical amber similar to that which Mulvey ascribes to rear projection:

Performances … tend to become self-conscious, vulnerable, transparent. The actors can seem almost immobilized, as if they are in a tableau vivant, paradoxically at the very moment in the film when there is a fictional high point of speed, mobility, or dramatic incident.[7]

But in Kings Row the effect of the painted backdrops is different: less of an interruption, more in synch with the story’s themes. The town of Kings Row is, after all, a kind of beautiful trap, nurturing its children only to imprison them like drawings in a storybook, and beneath the pastoral languor of its more innocent vistas run undercurrents of the poisonous, narcotic, or – to adopt the film’s medicinal metaphor for its sadistic counterforces – anesthetic.

Oppositions between innocence and corruption, the sublime and the malign, that shape the film’s darker turns (Cassie’s madness, Dr. Tower’s murder-suicide, the double castration of Drake’s bankruptcy and amputation) are most evident in the shifting portrayal of its most important site, the fence line running along the Mitchell property – a space of transition whose markings of studio artifice reinforce, rather than dilute, its metamorphic extremes.

3: Building Better Screen Worlds, Then and Now

The productions for which William Cameron Menzies is perhaps most remembered are his two forays into science fiction: Things to Come (1936) and Invaders from Mars (1953), whose (admittedly very different) deployments of SF iconography enabled him to indulge his penchant for striking visual invention. His industrial legacy bears out this genetic pairing of strong, centrally-organized production design and the genres of science fiction and fantasy, whose storyworlds tend to be built from the ground up, and whose product differentiation in terms of franchise potential requires the creation of distinct brand identities, recognizable by consumers and defensible by the intellectual-property law that polices a minimum necessary distance between, say, the stylistic universes of Star Wars and Star Trek, or between Harry Potter and The Chronicles of Narnia. The tools available to Menzies in crafting his worlds can be traced to the Special Effects Department at Warner Brothers, where artist-technicians such as Hans Koenekamp, Byron Haskin, and the effects supervisor for Kings Row, Robert Burks, worked on countless films from the 1920s to the 1960s.[8] Their glass shots and matte paintings – as well as their practical effects work such as the creation of wind, lightning, and other environmental effects – have their contemporary counterpart in the digital set extensions and CGI elements whose near-ubiquity says less about the inventiveness of our current screen wizardry than about its vastly increased speed and efficiency.

The classical and analog roots of digital modes of production remain relatively unexcavated in modern special-effects scholarship, whose coherence as a subdiscipline of film and media studies began with the advent of computers as all-purpose filmmaking tools and a fixture of the popular imagination in the late 1990s. But as CGI performs one type of spectacular labor through its monsters, explosions, and spaceships while distracting us from its more quotidian augmentations of mise-en-scène, critical film theory stands to benefit from considering the present era’s counterintuitive linkages to the golden age of Hollywood, which foregrounded smooth verisimilitude through an equally intricate web of technological trickery.

The clumsy sublime, product of a time-based calculus of spectatorship and a shifting state of the art, is an important tool in this critique, in part because it enables new readings of familiar film texts. Seen through the lenses of technology and style that special-effects history provides, a film like Kings Row seems less like a dated artifact than a predictor of the present. For just as its narrative, set at the end of the 19th century and dawn of the 20th, stages on a manifest level the birth of psychoanalysis, its production stages in latent terms the emergence of a filmic apparatus for the production of expressive screen worlds.


[1] Laura Mulvey, “A Clumsy Sublime,” Film Quarterly 60.3 (2007).

[2] Tom Gunning, “The Cinema of Attractions: Early Film, Its Spectator, and the Avant-Garde,” in Early Cinema: Space, Frame, Narrative, ed. Thomas Elsaesser (London: BFI, 1990), 56-62.

[3] Mulvey, “A Clumsy Sublime.” Emphasis added.

[4] David Bordwell, “One Forceful, Impressive Idea,” http://www.davidbordwell.net/essays/menzies.php (accessed March 1, 2011).

[5] Ibid.

[6] Ibid.

[7] Mulvey, “A Clumsy Sublime.”

[8] Peter Cook, “Warner Bros. Presents … A Salute to the Versatility and Ingenuity of Stage 5: Warner’s Golden Era Effects Department,” http://nzpetesmatteshot.blogspot.com/2010/08/warner-bros-presents-sulute-to.html (accessed February 25, 2011).

 

Super 8: The Past Through Tomorrow

Ordinarily I’d start this with a spoiler warning, but under our current state of summer siege — one blockbuster after another, each week a mega-event (or three), movies of enormous proportion piling up at the box office like the train-car derailment that is Super 8’s justly lauded setpiece of spectacle — secrecy decays with a short half-life. If you haven’t watched Super 8 and wish to experience its neato if limited surprises as purely as possible, read no further.

This seems an especially important point to make in relation to J. J. Abrams, who has demonstrated himself a master of, if not precisely authorship as predigital literary theory would recognize it, then a kind of transmedia choreography, coaxing a miscellany of texts, images, influences, and brands into pleasing commercial alignment with sufficient regularity to earn himself his own personal brand as auteur. As I noted a few years back in a pair of posts before and after seeing Cloverfield, the truism expounded by Thomas Elsaesser, Jonathan Gray, and others — that in an era of continuous marketing and rambunctiously recirculated information, we see most blockbusters before we see them — has evolved under Abrams into an artful game of hide and seek, building anticipation by highlighting in advance what we don’t know, first kindling then selling back to us our own sense of lack. More important than the movies and TV shows he creates are the blackouts and eclipses he engineers around them, dark-matter veins of that dwindling popular-culture resource: genuine excitement at the chance to encounter the truly unknown. The deeper paradox of Abrams’s craft registered with me the other night when, in an interview with Charlie Rose, he explained his insistence on secrecy while his films are in production not in terms of savvy marketing but as a peace offering to a data-drowned audience, a merciful respite from the Age of Wikipedia and TMZ. No less clever for their apparent lack of guile, Abrams’s feats of paratextual prestidigitation mingle the pleasures of nostalgia with paranoia about the present, allying his sunny simulations of a pop past with the bilious mutterings of information-overload critics. (I refuse to use Bing until they change their “too much information makes you stupid” campaign, whose head-in-the-sand logic seems so like that of Creationism.)

The other caveat to get out of the way is that Abrams and his work have proved uniquely challenging for me. I’ve never watched Felicity or Alias apart from the bits and pieces that circulated around them, but I was a fan of LOST (at least through the start of season four), and enjoyed Mission: Impossible III — in particular one extended showoff shot revolving around Tom Cruise as his visage is rebuilt into that of Philip Seymour Hoffman, of which no better metaphor for Cruise’s lifelong pursuit of acting cred can be conceived. But when Star Trek came out in 2009, it sort of short-circuited my critical faculties. (It was around that time I began taking long breaks from this blog.) At once a perfectly made pop artifact and a wholesale desecration of my childhood, Abrams’s Trek did uninvited surgery upon my soul, an amputation no less traumatic for being so lovingly performed. My refusal to countenance Abrams’s deft reboot of Gene Roddenberry’s vision is surely related to my inability to grasp my own death — an intimation of mortality, yes, but also of generationality, the stabbing realization that something which had defined me for so many years as stable subject and member of a collective, my herd identity, had been reassigned to the cohort behind me: a cohort whose arrival, by calling into existence “young” as a group to which I no longer belonged, made me — in a word — old. Just as if I had found Roddenberry in bed with another lover, I must encounter the post-Trek Abrams from within the defended lands of the ego, a continent whose troubled topography was sculpted not by physical law but by tectonics of desire, drive, and discourse, and whose Lewis and Clark were Freud and Lacan. (Why I’m so touchy about border disputes is better left for therapy than blogging.)

Given my inability to see Abrams’s work through anything other than a Bob-shaped lens, I thought I would find Super 8 unbearable, since, like Abrams, I was born in 1966 (our birthdays are only five days apart!), and, like Abrams, I spent much of my adolescence making monster movies and worshipping Steven Spielberg. So much of my kid-life is mirrored in Super 8, in fact, that at times it was hard to distinguish it from my family’s 8mm home movies, which I recently digitized and turned into DVDs. That gaudy Kodachrome imagery, adance with grain, is like peering through the wrong end of a telescope into a postage-stamp Mad Men universe where it is still 1962, 1965, 1967: my mother and father younger than I am today, my brothers and sisters a blond gaggle of grade-schoolers, me a cheerful, big-headed blob (who evidently loved two things above all else: food and attention) showing up in the final reels to toddle hesitantly around the back yard.

Fast-forward to the end of the 70s, and I could still be found in our back yard (as well as our basement, our garage, and the weedy field behind the houses across the street), making movies with friends at age twelve or thirteen. A fifty-foot cartridge of film was a block of black plastic that held about 3 minutes of reality-capturing substrate. As with the Lumière cinématographe, the running time imposed formal constraints on the stories one could tell; unless or until you made the Griffithian breakthrough to continuity editing, scenarios were envisioned and executed based on what was achievable in-camera. (In amateur cinema as in embryology, ontogeny recapitulates phylogeny.) For us, this meant movies built around the most straightforward of special effects — spaceship models hung from thread against the sky or wobbled past a painted starfield, animated cotillions of Oreo cookies that stacked and unstacked themselves, alien invaders made from friends wrapped in winter parkas with charcoal shadows under the eyes and (for some reason) blood dripping from their mouths — and titles that seemed original at the time but now announce how occupied our processors were with the source code of TV and movie screens: Star Cops, Attack of the Killer Leaves, No Time to Die (a spy thriller containing a stunt in which my daredevil buddy did a somersault off his own roof).
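(The arithmetic, for anyone who never threaded a cartridge — and assuming the 18 frames-per-second silent speed typical of home movie cameras: Super 8 stock runs 72 frames to the foot, so a fifty-foot load holds about 3,600 frames, roughly 200 seconds or three minutes and twenty seconds of screen time; at the 24 fps sound speed, closer to two and a half minutes.)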

Reconstructing even that much history reveals nothing so much as how trivial it all was, this scaling-down of genre tropes and camera tricks. And if cultural studies says never to equate the trivial with the insignificant or the unremarkable, a similar rule could be said to guide production of contemporary summer blockbusters, which mine the detritus of childhood for minutiae to magnify into $150 million franchises. Compared to the comic-book superheroes who have been strutting their Nietzschean catwalk through the multiplex this season (Thor, Green Lantern, Captain America et-uber-al), Super 8 mounts its considerable charm offensive simply by embracing the simple, giving screen time to the small.

I’m referring here less to the protagonists than to the media technologies around which their lives center, humble instruments of recording and playback which, as A. O. Scott points out, contrast oddly with the high-tech filmmaking of Super 8 itself as well as with the more general experience of media in 2011. I’m not sure what ideology informs this particular retelling of modern Genesis, in which the Sony Walkman begat the iPod and the VCR begat the DVD; neither Super 8’s screenplay nor its editing develops the idea into anything like a commentary, ironic or otherwise, leaving us with only the echo to mull over in search of meaning.

A lot of the movie is like that: traces and glimmers that rely on us to fill in the gaps. Its backdrop of dead mothers, emotionally-checked-out fathers, and Area 51 conspiracies is as economical in its gestures as the overdetermined iconography of its Drew Struzan poster, an array of signifiers like a subway map to my generation’s collective unconscious. Poster and film alike are composed and lit to summon a linkage of memories some thirty years long, all of which arrive at their noisy destination — there’s that train derailment again — in Super 8.

I don’t mind the sketchiness of Super 8’s plot any more than I mind its appropriation of 1970s cinematography, which trades the endlessly trembling camerawork of Cloverfield and Star Trek for the multiplane composition, shallow focus, and cozily cluttered frames of Close Encounters of the Third Kind. (Abrams’s film is more intent on remediating CE3K’s rec-room mise-en-scene than its Douglas Trumbull lightshows.) To accuse Super 8 of vampirizing the past is about as productive as dropping by the castle of that weird count from Transylvania after dark: if the full moon and howling wolves haven’t clued you in, you deserve whatever happens, and to bring anything other than a Spielberg-trained sensibility to a screening of Super 8 is like complaining when Pop Rocks make your mouth feel funny.

What’s exceptional, in fact, about Super 8 is the way it intertextualizes two layers of history at once: Spielberg films and the experience of watching Spielberg films. It’s not quite Las Meninas, but it does get you thinking about the inextricable codependence between a text and its reader, or in the case of Spielberg and Abrams, a mainstream “classic” and the reverential audience that is its condition of possibility. With its ingenious hook of embedding kid moviemakers in a movie their own creative efforts would have been inspired by (and ripped off of), Super 8 threatens to transcend itself, simply through the squaring and cubing of realities: meta for the masses.

Threatens, but never quite delivers. I agree with Scott’s assessment that the film’s second half fails to measure up to its first — “The machinery of genre … so ingeniously kept to a low background hum for so long, comes roaring to life, and the movie enacts its own loss of innocence” — and blame this largely on the alien, a digital MacGuffin all too similar to Cloverfield’s monster and, now that I think about it, the thing that tried to eat Kirk on the ice planet. Would that Super 8’s filmmakers had had the chutzpah to build their ode to creature features around a true evocation of 70s and 80s special effects, recreating the animatronics of Carlo Rambaldi or Stan Winston, the goo and latex of Rick Baker or Rob Bottin, the luminous optical-printing of Trumbull or Robert Abel, even the flickery stop motion of Jim Danforth and early James Cameron! It might have granted Super 8’s Blue-Fairy wish with the transmutation the film seems so desperately to desire — that of becoming a Spielberg joint from the early 1980s (or at least time-traveling to climb into its then-youthful body like Sam Beckett or Marty McFly).

Had Super 8’s closely-guarded secret turned out to be our real analog past instead of a CGI cosmetification of it, the movie would be profound where it is merely pretty. Super 8 opens with a wake; sour old man that I am, I wish it had had the guts to actually disinter the corpse of a dead cinema, instead of just reminiscing pleasantly beside the grave.

 

What is … Watson?

We have always loved making our computers perform. I don’t say “machines” — brute mechanization is too broad a category, our history with industrialization too long (and full of skeletons). Too many technological agents reside below the threshold of our consciousness: the dumb yet surgically precise robots of the assembly line, the scrolling tarmac of the grocery-store checkout counter that delivers our purchases to another unnoticed workhorse, the cash register. The comfortable trance of capitalism depends on labor’s invisibility, and if social protocols command the human beings on either side of transactions to at least minimally acknowledge each other — in polite quanta of eye contact, murmured pleasantries — we face no such obligation with the machines to whom we have delegated so much of the work of maintaining this modern age.

But computers have always been stars, and we their anxious stage parents. In 1961 an IBM 704 was taught to sing “Daisy Bell” (inspiring a surreal passage during HAL’s death scene in 2001: A Space Odyssey), and in 1975 Steve Dompier made his hand-built Altair 8800 do the same, buzzing tunes through a radio speaker at a meeting of the Homebrew Computer Club, an early collective of personal-computing enthusiasts. I was neither old enough nor skilled enough to take part in that initial storm surge of the microcomputer movement, but like many born in the late 1960s, I was perfectly poised to catch the waves that crashed through our lives in the late 70s and early 80s: the TRS-80, Apple II, and Commodore PET; video arcades; consoles and cartridges for playing at home, hooked to the TV in a primitive convergence between established and emerging technologies, conjoined by their to-be-looked-at-ness.

Arcade cabinets are meant to be clustered around, joysticks passed among an appreciative couchbound audience. Videogames of any era show off the computer’s properties and power, brightly blipping messages whose content, reversing McLuhan, is new media, presenting an irresistible call both spectacular and interactive to any nerds within sensory range. MIT’s Spacewar worked both as game and graphics demo, proof of what the state of the art in 1962 could do: fifty years later, the flatscreens of Best Buy are wired to Wiis and PlayStation 3s, beckoning consumers in endless come-on (which might be one reason why the games in so many franchises have become advertisements for themselves).

But the popular allure of computers isn’t only in their graphics and zing. We desire from them not just explorable digital worlds but minds and souls themselves: another sentient presence here on earth, observing, asking questions, offering commentary. We want, in short, company.

Watson, the IBM artifact currently competing against champions Ken Jennings and Brad Rutter on Jeopardy, is the latest digital ingenue to be prodded into the spotlight by its earnest creators (a group that in reaction shots of the audience appears diverse, but whose public face in B-roll filler sums to the predictable type: white, bespectacled, bearded, male). Positioned between Jennings and Rutter, Watson is a black slab adorned with a cheerful logo, er, avatar, conveying through chance or design an uneasy blend of 2001’s monolith and an iPad. In a nearby non-space hums the UNIVAC-recalling bulk of his actual corpus, affixed to a pushbutton whose humble solenoid — to ring in for answers — is both a cute nod to our own evolution-designed hardware and a sad reminder that we still need to level the playing field when fighting Frankenstein’s Monster.

There are two important things about Watson, and despite the technical clarifications provided by the informational segments that periodically and annoyingly interrupt the contest’s flow, I find it almost impossible to separate them in my mind. Watson knows a lot; and Watson talks. Yeats asked, “How can we know the dancer from the dance?” Watson makes me wonder how much of the Turing Test can be passed by a well-designed interface, like a good-looking kid in high school charming teachers into raising his grades. Certainly, it is easy to invest the AI with a basic identity and emotional range based on his voice, whose phonemes are supplied by audiobook narrator Jeff Woodman but whose particular, peculiar rhythms and mispronunciations — the foreign accent of speech synthesis, as quaint as my father’s Czech-inflected English — are the quirky epiphenomena of vast algorithmic contortions.

Another factor in the folksiness of Watson is that he sounds like a typical Jeopardy contestant — chirpy, nervous, a little full of himself — and so highlights the vaguely androidish quality of the human players. IBM has not just built a brain in a box; they’ve built a contestant on a TV game show, and it was an act of genius to embed this odd cybernetic celebrity, half quick-change artist, half data-mining savant, in the parasocial matrix of Alex Trebek and his chronotypic stage set: a reality already half-virtual. Though I doubt the marketing forces at IBM worried much about doomsday fears of runaway AIs, the most remarkable thing about Watson may be how benign he seems: an expert, and expertly unthreatening, system. (In this respect, it’s significant that the computer’s name — officially a tribute to IBM founder Thomas J. Watson — evokes not the brilliant and erratic Sherlock Holmes but his perpetually one-step-behind assistant.)

Before the competition started, I hadn’t thought much about natural-language processing and its relationship to the strange syntactic microgenre that is the Jeopardy question. But as I watched Watson do his weird thing, mixing moronic stumbles with driving sprints of unstoppable accuracy, tears welled in my eyes at the beautiful simplicity of the breakthrough. Not, of course, the engineering part — which would take me several more Ph.D.s (and a whole lotta B-roll) to understand — but the idea of turning Watson into one of TV’s limited social beings, a plausible participant in established telerituals, an interlocutor I could imagine as a guest on Letterman, a relay in the quick-moving call-and-response of the one quiz show that has come to define, for a mass audience, high-level cognition, constituted through a discourse of cocky yet self-effacing brilliance.

Our vantage point on Watson’s problem-solving process (a window of text showing his top three answers and level of confidence in each) deromanticizes his abilities somewhat: he can seem less like a thinking agent than an overgrown search engine, a better-behaved version of those braying “search overload” victims in the very obnoxious Bing ads. (Tip to Microsoft: stop selling your products by reminding us how much it is possible to hate them.) But maybe that’s all we are, in the end: social interfaces to our own stores of internal information and experience, talkative filters customized over time (by constant interaction with other filters) to mistake ourselves for ensouled humans.

At the end of the first game on Tuesday night, Watson was ahead by a mile. We’ll see how he does in the concluding round tonight. For the life of me, I can’t say whether I want him to win or to lose.

Oscar Notes 2011: Black Swan

This week, busy with a writing project, I barely poked my head from the Man Cave except to eat, sleep, or watch iCarly; into the confines of my cocoon the massive changes taking place in the world were filtered to a distant rumble, tremulous but implacable. Today, deadline met, I emerged to find a people’s revolution in Cairo, the protesters’ din of dissatisfaction turned to cheers. I am pleased by this apparent triumph of the democratic spirit, as well as by a victory for more peaceful, if passionate, tactics of overthrow. (I am, after all, half-Czech.) But something limits my happiness. I have learned to be cautious of my attraction to feel-good narratives in fiction, which, finding its unhealthy ally in the spin mechanisms of news and politics, makes me susceptible to feel-good metanarratives. Would that I could find in myself an iota of the Egyptians’ courage and faith!

Also delayed by the week’s work: the next in my series of notes on this year’s Best Picture nominees. Beware of spoilers; other posts can be found here.

Black Swan

Natalie Portman is surrounded by a powerful force field of genre that clouds my mind, the result of her early starring role in Luc Besson’s gold-tinged fairy tale of a father-assassin, Léon: The Professional (1994), and — crucially — her turn as Padmé Amidala in the three Star Wars prequels (1999-2005). There, in his mad but pedestrian fantasies, George Lucas doomed Portman to the same plasticization he inflicted on Ewan McGregor, as though the director were showing off his ability to convert vital young actors into synthespians avant la lettre. Apart from these two mythically-overdetermined roles, Portman hasn’t really jumped out at me; certainly I wasn’t prepared for the vicious, wincing beauty of her performance in Black Swan.

Darren Aronofsky I also find something of an indirect object. His first film, Pi (1998), seemed almost untoppable in its perfection: minimal yet cosmic in the manner of the Twilight Zone and Outer Limits episodes that supplied its black-and-white grain and shoestring-budget nerd-horror. But we went our separate ways with the assaultive Requiem for a Dream (2000), whose blunt moralizing coarsened and corrupted the élan of its editing and cinematography. No fan of being brutalized, I ignored The Fountain (2006) and suspected The Wrestler’s (2008) self-effacing warmth was just a tactic to get close enough to hurt me again.

Black Swan doesn’t need to line up neatly on some chart of my fears and fixations, of course; it’s allowed to be what it is, an exercise in style as broad as Sirk in its swoony melodrama and as slender as a surgical needle in its excitation of our nerves. Maybe the reason I want to graph it is that it so unerringly pinpoints a certain set of cinematic intersections — Alfred Hitchcock, Dario Argento, Brian De Palma, with a sprinkling of David Cronenberg and a side of Fritz Lang — pinning Portman to their nexus like a butterfly. It could be the most misogynistic film since True Lies (1994), that insufferably jovial Abu Ghraib of an action movie, but like James Cameron, Aronofsky has a way of turning the suffering of his women inside out, building up their vulnerability only to reverse it into (often deadly) toughness: female body become Swiss Army knife.

The movie’s narrative of possession — as in being possessed — encourages us to cheer for Portman’s character, Nina, even as she devolves into an ever more unhinged and unsettling state; she’s more than a little like Catherine Deneuve in Repulsion (Roman Polanski, 1965). Is Black Swan simply another story about a beautiful monster, whom we pity even as we recoil from her? By the film’s very design, it’s impossible to say: the closing moments made me laugh like I was finally getting a joke, but as in The Game (David Fincher, 1997), I couldn’t tell you what the punchline meant.

Oscar Notes 2011: 127 Hours

More thoughts on this year’s Best Picture nominees. I’m writing with the assumption that readers have seen the films in question, so please beware of spoilers. Other posts in the series can be found here.

127 Hours

If The King’s Speech is a castration run in reverse — the restoration of potency after a lifelong absence — 127 Hours offers up this most basic of phallic dramas in its correct, fated order: a gathering dread that culminates in a foundational wounding. (As in most psychodynamics, of course, time’s arrow is rarely straightforward: castration, like the primal scene, can only ever be retroactively experienced, trauma reconstructed in phantasy.) Knowledge of what is to come colors the entire film; the self-amputation performed by Aron Ralston (James Franco) awaits us at the end of the narrative like Shelob in her lair, and the sunny boisterousness of the rest of the movie (at least as it’s been shot and edited) seems like a long inoculation against those inevitable minutes of agony.

And what beautifully rendered agony it is! Movies are getting good at this lately — the stimulation of our pain centers via optical and audio channels, torture at a distance. (I blame, and thank, pornography.) The Passion of the Christ (Mel Gibson, 2004) still holds the record for the most lovingly conceptualized and rhapsodically paced destruction of an onscreen body, a four-course feast of suffering served up in more efficient form by the Saw franchise (2004-present) with the reliable abundance of fast-food burgers sliding down their stainless-steel troughs. Japanese guinea-pig films worked out much of the cinematic algebra involved, but it is in the French “new extreme” movement, specifically À l’intérieur (Julien Maury and Alexandre Bustillo, 2007) and Martyrs (Pascal Laugier, 2008), that we find this movie’s closest conceptual cousins, exploring the ways in which the visitation of unspeakable violence upon an avatarial stand-in for the spectator results in a kind of mutual apotheosis. As Aron, starved, dehydrated, and bleeding, stumbles out of his death trap, so we leave the theater cleansed, reborn. Having seen ourselves torn apart in the mirror of the movie, we appreciate anew the intactness of our limbs.

Being antisocial myself, I resent the way in which Ralston’s trials have been framed as a kind of punitive purgatorial isolation — the price of his disconnection from society. Ten years after 9/11, apparently, it’s become a bad thing to be a hero (the term used more than once not to praise but to chastise the self-sufficient outdoorsman), and the paneled montages of bustling crowds that open and close the movie read not as condemnations but celebrations of what I can only, in my own grumpy solitude, label herd security: endorsement of the arrival-gate fuzzies of Love Actually (Richard Curtis, 2003) over the misanthropic kaleidoscope of Koyaanisqatsi (Godfrey Reggio, 1982).

I have come to expect surprises from Danny Boyle, which when you think about it is a bit of a paradox. It’s the same way I feel about Quentin Tarantino and the Coen Brothers — the sense that they are minting, with each new film, fresh and highly-specific genres — though Boyle tends to work territory for which I’m more of a sucker, like 28 Days Later (2002) and Sunshine (2007), the latter being one of the most sublimely gorgeous science-fiction films ever made, exceptforitslastthirdwhichsucks. Boyle is showy in all the right ways, setting himself crazy storytelling challenges and then using style to sucker-punch them into submission. But his game in 127 Hours might finally be too similar to that of his previous film, Slumdog Millionaire (2008), another passion play about a young man in mortal danger whose backstory is parceled out in advent-calendar glimpses.

As for James Franco: as far as I’m concerned, this and Freaks and Geeks (1999) more than make up for his turn in the Spider-Man films.

Oscar Notes 2011: The King’s Speech

Over the past week I’ve been doing my homework for the Academy Awards, working my way through the Best Picture nominees. I have enough old-school lead in my shoes to still drag my feet when it comes to the list’s expansion to ten titles: since 2009, the lineup has seemed a little less … elite. On the other hand, I’m new-school enough to recognize that, in this context, “elite” is just another word for “canon fodder,” and if there’s more room in the pool, the diversity of our celebrated archives can only increase. (Unless we adopt the cynical view that it’s all the same brand of sausage, in which case, I suggest skipping the media middleman and just eating some sausage.) With this round of nominations, at least, I savored the grab-bag effect, the ten films up for Best Picture a satisfyingly strange mixture of tastes. Leading up to the Oscars, I will post some quick thoughts on the nominees.

The King’s Speech

On Facebook I called this film “Lacanian,” triggering a long chain of comments — proof less of my throwaway profundity, I know, than of the internet’s global function as a text generator, an explosive growth medium for words compared to which the linkless and un-live petri dish of the prior epoch, Gutenberg’s, now looks a limited arena indeed. (Funny, just a few decades ago it seemed the size of the universe.) As a friend pointed out recently on this very blog, the invocation of Lacan is itself another kind of lexical kudzu, and even if you threatened me with captation, I couldn’t name a specific teaching of the Master’s that applies to The King’s Speech.

Yet a Lacanian thing it remains, and here is why: it is explicit, clinical, and unsparing in its knotting of language, (royal) authority, and fathers – its title in French could be Nom du père. Watching Colin Firth struggle, strangle, to find his voice, one is reminded that whether or not the subaltern speaks, “superaltern” status is determined first by the seizing of language. The triumphant surge of jouissance generated by the new King’s successful navigation of his radio address has a phallic rush to it — the story is a castration in reverse — but it is a tragic trap George VI finds himself in at the end, subject of a discourse inherited from his father as surely as the British are subjects of him.

Having watched a few episodes of the HBO miniseries John Adams directed by Tom Hooper, I notice the director has a characteristic way of shooting dialogue: shot-reverse-shot constructions in which the interlocutors’ heads are positioned to the far right and left of their respective frames, so that if you superimposed them you would get a two-shot. Edited together, the sequences have a lovely rhythm, the back-and-forth of the conversation built on a seesawing center of visual gravity. With Colin Firth and Geoffrey Rush playing two halves of what is essentially a single, sundered subjectivity, the effect is as though Bergman’s Persona had been recast with men, its splitscreen compositions opened out in time.

It would make a great screening for a course on new media — especially one whose syllabus takes seriously the axiom that all media are new in the time of their introduction. Fascinated to an almost Cronenbergian degree by the alien apparatus of the microphone and radio dial, The King’s Speech flirts with science-fictional status in its dissection of the transformative cultural and political impact of emergent information technologies and the social protocols in which they are swaddled. Like Gunga Din (George Stevens, 1939), whose plot is set in motion by the disruption and repair of a telegraph line in India through which colonial British messages travel, The King’s Speech understands that power and communication find their inextricable nexus in the media machines that distribute that other machine, language.

The Reimagination of Disaster

Adapting Watchmen After 9/11

For a work that gives off such a potent impression of originality and singularity, Watchmen has always been haunted by the concept of the near-parallel, the skewed copy, the version. Start with its setting, an alternate-reality 1985. Cocked at a knowing and sardonic angle to our own, the world of Watchmen is one in which superheroes and costumed crimefighters are real, America won the Vietnam War, and Richard Nixon has just been elected to his fifth term as U.S. President. Consider too the book’s industrial origins in the early 1980s, when DC purchased a set of characters from the defunct Charlton Comics and handed them to a rising star, the British writer Alan Moore, to spin into a new series. When it became clear that the “close-ended scenario” Moore envisioned would preempt the Charlton lineup’s commercial possibilities, Moore and his collaborator, illustrator Dave Gibbons, simply reinvented them: Blue Beetle became Nite Owl, Captain Atom became Dr. Manhattan, The Question became Rorschach, and so on — an act of “reimagining” avant la lettre (1). The result, a 12-issue limited series published in 1986-1987 and collected in a single volume many times thereafter, is one of the undisputed key works in any canon of comic-book literature (or, if you prefer, graphic novels), winning a 1988 Hugo Award and being named by Time magazine one of the 100 best English-language novels from 1923 to 2005 (2).

But if Watchmen’s transit across the hierarchies of culture constitutes yet another level of indeterminacy — a kind of quantum tunneling among the domains of geek, popular, and elite taste — this trajectory seemed to hit its limit at the ontological divide between the printed page and moving-image media. The drive to turn Watchmen into a movie arose early and failed often over the next two decades. Producer Joel Silver and directors Terry Gilliam and Darren Aronofsky were among those who ultimately fell before the challenge of what Moore described as an “unfilmable” text, owing to its dependence on the formal aesthetics of comic-book storytelling (3). By the time Zack Snyder’s adaptation finally made it to the screen in 2009, the mission had grown beyond a mere cinematic “take” on the material into the production of something like a simulacrum, marshalling a host of artistic and technological resources to recreate in exact detail the panels, dialogue, and world-design of the original.

Only one thing was changed: the ending.

Watchmen’s climax involves a conspiracy by the polymathic industrialist Adrian Veidt — alter ego of the superhero Ozymandias — to correct the course of a world on the brink of nuclear Armageddon. As written by Moore and drawn by Gibbons, Veidt teleports a giant, genetically-engineered squid into the heart of New York City, killing millions and tricking global superpowers into uniting against a perceived alien invasion. Snyder’s version omits the squid in favor of a different hoax: in the movie, Dr. Manhattan, a superbeing created by atomic accident, is set up as the fall guy for a series of explosions in major world cities. As explained by Snyder and screenwriter Alex Tse in commentaries and interviews, the substitution elegantly solves a number of storytelling tangles, cutting a Gordian knot much like the one faced by Veidt. It simplifies the narrative by eliminating a running subplot; it employs a major character, Dr. Manhattan, whose godlike powers and alienation from humanity provide a logical basis for the blame he receives; perhaps most importantly, it trades an outrageous and even laughable version of apocalypse for something more familiar and believable.

Measured against the relentless fidelity of the rest of the project, the reimagined ending of Watchmen has much to say about the process of adaptation in an era of blockbuster filmmaking and ubiquitous visual effects, as well as the discursive means by which differences between one version and another are negotiated in an insistently expansive culture of seriality and transmedia. But it is also a striking and symptomatic response to an intervening real-world event, the terrorist attacks of 9/11, and our modes of visualizing apocalypse and its aftermath.

From the start, Watchmen’s reputation for unadaptability stemmed not just from its origin as a graphic novel but from the self-reflexive way it drew on the unique phenomenology and affordances of comic-book storytelling. Its 384 pages interpolate metafictional excerpts from personal memoirs and tabloid interviews, business memos, and product brochures. The wealth of invented and implied information in these prepackaged paratexts extends the effects of Gibbons’s artwork, laid out in precise, nine-panel grids brimming with background details, from the logo of the Gunga Diner restaurant chain to the smokeless cigarettes, electric cars, and dirigible airships that signal an alternative technological infrastructure. Shaped by symmetries large and small, the comic’s plot — a mystery about the murder of costumed heroes — emerges as a jigsaw of such quotidian pieces, assembled by readers free to pause and scan back through the pages, rereading and recontextualizing, in a singularly forensic and nonlinear experience of the text.

Nowadays, the difficulty of appreciating the unprecedented nature of Watchmen is, ironically, proof of its genre-altering influence: grim dystopias and grittily “realistic” reinventions of superheroes quickly became a trend in comics as well as film, making Watchmen a distant parent of everything from Kick-Ass (Mark Millar and John Romita, Jr., 2008-2010) to Heroes (2006-2010) and The Dark Knight (Christopher Nolan, 2008). It may also be hard, in retrospect, to grasp why a detailed fictional world should pose a challenge to filmmakers. For his 1997 “special editions,” George Lucas added layers of CG errata to the backgrounds of his original Star Wars trilogy, and contemporary techniques of match-moving and digital compositing make it possible to jam every frame with fine detail. (Given that this layered ornamentation is best appreciated through freeze-frames and replays — cinematic equivalents of flipping back and forth through the pages — one might trace this aesthetic back to Watchmen as well.) Simply put, the digital transformation of filmmaking, a shift most visible in the realm of special effects but operative at every level and stage of production, has made the mounting of projects like Watchmen relatively straightforward, at least in technical terms.

But the state of the art did not emerge overnight, and Watchmen’s path to adaptation was a slow and awkward one. Two concerns dominated early efforts to convert Moore’s and Gibbons’s scenario into a workable screenplay: compression of the comic’s scope (with a consequent reduction of its intricacy); and how to handle its setting. The economics of the blockbuster, built on the exigencies of Classical Hollywood, dictate that confusion on the audience’s part must be carefully choreographed — they should be intrigued and mystified, but not to the point where they choose to take their entertainment dollars elsewhere. Turning Watchmen into a successful feature film meant committing, or not, to a premise that probably seemed more formidable at a time when alternate realities (again, “versions”) were a limited subset of science fiction.

Sam Hamm, the first screenwriter to tackle the adaptation, rejected the squid subplot as an implausible lever for world peace, saying, “While I thought the tenor of the metaphor was right, I couldn’t go for the vehicle.” (4) His 1989 script climaxes with Veidt opening a portal in time in order to assassinate Jon Osterman before he can become Dr. Manhattan — undoing the superhuman’s deforming effects on the world. Although Veidt fails, Manhattan grasps the logic of his plan, and opts to erase himself from the timeline. The ensuing paradox unravels the alternate reality and plunges surviving heroes Nite Owl, Rorschach, and the Silk Spectre into “our” unaffected world.

It is tempting to view Hamm’s ending as a blend of science-fiction film motifs dominant at the end of the 1980s, fusing the time-traveling assassin of The Terminator (James Cameron, 1984) with the branching realities of Back to the Future Part II (Robert Zemeckis, 1989). Certainly David Hayter’s 2003 script takes the ending in a different direction: here Veidt bombards New York City with concentrated solar radiation, killing one million, in order to establish himself as a kind of benevolent dictator. Again, Veidt dies, but — in a denouement closer to the original’s — the hoax is allowed to stand, since to reveal the truth would return the world to the brink of war.

The version that Snyder finally filmed, working from a screenplay credited to Hayter and Alex Tse, makes just one final adjustment to the ending, one that could be seen as a synthesis of the versions that had come before: instead of solar radiation, it is Dr. Manhattan’s energy signature that destroys cities around the world, and Dr. Manhattan who takes the blame, exiling himself to explore other forms of reality.

It may be overestimating the depth of Hollywood’s imagination to suggest that its pairing of Zack Snyder and Watchmen was an inspired rather than merely functional decision — that Warner Brothers saw Snyder not just as a director with the right specializations to make their film, but as a kind of auteur of adaptation. Certainly Snyder had proved himself comfortable with vivid, subculturally cherished texts, as well as effects-intensive film production, with his first two movies, Dawn of the Dead (2004) and 300 (2006). The latter film in particular, shot on a digital backlot of greenscreens with minimal sets, must have seemed an ideal résumé for Watchmen, demonstrating Snyder’s ability to bring static artwork — in this case, Frank Miller’s and Lynn Varley’s — to strange, half-animated life on screen while maintaining his own distinct layer of stylization. (Snyder’s predilection for speed ramping, in which action shifts temporal gears midshot, may be his most obvious and mockable tic, but it is a clever way to substitute a cinematic signifier for comic art’s speed-line symbolalia.)

Rejecting prior half-measures at sanding off the story’s rough edges, Snyder embraced Moore’s and Gibbons’s work almost literally as a bible, letting the collected Watchmen guide production design “like an illuminated text, like it was written 2000 years ago.” (5) The prominence of this sentiment and others like it in the host of materials accompanying Watchmen’s marketing can be seen as a discursive strategy as much as anything else, a move to reassure prospective audiences — a group clearly identified early on as a small but important base worthy of wooing, in much the manner that New Line Cinema cultivated and calibrated its relationships with fans during the making of The Lord of the Rings. (6) Enacted at the layer of the manufactured paratextual penumbra that, as Jonathan Gray reminds us, is now de rigueur for blockbuster film events, public performances of allegiance to a single, accepted reference constitute a crucial discursive technology of adaptation, working in concert with the production technologies involved in translating fan-cherished works. (7)

One such production technology doubling as discursive tool was the participation of Dave Gibbons. In DVD and Blu-ray extras, Gibbons tours the set, posing with Snyder, all smiles, confirming the authenticity and integrity of the production. “I’m overwhelmed by the depth and detail of what I’m seeing,” he wrote of his visit. “I’m overwhelmed by the commitment, the passion, the palpable desire to do this right.” (8) Working with Moore in the 80s, Gibbons played a profound part in the origination of Watchmen, one extending far beyond the simple illustration of a script; the two brainstormed together extensively, and the story’s more reflexive and medium-aware qualities demanded near-microscopic coordination of story and image. Further, Moore’s choice to remove himself from the chain of mandatory citation that constitutes authorship as legal status, relinquishing any claim on the filmic realization of Watchmen, leaves only Gibbons to take credit for the work.

But it is hard to escape a suspicion that promotional discourses around the film lay a surreptitious finger on the scale, biasing the source’s creative center of gravity toward Gibbons, whose contributions are, after all, those most “available” to film production: the design of sets, costumes, props; character physiques and hair styles; background environments and architecture; even the framing of key shots and events. Graphic illustration and cinematic manufacture meet at the layer of the look, understood here not through film theory’s account of the gaze but the more pragmatic (drawable, buildable) framework of mise-en-scene.

Within this odd binary — the highlighting of Gibbons, the structuring absence of Moore — Snyder appears as something of a third term, positioned not as creator but as faithful cinematic translator. His mode of “authorship” is defined chiefly, and perhaps paradoxically, as a fierce and unremitting loyalty to the work of others.

All of these forces come together in the paratextual backstory of the film’s single biggest change. The tie-in publication Watchmen: The Art of the Film reprints storyboards created by Gibbons for the new ending. According to Peter Aperlo, “these new ‘pages’ were drawn at Zack Snyder’s request during pre-production, to ensure that the film’s re-imagined ending nevertheless drew from an authentic source.” (9) Completing the cycle, Gibbons also provided promotional artwork illustrating key images from the film for use in marketing. These interstitial storyboards perform a kind of suture between the industrial level of film-as-artifact and the communicational level of film-as-public-presence, knitting together the visualizations of Gibbons and Snyder in a fusion that guarantees the film’s pedigree while plastering over the hole left by Moore’s departure.

The versionness that has always haunted Watchmen is also present in our hesitation before the question of whether it is a closed, finite text or a sprawling serial object. Is there one Watchmen or many? This indeterminacy too seems embedded in the text’s fortunes from the start: whether you label it a “comic book” or “graphic novel” depends on whether you think of the original as a sequence of twelve discrete issues released one month apart, or as a single work collected between the covers of a book. In a split kin to Roland Barthes’s distinction between writerly and readerly texts, the latter perspective underpins perceptions of the story as something inviolate, to be protected from tampering; the former leans toward modification, experimentation, openness.

Similarly, Watchmen’s storyworld — the intricately conceived and exhaustively detailed settings whose minutiae, repeated from panel to panel, form a stressed palimpsest — displays characteristics of both a one-off extrapolation, complete unto itself, and the endlessly expandable backdrops of serial fictions like Star Trek, larger than any one instance of the text can contain.

Matt Hills uses the term hyperdiegesis to identify these “vast and detailed narrative spaces” and the rules of continuity and coherence that organize them into explorable terrain (10). A defining attribute of cult texts, hyperdiegesis invites expansion by official authors and fan creators alike, thus functioning as a space of contest and negotiation as well. The approach taken by Snyder, with the participation of Gibbons, is to treat the weakly serial Watchmen as a strongly serial text with a hyperdiegesis whose facts as established — not just the storyworld’s surface designs, but its history, society, and politics — are to be modified only at the adapter’s peril, risking the loss of audiences’ faith.

Watchmen (the film), in adopting its fanatically faithful approach to the visualization layer of the original, risks also replicating its sealed, hermetic qualities. “The cumulative effect of the astonishing level of attention,” writes Peter Y. Paik, “is a palpable sense of suffocation, so that the world of Watchmen ultimately takes shape in the form of a totality that has become wholly closed in upon itself.” (11) For Paik, the “constricting nature of this mortified reality” is Moore’s and Gibbons’s way of conveying the unique existential prison of Watchmen, “in which is ruled out any form of change other than an abrupt and global transformation of the very conditions of existence, such as would take place with the extinction of Homo sapiens.”

This special case of hyperdiegesis, then — in which a closed fictional universe is forced open, via translatory technologies of visualization, across a terminator dividing still comic art from moving cinema image — is one in which the authorial status of Moore and Gibbons is collapsed with that of Zack Snyder, and in turn with the world-manipulating powers of Doctor Manhattan and (in another register) Adrian Veidt. Just as Veidt alters global existence with his own bold intervention, so does Snyder enact a fundamental ontological violence to deform, remake, and relaunch Watchmen in a new medium.

Snyder’s decision to hew as closely as possible to the visual content and layout of the comic book had a number of strategic effects. By creating an elevated platform for the many contributions of Dave Gibbons, it accounted for Moore’s absence while smoothly installing a third partner, Snyder, as fair-minded referee whose task was less to create his own artwork than to render onscreen the comic’s nearest possible analog. His role can thus be understood as mediating an encounter between two media, bridging a gulf born both of form (comics versus movies) and of time (represented by the “new” of CGI).

The decision also made available to the production an archive of already-designed material, the extensive visual “assets” from which Watchmen is woven. More than simply a catalog of surfaces, the comic’s panels embed within themselves the history and operating principles of an alternate reality, a backdrop the production would otherwise have had to invent — an effort that would have relegated the film to an inherently derivative status relative to the original. Going hyperfaithful with the adaptation deflected the production’s need to “top” its source design, assigning it instead the new mission of reverent translation.

At the same time, as I have argued, Snyder’s film risked getting stuck in a bubble of irrelevance, the strange half-life of the cult oddity, whose self-referencing network of meaning results (to outsiders) in the production of nonmeaning: a closed system. Preserving too exactly an inherited set of signs took the politically and mythologically overdetermined text of Watchmen and turned it into a “graphically” overdetermined movie, an affective state described by critics in terms like airless and inert.

From an auteurist point of view, going with a different ending may have been Snyder’s way of increasing the chance that his work would be taken as more than mere facsimile. From an industrial standpoint, dropping the squid may have seemed a way of opening up the story’s interpretations, making it available to a broader audience whose shared base of experience did not include giant teleporting squids, but most certainly did include visions of skyscrapers falling and cities smoldering.

“It seems that every generation has had its own reasons for destroying New York,” writes Max Page, tying our long history of visualizing that city’s destruction in film, television, and other media to a changing set of collective fears and preoccupations (12). From King Kong‘s periodic film rampages (1933, 1976, 2005) to Chesley Bonestell’s mid-century paintings of asteroid strikes and mushroom clouds, to nuclear-armageddon scenarios like Fail-Safe (1964) and 24 (2001-2010), the end of the city has been rendered and rerendered with the feverish compulsion of fetish: a standing investment of artistic resources in the rehearsal — and hence some meager control over the meanings — of apocalypse.

At the time Moore and Gibbons were creating Watchmen, the dominant strain in this mode of visualization had shifted from fears in the 1950s and 60s of atomic attack by outsiders to a sense in the 70s and 80s that New York City was rotting from within, its social landscape becoming an alien and threatening thing. The signifiers of this decline — “crime, drugs, urban decay” (13) — were amplified to extremes in Escape from New York (John Carpenter, 1981), shifted into the register of supernatural comedy in Ghostbusters (Ivan Reitman, 1984), and refracted through superhero legend in Batman (Tim Burton, 1989).

The New York of Watchmen is divided into sharply stratified tribes, beset by street drugs, and prone to mob violence; moreover, the city stands in for a state of world affairs in which corrupt superpowers square off over the heads of balkanized countries whose diversity only leads to hostility — multiculturalism as nightmare, not the choral utopia of Coca-Cola’s “I’d Like to Teach the World to Sing” (1971) but the polyglot imbroglio of Blade Runner (1982).

Viewed this way (as following from a particular thesis about the nature of society’s ills), the original ending with the squid can be seen as the Tower of Babel story in reverse: a world lost in a confusion of differences becomes unified, harmonious, one. That this is achieved through an enormous hoax is the hard irony at the heart of Watchmen — a piece of existential slapstick, or to quote another Alan Moore creation, a “killing joke.”

One consequence of having so insistently committed simulations of New York’s demise to the visual record was that the events of September 11 — specifically the burning and collapse of the World Trade Center — arrived as both a horrible surprise and a fulfillment of prophecy. The two towers had fallen before, in Armageddon and Deep Impact (both 1998), and the months after 9/11 were filled with self-analysis and recrimination by media suddenly conscious of their potential complicity, if not in the actual act of terrorism, then in its staging as fantasy dry-run.

Among its other effects, 9/11 brought to devastating life a conception of sheer urban destruction that had formerly existed only in the precincts of entertainment. It also crystallized — or enabled our government to crystallize — the enemies responsible, a shadowy network whose conveniently elastic boundaries could expand to encompass whole cultures or contract to direct lethal interrogative force on individual suspects. The Bush administration’s response to the attacks, as played out in suppressions of liberty and free press in the U.S., in bellicose pronouncements of righteous vengeance in the world court, and ultimately in the Afghanistan and Iraq wars, was like a cruel proof of Veidt’s concept, conjuring into simultaneous existence a fearsome if largely fictional enemy and a “homeland” united in public avowal, if not in actual practice.

One might make the case that Moore and Gibbons in some way “predicted” 9/11 — not in the particulars of the destruction, but in its affective impact and corresponding geopolitical consequences. Just as the squid released in death a psychic shockwave, killing millions while leaving buildings untouched, so did the cognitive effects of the 9/11 attacks ripple outward: at first in a spectacular trauma borne viruslike on our media screens, later in the form of a post-9/11 mindset notable for its regressive conflation of masculinity and security (14). Ten years on, it is highly debatable whether U.S. actions after 9/11 resulted in a safer or more unified world; we seem in some ways to have ended up back where we started, poised at a tipping point of crisis, just as the concluding panels of Watchmen‘s circular narrative “leave in our hands” the decision to publish Rorschach’s journals and blow the hoax wide open. But in the initial, heady glow of late 2001 and early 2002, with the Abu Ghraib revelations (that blunt and pornographic proof of the enterprise’s rotten core) years away, it seemed briefly possible that the new reality we confronted might turn out to be “a stronger loving world.”

That mass deception underlies Veidt’s plan is, of course, another connection to 9/11, in the eyes of those who believe the attacks were carried out by the U.S. Government. Conspiracy theorizing around 9/11 coalesced so quickly it almost qualifies as its own subdiscipline of paranoid reasoning, rivaling perpetual motion and the assassination of JFK as one of the great foci of fringe scholarship. It would hardly be surprising if the original Watchmen was taken up by this movement as a piece of evidence before the fact, for as Stephen Prince observes, pre-9/11 films such as Gremlins (1984), The Peacemaker (1997), and Traffic (2000) have all been accused of embedding subliminal messages about the impending event. (15)

As a media text manufactured well after 9/11, Snyder’s adaptation faced a dual challenge: not simply whether and how to change the ending to something more narratively and conceptually streamlined, but how to negotiate showing the two towers, which by the logic of the period, even an “alternative” one, were still standing in 1985. The World Trade Center does not figure among the landmarks consumed by the energy wave, which in any case occupies little screen time (left unadapted is the famed series of full-page splash pages that opens Watchmen’s final chapter, an aftermath of panoramic carnage whose stillness builds creepily upon the static nature of the art). But the towers do appear in at least two shots, during the Comedian’s funeral and in the background of Veidt’s office. Both scenes were seized upon for commentary by fans as well as conspiracists. As one blogger wrote,

My interpretation was that Snyder was giving a nod to the fact that the terrible, horrible, completely implausible conclusion of Watchmen has, in fact, already happened. The 9/11 attacks were a lot like the finale of the book — alien, unexpected, tragic, unifying — only without the giant squid. … Snyder felt he had to at least address that in some way. The allusions to the towers were a way of saying, “Okay, I get it. This has already happened.” (16)

The effect of 9/11 on cinematic representation has been to turn either choice — to show or not to show — into a significant decision. For a time following 9/11, studios scrambled to scrub the twin towers from the frame, in films like Zoolander, Serendipity, and Spider-Man. (17) Later appearances of the World Trade Center took on inevitable thematic weight, in films such as Gangs of New York and Munich — presumably a motivation shared by Snyder’s Watchmen, which uses the towers to underscore rhetorical points.

Adaptation is not a new concern for film and media studies, any more than it is a new phenomenon in the industry; as Dudley Andrew points out, “the making of film out of an earlier text is virtually as old as the machinery of cinema itself.” (18) One of the most “frequent and tiresome” discussions of adaptation, Andrew goes on, is the debate over fidelity versus transformation — the degree to which an adapted work succeeds, or suffers, based on its allegiance to an outside source. As tired as this conversation might be, however, Watchmen demonstrates that it continues to structure adaptation both at the level of production (in which Snyder and his crew boast of their painstaking reverence for the original) and of reception (in which fans analyze the film according to its treatment of Moore and Gibbons). In this way, controversy over the squid’s disappearance goes hand in hand with responses to the hyperfaithful visual setting, as referenda on Snyder’s approach; love it or hate it, we cannot resist reading the film of Watchmen alongside the comic book.

But a time of new media gives rise to new questions about adaptation. Whether we understand the mores of contemporary blockbuster filmmaking in terms of ubiquitous special effects, the swarm deployment of paratexts and transmedia, or the address of new audience formations through new channels, Watchmen reminds us that all texts mark adaptations in the evolutionary sense, forging recognizability across an alien landscape and establishing continuities with beloved texts — and the media they embody — across the digital divide.

Works Cited

1. Dave Gibbons, Chip Kidd, and Mike Essl, Watching the Watchmen (London: Titan Books, 2008), 28-29.

2. Peter Aperlo, Watchmen: The Film Companion (London: Titan Books, 2009), 16.

3. David Hughes, “Who Watches the Watchmen? How the Greatest Graphic Novel of All Time Confounded Hollywood,” in The Greatest Sci-Fi Movies Never Made (Chicago: A Cappella Books, 2001), 144.

4. Hughes, 147.

5. Aperlo, 26.

6. Kristin Thompson, The Frodo Franchise: The Lord of the Rings and Modern Hollywood (Berkeley: University of California Press, 2007).

7. Jonathan Gray, Show Sold Separately: Promos, Spoilers, and Other Media Paratexts (New York: NYU Press, 2010).

8. Aperlo, 38.

9. Aperlo, 62.

10. Matt Hills, Fan Cultures (London: Routledge, 2002), 137-138.

11. Peter Y. Paik, From Utopia to Apocalypse: Science Fiction and the Politics of Catastrophe (Minneapolis: University of Minnesota Press, 2010), 50.

12. Max Page, The City’s End: Two Centuries of Fantasies, Fears, and Premonitions of New York’s Destruction (New Haven: Yale University Press, 2008), 7.

13. Page, 144.

14. Susan Faludi, The Terror Dream: Fear and Fantasy in Post-9/11 America (New York: Metropolitan Books, 2007).

15. Stephen Prince, Firestorm: American Film in the Age of Terrorism (New York: Columbia University Press, 2009), 78-79.

16. Seth Masket, “The Twin Towers in ‘Watchmen’” (http://enikrising.blogspot.com/2009/03/twin-towers-in-watchmen.html), accessed January 27, 2011.

17. Prince, Firestorm, 79.

18. Dudley Andrew, “Adaptation,” in Film Adaptation, ed. James Naremore (New Brunswick: Rutgers University Press, 2000), 29.

Challenges

Reminded by the media that today marks the 25th anniversary of the Challenger disaster, I think not of that event directly, but of how even in the moment of its occurrence in 1986, I registered the horror only distantly, as a background image on TV that followed me throughout the rest of that cold January day. It was not simply my first media event — that useful term coined by Daniel Dayan and Elihu Katz to describe those moments, planned and unplanned, when the screens of the country or the world fixate on a singular happening of shared cultural, historical, or political significance — but the first time I recognized the importance of something alongside my failure to connect with it the way I suspected I should.

I was 19 years old at the time, struggling to find my place at Eastern Michigan University, where I was fitfully taking classes in the Theater Department. (The focusing discipline of English, with a “penance minor” of History, was a year or two in my future.) Shocking events on the national stage had penetrated my consciousness before — I remember whispers leaping like sparks among the nested semicircles of our orchestra chairs the day John Hinckley tried to kill Ronald Reagan in 1981, and when John Lennon died a year before, I stayed up late with the TV, waiting for details to emerge. I even remember, vaguely, being shushed by my family the night in 1978 that news of the Jonestown massacre broke. But whatever organs of emotion were developing inside me had not yet matured enough to apprehend the vastness of collective trauma, to sound along the single string of my soul a thrum of sympathetic resonance with some larger chorus of lament. That would not come for years, until after the profound illness of a close friend and the death of my older brother had broken down and rebuilt my heart in more human form.

But other factors were behind the cold inertia of my feelings the day we lost Challenger. I had drifted from a childhood infatuation with the space program, one fueled by family visits to the National Air and Space Museum and long hours spent poring over books written by and about NASA astronauts. Though I found the Mercury flights too early and primitive to hold my interest, the cool capsule design and paired teamwork of the Gemini program (like going into space with your best friend!) fired my imagination, and the Apollo moon landings were like repeated assaults on Mount Olympus, bold conquests by super-dads wrapped in science-fiction armor. Skylab, under whose orbits I aged from 7 to 12, was like a funky rec room in the sky, a padded cylinder where playful scientists in zero-G did somersaults and wobbled water globes for the cameras, which relayed their bearded grins to us.

By contrast with the alternately goofy and glorious NASA missions of the 60s and early 70s, the Space Shuttle program seemed a retreat into something more pedestrian and timid. The orbiter with its aerodynamic surfaces struck me as being too much like an airplane, a standard streamlined creature of the atmosphere instead of a spiky, boxy, bedished emissary to the stars. Those external fuel tanks turned the coolest part of the show, the rockets, into mere vestigial workhorses, to be dropped away like a shameful secret, disavowed by the pristine delta wing as it did its boring ballet turns — never going anywhere, just cautiously circling the earth, expecting applause each time it landed, though the goal seemed to be to make launches and returns as common and unexceptional as elevator rides. The whole concept, in short, was a triumph of the disposable and interchangeable over the unique and dramatic.

So I stopped paying attention to the shuttle missions, until January 28, 1986, when the predictable, in the space of a few seconds, mutated into the unique and dramatic.

That strangely horned explosion, a forking fireball marking the moment at which a precisely calibrated flightpath dissolved into the chaotic trajectories of system failure, was at first glance reminiscent of the impressive explosions Hollywood had rigged for my awed pleasure — the sabotage of the Death Star, the electrical rapture of the opened Ark of the Covenant, the Nostromo’s timed destruction — and it took much repetition and analysis for me to begin to grasp the semiotics of this particular combustion. Challenger, for me and maybe for a lot of people, was the beginning of an education in explosions, and over the decades that followed, I thought back to its incendiary lessons: Waco in 1993, Oklahoma City in 1995. The USS Cole in 2000, Columbia over Texas in 2003. And that master class in the forensics of fiery disaster, September 11, 2001.

I don’t think I’m building to any profound point here; indeed, writing about such moments makes me unhappily aware of how provincial and shallow my thinking about spectacle can be. Scenes of death, wrapped in the double abstractions of physical laws and media presentation, are still hard for me to feel, though — scored into my optics — they are never hard to remember.

Paranormal Activity 2

I was surprised to find myself eager, even impatient, to watch Paranormal Activity 2, the followup to 2007’s no-budget breakthrough in surveillance horror. I wrote of the first movie that it delivered a satisfactory double-action of filmic and commercial engineering, chambering and firing its purchased scares in a way that felt requisitely unexpected and, at the same time, reassuringly predictable. The bonus of seeing it in a theater (accompanied by my mom, and my therapist would like to know why I chose to omit that fact from the post) was a happy reminder that crowds can improve, rather than degrade, the movie experience.

PA2 I took in at home after my wife had gone to bed. I lay on a couch in a living room dark except for the low light of a small lamp: a setting well-suited to a drama that takes place largely in domestic spaces at night. My complete lack of fear or even the faint neck-tickle of eeriness probably just proves the truism that some movies work best with an audience — but let’s not forget that cinema does many kinds of work, and offers many varieties of pleasure. This is perhaps especially true of horror, whose glandular address of our viscera places it among the body genres of porn and grossout comedy (1), and whose narratives of inevitable peril and failed protection offer a plurality of identifications where X marks the intersection of the boy-girl lines of gender and the killer-victim lines of predation (2).

I’m not sure what Carol Clover would make of Paranormal Activity 2 or its predecessor (though see here for a nice discussion), built as they are on the conceit of a gaze so disinterested it has congealed into the pure alienness of technology. Shuffled among the mute witnesses of a bank of home-security cameras, we are not in the heads of Alfred Hitchcock, Stanley Kubrick, or even Gaspar Noé, but instead the sensorium — and sensibility — of HAL. A good part of the uncanniness, and hence the fun, comes from the way this setup eschews the typical constructions of cinematography: conventions of framing and phrasing that long ago (with the rise of classical film style) achieved their near-universal legibility at the cost of their willingness to truly disrupt and disturb. PA2 is grindhouse Dogme, wringing chills from its formal obstructions.

Rather than situating us securely in narrative space through establishing shots and analytic closeups, shot-reverse-shot, and point-of-view editing, PA2 either suspends us outside the action, hovering at the ceiling over kitchens and family rooms rendered vast as landscapes by a wide-angle lens, or throws us into the action in handheld turmoil that even in mundane and friendly moments feels violent. The visuals and their corresponding “spatials” position viewers as ghosts themselves, alternately watching from afar in building frustration and hunger, then taking possession of bodies for brief bouts of hot corporeality. Plotwise, we may remain fuzzy on what the spectral force in question (demon? haunted house? family curse?) finally wants, but on the level of spectatorial empathy, it is easy to grasp why it both hates and desires its victims.

Along with Von Trier, other arty analogs for PA2 might be Chantal Akerman’s Jeanne Dielman or Laura Mulvey and Peter Wollen’s Riddles of the Sphinx, which similarly locate us both inside and outside domestic space to reveal how houses can be “haunted” by gender and power. They share, that is, a clinical interest in the social and ideological compartmentalization of women, though in the Paranormal Activity films the critique remains mostly dormant, waiting to be activated in the readings of brainy academics. (Certainly one could write a paper on PA2’s imbrication of marriage, maternity, and hysterical “madness,” or on the role of technological prophylaxis in protecting the white bourgeois from an Other coded not just as female but as ethnic.)

But for this brainy academic, what’s most interesting about PA2 is the way it weaves itself into the first film. Forming an alternate “flightpath” through the same set of events, the story establishes a tight set of linkages to the story of Micah and Katie, unfolding before, during, and after their own deadly misadventure of spirit photography gone wrong. It is simultaneously prequel, sequel, and expansion to Paranormal Activity, and hence an example — if a tightly circumscribed one — of transmedia storytelling, in which a fictional world, its characters, and events can be visited and revisited in multiple tellings. In comments on my post on 2008’s Cloverfield, Martyn Pedler pointed out that film’s transmedia characteristics, and I suggested at the time that “Rather than continuing the story of Cloverfield, future installments might choose to tell parallel or simultaneous stories, i.e. the experiences of other people in the city during the attack.”

Paranormal Activity 2 does precisely that for its tiny, spooky universe. It may not be the scariest movie I’ve seen lately, but for what it implies about the evolving strategies of genre cinema in an age of new media, it’s one of the more intriguing.

Works Cited

1. Linda Williams, “Film Bodies: Gender, Genre, and Excess,” Film Quarterly 44:4 (Summer 1991), 2-13.

2. Carol J. Clover, Men, Women and Chainsaws: Gender in the Modern Horror Film (Princeton: Princeton University Press, 1993).

Word Salad

Hello, my name is Jared Lee Loughner

The shooting yesterday in Arizona of Representative Gabrielle Giffords, which left the politician critically wounded and six others dead, almost instantaneously conjured into existence two bleak and mysterious new entities, both in their way artifacts of language: the shooter, Jared Lee Loughner, and the debate about the causes and implications of his lethal actions.

The 22-year-old Loughner seems an all-too-typical figure, one whom we might pity had he not so decisively removed himself from the precincts of empathy. An eccentric loner whose strange preoccupations, coupled with a tendency to rant about them in public, led to his suspension and withdrawal from the community college he was attending, he appears to have drifted into his homicidal mission with the sad inevitability of a piece of social flotsam awash on the tides of fringe discourse. He marks the return of a certain repressed specific to U.S. political culture — the lone gunman — whose particulars always differ but whose sad, destructive song remains the same. He is Lee Harvey Oswald, James Earl Ray, Mark David Chapman, Nidal Malik Hasan: a gallery of triply-named maniacs whose profound disconnection from the shared social world toggled overnight into the worst kind of celebrity.

Like all other modes of celebrity, the fame and fascination specific to assassins has accelerated and complexified in recent times with the proliferation of data streams and their accompanying commentaries. We deconstruct and reconstruct the killer’s trajectory: the hours, days, weeks, and months that led up to their rampage, the ideologies that brought them to that final moment of the trigger pull. We sift through blog posts, Twitter feeds, and YouTube channels to assemble the kind of composite portrait that once would have been consigned to the scribbled diary kept under the bed, or the conspiracy wall tucked away in a smelly corner of the basement, its authoritative interpretation doled out slowly by experts rather than crowdsourced in breathless realtime. We ask ourselves what could have been done to avert the personal implosion, dismantle the ticking bomb, or in the phrase to which 9/11 gave new weight, “connect the dots” before something awful happened (a literally unthinkable alternative, since without the awful event, the current conversation would never have started: this tape self-destructed before you heard it).

Amid the fantasy forensics and reverse-engineering of psychopathologic etiology, it’s hard to escape the sense that we’re building a new Jared Lee Loughner out of words, sticking a straw man of signifiers into a person-shaped hole. The fact that Loughner survived his own local apocalypse is probably irrelevant to this story’s emergence: killers are rarely allowed to speak for themselves from the jail cell, having ceded their rights of linguistic self-determination along with every other freedom. The kind of annihilation I’m talking about is the plangent sting of The Parallax View (Alan J. Pakula, 1974), a masterfully cynical political thriller in which “lone gunmen” are mass-produced commodities of a faceless corporation, and the journalist hero — Joe Frady (Warren Beatty) — is himself framed as an unhinged killer after the fact.

Loughner, of course, is as undeniably real as his horrible actions and their tragic impact on the families and friends of the dead and wounded. I don’t mean to turn him into a fiction, just to point out that the picture we are now building of him is, in its way, another kind of cultural narrative. Which makes it rather ironic that among Loughner’s many obsessions, from the fraudulent behavior of his college teachers to the 2012 prophecy, is a fixation on grammar as a tool of mind control: what a critical theorist might rephrase as the construction of subjectivity through language. Lacan argued that we lose ourselves in the Symbolic Order only to find ourselves there as an Object: self-identity at the cost of self-alienation. It’s a paradox in whose twisty folds Jared Lee Loughner evidently lost his soul.

What emerged from the outputs of this particular black box might have been some kind of furious crusader, but I suspect rage actually had little to do with what took place in Tucson yesterday. Scrolling through the inflectionless syllogisms in Loughner’s YouTube videos is like studying HAL’s readouts: one detects only the icy remoteness of pure logic (a logic, it should be added, devoid of sense) — a chain-link ladder of if-thens proceeding remorselessly to their deadly conclusion. I think some virus of language did finally get to Loughner; I think words ate him alive.

***

The question now occupying the public: whose words were they?

I try to keep my politics off this blog, in the sense of signaling party affiliation outright. That said, it will probably surprise no one to learn that I’m just another lefty intellectual, an ivory-tower Democrat. My first reaction to the Arizona shootings is to read them as evidence that the rhetoric of the right, in particular Sarah Palin and the Tea Party, has gone too far.

I’m obviously not alone in that assessment, but if I’m being honest, I must acknowledge that both sides (and forgive me for reducing our country’s ideological landscape to a red-blue binary) are hard at work spinning Loughner’s onslaught in favor of their own agendas. In fact, looking at today’s mediascape, I see a giant hurricane-spiral of words, a storm of accusations and recriminations played out — because I happen to be tracking this event through HTML on a laptop screen rather than in the shouting arenas of CNN and FOX News — in text. The ideas that chased each other through Loughner’s weird maze of a mind have externalized themselves on a national scale like the Id monster in Forbidden Planet.

I know I will sound like an old fogey for saying this, but I’m startled by how quickly our tilt-a-whirl news cycle and its cloud of commentary have moved from backlash to backlash-against-the-backlash. When I want the facts, I go to Google News or (Liberal that I am) the New York Times; when I want to read people hurling the textual equivalent of monkey poop at each other, I read the comments at Politico, where you can still reliably find someone calling our President “Barack Hussein Obama.” While I have nothing bad to say about the stories carried on the site, working through the comments is like taking the temperature of our zeitgeist — rectally.

Commenters labeling themselves as everything from conservative to independent and “undecided” have seized on a tweet from one of Loughner’s former classmates to the effect that he was “left wing, quite liberal.” Hence (in their view) it’s liberal rhetoric that led Loughner to start shooting on Saturday morning. Or as “NotPC” puts it:

Liberals are always threatening anyone with violence, starting with Obama. How ironic, a leftist attempted to assassinate another leftist, I guess Gifford is not leftist enough she must be taken with a bullet from glock. Even the guy who tried to crash his airplane to IRS building in Texas last year was a leftwing, Amy Bishop who started shooting other professors (she’s a big Obama supporter) is a leftist; the guy who held staff of Discovery Channel last year was also a leftist, follower of Al Gore.

What’s the matter with liberals? You’re into violence! This is what you get when you read and listen too much of Markos Moulitsas (DailyKos), Arianna Huffington (huffpo), media matter, Rachel Maddow, Olbermann, Ed Shultz, Chris Matthews, Van Jones, and the LEADER of them all fomenting violence who disagree with LIBERAL MANTRA – BARACK HUSSEIN OBAMA.

Nothing surprises me about LIBERALS, even Charles Manson is a LIBERAL!

LIBERALS ARE WHACKOS! BUNCH OF LEFTWINGNUTS – ONE FRIES SHORT OF A HAPPY MEAL!

Teaching my Conspiracy class for the first time in Fall 2009, I became very familiar with utterances like this, even (I will admit) rather charmed by them. I can defocus my eyes and gauge the shrillness of claims about, say, the faked Moon landings by the number of capital letters and exclamation points. But I find it’s harder to be amused when the angry word-art accretes so quickly around a wound still very much open — lives and loves quite literally in the balance. Is it naive of me that my mind whirls at the promptness of this inversion, the rapidity with which an act of supremely irrational violence has been repurposed into another, lexical form of ammunition?

The next few weeks will surely see many more such claims thrown around, and at the higher levels of the media’s digestive system, the production of much reasoned analysis about the language question itself. I’m sure I’ll engage with the issues raised, and eventually settle on a conclusion not all that much different from the position I already hold. But as I watch politicians work through their own chain of if-thens, framing and reframing the facts of Saturday’s carnage in hopes of advancing their own agendas (left, right, and everything in between), I believe I will have a hard time shaking a sense that we have been caught up in, rather than containing, Loughner’s particular form of madness.