Sharing — or stealing? — Trek

In a neat coincidence, yesterday’s New York Times featured two articles that intersect around the concerns of internet piracy and intellectual property rights on the one hand, and struggles between fan creators and “official” owners of a transmedia franchise on the other. On the Opinions page, Rutgers professor Stuart P. Green’s essay “When Stealing Isn’t Stealing” examines the Justice Department’s case against the file-sharing site Megaupload and the larger definitions of property and theft on which the government’s case is based. Green traces the evolution of a legal philosophy in which goods are understood in singular terms as something you can own or have taken away from you; as he puts it, “for Caveman Bob to ‘steal’ from Caveman Joe meant that Bob had taken something of value from Joe — say, his favorite club — and that Joe, crucially, no longer had it. Everyone recognized, at least intuitively, that theft constituted what can loosely be defined as a zero-sum game: what Bob gained, Joe lost.”

It’s flattering to have my neanderthal namesake mentioned as the earliest of criminals, and not entirely inappropriate, as I myself, a child of the personal-computer revolution, grew up with a much more elastic and (self-)forgiving model of appropriation, one based on the easy and theoretically limitless sharing of data. As Green observes, Caveman Bob’s descendants operate on radically different terrain. “If Cyber Bob illegally downloads Digital Joe’s song from the Internet, it’s crucial to recognize that, in most cases, Joe hasn’t lost anything.” This is because modern media are intangible things, like electricity, so that “What Bob took, Joe, in some sense, still had.”

Green’s point about the intuitive moral frameworks in which we evaluate the fairness of a law (and, by implication, decide whether or not it should apply to us) accurately captures my generation’s feeling, back in the days of vinyl LPs and audiocassettes, that it was no big deal to make a mix tape and share it with friends. For that geeky subset of us who then flocked to the first personal computers — TRS-80s, Apple IIs, Commodore 64s and the like — it was easy to extend that empathic force field to excuse the rampant copying and swapping of five-and-a-quarter-inch floppy disks at local gatherings of the AAPC (Ann Arbor Pirate’s Club). And while many of us undoubtedly grew up into the sort of upstanding citizens who pay for every byte they consume, I remain to this day in thrall to that first exciting rush of infinite availability promised by the computer and explosively realized by the Web. While I’m aware that pirating content does take money out of its creators’ pockets (a point Green is careful to acknowledge), that knowledge, itself watered down by the scalar conceit of micropayments, doesn’t cause me to lose sleep the way that, say, shoplifting or even running a stop sign would. The law is a personal as well as a public thing.

The other story in yesterday’s Times, though, activates the debate over shared versus protected content on an unexpected (and similarly public/personal) front: Star Trek. Thomas Vinciguerra’s Arts story “A ‘Trek’ Script Is Grounded in Cyberspace” describes the injunction brought by CBS/Paramount to stop the production of an episode of Star Trek New Voyages: Phase II, an awkwardly named but loonily inspired fan collective that has, since 2003, produced seven hours of content that extend the 1966-1969 show. Set not just in the universe of the original series but its specific televisual utopos, the New Voyages reproduce the sets, sound effects, music, and costumes of 60s Trek in an ongoing act of mimesis that has less to do with transformative use than with simulation: the Enterprise bridge in particular is indistinguishable from the set designed by Matt Jefferies, in part because it is based on those designs and subsequent detailing by Franz Joseph and other fan blueprinters.

I’ve watched four of the seven New Voyages, and their uncanny charm has grown with each viewing. For newcomers, the biggest distraction is the recasting of Kirk, Spock, McCoy, and other regulars by different performers whose unapologetic roughness as actors is more than outweighed by their enthusiasm and attention to broad details of gesture: it’s like watching very, very good cosplayers. And now that the official franchise has itself been successfully rebooted, the sole remaining indexical connection to production history embodied by Shatner et al. has been sundered. Everybody into the pool, er, transporter room!

I suspect it is the latter point — the sudden opening of a frontier that had seemed so final, encouraging every fan with a camera and an internet connection to partake in their own version of what Roddenberry pitched as a “wagon train to the stars” — that led CBS to put the kibosh on the New Voyages production of Norman Spinrad’s “He Walked Among Us,” a script written in the wake of Spinrad’s great Trek tale “The Doomsday Machine” but never filmed due to internal disputes between Roddenberry and Gene Coon about how best to rewrite it. (The whole story, along with other unrealized Trek scripts, makes for fascinating reading at Memory Alpha.) Although Spinrad was enthusiastic about the New Voyages undertaking and even planned to direct the episode, CBS, according to the Times story, decided to exert its right to hold onto the material, perhaps to publish it or mount it as some sort of online content themselves.

All of which brings us back to the question of Caveman Bob, Caveman Joe, and their cyber/digital counterparts. Corporate policing of fan production is nothing new, although Trek’s owners have always encouraged a more permeable membrane between official and unofficial contributors than does, say, Lucasfilm. But the seriousness of purpose evidenced by the New Voyages, along with the fan base it has itself amassed, has elevated it from the half-light of the fannish imaginary — a playspace simultaneously authorized and ignored by the powers that be, like the kid-distraction zones at a McDonald’s — to something more formidable, if not in its profit potential, then in its ability to deliver a Trek experience more authentic than any new corporate “monetization.” By operationalizing Spinrad’s hitherto forgotten teleplay, New Voyages reminds us of the immense generative possibilities that reside within Trek’s forty-five years of mitochondrial DNA, waiting to be realized by anyone with the requisite resources and passion. And that’s genuinely threatening to a corporation that formerly relied on economies of scale to ensure that only it could produce new Trek at anything like the level of mass appeal.

But in proceeding as if this were the case, Green might suggest, CBS adheres to an obsolete logic of property and theft, one that insists on the uniqueness and unreproducibility of any given instantiation of Trek. They have not yet embraced the idea that, in the boundless ramifications of a healthy transmedia franchise, there is only ever “moreness”; versions do not cancel each other out, but drive new debates about canonicity and comparisons of value, fueling the discursive games that constitute the texture of an engaged and appreciative fandom. The New Voyages take nothing away from official Trek, because subtraction is an impossibility in the viral marketplace of new media. The sooner CBS realizes this, the better.

We Have Never Been Digital: CGI as the New “Clumsy Sublime”

In his essay “Before and After Right Now: Sequels in the Digital Era,” Nicholas Rombes gives an example of the troubling way that CGI has eroded our trust in visual reality. Citing the work of Lola Visual Effects to digitally “youthen” the lead actors in the 2006 film X-Men: The Last Stand, he quotes a line from the effects house’s website: “Our work has far-reaching implications from extending an actor’s career for one more sequel to overall success at the box office. We allow actors and studios to create one more blockbuster sequel (with the actor’s fan base) by making the actor look as good (or better) than they did in their first movie.” Rombes responds: “What is there to say about such a brash and unapologetic thing as this statement? The statement was not written by Aldous Huxley, nor was it a darkly funny dystopian story by George Saunders. This is a real, true, and sincere statement by a company that digitally alters the faces and bodies of the actors we see on the screen, a special effect so seamless, so natural that its very surrealism lies in the fact that it disguises itself as reality.”

Before we adjudicate Rombes’s claim, we might as a thought experiment try to imagine the position from which his assertion can be made – the nested conditionals that make such a response plausible in the first place. If a spectator encounters X-Men: The Last Stand without prior knowledge of any kind, including the likelihood that such a film will employ visual trickery; if he or she is unaware of the overarching careers, actual ages, and established physiognomies of Patrick Stewart and Ian McKellen; and perhaps most importantly if that viewer cannot spot digital airbrushing that even now, a scant six years later, looks like a heavy coat of pancake makeup and hair dye, then perhaps we can accept Rombes’s accusation of hubris on the part of the visual-effects house. On the other hand, how do we explain the larger incongruity in which Rombes premises his critique of the “seamless … natural” and thus presumably unnoticeable manipulation on a widely available text, part of Lola’s self-marketing, that highlights its own accomplishment? In short, how can a digital effect be simultaneously a surreptitious lie in one register and a trumpeted achievement in another? Is this characterization not itself an example of misdirection, the impossible masquerading as the possible, a kind of rhetorical special effect?

The truth is that Rombes’s statement in all its dudgeon, from an otherwise astute observer of cinema in the age of digital technologies, suggests something of the problem faced by film and media studies in relation to contemporary special effects. We might describe it as a problem of blind spots, of failing to see what is right before our eyes. For it is both an irony and a pressing concern for theoretical analysis that special effects through their very visibility – a visibility achieved both in their immediate appearance, where they summon the powers of cleverly-wrought illusion to create convincing displays of fantasy, and in their public afterlife, where they replicate and spread through the circulatory flows of paratexts and replay culture – lull the critical gaze into selective inattention, foregrounding one set of questions while encouraging others to slip from view.

By hailing CGI and the digital mode of production it emblematizes as a decisive break with the practices that preceded it, Rombes acquiesces to the terms on which special effects have always – even in predigital times – offered themselves up for consumption. From the starting point of what Sean Cubitt calls “the rhetoric of the unprecedented,” such scholarship can only unfold an analysis whose polarities, whether celebratory or condemnatory, mark but one axis of debate among the many opportunities special effects provide to reassess the changing nature of textuality, storytelling, authorship, genre, and performance in the contemporary mediascape. A far-ranging conversation, in other words, is shut down in favor of a single set of concerns, organized with suspicious tidiness around a (rather abstract) distinction between truth and falsehood. This distinction structures debates about special effects’ “spectacular” versus “invisible” qualities; their “success” or “failure” as illusions; their “indexicality” or lack of it; and their “naturalness” versus their “artificiality.” I mean to suggest not that such issues are irrelevant to the theorization of special effects, but that their ossification into a default academic discourse has created over time the impression that special effects are only about such matters as “seamless … disguise.”

Perniciously, by responding to CGI in this way, special-effects scholarship participates in the ongoing production of a larger episteme, “the digital,” along with its constitutive other, “the analog.” Although it is certainly true that the underlying technologies of special-effects design and manufacture, like those of the larger film, television, and video game industries in which such practices are embedded, have been comprehensively augmented and in many instances replaced outright by digital tools, the precise path and timing by which this occurred are nowhere near as clean or complete as the binary “analog/digital” makes them sound. In point of fact, CG effects, so often treated as proof-in-the-pudding of cinema’s digital makeover, not only borrowed their form from the practices and priorities of their analog ancestry, but preserve that past in a continued dependence on analog techniques that ride within their digital shell like chromosomal genetic structures. In a narrowly localized sense, digital effects may be the final product, but they emerge from, and feed in turn, complex mixtures of past and present technologies.

Our neglect of this hybridity and the counternarrative to digital succession it provides is fueled more than anything else by a refusal to engage with historical change – indeed, to engage with the very fact of history as a record of incremental and uneven development. Consider the way in which Rombes’s charge against CGI rehearses almost exactly the terms of Stephen Prince’s influential essay “True Lies: Perceptual Realism, Digital Images, and Film Theory.” “What is new and revolutionary about digital imaging,” Prince wrote, “is that it increases to an extraordinary degree a filmmaker’s control over the informational cues that establish perceptual realism. Unreal images have never before seemed so real.” (34) Prince’s claim about the “extraordinary” nature of digital effects was written in 1996 and refers to movies such as The Abyss (1989), Jurassic Park (1993), and Forrest Gump (1994), all of which featured CG effects alleged to be photorealistic to the point of undetectability. Rombes, writing in 2010, bases his claim about digital effects’ seduction of reality on the tricks in a film released in 2006. “What happens,” Rombes asks, “when we create a realism that outstrips the detail of reality itself, when we achieve and then go beyond a one-to-one correspondence with the real world?” (201) The answer, of course, is that one more special effect has been created from the technological capabilities and stylistic sensibilities of its time: capabilities and sensibilities that may appear transparent in the moment, but whose manufacture quickly becomes apparent as the imaging norm evolves. If digital effects are as subject to aging as any other sort of special effects, then concerns about the threat they pose to reality become empty alarms, destined to be viewed with amusement, if not ridicule, by future generations of film theorists.

The key to dissolving the impasse at which theories of digital visual effects find themselves lies in restoring to all special effects a temporality and interconnectedness to other layers of film and media culture. The first step lies in acknowledging that special effects are always undergoing change; the state of the art is a moving target. Laura Mulvey’s term for this process is the “clumsy sublime.” She refers to the use of process shots in classical Hollywood to rear-project footage behind actors – effects intended to pass unnoticed in their time, but which now leap out at us precisely in their clumsiness, their detectability.

The lesson we should take from this is not that some more lasting “breakthrough” in special effects waits around the corner, but that the very concept of the breakthrough is structured into state-of-the-art special effects as a lure for the imagination of spectators and theorists alike. The danger is not of realer-than-real digital effects, but our overconfidence in critically assessing objects that are predicated on misdirection and the promise of conquered frontiers – and our mistaken assumption that we as scholars see these processes more objectively or accurately than prior generations. In this sense, special-effects scholarship performs the very susceptibility of which it accuses contemporary audiences, accepting as fact the paradigm-shifting superiority of digital effects, rather than seeing that impression of superiority as itself a byproduct of special-effects discourse.

In this way, current scholarship imports a version of spectatorship from classical apparatus theory of the 1970s, along with a 70s-era conception of the extent and limit of the standard feature film text. Both are holdovers of an earlier period of theorizing the film text and its impact on the viewer, and are jarringly out of date when applied to contemporary media, whose cycles of replay and convergence break texts apart and combine them in new ways, as well as to the audience, which navigates these swarming texts according to its own interests, its own “philias.” The use of obsolete models to describe special effects is all the more ironic for the appeals such models make to a transcendent “new.” The notion that the digital, as emblematized by CGI, represents a qualitative redrafting of cinema’s indexical contract with audiences, holds up only under the most restrictive possible picture of spectatorship: it imagines special effects as taking place in a singular, timeless instant of encounter with a viewer who has only two options, accepting the special effect as unmediated event or rejecting it as artifice. That special-effects theory from André Bazin and Christian Metz onward has allowed for bifurcated consciousness on the part of the viewer is, in the era of CGI, set aside for accounts of special effects that force them into a real/unreal binary. The digital effect and its implied spectator are trapped in a synchronic isolation from which it is impossible to imagine any other way to conceptualize the work of special effects outside the moment of their projection. Even accounts of special effects’ semiosis, like Dan North’s, that foreground their composite nature, or their role in the genres of science fiction (Vivian Sobchack), the action blockbuster (Geoff King), or Aristotelian narrative (Shilo McClean), only scratch the surface of the complex objects special effects actually are.

What really changes in the clumsy sublime is not the special effect but our perception of it, an interpretation produced not through Stephen Prince’s perceptual cues, Scott Bukatman’s kinesthetic immersion in an artificial sublime, or Tom Gunning’s appeal of the attraction – though all three may indeed be factors in the first moment of seeing the effect – but by a more complex and longitudinal process involving conscious and unconscious comparisons to other, similar effects; repeated exposure to and scrutiny of special effects; behind-the-scenes breakdowns of how the effect was produced; and commentaries and reactions from fans. Within this matrix of evaluation, the visibility or invisibility, that is to say the “quality,” of special effects, is not a fixed attribute, but a blackboxed output of the viewer, the viewing situation, and the special effect’s enunciatory context in a langue of filmic manipulation.

According to the standard narrative, some special effects hide, while others are meant to be seen. Wire removal and other forms of “retouching” modify in subtle ways an image that is otherwise meant to pass as untampered recording of profilmic reality, events that actually occurred as they seem to onscreen. “Invisible” effects perform a double erasure, modifying images while keeping that modifying activity out of consciousness, like someone erasing their own footsteps with a broom as they walk through snow. So-called “visible” special effects, by contrast, are intended to be noticed as the production of exceptional technique, capitalizing on their own impossibility and our tacit knowledge that events on screen never took place in the way they appear to. The settings of future and fantasy worlds, objects, vehicles, and performers and their actions are common examples of visible special effects.

This much we have long agreed on; the distinction goes back at least as far as Metz, who in “Trucage and the Film” proposed a taxonomy of special effects broad enough to include wipes, fades, and other transitions as acts of optical trickery not ordinarily considered as such. Several things complicate the visible/invisible distinction, however. Special effects work is explored in publications and in home-video extras, dissected by fans, and employed in marketing appeals. These paratextual forces, which extend beyond the frame and the moment of viewing, tend inexorably to tip all effects work eventually into the category of “visible.” But the ongoing generation of a clumsy sublime reveals a more pervasive process at work: the passage of time, which steadily works to open a gap between a special effect’s intended and actual impact. Dating is key to dislodging reductive accounts of special effects’ operations. The clumsy sublime is a succinctly profound insight into the way that film trickery can shift over time to become visible in itself as a class of techniques to be evaluated and admired, opening up discussions about special effects beyond the binary of convincing/unconvincing that has hamstrung so many conversations about them.

If today’s digital special effects can age and become obsolete – and there is no reason to think they cannot – then this undermines the idea that there is some objective measure of their quality; “better” and “worse” become purely relational terms. It also raises the prospect that the digital itself is more an idea than an actual practice: a perception we hold – or a fantasy we share – about the capabilities of cinema and related entertainments. The old distinction that held during the analog era, between practical and optical effects, constituted a kind of digital avant la lettre; practical effects, performed live before the camera, were considered “real,” while optical effects, created in post-production, were “virtual.” The coming of CGI has remapped those categories, making binaries into bedfellows by collapsing practical and optical into one primitive catchall, the “analog,” defined against its contemporary other, the “digital.” Amid such lexical slippages and epistemic revisions, current scholarship is insufficiently reflexive about apprehending the special effect. We have been too quick to get caught up in and restate the terms – Philip Rosen calls it “the rhetoric of the forecast” – by which special effects discursively promote themselves. In studying illusion, we risk contributing to another, larger set of illusions about cinematic essence.

What is revealed, then, by stepping out of our blind spot to survey special effects across the full range of their operations and lifespans? First, we see that special effects are profoundly composite in nature, marrying together elements from different times and spaces. But the full implications of this have not been examined. Individual frames are indeed composites of many separate elements, but viewed diachronically, special effects are also composited into the flow of the film – live-action intercut with special effects shots as well as special effects embedded within the frame. This dilutes our ability to quarantine special effects to particular moments; we can speak of “special-effects films” or “special-effects sequences,” but what percentage of the film or sequence consists of special effects, and in what combination? Consider how such concerns shape our labeling of a given movie as a “digital effects” film. Terminator 2: Judgment Day (1991) and The Matrix (1999) each contained only a few minutes of shots in which CG elements played a part, while the rest of their special effects were produced by old-school techniques such as animatronics and prosthetics. Yet we do not call these movies “animatronics films” or “prosthetics films.” The sliding of the signified of the film under the signifier of the digital suggests that, when it comes to special effects, we follow a technicist variation of the “one-drop rule,” where the slightest collusion of computers is an excuse to treat the whole film as a digital artifact.

What, then, is the actual “other” to indexicality posed by special effects, digital and analog alike? It is insufficient simply to label it the “nonindexical”; in slapping this equivalent of “here there be dragons” on the terra incognita at the edge of our map’s knowability, we have not answered the question but avoided it. The truth is that all special effects, even digital ones, are indexical to something; they can all, in a certain sense, be “sourced” to the real world and real historical moments. If nothing else, they are records of moments in the evolution of imaging, and because this evolution is driven not only by technology but by style, it is always changing without destination. (As Roland Barthes observes, the fashion system has no end.) Digital special effects record the expressions of historically specific configurations of software and hardware just as, in the past, analog special effects recorded physical arrangements of miniatures and paintings on glass. Nowadays, with all analog effects retroactively rendered “real” by the digital, even processes such as optical printing and traveling mattes have come to bear their own indexical authenticity, just as film grain and lens flares record specifics of optics and celluloid stock. But the indexical stamp of special effects goes deeper than their manufacture. Visible within them are design histories and influences, congealed into the object of the special effect and frozen there, but available for unpacking, comparison, fetishization, and emulation by audiences increasingly organized around the collective intelligence of fandom. Furthermore, because of the unique nature of special effects (that is, as “special” processes celebrated in themselves), materials can frequently be found which document the effect’s manufacture, and in many cases – preproduction art, maquettes, diagrams – themselves represent evolutionary stages of the special effect.

Every movie, by virtue of residing inside a rationalized industrial system, sits atop a monument of planning and paperwork. In films that are heavy on design and special effects, this paperwork takes on archival significance, becoming technical archeologies of manufacture. Our understanding of what a special effect is must begin by including these stages as part of its history – the creative and technological paths from which it emerged. We recognize that what we see on screen is only a surface trace of much larger underlying processes: the very phenomenon of making-of supposes there is always more to the (industrial) story.

Following this logic, we see that special effects, even digital ones, do not consist of merely the finished, final output on film, but a messy archive of materials: the separate elements used to film them and the design history recorded in documents such as concept art and animatics. Special effects leave paratextual trails like comets. It is only because of these trails that behind-the-scenes materials exist at all; they are what we look at when we go behind the scenes. Furthermore, we see that special effects, once “finished,” themselves become links in chains of textual and paratextual influence. It is not just that shots and scenes provide inspiration for can-you-top-this performances of newer effects, but that, in the amateur filmmaking environments of YouTube and “basementwood,” effects are copied, emulated, downgraded, upgraded, spun, and parodied – each action carrying the effect to a new location while rendering it, through replication, more pervasive in the mediascape. Special effects, like genre, cannot be copyrighted; they represent a domain of audiovisual replication that follows its own rules, both fast-moving and possessed of the film nerd/connoisseur’s long-tail memory. Special effects originate iconographies in which auras of authorship, collections of technical fact, artistic influences, teleologies of progress/obsolescence, franchise branding, and hyperdiegetic content coexist with the ostensible narrative in which the special effect is immediately framed. These additional histories blossom outward from our most celebrated and remembered special effects; in fact, it is the celebration and remembering that keeps the histories alive and developing.

All of this contributes to what Barbara Klinger has called the “textual diachronics” of a film’s afterlife: an afterlife which, given its proportional edge over the brief run of film exhibition, can more frankly be said to constitute its life. Special effects thus mark not the erasure of indexicality but a gold mine of knowledge for those who would study media evolution. Special effects carry information and behave in ways that go well beyond their enframement within individual stories, film properties, or even franchises. Special effects are remarkably complex objects in themselves: their engineering, their semiotic freight, their cultural appropriation, their media “travel,” their hyperdiegetic contribution.

What seems odd is that while one branch of media studies is shifting inexorably toward models of complexity and diffusion, travel and convergence, multiplicity and contradiction, the study of special effects still grapples with its objects as ingredients of an older conception of film: the two-hour self-contained text. What additional and unsuspected functions lurk in the “excess” so commonly attributed to prolonged displays of special effects? Within the domains of franchise, transmedia storytelling, and intertextuality, the fragmentation of texts and their subsequent recontainment within large-scale franchise operations makes it all the more imperative to find patterns of cluster and travel in the new mediascape, along with newly precise understandings of the individuals/audiences who drive the flow and give it meaning.

To say that CG effects have become coextensive with filmmaking is not to dismiss contemporary film as digital simulacrum but to embrace both “digital effects” and “film” as intricate, multilayered, describable, theorizable processes. To insist on the big-screened, passively immersed experience of special effects as their defining mode of reception is to ignore all the ways in which small screens, replays, and paratextual encounters open out this aspect of film culture, both as diegetic and technological engagement. To insist that special effects are mere denizens of the finished film frame is to ignore all the other phases in which they exist. And to associate them only with the optical regime of the cinematic apparatus (expressed through the hypnotic undecidable of real/false, analog/digital) is to ignore the ways in which they spread to and among other media.

The argument I have outlined in this essay suggests a more comprehensive way of conceptualizing special effects in the digital era, seeing them not just as enhancements of a mystificatory apparatus but as active agents in a busy, brightly-lit, fully conscious mediascape. In this positivist approach, digital effects contribute to the stabilizing and growth of massive fantastic-media franchises and the generation of new texts (indeed, of the concept of “new” itself). In all of these respects, digital special effects go beyond the moment of the screen that has been their primary focus of study, to become something more than meets the eye.

Radii, resets, regressions

We were out walking Zachary in his stroller this afternoon when a woman we ran into — herself a parent by adoption — gave us some good advice: “Don’t follow the advice in parenting magazines.” My wife and I laughed in agreement, and I added, “Or the advice in parenting books.” It’s not the first time someone has gifted us with this particular piece of meta-wisdom, this one-hand-clapping of zen no-advice. Back in July, before we’d even left the hospital, a wise nurse (and they’re all wise, I believe fervently) assured us we’d be hit left and right by people eager to share their parenting wisdom. “Don’t listen to them,” the nurse said. “Use your common sense.”

All this is by way of saying that I have some advice of my own to share, based on our experiences raising Z so far, but you are welcome to ignore it. I have no idea if these are universal principles; they’re just what I’ve doped out so far as a father to an eight-month-old boy whose development seems day by day to increase on an exponential scale, an accelerating trajectory whose skybound momentum is by turns exhilarating and terrifying.


A day before his six-month birthday, Z began crawling. It was a makeshift and ungainly thing, this crawl, a neuromuscular kludge in which, belly-down, he basically pulled himself around using only his arms. We called it the army crawl. Since then he’s graduated to a more classical four-point configuration, hands and knees in a busy scramble (and a neat trick where he tucks a leg under and tripods into a buddha sit).

Regardless of his mode of locomotion, the instant effect of the crawl was to convert Z into a free agent, newly untethered and agential, and in the same moment remap our house into a space of vectors and targets, reachable spots and desirable destinations. In short, our little boy now exists at the center of a constantly shifting circle of possibility, forcing us to adopt his perspective a la the cybernetic visual overlay of the Terminator: we look to see where he might go, where he is going right now, and move to intercept him. He exists in a radius of opportunity, and we exist in an overlapping Venn diagram of protective, even prophylactic parental anxiety, meeting him in a quantum space of superpositions, half-realized outcomes, probabilistic perils. A similar principle compels us, when sitting him at a table, to instantly sweep all graspable objects out of reach. Countering our countermeasures, he swings his stuffed green bean in wide arcs of influence, extending his zone of collision. The radii keep shifting, his hopeful, ours horrified, and in this way our home becomes a battlefield: not a real one, but the simulated space of a tabletop wargame.


As a result, the reset was invented. Resets involve picking Z up and putting him down somewhere else — nothing more, nothing less — a gentle interruptive teleportation that (so far) he fortunately seems to experience as a kind of game rather than as what it really is, a thwarting of his will. Using resets, I have successfully washed a sinkful of dishes while Z two-points and four-points across the kitchen floor. He’s headed for the cat-food dishes: reset. He’s pulling himself up on the stairs: reset. He’s about to topple the Cuisinart mixing bowl: reset. The reset is my strategic response to his tactics of the radius, and so far it’s working. As Z’s speed and range increase, all bets are off, a further way in which life as a parent has shifted us inexorably into the realm of the projective and hypothetical. (So much of our talk about Z is about what’s going to happen next, rather than what’s happening right now; it would be nice to live in the moment, but our responsibilities won’t let us.)


When Daylight Saving Time kicked in a couple of weeks ago and we set our clocks ahead by an hour, all hell broke loose in Z’s bedtime schedule; what had been a predictable ritual taking us from bath to crib became a contest of wills, the baby monitor bringing us his unhappy cries as we collapsed onto the couch to eat dinner and watch TV, taking us back upstairs to the nursery to pat his butt until he dropped off, only to wake again minutes later. Complacently, we had believed ourselves to be doing pretty well with the sleep thing, and this new wrinkle in Z’s behavior — which we experienced as a kind of un-behavior, a randomizing of his actions that was scary precisely because we lacked a pattern to deal with it — made both K and me worry that, in fact, we didn’t know what we were doing after all. Imagine our relief when we learned about the eight-month sleep regression (which can also kick in at nine months and ten months): as his brain blossoms and skill sets swell, he’s simply got so much going on inside him that he can’t relax in the old way. Of course I am aware that this is a positive spin on a worrisome situation, hence seductive in its reasoning, but I’ll take it — because the truth is, there’s nothing more exciting than witnessing the small explosions of Z’s mind and body churning toward complexity like an internal-combustion engine, and as someone whose own childhood was marked by an overheated imagination and corresponding difficulty getting a good night’s sleep, I think I know where Z is coming from. Or at least where I want him to be coming from.

And that’s the other kind of regression that’s happening here, taking place across all the phenomena I’m writing about: radii and resets are themselves forms of blissful regression for my wife and me, as we try to intuit the world inside our youngster and respond to it compassionately, intelligently, cautiously, caringly. Raising a child, I’m finding, is also an act of re-engaging with the child in oneself, imagining yourself into his skin and senses, building a foundation of empathy with an emergent network of nerves and impulses that builds itself, second by second, minute by minute, day by day, week by week, year by year, into a person.

The Walking Dead

How does the old joke go? “What a terrible restaurant — the food sucks, and such small portions!” That seems to be the way a lot of people feel about AMC’s The Walking Dead: it’s an endless source of disappointment as well as the best damn zombie show on television.

Not that there’s much competition. Contemporary TV horror is a small playing field, nothing like the heyday of the 1970s, when Night Gallery, Kolchak, Ghost Story, and telefilms like Don’t Be Afraid of the Dark fed home audiences a plentiful stream of dark and disturbing content, “channeling” a boom in horror cinema that began with demonic-possession blockbuster The Exorcist and morphed late in the decade, via Halloween and Friday the 13th, into the slasher genre. The only real competition for TWD is American Horror Story, a series whose unpleasantness is so expertly wrought that I couldn’t make it past the third episode. Apart from this and an endless supply of genre-debasing quasi-reality shows on SyFy a la Paranormal Witness, there’s simply not a lot to choose from, and for this reason alone, The Walking Dead is far, far better than it needs to be.

But it’s still a frustrating show: like its zombies, slow-moving and unsure of its goals. (The guys at Penny Arcade, it should be pointed out, hold the opposite interpretation.) Following a phenomenal pilot episode that ended on one of the best cliffhangers I’ve seen since the closing shot of Best of Both Worlds, Part 1, the first season burned through four tense episodes, only to close with an implausible, shoehorned finale set in a CDC control center. Season two, at twice the length, has moved at half the speed, and while I enjoyed the thoughtful pace of life at Herschel’s farm, I grew impatient — like many — with plots that seemed to circle compulsively around the same issues week after week, played out in arguments that reduced a formidable cast of characters (and likeable actors) into tiresomely broken records. (The death of Dale [Jeffrey DeMunn] in the antepenultimate episode came as a relief, signaling that the series was fed up with its own moral center.)

Too, there is simply a feeling that more should be happening on a show about the zombie apocalypse; events should play out on a larger scale, balancing the conflicts among characters with action sequences on the level of the firebombing of Atlanta that opens “Chupacabra.” Part of the problem, I suspect, is that the ZA has been visualized so thoroughly in the decades since George A. Romero’s Night of the Living Dead; books like Max Brooks’s Zombie Survival Guide and World War Z, not to mention the many sequels, remakes, and ripoffs of Romero’s 1968 breakthrough, have fleshed out the undead plague on a planetary scale. The blessing of this most fecund of horror genres (second only, perhaps, to vampires) is also its curse: too much has been said, too many bottoms of barrels scraped, too many expectations raised. When the Centers for Disease Control put out preparedness warnings, it’s a safe bet the ante has been upped.

Of course, the most proximate source of raised expectations is the comic book and graphic novel series that originated The Walking Dead; Robert Kirkman, Tony Moore, and Charlie Adlard captured lightning in a bottle with their brisk yet methodical storytelling, whose black-and-white panels powerfully recall Romero’s foundational film, and whose pacing — in monthly bites of thirty pages — lends itself to a measured unfolding that has so far eluded the TV version. I’m less interested in discrepancies between the comic and the show than in the formal (indeed, ontological) problems of adaptation they illustrate: as with Zack Snyder’s Watchmen movie, some fundamental, insurmountable obstruction seems to exist between the two forms of visual storytelling that otherwise seem so suited to mutual transcoding.

On a surface level, what works in the comic — the mise-en-scene of an emptied world, a uniquely American literalization of existential crisis through the metaphor of reanimated, cannibalistic corpses — works beautifully on screen. And person by person, the show brings the characters of the page to life (an artful act of reanimation itself, I suppose). But what it hasn’t done, and maybe never can do, is recreate the comic’s particular style of punctuation, doling out panels that closely attend to nuances of expression and shifts in lighting, then interleaving those orderly moments of psychological observation with big, raw shocks of splash pages that bring home the sickening spectacle of existence as eventual prey.

I’ll tune in tonight for the finale, and without question I will be there to devour season three. Furthermore, I’ll defend The Walking Dead — in both its incarnations — as some of the best horror that’s currently out there. But I’ll be watching the show out of a certain duty to the genre, whereas the comic, which I’m saving up to read in blocks of 10 and 12 issues at a go, I’ll savor as such stories are meant to be savored: late at night, alone in the quiet house, by a lamp whose glow might as well be the last light left in a world gone dark.

Of Katniss E and Jennifer L

I’m about 30% of the way through Catching Fire, the second book in the Hunger Games trilogy, and something that jumped out at me in the first volume is even more apparent in the glare of publicity around the film adaptation, starring Jennifer Lawrence, that comes out March 23: the uncanny precision of the saga’s send-up of media culture and celebrity.

What stands out on first encounter with the story of Katniss Everdeen are, of course, other things. There’s the breathless, adrenalized competition for survival represented by the eponymous games themselves — a mashup of pop-culture nightmares familiar from other sources, primarily Battle Royale and Stephen King’s early novels (written as Richard Bachman) The Long Walk and The Running Man. Even earlier pre-texts include William Golding’s Lord of the Flies and Nigel Kneale’s BBC one-off Year of the Sex Olympics (1968); but it took The Hunger Games to reconfigure the basic scenario of people-preying-on-other-people-for-a-mass-audience around the subjectivity of a young female protagonist: final girl as must-see TV.

My own attention is captured more by the trilogy’s portrait of its totalitarian state, the nation of Panem, which arises after the U.S. has been hobbled by a vaguely defined catastrophe. As dystopian futures go, Panem’s mechanisms of tyranny merge the historical forms of domination mapped by Michel Foucault in Discipline and Punish: there are thugs with guns enforcing martial law, but there are also elaborate, interlocked systems of surveillance and broadcast media in which Panem’s subjects live under a constant scrutiny whose public facets are the garish electronic proscenia of show biz.

Hardly surprising, given author Suzanne Collins’s explanation of the story’s origins; like Raymond Williams in the early 1970s, Collins had her brainstorm while randomly channel-surfing. She noticed a disturbing resonance between reality TV and coverage of the invasion of Iraq, influences which lent her resulting work the dual immediacies of contemporary political conflict and an entertainment culture of last-person-standing competitions.

It is the latter portions of the trilogy that fascinate me the most, as Katniss is primped, costumed, and styled into a media star and emblem of Panem’s coercive patriotism. The funniest and most biting scenes involve the team of make-up artists and hairstylists who have been assigned the task of making her over; themselves a tattooed and ornamented bunch with rainbow-hued hair, the entourage gives Collins — via Katniss — a chance to comment mordantly on the fixations of fame, often figured through torturous transformations of Katniss’s face and body, making literal John Updike’s characterization of celebrity as “a mask that eats into the face.”

It’s hard not to think of Katniss’s split between public persona and private space — a space that, in the Hunger Games, is implicitly subversive, even treasonous — when looking at this week’s coverage of the movie’s rollout. “Jennifer Lawrence steals the show at ‘The Hunger Games’ premiere,” writes Access Hollywood, in gushing tones that could have come straight from the clown-crayoned mouth of Effie Trinket. “Jennifer Lawrence stuns the crowd in a golden Prabal Gurung gown at ‘The Hunger Games’ premiere where she chats with Access’ Shaun Robinson about how her life has changed for better and worse since taking on the role of Katniss.”

Jason Mittell wrote recently about “inferred interiority,” that intersubjective artifact of serial storytelling in which the limitations of visual media to present a character’s inner life are compensated for by the viewer’s store of knowledge accumulated through exposure to and study of previous episodes. Reading this effect transmedially and paratextually — not, that is, along the solitary throughline of a single serialized fiction, but along the perpendicular axes of an actor’s larger intertextual existence, along with that of the characters they play — it’s hard not to infer beneath Lawrence’s smiling face the subtle signs of Katniss’s resistance to her own commodification through beautification.

The critical comparisons that unfold from this odd collision of realities range from the similarities between Panem and current political culture (not exactly a huge leap, given the frightening religiosity and hard-line social conservatism of the Republican presidential candidates) to the relentless spectacularization of young women’s bodies in both fictional and actual frameworks — the disciplinary operations of patriarchy marked in the one and unmarked in the other. The artistic merits of the Hunger Games franchise aside (and for the record, I’m enjoying the books and looking forward to the film), it has succeeded, like all good dystopian SF, in collapsing a certain distance between the reassuring rituals of our daily life and the troubling trends that lurk beneath its painted-on smiles.

Prometheus’s fan dance

This summer will see the release of Ridley Scott’s Prometheus, a project weighted by considerable expectations given its connection to the thirty-year-old Alien franchise. The particulars of that connection have been kept vague by producers — witness the tortuous finessing of the film’s Wikipedia page:

Conceived as a prequel to Scott’s 1979 science fiction horror film Alien, rewrites of Spaihts’ script by [Damon] Lindelof developed a separate story that precedes the events of Alien, but which is not directly connected to the films in the Alien franchise. According to Scott, though the film shares “strands of Alien’s DNA, so to speak,” and takes place in the same universe, Prometheus will explore its own mythology and ideas.

This kind of strategic ambiguity is a hallmark of the viral marketplace, which replaces the saturation bombing of traditional advertising with the planting of clues and fomenting of mysteries. It’s a fan dance in two senses, scattering meaningful fragments before an audience whose passionate interest and desire to interact are a given. Its logic is that of nonlinear equations and unpredictably large outcomes from small causes, harnessing the butterfly effect to build buzz.

Any lingering doubt about the franchise pedigree of Prometheus, however, should be put to rest by this piece of viral marketing, a simulated TED talk from the year 2023.

The link here is, of course, the identity of the speaker: Peter Weyland is one of the founders of Weyland-Yutani, the evil corporation behind most of the important events in the Alien-Predator universe. “The Company,” as it’s referred to in the 1979 film that launched the franchise, is in part a developer and supplier of armaments, and across the films, comics, and novels of the series, the Company pursues the bioweapon represented by the toothy xenomorph with an implacable willingness to sacrifice human lives: capitalism reconfigured as carnivorous, all-consuming force.

The real-world origins of Weyland-Yutani are quite specific: the name and logo were invented by illustrator Ron Cobb as part of the preproduction and concept art for Alien. Cobb designed many of the symbols and insignia that decorate the uniforms and props of the Nostromo, small but distinctive details that lend the movie’s lived-in future a unity of invented brands. One icon in particular, a set of wings modeled on an Egyptian “sun disk,” was associated with Weyland-Yutani, a name that Cobb threw together as a combination of in-joke and future history:

Science fiction films offer golden opportunities to throw in little scraps of information that suggest enormous changes in the world. There’s a certain potency in those kinds of remarks. Weylan Yutani for instance is almost a joke, but not quite. I wanted to imply that poor old England is back on its feet and has united with the Japanese, who have taken over the building of spaceships the same way they have now with cars and supertankers. In coming up with a strange company name I thought of British Leyland and Toyota, but we couldn’t use “Leyland-Toyota” in the film. Changing one letter gave me “Weylan,” and “Yutani” was a Japanese neighbor of mine.

A version of this logo and the current version of the company name (which adds a “d” to “Weylan”) end the simulated TED talk, demonstrating another chaos dynamic that shapes the fortunes of fantastic-media franchises: minute details of production design can blossom, with the passage of the years, into giant nodes of continuity. These nodes unify not just the separate installments of a series of films, but their transmedia and paratextual extensions: the fantasy TED talk “belongs” to the Alien universe thanks to its shared use of Cobb’s design assets. Instances like this make a convincing case that 1970s production design in science fiction film laid the groundwork for the extensive transmedia fantasy worlds of today.

As for Peter Weyland’s talk, which was directed by Ridley Scott’s son and scripted by Damon Lindelof (co-creator of another complex serial narrative, LOST), the bridging of science fiction and science fact here manifests as a collusion between brands, one fictional and one “real” — but how real is TED, anyway? I mean this not as a slam, but as acknowledgment that TED often operates in what Phil Rosen has termed “the rhetoric of the forecast,” speculating about futures that lie just around the corner. In this way, perhaps the 2023 TED talk is an example of what Jean Baudrillard called a “deterrence machine,” using its own explicit fictiveness to reinforce the sense of reality around TED, much like Disneyland in relation to Los Angeles:

Disneyland is there to conceal the fact that it is the “real” country, all of “real” America, which is Disneyland (just as prisons are there to conceal the fact that it is the social in its entirety, in its banal omnipresence, which is carceral). Disneyland is presented as imaginary in order to make us believe that the rest is real, when in fact all of Los Angeles and the America surrounding it are no longer real, but of the order of the hyperreal and of simulation. It is no longer a question of a false representation of reality (ideology), but of concealing the fact that the real is no longer real, and thus of saving the reality principle.

One wonders what Baudrillard would have made of the contemporary transmediascape, with its vast and dispersed fictional worlds superintending a swarm of texts and products, the nostalgic archives of its past, the hyped adumbrations of its present. Certainly our entertainment industries are becoming ever more sophisticated in the rigor and reach of their fantasy construction: a fan dance with the future, and a process in which observant audiences eagerly assist.

William Cameron Menzies and the Clumsy Sublime

This is a paper I wrote for the SCMS 2011 Conference, which took place last March in New Orleans. A family emergency prevented me from attending, so my colleague and good friend Chris Dumas — who organized our panel on Kings Row (Sam Wood, 1942) — kindly gave the presentation in my absence. The essay’s full title, too long for this blog, is “ ‘Each of Us Live in Multiple Worlds’: William Cameron Menzies and In/Visible Production Design Between Classical and Digital Hollywood.”


1: A Not-So-Clumsy Sublime

As a student of special effects, I found the first time I saw Kings Row that my attention was drawn inevitably to those points in the movie where artifice tips its hand: animated lightning bolts superimposed against a stormy sky; a miniature train passing between the camera and what appears to be rear-projected footage; and most of all the backdrops that pervade the film – painted cycloramas of rolling, pastoral hills, the roof lines of houses and mills, vast skies piled with billowing clouds.

Such moments, which are vital not just to establishing the town of Kings Row as narrative space but to mapping the idyllic and nightmarish polarities of Kings Row as cinematic experience, form an inextricable part of the film’s texture. In that they mark interventions by studio trickery, they also document the operations of classical Hollywood during a key period in the development of its illusionistic powers, when emerging articulations among shooting script, art direction, and visualization technologies – choreographed by a new managerial category, the production designer – set the industry on a path that would lead, some seventy years later, to the immersive digital worlds of contemporary blockbuster franchises.

Writing about the use of rear-projection in classical Hollywood, Laura Mulvey has coined the term clumsy sublime to refer to that weird subset of screen imagery in which a cost-saving measure – in her example, filming actors against previously captured footage – results in a burst of visual incongruity whose “artificiality and glaring implausibility” in relation to the shots that bracket it invites a different kind of scrutiny from the spectator.[1] There is an echo here of Tom Gunning’s famous formulation of the early-cinema “attraction,” which presents itself to appreciative viewers as a startling sensorial display,[2] but Mulvey’s point is that rear projection was rarely intended to be noticed in its time; it only “seems in hindsight like an aesthetic emblem of the bygone studio era.”[3] Like the attraction, the clumsy sublime destabilizes our ontological assumptions about how the image was made (indeed, its impact stems largely from our sudden awareness that the image was manufactured in the first place). But where Gunning argues that contemporary, spectacular special effects carry on the highly self-conscious work of what he calls the “tamed attraction,” the clumsy sublime suggests a more contingent and even contentious relationship to cinema’s techniques of trompe l’oeil, in which illusions originally meant as misdirective sleight-of-hand acquire with age their own aura of movie magic.

Looking at Kings Row as a special-effects film, then, invites us to redraw the borders between visible and invisible special effects – those meant to be noticed as spectacles in themselves and those meant to pass as seamless threads in the narrative fabric – and to consider the degree to which such an apparently obvious distinction, like those that once applied to practical versus optical effects, and which now separate analog from digital modes of production, flows not from some innate property of the artifact but from the cultural and industrial discourses that frame our understanding of film artifice itself.


2: William Cameron Menzies and the Visualization of Kings Row

As David Bordwell observes in his blog post “One Forceful, Impressive Idea,” William Cameron Menzies was a pivotal figure in the evolution of film design.[4] After rising to prominence as an art director during the 1920s, he coordinated key sequences of Gone with the Wind, where he originated the title of production designer. Menzies’s detailed breakdowns of each shot, in addition to demonstrating his particular expressionist tendencies (strong diagonals, stark lighting contrasts, forced-perspective settings, and dramatically high or low camera angles), embodied a newly integrative philosophy of composing for the frame. Just as Menzies was an interstitial figure in whom were subsumed those functions of the director and cinematographer having to do with conceiving shots and scenery in dialogue with each other, his sketches and drawings embedded within themselves multiple phases of film manufacture, designating, in addition to set design and actor blocking, “the camera’s viewpoint, the lens used, and any trick effects.”[5] In this way, the first mature storyboards blurred temporal and technological lines between practical and optical special effects, pre- and post-production, while Menzies himself complicated auteurist assumptions about cinematic authorship, leaving his distinctive signature on the movies in which he played the greatest role behind the scenes – in Bordwell’s description, “abduct[ing] these films from their named directors.”[6]

This seems to have been especially true of Sam Wood, whose three-year, five-film partnership with Menzies included Our Town (1940) and The Pride of the Yankees (1942). Kings Row, while neither as lyrical as the former nor as blunt as the latter, represents a more restrained and oblique application of Menzies’s skills, eschewing obvious flourishes in favor of a more controlled approach in which the most elaborate manipulations of time and space are snugly folded into the narrative fabric. Consider, for example, the opening moments: a horse-drawn wagon, silhouetted against a characteristically sky-dominated frame, crosses the prairie as the opening credits play. As the wagon passes between the camera and a sign reading Kings Row, there is a cut, taking us from footage shot on location to a backlot setting. A rightward tracking shot continues the motion, bringing into view an elementary school from which children emerge, including young versions of protagonists Parris Mitchell, Drake McHugh, and Cassie Tower. The soundtrack’s singing voices hover somewhere between the diegetic and nondiegetic, paralleled by Erich Wolfgang Korngold’s score, evoking the happy play of children while foreshadowing the psychoanalytic themes of the rest of the film.

The efficient encapsulation of plot information, so typical of classical Hollywood narration, is here conveyed through what is essentially a virtual shot stitched together from “real” and “artificial” elements, prefiguring the digitally assisted establishing shots now commonplace in cinema.

An even more complex assemblage occurs later in the film, as Parris departs to begin his studies abroad. From a long shot of Drake, Parris, and Randy Monaghan on the platform, we cut to a different angle on the same scene, the image degraded and grainy in a way that suggests second-generation footage. Echoing the earlier left-to-right motion of the wagon, a train sweeps into the frame, its miniature status given away by the lack of focus on the foreground element. As the train slides past, a carefully timed wipe shifts us back to a medium closeup of Randy and Drake. A shot-reverse-shot series shows Parris waving goodbye as the train carries him around the bend, the painted backdrop of the mill in the distance.

Elegant for their era, both of these brief passages presumably went unnoticed by their initial audience, but with the passage of time, their sleight-of-hand has become more evident, constituting new nodes of fascination in a film text that is also – like all movies, but especially those that depend on special effects – indexical evidence of its own manufacture.

Perhaps the most eloquent of Menzies’s contributions to Kings Row are the cycloramas that pepper the film, lending it a painterly, faintly uncanny air. This feeling is present in the town’s train yard as well as its flowery fields, framing the actors in front of them in a theatrical amber similar to that which Mulvey ascribes to rear projection:

Performances … tend to become self-conscious, vulnerable, transparent. The actors can seem almost immobilized, as if they are in a tableau vivant, paradoxically at the very moment in the film when there is a fictional high point of speed, mobility, or dramatic incident.[7]

But in Kings Row the effect of the painted backdrops is different: less of an interruption, more in synch with the story’s themes. The town of Kings Row is, after all, a kind of beautiful trap, nurturing its children only to imprison them like drawings in a storybook, and beneath the pastoral languor of its more innocent vistas run undercurrents of the poisonous, narcotic, or – to adopt the film’s medicinal metaphor for its sadistic counterforces – anesthetic.

Oppositions between innocence and corruption, the sublime and the malign, that shape the film’s darker turns (Cassie’s madness, Dr. Tower’s murder-suicide, the double castration of Drake’s bankruptcy and amputation) are most evident in the shifting portrayal of its most important site, the fence line running along the Mitchell property – a space of transition whose markings of studio artifice reinforce, rather than dilute, its metamorphic extremes.

3. Building Better Screen Worlds, Then and Now

The productions for which William Cameron Menzies is perhaps most remembered are his two forays into science fiction: Things to Come (1936) and Invaders from Mars (1953), whose (admittedly very different) deployments of SF iconography enabled him to indulge his penchant for striking visual invention. His industrial legacy bears out this genetic pairing of strong, centrally-organized production design and the genres of science fiction and fantasy, whose storyworlds tend to be built from the ground up, and whose product differentiation in terms of franchise potential requires the creation of distinct brand identities, recognizable by consumers and defensible by the intellectual-property law that polices a minimum necessary distance between, say, the stylistic universes of Star Wars and Star Trek, or between Harry Potter and The Chronicles of Narnia. The tools available to Menzies in crafting his worlds can be traced to the Special Effects Department at Warner Brothers, where artist-technicians such as Hans Koenekamp, Byron Haskin, and the effects supervisor for Kings Row, Robert Burks, worked on countless films from the 1920s to the 1960s.[8] Their glass shots and matte paintings – as well as their practical effects work such as the creation of wind, lightning, and other environmental effects – have their contemporary counterpart in the digital set extensions and CGI elements whose near-ubiquity says less about the inventiveness of our current screen wizardry than about its vastly increased speed and efficiency.

The classical and analog roots of digital modes of production remain relatively unexcavated in modern special-effects scholarship, whose coherence as a subdiscipline of film and media studies began with the advent of computers as all-purpose filmmaking tools and a fixture of the popular imagination in the late 1990s. But as CGI performs one type of spectacular labor through its monsters, explosions, and spaceships while distracting us from its more quotidian augmentations of mise-en-scène, critical film theory stands to benefit from considering the present era’s counterintuitive linkages to the golden age of Hollywood, which foregrounded smooth verisimilitude through an equally intricate web of technological trickery.

The clumsy sublime, product of a time-based calculus of spectatorship and a shifting state of the art, is an important tool in this critique, in part because it enables new readings of familiar film texts. Seen through the lenses of technology and style that special-effects history provides, a film like Kings Row seems less like a dated artifact than a predictor of the present. For just as its narrative, set at the end of the 19th century and dawn of the 20th, stages on a manifest level the birth of psychoanalysis, its production stages in latent terms the emergence of a filmic apparatus for the production of expressive screen worlds.

[1] Laura Mulvey, “A Clumsy Sublime,” Film Quarterly 60.3 (2007).

[2] Tom Gunning, “The Cinema of Attractions: Early Film, Its Spectator, and the Avant-Garde,” in Early Cinema: Space, Frame, Narrative (Ed. Thomas Elsaesser. London: BFI, 1990), 56-62.

[3] Mulvey, “A Clumsy Sublime.” Emphasis added.

[4] David Bordwell, “One Forceful, Impressive Idea” (accessed March 1, 2011).

[5] Ibid.

[6] Ibid.

[7] Mulvey, “A Clumsy Sublime.”

[8] Peter Cook, “Warner Bros. Presents … A Salute to the Versatility and Ingenuity of Stage 5: Warner’s Golden Era Effects Department” (accessed February 25, 2011).


The Reimagination of Disaster

Adapting Watchmen After 9/11

For a work that gives off such a potent impression of originality and singularity, Watchmen has always been haunted by the concept of the near-parallel, the skewed copy, the version. Start with its setting, an alternate-reality 1985. Cocked at a knowing and sardonic angle to our own, the world of Watchmen is one in which superheroes and costumed crimefighters are real, America won the Vietnam War, and Richard Nixon has just been elected to his fifth term as U.S. President. Consider too the book’s industrial origins in the early 1980s, when DC purchased a set of characters from the defunct Charlton Comics and handed them to a rising star, the British writer Alan Moore, to spin into a new series. When it became clear that the “close-ended scenario” Moore envisioned would preempt the Charlton lineup’s commercial possibilities, Moore and his collaborator, illustrator Dave Gibbons, simply reinvented them: Blue Beetle became Nite Owl, Captain Atom became Dr. Manhattan, The Question became Rorschach, and so on — an act of “reimagining” avant la lettre (1). The result, a 12-issue limited series published in 1986-1987 and collected in a single volume many times thereafter, is one of the undisputed key works in any canon of comic-book literature (or, if you prefer, graphic novels), winning a 1988 Hugo Award and being named by Time magazine one of the 100 best English-language novels from 1923 to 2005 (2).

But if Watchmen’s transit across the hierarchies of culture constitutes yet another level of indeterminacy — a kind of quantum tunneling among the domains of geek, popular, and elite taste — this trajectory seemed to hit its limit at the ontological divide between the printed page and moving-image media. The drive to turn Watchmen into a movie arose early and failed often over the next two decades. Producer Joel Silver and directors Terry Gilliam and Darren Aronofsky were among those who ultimately fell before the challenge of a text Moore described as “unfilmable” because of its dependence on the formal aesthetics of comic-book storytelling (3). By the time Zack Snyder’s adaptation finally made it to the screen in 2009, the mission had grown beyond a mere cinematic “take” on the material into the production of something like a simulacrum, marshalling a host of artistic and technological resources to recreate in exact detail the panels, dialogue, and world-design of the original.

Only one thing was changed: the ending.

Watchmen’s climax involves a conspiracy by the polymathic industrialist Adrian Veidt — alter ego of the superhero Ozymandias — to correct the course of a world on the brink of nuclear armageddon. As written by Moore and drawn by Gibbons, Veidt teleports a giant, genetically engineered squid into the heart of New York City, killing millions and tricking the global superpowers into uniting against a perceived alien invasion. Snyder’s version omits the squid in favor of a different hoax: in the movie, Dr. Manhattan, a superbeing created by atomic accident, is set up as the fall guy for a series of explosions in major world cities. As explained by Snyder and screenwriter Alex Tse in commentaries and interviews, the substitution elegantly solves a number of storytelling tangles, cutting a Gordian knot much like the one faced by Veidt. It simplifies the narrative by eliminating a running subplot; it employs a major character, Dr. Manhattan, whose godlike powers and alienation from humanity provide a logical basis for the blame he receives; and, perhaps most importantly, it trades an outrageous and even laughable version of apocalypse for something more familiar and believable.

Measured against the relentless fidelity of the rest of the project, the reimagined ending of Watchmen has much to say about the process of adaptation in an era of blockbuster filmmaking and ubiquitous visual effects, as well as the discursive means by which differences between one version and another are negotiated in an insistently expansive culture of seriality and transmedia. But it is also a striking and symptomatic response to an intervening real-world event, the terrorist attacks of 9/11, and our modes of visualizing apocalypse and its aftermath.

From the start, Watchmen’s reputation for unadaptability stemmed not just from its origin as a graphic novel but from the self-reflexive way it drew on the unique phenomenology and affordances of comic-book storytelling. Its 384 pages interpolate metafictional excerpts from personal memoirs and tabloid interviews, business memos, and product brochures. The wealth of invented and implied information in these prepackaged paratexts extends the effects of Gibbons’s artwork, laid out in precise, nine-panel grids brimming with background details, from the logo of the Gunga Diner restaurant chain to the smokeless cigarettes, electric cars, and dirigible airships that signal an alternative technological infrastructure. Shaped by symmetries large and small, the comic’s plot — a mystery about the murder of costumed heroes — emerges as a jigsaw of such quotidian pieces, assembled by readers free to pause and scan back through the pages, rereading and recontextualizing, in a singularly forensic and nonlinear experience of the text.

Nowadays, the difficulty of appreciating the unprecedented nature of Watchmen is, ironically, proof of its genre-altering influence: grim dystopias and grittily “realistic” reinventions of superheroes quickly became a trend in comics as well as film, making Watchmen a distant parent of everything from Kick-Ass (Mark Millar and John Romita, Jr., 2008-2010) to Heroes (2006-2010) and The Dark Knight (Christopher Nolan, 2008). It may also be hard, in retrospect, to grasp why a detailed fictional world should pose a challenge to filmmakers. For his 1997 “special editions,” George Lucas added layers of CG errata to the backgrounds of his original Star Wars trilogy, and contemporary techniques of match-moving and digital compositing make it possible to jam every frame with fine detail. (Given that this layered ornamentation is best appreciated through freeze-frames and replays — cinematic equivalents of flipping back and forth through the pages — one might trace this aesthetic back to Watchmen as well.) Simply put, the digital transformation of filmmaking, a shift most visible in the realm of special effects but operative at every level and stage of production, has made the mounting of projects like Watchmen relatively straightforward, at least in technical terms.

But the state of the art did not emerge overnight, and Watchmen’s path to adaptation was a slow and awkward one. Two concerns dominated early efforts to convert Moore’s and Gibbons’s scenario into a workable screenplay: compression of the comic’s scope (with a consequent reduction of its intricacy); and how to handle its setting. The economics of the blockbuster, built on the exigencies of Classical Hollywood, dictate that confusion on the audience’s part must be carefully choreographed — viewers should be intrigued and mystified, but not to the point where they choose to take their entertainment dollars elsewhere. Turning Watchmen into a successful feature film meant committing, or not, to a premise that probably seemed more formidable at a time when alternate realities (again, “versions”) were a limited subset of science fiction.

Sam Hamm, the first screenwriter to tackle the adaptation, rejected the squid subplot as an implausible lever for world peace, saying, “While I thought the tenor of the metaphor was right, I couldn’t go for the vehicle.” (4) His 1989 script climaxes with Veidt opening a portal in time in order to assassinate Jon Osterman before he can become Dr. Manhattan — undoing the superhuman’s deforming effects on the world. Although Veidt fails, Manhattan grasps the logic of his plan, and opts to erase himself from the timeline. The ensuing paradox unravels the alternate reality and plunges surviving heroes Nite Owl, Rorschach, and the Silk Spectre into “our” unaffected world.

It is tempting to view Hamm’s ending as a blend of science-fiction film motifs dominant at the end of the 1980s, fusing the time-traveling assassin of The Terminator (James Cameron, 1984) with the branching realities of Back to the Future Part II (Robert Zemeckis, 1989). Certainly David Hayter’s 2003 script takes the ending in a different direction: here Veidt bombards New York City with concentrated solar radiation, killing one million, in order to establish himself as a kind of benevolent dictator. Again, Veidt dies, but — in a denouement closer to the original’s — the hoax is allowed to stand, since to reveal the truth would return the world to the brink of war.

The version that Snyder finally filmed, based on a script co-authored with Alex Tse, makes just one final adjustment to the ending, one that could be seen as a synthesis of the versions that had come before: instead of solar radiation, it is Dr. Manhattan’s energy signature that destroys cities around the world, and Dr. Manhattan who takes the blame, exiling himself to explore other forms of reality.

It may be overestimating the depth of Hollywood’s imagination to suggest that its pairing of Zack Snyder and Watchmen was an inspired rather than merely functional decision — that Warner Brothers saw Snyder not just as a director with the right specializations to make their film, but as a kind of auteur of adaptation. Certainly Snyder had proved himself comfortable with vivid, subculturally cherished texts, as well as effects-intensive film production, with his first two movies, Dawn of the Dead (2004) and 300 (2006). The latter film in particular, shot on a digital backlot of greenscreens with minimal sets, must have seemed an ideal résumé for Watchmen, demonstrating Snyder’s ability to bring static artwork — in this case, Frank Miller’s and Lynn Varley’s — to strange, half-animated life on screen while maintaining his own distinct layer of stylization. (Snyder’s predilection for speed ramping, in which action shifts temporal gears midshot, may be his most obvious and mockable tic, but it is a clever way to substitute a cinematic signifier for comic art’s speed-line symbolalia.)

Rejecting prior half-measures at sanding off the story’s rough edges, Snyder embraced Moore’s and Gibbons’s work almost literally as a bible, letting the collected Watchmen guide production design “like an illuminated text, like it was written 2000 years ago.” (5) The prominence of this sentiment and others like it in the host of materials accompanying Watchmen’s marketing can be seen as a discursive strategy as much as anything else, a move to reassure prospective audiences — a group clearly identified early on as a small but important base worthy of wooing, in much the manner that New Line Cinema cultivated and calibrated its relationships with fans during the making of The Lord of the Rings. (6) Enacted at the layer of the manufactured paratextual penumbra that, as Jonathan Gray reminds us, is now de rigueur for blockbuster film events, public performances of allegiance to a single, accepted reference constitute a crucial discursive technology of adaptation, working in concert with the production technologies involved in translating fan-cherished works. (7)

One such production technology doubling as discursive tool was the participation of Dave Gibbons. In DVD and Blu-ray extras, Gibbons tours the set, posing with Snyder, all smiles, confirming the authenticity and integrity of the production. “I’m overwhelmed by the depth and detail of what I’m seeing,” he wrote of his visit. “I’m overwhelmed by the commitment, the passion, the palpable desire to do this right.” (8) Working with Moore in the 1980s, Gibbons played a profound part in the origination of Watchmen, one extending far beyond the simple illustration of a script; the two brainstormed together extensively, and the story’s more reflexive and medium-aware qualities demanded near-microscopic coordination of story and image. Further, Moore’s choice to remove himself from the chain of mandatory citation that constitutes authorship as legal status, relinquishing any claim on the filmic realization of Watchmen, leaves only Gibbons to take credit for the work.

But it is hard to escape a suspicion that promotional discourses around the film lay a surreptitious finger on the scale, biasing the source’s creative center of gravity toward Gibbons, whose contributions are, after all, those most “available” to film production: the design of sets, costumes, props; character physiques and hair styles; background environments and architecture; even the framing of key shots and events. Graphic illustration and cinematic manufacture meet at the layer of the look, understood here not through film theory’s account of the gaze but through the more pragmatic (drawable, buildable) framework of mise-en-scène.

Within this odd binary — the highlighting of Gibbons, the structuring absence of Moore — Snyder appears as something of a third term, positioned not as creator but as faithful cinematic translator. His mode of “authorship” is defined chiefly, and perhaps paradoxically, as a fierce and unremitting loyalty to the work of others.

All of these forces come together in the paratextual backstory of the film’s single biggest change. The tie-in publication Watchmen: The Art of the Film reprints storyboards created by Gibbons for the new ending. According to Peter Aperlo, “these new ‘pages’ were drawn at Zack Snyder’s request during pre-production, to ensure that the film’s re-imagined ending nevertheless drew from an authentic source.” (9) Completing the cycle, Gibbons also provided promotional artwork illustrating key images from the film for use in marketing. These interstitial storyboards perform a kind of suture between the industrial level of film-as-artifact and the communicational level of film-as-public-presence, knitting together the visualizations of Gibbons and Snyder in a fusion that guarantees the film’s pedigree while plastering over the hole left by Moore’s departure.

The versionness that has always haunted Watchmen is also present in our hesitation before the question of whether it is a closed, finite text or a sprawling serial object. Is there one Watchmen or many? This indeterminacy too seems embedded in the text’s fortunes from the start: whether you label it a “comic book” or “graphic novel” depends on whether you think of the original as a sequence of twelve discrete issues released one month apart, or as a single work collected between the covers of a book. Kin to Roland Barthes’s distinction between writerly and readerly texts, the latter perspective underpins perceptions of the story as something inviolate, to be protected from tampering; the former leans toward modification, experimentation, openness.

Similarly, Watchmen’s storyworld — the intricately conceived and exhaustively detailed settings whose minutiae, repeated from panel to panel, form a stressed palimpsest — displays characteristics of both a one-off extrapolation, complete unto itself, and the endlessly expandable backdrops of serial fictions like Star Trek, larger than any one instance of the text can contain.

Matt Hills uses the term hyperdiegesis to identify these “vast and detailed narrative spaces” and the rules of continuity and coherence that organize them into explorable terrain (10). A defining attribute of cult texts, hyperdiegesis invites expansion by official authors and fan creators alike, thus functioning as a space of contest and negotiation as well. The approach taken by Snyder, with the participation of Gibbons, is to treat the weakly serial Watchmen as a strongly serial text with a hyperdiegesis whose facts as established — not just the storyworld’s surface designs, but its history, society, and politics — are to be modified only at the adapter’s peril, risking the loss of audiences’ faith.

Watchmen (the film), in adopting its fanatically faithful approach to the visualization layer of the original, risks also replicating its sealed, hermetic qualities. “The cumulative effect of the astonishing level of attention,” writes Peter Y. Paik, “is a palpable sense of suffocation, so that the world of Watchmen ultimately takes shape in the form of a totality that has become wholly closed in upon itself.” (11) For Paik, the “constricting nature of this mortified reality” is Moore’s and Gibbons’s way of conveying the unique existential prison of Watchmen, “in which is ruled out any form of change other than an abrupt and global transformation of the very conditions of existence, such as would take place with the extinction of Homo sapiens.”

This special case of hyperdiegesis, then — in which a closed fictional universe is forced open, via translatory technologies of visualization, across a terminator dividing still comic art from moving cinema image — is one in which the authorial status of Moore and Gibbons is collapsed with that of Zack Snyder, and in turn with the world-manipulating powers of Doctor Manhattan and (in another register) Adrian Veidt. Just as Veidt alters global existence with his own bold intervention, so does Snyder enact a fundamental ontological violence to deform, remake, and relaunch Watchmen in a new medium.

Snyder’s decision to hew as closely as possible to the visual content and layout of the comic book had a number of strategic effects. By creating an elevated platform for the many contributions of Dave Gibbons, it accounted for Moore’s absence while smoothly installing a third partner, Snyder, as fair-minded referee whose task was less to create his own artwork than to render onscreen the comic’s nearest possible analog. His role can thus be understood as mediating an encounter between two media, bridging a gulf born both of form (comics versus movies) and of time (represented by the “new” of CGI).

The decision also made available to the production an archive of already-designed material, the extensive visual “assets” from which Watchmen is woven. More than simply a catalog of surfaces, the comic’s panels embed within themselves the history and operating principles of an alternate reality, a backdrop the production would otherwise have had to invent — placing the film in an inherently derivative position relative to the original. Going hyperfaithful with the adaptation deflected the production’s need to “top” its source design, assigning it instead the new mission of reverent translation.

At the same time, as I have argued, Snyder’s film risked getting stuck in a bubble of irrelevance, the strange half-life of the cult oddity, whose self-referencing network of meaning results (to outsiders) in the production of nonmeaning: a closed system. Preserving too exactly an inherited set of signs took the politically and mythologically overdetermined text of Watchmen and turned it into a “graphically” overdetermined movie, an affective state described by critics in terms like airless and inert.

From an auteurist point of view, going with a different ending may have been Snyder’s way of increasing the chance that his work would be taken as more than mere facsimile. From an industrial standpoint, dropping the squid may have seemed a way of opening up the story’s interpretations, making it available to a broader audience whose shared base of experience did not include giant teleporting squids, but most certainly did include visions of skyscrapers falling and cities smoldering.

“It seems that every generation has had its own reasons for destroying New York,” writes Max Page, tying our long history of visualizing that city’s destruction in film, television, and other media to a changing set of collective fears and preoccupations (12). From King Kong’s periodic film rampages (1933, 1976, 2005) to Chesley Bonestell’s mid-century paintings of asteroid strikes and mushroom clouds, to nuclear-armageddon scenarios like Fail-Safe (1964) and 24 (2001-2010), the end of the city has been rendered and rerendered with the feverish compulsion of fetish: a standing investment of artistic resources in the rehearsal — and hence some meager control over the meanings — of apocalypse.

At the time Moore and Gibbons were creating Watchmen, the dominant strain in this mode of visualization had shifted from fears in the 1950s and 60s of atomic attack by outsiders to a sense in the 70s and 80s that New York City was rotting from within, its social landscape becoming an alien and threatening thing. The signifiers of this decline — “crime, drugs, urban decay” (13) — were amplified to extremes in Escape from New York (John Carpenter, 1981), shifted into the register of supernatural comedy in Ghostbusters (Ivan Reitman, 1984), and refracted through superhero legend in Batman (Tim Burton, 1989).

The New York of Watchmen is divided into sharply stratified tribes, beset by street drugs, and prone to mob violence; moreover, the city stands in for a state of world affairs in which corrupt superpowers square off over the heads of balkanized countries whose diversity leads only to hostility — multiculturalism as nightmare, not the choral utopia of Coca-Cola’s “I’d Like to Teach the World to Sing” (1971) but the polyglot imbroglio of Blade Runner (1982).

Viewed this way (as following from a particular thesis about the nature of society’s ills), the original ending with the squid can be seen as the Tower of Babel story in reverse: a world lost in a confusion of differences becomes unified, harmonious, one. That this is achieved through an enormous hoax is the hard irony at the heart of Watchmen — a piece of existential slapstick, or to quote another Alan Moore creation, a “killing joke.”

One consequence of having so insistently committed simulations of New York’s demise to the visual record was that the events of September 11 — specifically the burning and collapse of the World Trade Center — arrived as both a horrible surprise and a fulfillment of prophecy. The two towers had fallen before, in Armageddon and Deep Impact (both 1998), and the months after 9/11 were filled with self-analysis and recrimination by media suddenly conscious of their potential complicity, if not in the actual act of terrorism, then in its staging as fantasy dry-run.

Among its other effects, 9/11 brought to devastating life a conception of sheer urban destruction that had formerly existed only in the precincts of entertainment. It also crystallized — or enabled our government to crystallize — the enemies responsible, a shadowy network whose conveniently elastic boundaries could expand to encompass whole cultures or contract to direct lethal interrogative force on individual suspects. The Bush administration’s response to the attacks, as played out in suppressions of liberty and free press in the U.S., in bellicose pronouncements of righteous vengeance in the world court, and ultimately in the Afghanistan and Iraq wars, was like a cruel proof of Veidt’s concept, conjuring into simultaneous existence a fearsome if largely fictional enemy and a “homeland” united in public avowal, if not in actual practice.

One might make the case that Moore and Gibbons in some way “predicted” 9/11 — not in the particulars of the destruction, but in its affective impact and corresponding geopolitical consequences. Just as the squid released in death a psychic shockwave, killing millions while leaving buildings untouched, so did the cognitive effects of the 9/11 attacks ripple outward: at first in a spectacular trauma borne viruslike on our media screens, later in the form of a post-9/11 mindset notable for its regressive conflation of masculinity and security (14). Ten years on, it is highly debatable whether U.S. actions after 9/11 resulted in a safer or more unified world; we seem in some ways to have ended up back where we started, poised at a tipping point of crisis, just as the concluding panels of Watchmen‘s circular narrative “leave in our hands” the decision to publish Rorschach’s journals and blow the hoax wide open. But in the initial, heady glow of late 2001 and early 2002, with the Abu Ghraib revelations (that blunt and pornographic proof of the enterprise’s rotten core) years away, it seemed briefly possible that the new reality we confronted might turn out to be “a stronger loving world.”

That mass deception underlies Veidt’s plan is, of course, another connection to 9/11, in the eyes of those who believe the attacks were carried out by the U.S. government. Conspiracy theorizing around 9/11 coalesced so quickly it almost qualifies as its own subdiscipline of paranoid reasoning, rivaling perpetual motion and the assassination of JFK as one of the great foci of fringe scholarship. It would hardly be surprising if the original Watchmen were taken up by this movement as a piece of evidence before the fact, for as Stephen Prince observes, pre-9/11 films such as Gremlins (1984), The Peacemaker (1997), and Traffic (2000) have all been accused of embedding subliminal messages about the impending event. (15)

As a media text manufactured well after 9/11, Snyder’s adaptation faced a dual challenge: not simply whether and how to change the ending to something more narratively and conceptually streamlined, but how to negotiate showing the two towers, which by the logic of the period (even an “alternative” one) were still standing in 1985. The World Trade Center does not figure among the landmarks consumed by the energy wave, which in any case occupies only a small amount of screen time (left unadapted is the famed series of full-sized splash pages that opens Watchmen’s final chapter, an aftermath of panoramic carnage whose stillness builds creepily upon the static nature of the art). But the towers do appear in at least two shots, during the Comedian’s funeral and in the background of Veidt’s office. Both scenes were seized upon for commentary by fans as well as conspiracists. As one blogger wrote,

My interpretation was that Snyder was giving a nod to the fact that the terrible, horrible, completely implausible conclusion of Watchmen has, in fact, already happened. The 9/11 attacks were a lot like the finale of the book — alien, unexpected, tragic, unifying — only without the giant squid. … Snyder felt he had to at least address that in some way. The allusions to the towers were a way of saying, “Okay, I get it. This has already happened.” (16)

The effect of 9/11 on cinematic representation has been to turn either choice — to show or not to show — into a significant decision. For a time following 9/11, studios scrambled to scrub the twin towers from the frame, in films like Zoolander, Serendipity, and Spider-Man. (17) Later appearances of the World Trade Center took on inevitable thematic weight, in films such as Gangs of New York and Munich — presumably a motivation shared by Snyder’s Watchmen, which uses the towers to underscore rhetorical points.

Adaptation is not a new concern for film and media studies, any more than it is a new phenomenon in the industry; as Dudley Andrew points out, “the making of film out of an earlier text is virtually as old as the machinery of cinema itself.” (18) One of the most “frequent and tiresome” discussions of adaptation, Andrew goes on, is the debate over fidelity versus transformation — the degree to which an adapted work succeeds, or suffers, based on its allegiance to an outside source. As tired as this conversation might be, however, Watchmen demonstrates that it continues to structure adaptation both at the level of production (in which Snyder and his crew boast of their painstaking reverence for the original) and of reception (in which fans analyze the film according to its treatment of Moore and Gibbons). In this way, controversy over the squid’s disappearance goes hand-in-hand with responses to the hyperfaithful visual setting, as referenda on Snyder’s approach; love it or hate it, we cannot resist reading the film of Watchmen alongside the comic book.

But a time of new media gives rise to new questions about adaptation. Whether we understand the mores of contemporary blockbuster filmmaking in terms of ubiquitous special effects, the swarm deployment of paratexts and transmedia, or the address of new audience formations through new channels, Watchmen reminds us that all texts are adaptations in the evolutionary sense, forging recognizability across an alien landscape and establishing continuities with beloved texts — and the media they embody — across the digital divide.

Works Cited

1. Dave Gibbons, Chip Kidd, and Mike Essl, Watching the Watchmen (London: Titan Books, 2008), 28-29.

2. Peter Aperlo, Watchmen: The Film Companion (London: Titan Books, 2009), 16.

3. David Hughes, “Who Watches the Watchmen? How the Greatest Graphic Novel of All Time Confounded Hollywood,” in The Greatest Sci-Fi Movies Never Made (Chicago: A Cappella Books, 2001), 144.

4. Hughes, 147.

5. Aperlo, 26.

6. Kristin Thompson, The Frodo Franchise: The Lord of the Rings and Modern Hollywood (Berkeley: University of California Press, 2007).

7. Jonathan Gray, Show Sold Separately: Promos, Spoilers, and other Paratexts (New York: NYU Press, 2010).

8. Aperlo, 38.

9. Aperlo, 62.

10. Matt Hills, Fan Cultures (London: Routledge, 2002), 137-138.

11. Peter Y. Paik, From Utopia to Apocalypse: Science Fiction and the Politics of Catastrophe (Minneapolis: University of Minnesota Press, 2010), 50.

12. Max Page, The City’s End: Two Centuries of Fantasies, Fears, and Premonitions of New York’s Destruction (New Haven: Yale University Press, 2008), 7.

13. Page, 144.

14. Susan Faludi, The Terror Dream: Fear and Fantasy in Post-9/11 America (New York: Metropolitan Books, 2007).

15. Stephen Prince, Firestorm: American Film in the Age of Terrorism (New York: Columbia University Press, 2009), 78-79.

16. Seth Masket, “The Twin Towers in ‘Watchmen’” (accessed January 27, 2011).

17. Prince, Firestorm, 79.

18. Dudley Andrew, “Adaptation,” in Film Adaptation, Ed. James Naremore (New Brunswick: Rutgers University Press, 2000), 29.

Awaiting Avatar

Apparently Avatar, which opened on Friday at an immersive neural simulation pod near you, posits an intricate and very real connection between the natural world and its inhabitants: animus in action, the Gaia Hypothesis operationalized on a motion-capture stage. If this is so — if some oceanic metaconsciousness englobes and organizes our reality, from blood cells to weather cells — then perhaps it’s not surprising that nature has provided a perfect metaphor for the arrival of James Cameron’s new film in the form of a giant winter storm currently coloring radar maps white and pink over most of the eastern seaboard, and trapping me and my wife (quite happily) at home.

Avatar comes to mind because, like the blizzard, it’s been approaching for some time — on a scale of years and months rather than hours and minutes, admittedly — and I’ve been watching its looming build with identical avidity. I know Avatar’s going to be amazing, just as I knew this weekend’s storm was going to be a doozy (the expectation is 12-18 inches in the Philadelphia area, and out here in our modest suburb, the accumulation is already enough to make cars look as though they have fuzzy white duplicates of themselves balanced on their roofs). In both cases, of course, this foreknowledge is not as monolithic or automatic a thing as it might appear. The friendly meteorologists on the Weather Channel had to instruct me in the storm’s scale and implacability, teaching me my awe in advance; similarly, we all (and I’m referring here to the entire population of planet earth) have been well and thoroughly tutored in the pleasurable astonishment that awaits us when the lights go down and we don our 3D glasses to take in Cameron’s fable of Jake Sully’s time among the Na’vi.

If it isn’t clear yet, I haven’t seen Avatar. I’m waiting out the weekend crowds (and, it turns out, a giant blizzard) and plan to catch a matinee on Tuesday, along with a colleague and her son, through whose seven-year-old subjectivity I ruthlessly intend to focalize the experience. (I did something similar with my nephew, then nine, whom I took to see The Phantom Menace in 1999; turns out the prequels are much more watchable when you have an innocent beside you with no memory of what George Lucas and Star Wars used to be.) But I still feel I know just about everything there is to know about Avatar, and can name-drop its contents with confidence, thanks to the broth of prepublicity in which I’ve been marinating for the last several weeks.

All of that information, breathlessly assuring me that Avatar will be either complete crap (the /tv/ anons on 4chan) or something genuinely revolutionary (everyone else), partakes of a cultural practice spotlighted by my friend Jonathan Gray in his smart new book Show Sold Separately: Promos, Spoilers, and Other Media Paratexts. While we tend to speak of film and television in an always-already past tense (“Did you see it?” “What did you think?”), the truth is something very different. “Films and television programs often begin long before we actively seek them out,” Jon observes, going on to write about “the true beginnings of texts as coherent clusters of meaning, expectation, and engagement, and about the text’s first initial outposts, in particular trailers, posters, previews, and hype” (47). In this sense, we experience certain media texts a priori — or rather, we do everything but experience them, gorging on adumbration with only that tiny coup de grace, the film itself, arriving at the end to provide a point of capitation.

The last time I experienced anything as strong as Avatar‘s advance shockwave of publicity was with Paranormal Activity (and, a couple of years before that, with Cloverfield), but I am not naive enough to think such occurrences rare, particularly in blockbuster culture. If anything, the infrequency with which I really rev up before a big event film suggests that the well-coordinated onslaught is as much an intersubjective phenomenon as an industrial one; marketing can only go so far in setting the merry-go-round in motion, and each of us must individually make the choice to hop on the painted horse.

And having said that, I suppose I may not be as engaged with Avatar‘s prognosticatory mechanisms as I claim to be. I’ve kept my head down, refusing to engage fully with the tableaux being laid out before me. As a fan of science-fiction film generally, and visual effects in particular, this seemed only wise; in the face of Avatar hype, the only choices appear to be total embrace or outright and hostile rejection. I want neither to bless nor curse the film before I see it. But it’s hard to stay neutral, especially when a film achieves such complete (if brief) popular saturation and friends who know I study this stuff keep asking me for my opinion. (Note: I am very glad that friends who know I study this stuff keep asking me for my opinion.)

So, a few closing thoughts on Avatar, offered in advance of seeing the thing. Think of them as open-ended clauses, half-told jokes awaiting a punchline; I’ll come back with a new post later this week.

  • Language games. One aspect of the film that’s drawn a great deal of attention is the invention of a complete Na’vi vocabulary and grammar. Interesting to me as an example of Cameron’s endless depth of invention — and desire for control — as well as an aggressive counter to the Klingon linguistics that arose more organically from Star Trek. Will fan cultures accrete around Avatar as hungrily as they did around that more slowly-building franchise, their consciousness organized (to misquote Lacan) by a language?
  • Start the revolution without me. We’ve been told repeatedly and insistently that Avatar is a game-changer, a paradigm shift in science-fiction storytelling. For me, the question this raises is not Is it or isn’t it? but rather, What is the role of the revolutionary in our SF movies, and in filmmaking more generally? How and why, in other words, is the “breakthrough” marketed to us as a kind of brand — most endemically, perhaps, in movies like Avatar that wear their technologies on their sleeve?
  • Multiple meanings of “Avatar.” The film’s story, as by now everyone knows, revolves around the engineering of alien bodies in which human subjectivities can ride, a kind of biological cosplay. But on another, artifactual level, avatarial bodies and mechanisms of emotional “transfer” underpin the entire production, which employs performance capture and CG acting at an unprecedented level. In what ways is Avatar a movie about itself, and how do its various messages about nature and technology interact with that supertext?

British Invasion


Ordinarily I’d start my post with a by-now-boilerplate apology for lagging behind the news, but in this case I will leave aside the ritual lament (“I’m just so busy this semester!”) and instead make proud boast of my lateness, boldly owning up to the fact that, although it was forty years ago last week that Monty Python’s Flying Circus had its first broadcast, I’m just getting around to remarking on it today. Seems only (il)logical to do so, given that one of Python‘s most fundamental and lasting alterations to the cultural landscape in which I grew up was to validate the non sequitur as an acceptable conversational — and often behavioral — gambit.

Let me explain. For me and my friends in grade school, the early-to-mid-seventies brought a logarithmically-increasing series of social revelations, sometimes depressingly gradual, other times bruisingly abrupt, that we were “weird.” Our weirdness went by several aliases. The labels bestowed by forgiving parents and teachers were things like “smart,” “bright,” “eccentric,” “unusual,” and “creative.” Whereas the ones that arrived not from above but laterally, hurled like snowballs in the schoolyard or graffitied in ball-point across our notebooks, were more brutally and colorfully direct, and thus of course more convincing: “freak,” “spaz,” and — for me in particular, since it vaguely rhymes with Rehak — “retard.”

I see now that almost all of these phrases had their grain of truth, their icy core, their scored ink-line. In our weirdness we were smart and unusual and creative; we were also undeniably freakish, and as our emotional gyroscopes whirled wildly in search of some stable configuration, we were, by turns, spastically overenthusiastic and retardedly slow to adapt. We were book and comic readers, TV watchers, play actors, cartoon artists, model builders, rock collectors. We were boys. We liked science fiction and fantasy. Our skills and deficits were misdistributed and extreme: vastly vocabularied but garbled by braces and retainers; carefully observant but blindered by thick glasses; handsome heroes in our hearts, chubby or skinny buffoons in person. Many of us were good at science and math, others at art and theater. None of us did particularly well on the athletic field, though we did provide workouts for the kids who chased us.

Me, I made model kits of monsters like the Mummy, the Wolfman, and the Creature from the Black Lagoon — all supplied by the great company Aurora, with the last mile from hobby store to home facilitated by my indulgent parents — painted them in garish and inappropriate colors, situated them behind cardboard drum kits and guitars on yarn neckstraps, and pretended they were a rock supergroup while blasting the Monkees and the Archies from my record player. (I am not making this up.)

I was also a media addict, even back then, and when Monty Python episodes began airing over our local PBS station, I was instantly and utterly devoted to it. Which is not to say I liked everything I saw — a nascent fan, I quickly began drawing distinctions between the unquestionably great, the merely good, the tolerably adequate, and the terminally lame paroles that constituted the show’s langue, learning connections between these variations in quality and the industrial microepochs that gave rise to them: early, middle, and late Python. I had my favorite bits (Terry Gilliam’s animations, anything ever done or said by John Cleese) and my “mehs” (Terry Gilliam’s acting and the episode devoted to hot-air ballooning). Although or because I was stranded somewhere in the long latency separating my phallic and genital stages, I found every mention of sex and every glimpse of boob a fascinating magma of hypothetical desire and unearned shame. And, of course, it was all hysterically, tear-squirting, stomach-cramp-inducing funny.

The downside of Monty Python‘s funniness was the same as its upside: it gave all of us weirdos a shared social circuit. The show’s peculiar and specific argot of slapstick and transgression, dada and doo-doo, spread overnight to recess and classroom, connecting by a kind of dedicated party line any schlub who could memorize and repeat lines and skits from the show. In short, Monty Python colonized us, or more accurately it lit up like a discursive barium trace the preexisting nerd colony that theretofore had hidden underground in a nervous relay of quick glances, buried smiles, and raised eyebrows. Suddenly outed by a humor system from across the sea, we pint-sized Python fans stood revealed as a brotherhood of nudge-nudge-wink-wink, a schoolyard samizdat.

A good thing, but also a bad thing. The New York Times gets it exactly wrong when describing the “couple of guys in your dorm (usually physics majors, for some reason, and otherwise not known for their wit) who could recite every sketch”; according to Ben Brantley, “They could be pretty funny, those guys, especially if you hadn’t seen the real thing.” Nope — people who recite every Monty Python sketch are by definition not funny, or rather are funny only within an extremely bounded circle of folks who (A) already know the jokes and (B) accept said recitation as legal tender of their subcultural social capital. In my experience, there was no surer date-killer, no quicker way to get people to edge away from you at parties, than launching into such bonafide gems of genius as the Cheese Shoppe or the Argument Clinic. Yet we went on tagging each other as geek untouchables, comedy as contagion, as helpless before Pythonism’s viral spread as we would be, a few years on, before the replicating errata of Middle Earth and the United Federation of Planets.

Monty Python was merely the first of a series of cultural imports from Britain that pressed me and my friends into obsessive-compulsive nerd scholarship: grand stuff like The Fall and Rise of Reginald Perrin, The Hitchhiker’s Guide to the Galaxy, Alan Moore, and the computer game Elite. The three movies I like to name as my favorites of all time each have substantial UK components: Star Wars (1977) was filmed partly at Elstree Studios, Superman (1978) at Pinewood and Shepperton Studios, and Alien (1979), with Ridley Scott at the helm, at Shepperton and Bray Studios. And the trend continues right up to the present day: my favorite band is Genesis, I can’t get enough of Robbie Coltrane’s Cracker, and the science-fiction masterpiece of the summer was not District 9 (which gets high marks nevertheless) but the superb Torchwood: Children of Earth.

I sometimes wonder what to call this collection of British art and entertainment, this odd cultural constellation that seems to obey no organizing principle except its origins in England and its relevance to my development. How do you draw a boundary around a miscellany of so much that is good and essential about imaginary lives and their real social extrusions? Maybe I’m seeking a word like supergenre or metagenre, but those seem too big; try idiogenre, some way of systematizing a group of texts whose common element is their locus in a particular, historically-shaped subjectivity (my own) that is simultaneously a shared condition. The comic tragedy of the nerd, a figure both stranded on the social periphery yet crowded by his peers, lonely yet overfriended, renegade frontiersman and communal sheep, a silly-walking man with an entire Ministry of Silly Walks looming behind him.

I blame, and thank, England.