We Have Never Been Digital: CGI as the New “Clumsy Sublime”

In his essay “Before and After Right Now: Sequels in the Digital Era,” Nicholas Rombes gives an example of the troubling way that CGI has eroded our trust in visual reality. Discussing the work of Lola Visual Effects to digitally “youthen” the lead actors in the 2006 film X-Men: The Last Stand, Rombes quotes a line from the effects house’s website: “Our work has far-reaching implications from extending an actor’s career for one more sequel to overall success at the box office. We allow actors and studios to create one more blockbuster sequel (with the actor’s fan base) by making the actor look as good (or better) than they did in their first movie.” Rombes responds: “What is there to say about such a brash and unapologetic thing as this statement? The statement was not written by Aldous Huxley, nor was it a darkly funny dystopian story by George Saunders. This is a real, true, and sincere statement by a company that digitally alters the faces and bodies of the actors we see on the screen, a special effect so seamless, so natural that its very surrealism lies in the fact that it disguises itself as reality.”

Before we adjudicate Rombes’s claim, we might as a thought experiment try to imagine the position from which his assertion can be made – the nested conditionals that make such a response plausible in the first place. If a spectator encounters X-Men: The Last Stand without prior knowledge of any kind, including the likelihood that such a film will employ visual trickery; if he or she is unaware of the overarching careers, actual ages, and established physiognomies of Patrick Stewart and Ian McKellen; and perhaps most importantly if that viewer cannot spot digital airbrushing that even now, a scant six years later, looks like a heavy coat of pancake makeup and hair dye, then perhaps we can accept Rombes’s accusation of hubris on the part of the visual-effects house. On the other hand, how do we explain the larger incongruity in which Rombes premises his critique of the “seamless … natural” and thus presumably unnoticeable manipulation on a widely-available text, part of Lola’s self-marketing, that highlights its own accomplishment? In short, how can a digital effect be simultaneously a surreptitious lie in one register and a trumpeted achievement in another? Is this characterization not itself an example of misdirection, the impossible masquerading as the possible, a kind of rhetorical special effect?

The truth is that Rombes’s statement in all its dudgeon, from an otherwise astute observer of cinema in the age of digital technologies, suggests something of the problem faced by film and media studies in relation to contemporary special effects. We might describe it as a problem of blind spots, of failing to see what is right before our eyes. For it is both an irony and a pressing concern for theoretical analysis that special effects through their very visibility – a visibility achieved both in their immediate appearance, where they summon the powers of cleverly-wrought illusion to create convincing displays of fantasy, and in their public afterlife, where they replicate and spread through the circulatory flows of paratexts and replay culture – lull the critical gaze into selective inattention, foregrounding one set of questions while encouraging others to slip from view.

By hailing CGI and the digital mode of production it emblematizes as a decisive break with the practices that preceded it, Rombes acquiesces to the terms on which special effects have always – even in predigital times – offered themselves. From the starting point of what Sean Cubitt calls “the rhetoric of the unprecedented,” such scholarship can only unfold an analysis whose polarities, whether celebratory or condemnatory, mark but one axis of debate among the many opportunities special effects provide to reassess the changing nature of textuality, storytelling, authorship, genre, and performance in the contemporary mediascape. A far-ranging conversation, in other words, is shut down in favor of a single set of concerns, organized with suspicious tidiness around a (rather abstract) distinction between truth and falsehood. This distinction structures debates about special effects’ “spectacular” versus “invisible” qualities; their “success” or “failure” as illusions; their “indexicality” or lack of it; and their “naturalness” versus their “artificiality.” I mean to suggest not that such issues are irrelevant to the theorization of special effects, but that their ossification into a default academic discourse has created over time the impression that special effects are only about such matters as “seamless … disguise.”

Perniciously, by responding to CGI in this way, special-effects scholarship participates in the ongoing production of a larger episteme, “the digital,” along with its constitutive other, “the analog.” Although it is certainly true that the underlying technologies of special-effects design and manufacture, like those of the larger film, television, and video game industries in which such practices are embedded, have been comprehensively augmented and in many instances replaced outright by digital tools, the precise path and timing by which this occurred are nowhere near as clean or complete as the binary “analog/digital” makes them sound. In point of fact, CG effects, so often treated as proof positive of cinema’s digital makeover, not only borrowed their form from the practices and priorities of their analog ancestry, but preserve that past in a continued dependence on analog techniques that ride within their digital shell like chromosomal genetic structures. In a narrowly localized sense, digital effects may be the final product, but they emerge from, and feed in turn, complex mixtures of past and present technologies.

Our neglect of this hybridity and the counternarrative to digital succession it provides is fueled more than anything else by a refusal to engage with historical change – indeed, to engage with the very fact of history as a record of incremental and uneven development. Consider the way in which Rombes’s charge against CGI rehearses almost exactly the terms of Stephen Prince’s influential essay “True Lies: Perceptual Realism, Digital Images, and Film Theory.” “What is new and revolutionary about digital imaging,” Prince wrote, “is that it increases to an extraordinary degree a filmmaker’s control over the informational cues that establish perceptual realism. Unreal images have never before seemed so real.” (34) Prince’s claim about the “extraordinary” nature of digital effects was written in 1996 and refers to movies such as The Abyss (1989), Jurassic Park (1993), and Forrest Gump (1994), all of which featured CG effects alleged to be photorealistic to the point of undetectability. Rombes, writing in 2010, bases his claim about digital effects’ seduction of reality on the tricks in a film released in 2006. “What happens,” Rombes asks, “when we create a realism that outstrips the detail of reality itself, when we achieve and then go beyond a one-to-one correspondence with the real world?” (201) The answer, of course, is that one more special effect has been created from the technological capabilities and stylistic sensibilities of its time: capabilities and sensibilities that may appear transparent in the moment, but whose manufacture quickly becomes apparent as the imaging norm evolves. If digital effects are as subject to aging as any other sort of special effects, then concerns about the threat they pose to reality become empty alarms, destined to be viewed with amusement, if not ridicule, by future generations of film theorists.

The key to dissolving the impasse at which theories of digital visual effects find themselves lies in restoring to all special effects a temporality and interconnectedness to other layers of film and media culture. The first step lies in acknowledging that special effects are always undergoing change; the state of the art is a moving target. Laura Mulvey’s term for this process is the “clumsy sublime.” She refers to the use of process shots in classical Hollywood to rear-project footage behind actors – effects intended to pass unnoticed in their time, but which now leap out at us precisely in their clumsiness, their detectability.

The lesson we should take from this is not that some more lasting “breakthrough” in special effects waits around the corner, but that the very concept of the breakthrough is structured into state-of-the-art special effects as a lure for the imagination of spectators and theorists alike. The danger is not of realer-than-real digital effects, but our overconfidence in critically assessing objects that are predicated on misdirection and the promise of conquered frontiers – and our mistaken assumption that we as scholars see these processes more objectively or accurately than prior generations. In this sense, special-effects scholarship performs the very susceptibility of which it accuses contemporary audiences, accepting as fact the paradigm-shifting superiority of digital effects, rather than seeing that impression of superiority as itself a byproduct of special-effects discourse.

In this way, current scholarship imports a version of spectatorship from classical apparatus theory of the 1970s, along with a 70s-era conception of the extent and limit of the standard feature film text. Both are holdovers of an earlier period of theorizing the film text and its impact on the viewer, and are jarringly out of date when applied to contemporary media, with their cycles of replay and convergence that break texts apart and combine them in new ways, as well as to the audience, which navigates these swarming texts according to its own interests, its own “philias.” The use of obsolete models to describe special effects is all the more ironic for the appeals such models make to a transcendent “new.” The notion that the digital, as emblematized by CGI, represents a qualitative redrafting of cinema’s indexical contract with audiences, holds up only under the most restrictive possible picture of spectatorship: it imagines special effects as taking place in a singular, timeless instant of encounter with a viewer who has only two options, accepting the special effect as unmediated event or rejecting it as artifice. That special-effects theory from André Bazin and Christian Metz onward has allowed for bifurcated consciousness on the part of the viewer is, in the era of CGI, set aside for accounts of special effects that force them into a real/unreal binary. The digital effect and its implied spectator are trapped in a synchronic isolation from which it is impossible to imagine any other way to conceptualize the work of special effects outside the moment of their projection. Even accounts of special effects’ semiosis that foreground their composite nature (Dan North) or their role in the genres of science fiction (Vivian Sobchack), the action blockbuster (Geoff King), or Aristotelian narrative (Shilo McClean) only scratch the surface of the complex objects special effects actually are.

What really changes in the clumsy sublime is not the special effect but our perception of it, an interpretation produced not through Stephen Prince’s perceptual cues, Scott Bukatman’s kinesthetic immersion in an artificial sublime, or Tom Gunning’s appeal of the attraction – though all three may indeed be factors in the first moment of seeing the effect – but by a more complex and longitudinal process involving conscious and unconscious comparisons to other, similar effects; repeated exposure to and scrutiny of special effects; behind-the-scenes breakdowns of how the effect was produced; and commentaries and reactions from fans. Within this matrix of evaluation, the visibility or invisibility – that is to say, the “quality” – of special effects is not a fixed attribute but a blackboxed output of the viewer, the viewing situation, and the special effect’s enunciatory context in a langue of filmic manipulation.

According to the standard narrative, some special effects hide, while others are meant to be seen. Wire removal and other forms of “retouching” modify in subtle ways an image that is otherwise meant to pass as an untampered recording of profilmic reality, events that actually occurred as they seem to onscreen. “Invisible” effects perform a double erasure, modifying images while keeping that modifying activity out of consciousness, like someone erasing their own footsteps with a broom as they walk through snow. So-called “visible” special effects, by contrast, are intended to be noticed as the production of exceptional technique, capitalizing on their own impossibility and our tacit knowledge that events on screen never took place in the way they appear to. The settings of future and fantasy worlds, along with objects, vehicles, and performers and their actions, are common examples of visible special effects.

This much we have long agreed on; the distinction goes back at least as far as Metz, who in “Trucage and the Film” proposed a taxonomy of special effects broad enough to include wipes, fades, and other transitions as acts of optical trickery not ordinarily considered as such. Several things complicate the visible/invisible distinction, however. Special effects work is explored in publications and in home-video extras, dissected by fans, and employed in marketing appeals. These paratextual forces, which extend beyond the frame and the moment of viewing, tend inexorably to tip all effects work eventually into the category of “visible.” But the ongoing generation of a clumsy sublime reveals a more pervasive process at work: the passage of time, which steadily works to open a gap between a special effect’s intended and actual impact. Dating is key to dislodging reductive accounts of special effects’ operations. The clumsy sublime is a succinctly profound insight into the way that film trickery can shift over time to become visible in itself as a class of techniques to be evaluated and admired, opening up discussions about special effects beyond the binary of convincing/unconvincing that has hamstrung so many conversations about them.

If today’s digital special effects can age and become obsolete – and there is no reason to think they cannot – then this undermines the idea that there is some objective measure of their quality; “better” and “worse” become purely relational terms. It also raises the prospect that the digital itself is more an idea than an actual practice: a perception we hold – or a fantasy we share – about the capabilities of cinema and related entertainments. The old distinction that held during the analog era, between practical and optical effects, constituted a kind of digital avant la lettre; practical effects, performed live before the camera, were considered “real,” while optical effects, created in post-production, were “virtual.” The coming of CGI has remapped those categories, making binaries into bedfellows by collapsing practical and optical into one primitive catchall, the “analog,” defined against its contemporary other, the “digital.” Amid such lexical slippages and epistemic revisions, current scholarship is insufficiently reflexive about apprehending the special effect. We have been too quick to get caught up in and restate the terms – Philip Rosen calls it “the rhetoric of the forecast” – by which special effects discursively promote themselves. In studying illusion, we risk contributing to another, larger set of illusions about cinematic essence.

What is revealed, then, by stepping out of our blind spot to survey special effects across the full range of their operations and lifespans? First, we see that special effects are profoundly composite in nature, marrying together elements from different times and spaces. But the full implications of this have not been examined. Individual frames are indeed composites of many separate elements, but viewed diachronically, special effects are also composited into the flow of the film – live-action intercut with special effects shots as well as special effects embedded within the frame. This dilutes our ability to quarantine special effects to particular moments; we can speak of “special-effects films” or “special-effects sequences,” but what percentage of the film or sequence consists of special effects, and in what combination? Consider how such concerns shape our labeling of a given movie as a “digital effects” film. Terminator 2: Judgment Day (1991) and The Matrix (1999) each contained only a few minutes of shots in which CG elements played a part, while the rest of their special effects were produced by old-school techniques such as animatronics and prosthetics. Yet we do not call these movies “animatronics films” or “prosthetics films.” The sliding of the signified of the film under the signifier of the digital suggests that, when it comes to special effects, we follow a technicist variation of the “one-drop rule,” where the slightest collusion of computers is an excuse to treat the whole film as a digital artifact.

What, then, is the actual “other” to indexicality posed by special effects, digital and analog alike? It is insufficient simply to label it the “nonindexical”; in slapping this equivalent of “here there be dragons” on the terra incognita at the edge of our map’s knowability, we have not answered the question but avoided it. The truth is that all special effects, even digital ones, are indexical to something; they can all, in a certain sense, be “sourced” to the real world and real historical moments. If nothing else, they are records of moments in the evolution of imaging, and because this evolution is driven not only by technology but by style, it is always changing without destination. (As Roland Barthes observes, the fashion system has no end.) Digital special effects record the expressions of historically specific configurations of software and hardware just as, in the past, analog special effects recorded physical arrangements of miniatures and paintings on glass. Nowadays, with all analog effects retroactively rendered “real” by the digital, even processes such as optical printing and traveling mattes have come to bear their own indexical authenticity, just as film grain and lens flares record specifics of optics and celluloid stock. But the indexical stamp of special effects goes deeper than their manufacture. Visible within them are design histories and influences, congealed into the object of the special effect and frozen there, but available for unpacking, comparison, fetishization, and emulation by audiences increasingly organized around the collective intelligence of fandom. Furthermore, because of the unique nature of special effects (that is, as “special” processes celebrated in themselves), materials can frequently be found which document the effect’s manufacture, and in many cases – preproduction art, maquettes, diagrams – themselves represent evolutionary stages of the special effect.

Every movie, by virtue of residing inside a rationalized industrial system, sits atop a monument of planning and paperwork. In films that are heavy on design and special effects, this paperwork takes on archival significance, becoming technical archeologies of manufacture. Our understanding of what a special effect is must begin by including these stages as part of its history – the creative and technological paths from which it emerged. We recognize that what we see on screen is only a surface trace of much larger underlying processes: the very phenomenon of making-of supposes there is always more to the (industrial) story.

Following this logic, we see that special effects, even digital ones, do not consist of merely the finished, final output on film, but a messy archive of materials: the separate elements used to film them and the design history recorded in documents such as concept art and animatics. Special effects leave paratextual trails like comets. It is only because of these trails that behind-the-scenes materials exist at all; they are what we look at when we go behind the scenes. Furthermore, we see that special effects, once “finished,” themselves become links in chains of textual and paratextual influence. It is not just that shots and scenes provide inspiration for can-you-top-this performances of newer effects, but that, in the amateur filmmaking environments of YouTube and “basementwood,” effects are copied, emulated, downgraded, upgraded, spun, and parodied – each action carrying the effect to a new location while rendering it, through replication, more pervasive in the mediascape. Special effects, like genre, cannot be copyrighted; they represent a domain of audiovisual replication that follows its own rules, both fast-moving and possessed of the film nerd/connoisseur’s long-tail memory. Special effects originate iconographies in which auras of authorship, collections of technical fact, artistic influences, teleologies of progress/obsolescence, franchise branding, and hyperdiegetic content coexist with the ostensible narrative in which the special effect is immediately framed. These additional histories blossom outward from our most celebrated and remembered special effects; in fact, it is the celebration and remembering that keeps the histories alive and developing.

All of this contributes to what Barbara Klinger has called the “textual diachronics” of a film’s afterlife: an afterlife which, given its proportional edge over the brief run of film exhibition, can more frankly be said to constitute its life. Special effects thus mark not the erasure of indexicality but a gold mine of knowledge for those who would study media evolution. Special effects carry information and behave in ways that go well beyond their enframement within individual stories, film properties, or even franchises. Special effects are remarkably complex objects in themselves: their engineering, their semiotic freight, their cultural appropriation, their media “travel,” their hyperdiegetic contribution.

What seems odd is that while one branch of media studies is shifting inexorably toward models of complexity and diffusion, travel and convergence, multiplicity and contradiction, the study of special effects still grapples with its objects as ingredients of an older conception of film: the two-hour self-contained text. What additional and unsuspected functions lurk in the “excess” so commonly attributed to prolonged displays of special effects? Within the domains of franchise, transmedia storytelling, and intertextuality, the fragmentation of texts and their subsequent recontainment within large-scale franchise operations makes it all the more imperative to find patterns of cluster and travel in the new mediascape, along with newly precise understandings of the individuals/audiences who drive the flow and give it meaning.

To say that CG effects have become coextensive with filmmaking is not to dismiss contemporary film as digital simulacrum but to embrace both “digital effects” and “film” as intricate, multilayered, describable, theorizable processes. To insist on the big-screened, passively immersed experience of special effects as their defining mode of reception is to ignore all the ways in which small screens, replays, and paratextual encounters open out this aspect of film culture, both as diegetic and technological engagement. To insist that special effects are mere denizens of the finished film frame is to ignore all the other phases in which they exist. And to associate them only with the optical regime of the cinematic apparatus (expressed through the hypnotic undecidable of real/false, analog/digital) is to ignore the ways in which they spread to and among other media.

The argument I have outlined in this essay suggests a more comprehensive way of conceptualizing special effects in the digital era, seeing them not just as enhancements of a mystificatory apparatus but as active agents in a busy, brightly-lit, fully conscious mediascape. In this positivist approach, digital effects contribute to the stabilizing and growth of massive fantastic-media franchises and the generation of new texts (indeed, of the concept of “new” itself). In all of these respects, digital special effects go beyond the moment of the screen that has been their primary focus of study, to become something more than meets the eye.

SCMS 2012: We Have Never Been Digital

March is here — in fact, it arrived three days ago, and I’m only just now noticing it like a UPS box left on my doorstep — and the Society for Cinema and Media Studies conference is only three weeks away. Depending on where the dial is set on your own personal Procrastinometer®, you will find the following sentence either (A) shockingly lax, (B) remarkably foresighted, or (C) just about right: time to start writing the paper.

It’s even more important that I compose my essay in advance, because this year my wife and son are coming with me to Boston. My days, er, nights of sitting in a hotel bathtub with a pad of legal paper, pulling together presentations at the last minute, are done. And while I would like to believe there is a certain Keith-Richards-style glamour to such decadent showboating — beneath the surface of this mild academic beats the heart of a Lizard King — I do not miss those days. Empirical testing verifies that it is much, much, much less stressful to work from a script, even a script that contains such stage directions as “MAKE JOKE HERE.”

So by way of jumpstarting my process, here is the abstract I submitted as part of a panel on “Archaeologies of the Future: Popular Cinema and Film History in the Age of Digital Technologies,” organized and chaired by my former IU colleague Jason Sperb (whose highly recommended blog can be found here).

We Have Never Been Digital: CGI and the New “Clumsy Sublime”

Digital visual effects have been hailed as a breakthrough in the engineering of screen illusion, generating new forms of filmic phenomenology and spectatorial engagement while fueling a crisis discourse in which the very indexical foundations of the medium are said to be dissolving into their uncanny, computer-generated replacement. Both as an assessment of current aesthetic trends and the larger narrative of technological and stylistic change in which they are embedded, such accounts fall prey to the historical amnesia implied by the term “state of the art” – accepting, as a kind of discursive special effect, the alleged superiority and perfection of digital imaging while neglecting the way in which all special effects age and become obsolete (which is to say, visible precisely as compromised attempts at simulation). Exploring the temporality of special effects, this essay presents a brake on, and a counternarrative to, the emerging consensus of alterity dividing digital and analog eras of special effects, by drawing on Laura Mulvey’s concept of the “clumsy sublime,” which suggests that the passing of time lends classical Hollywood special-effects methods such as rear projection their own particular charisma as ambitious but failed visual machinations. Scrutinizing key “breakthrough” moments in the recent evolution of digital visual effects films and the critical discourses that both celebrate and condemn them as decisive breaks with a flawed analog past, I argue that today’s special effects are as susceptible to dating as those of the past – that, in fact, we are always witnessing the production of a future generation’s clumsy sublime.

Borrowing its title from a subheading in my lengthy post on Tron: Legacy, this project is intended as a polemic and antidote to a cinema studies that too often accepts as a transparent given the idea that digital image creation, and the larger colonization of film production, distribution, and exhibition by digital technologies, marks the arrival of perfect photorealistic simulation and undetectable manipulation on the one hand, and the extinction of the index on the other. Digital special effects are a linchpin of arguments for a fundamental shift in the ontology and phenomenology of cinema, hence a menacing metonym for an epochal, irreversible transit across a historical dividing line between analog and digital. It’s much like the singularity, a supposed event horizon we can’t see past. Yet we continue to fantasize ourselves on the other side of the terminator, describing what-will-never-arrive in the verb tense of it-already-happened.

Much like the month of March.

Tron: Legacy

This review is dedicated to my friends David Surman and Will Brooker.

Part One: We Have Never Been Digital


If Avatar was in fact the “gamechanger” its proselytizers claimed, then it’s fitting that the first film to surpass it is itself about games, gamers, and gaming. Arriving in theaters nearly a year to the day after Cameron’s florid epic, Tron: Legacy delivers on the promise of an expanded blockbuster cinema while paradoxically returning it to its origins.

Those origins, of course, date back to 1982, when the first Tron — brainchild of Steven Lisberger, who more and more appears to be the Harper Lee of pop SF, responsible for a single inspired act of creation whose continued cultural resonance probably doomed any hope of a career — showed us what CGI was really about. I refer not to the actual computer-generated content in that film, whose 96-minute running time contains only 15-20 minutes of CG animation (the majority of the footage was achieved through live-action plates shot in high contrast, heavily rotoscoped, and backlit to insert glowing circuit paths into the environment and costumes), but instead to the discursive aura of the digital frontier it emits: another sexy, if equally illusory, glow. Tron was the first narrative feature film to serve up “the digital” as a governing design aesthetic as well as a marketing gimmick. Sold as a high-tech entertainment event, the film was accepted by audiences as precisely that: a time capsule from the future, coming attraction as main event. Tron taught us, in short, to mistake a hodgepodge of experiment and tradition for a more sweeping change in cinematic ontology, a spell we remain under to this day.

But the state of the art has always been a makeshift pact between industry and audience, a happy trance of “I know, but even so …” For all that it hinges on a powerful impression of newness, the self-applied declaration of vanguard status is, ironically, old hat in filmmaking, especially when it comes to the periodic eruptions of epic spectacle that punctuate cinema’s more-of-the-same equilibrium. The mutations of style and technology that mark film’s evolutionary leaps are impossible to miss, given how insistently they are promoted: go to YouTube and look at any given Cecil B. DeMille trailer if you don’t believe me. “Like nothing you’ve ever seen!” may be an irresistible hook (at least to advertisers), but it’s rarely true, if only because trailers, commercials, and other advance paratexts ensure we’ve looked at, or at least heard about, the breakthrough long before we purchase our tickets.

In the case of past breakthroughs, the situation becomes even more vexed. What do you do with a film like Tron, which certainly was cutting-edge at the time of its release, but which, over the intervening twenty-eight years, has taken on an altogether different veneer? I was 16 when I first saw it, and have frequently shown its most famous setpiece — the lightcycle chase — in courses I teach on animation and videogames. As a teenager, I found the film dreadfully inert and obvious, and rewatching it to prepare for Tron: Legacy, I braced myself for a similarly graceless experience. What I found instead was that a magical transformation had occurred. Sure, the storytelling was as clumsy as before, with exposition that somehow managed to be both overwritten and underexplained, and performances that were probably half-decent before an editor diced them into novocained amateurism. The visuals, however, had aged into something rather beautiful.

Not the CG scenes — I’d looked at those often enough to stay in touch with their primitive retrogame charm. I’m referring to the live-action scenes, or rather, the suturing of live action and animation that stands in for computer space whenever the camera moves close enough to resolve human features. In these shots, the faces of Flynn (Jeff Bridges), Tron (Bruce Boxleitner), Sark (David Warner), and the film’s other digital denizens are ovals of flickering black-and-white grain, their moving lips and darting eyes hauntingly human amid the neon cartoonage.

Peering through their windows of backlit animation, Tron’s closeups resemble those in Dreyer’s Passion of Joan of Arc — inspiration for early film theorist Béla Balázs’s lyrical musings on “The Face of Man” — but are closer in spirit to the winking magicians of Georges Méliès’s trick films, embedded in their phantasmagoria of painted backdrops, double exposures, and superimpositions. Like Lisberger, who would intercut shots of human-scaled action with tanks, lightcycles, and staple-shaped “Recognizers,” Méliès alternated his stagebound performers with vistas of pure artifice, such as animated artwork of trains leaving their tracks to shoot into space. Although Tom Gunning argues convincingly that the early cinema of attractions operated by a distinctive logic in which audiences sought not the closed verisimilar storyworlds of classical Hollywood but the heightened, knowing presentation of magical illusions, narrative frameworks are the sauce that makes the taste of spectacle come alive. Our most successful special effects have always been the ones that — in an act of bistable perception — do double duty as story.

In 1982, the buzzed-about newcomer in our fantasy neighborhoods was CGI, and at least one film that year — Star Trek II: The Wrath of Khan — featured a couple of minutes of computer animation that worked precisely because they were set off from the rest of the movie, as a special documentary interlude. Other genre entries in that banner year for SF, like John Carpenter’s remake of The Thing and Steven Spielberg’s one-two punch of E.T. and Poltergeist (the latter as producer and crypto-director), were content to push the limits of traditional effects methods: matte paintings, creature animatronics, gross-out makeup, even a touch of stop-motion animation. Blade Runner’s effects were so masterfully smoggy that we didn’t know what to make of them — or of the movie, for that matter — but we seemed to agree that they too were old school, no matter how many microprocessors may have played their own crypto-role in the production.

“Old school,” however, is another deceptively relative term, and back then we still thought of special effects as dividing neatly into categories of the practical/profilmic (which really took place in front of the camera) and optical/postproduction (which were inserted later through various forms of manipulation). That all special effects — and all cinematic “truths” — are at heart manipulation was largely ignored; even further from consciousness was the notion that soon we would redefine every “predigital” effect, optical or otherwise, as possessing an indexical authenticity that digital effects, well, don’t. (When, in 1998, George Lucas replaced some of the special-effects shots in his original Star Wars trilogy with CG do-overs, the outrage of many fans suggested that even the “fakest” products of ’70s-era filmmaking had become, like the Velveteen Rabbit, cherished realities over time.)

Tron was our first real inkling that a “new school” was around the corner — a school whose presence and implications became more visible with every much-publicized advance in digital imaging. Ron Cobb’s pristine spaceships in The Last Starfighter (1984); the stained-glass knight in Young Sherlock Holmes (1985); the watery pseudopod in The Abyss (1989); each in its own way raised the bar, until one day — somewhere around the time of Independence Day (1996), according to Michele Pierson — it simply stopped mattering whether a given special effect was digital or analog. In the same way that slang catches on, everything overnight became “CGI.” That newcomer to the neighborhood, the one who had people peering nervously through their drapes at the moving truck, had moved in and changed the suburb completely. Special-effects cinema now operated under a technological form of the one-drop rule: all it took was a dab of CGI to turn the whole thing into a “digital effects movie.” (Certain film scholars regularly use this term to refer to both Titanic [1997] and The Matrix [1999], neither of which employs more than a handful of digitally-assisted shots — many of these involving intricate handoffs from practical miniatures or composited live-action elements.)

Inscribed in each frame of Tron is the idea, if not the actual presence, of the digital; it was the first full-length rehearsal of a special-effects story we’ve been telling ourselves ever since. Viewed today, what stands out about the first film is what an antique and human artifact — an analog artifact — it truly is. The arrival of Tron: Legacy, simultaneously a sequel, update, and reimagining of the original, gives us a chance to engage again with that long-ago state of the art; to appreciate the treadmill evolution of blockbuster cinema, so devoted to change yet so fixed in its aims; and to experience a fresh and vastly more potent vision of what’s around the corner. The unique lure (and trap) of our sophisticated cinematic engines is that they never quite turn that corner, never do more than freeze for an instant, in the guise of its realization, a fantasy of film’s future. In this sense — to rephrase Bruno Latour — we have never been digital.

Part Two: 2,415 Times Smarter


In getting a hold on what Tron: Legacy (hereafter T:L) both is and isn’t, I find myself thinking about a line from its predecessor. Ed Dillinger (David Warner), figurative and literal avatar of the evil corporation Encom, sits in his office — all silver slabs and glass surfaces overlooking the city’s nighttime gridglow, in the cleverest and most sustained of the thematic conceits that run throughout both films: the paralleling, to the point of indistinguishability, of our “real” architectural spaces and the electronic world inside the computer. (Two years ahead of Neuromancer and a full decade before Snow Crash, Tron invented cyberspace.)

Typing on a desk-sized touchscreen keyboard that neatly predates the iPad, Dillinger confers with the Master Control Program or MCP, a growling monitorial application devoted to locking down misbehavior in the electronic world as it extends its own reach ever outward. (The notion of fascist algorithm, policing internal imperfection while growing like a malignancy, is remapped in T:L onto CLU — another once-humble program omnivorously metastasized.) MCP complains that its plans to infiltrate the Pentagon and General Motors will be endangered by the presence of a new and independent security watchdog program, Tron. “This is what I get for using humans,” grumbles MCP, which in terms of human psychology we might well rename OCD with a touch of NPD. “Now wait a minute,” Dillinger counters, “I wrote you.” MCP replies coldly, “I’ve gotten 2,415 times smarter since then.”

The notion that software — synecdoche for the larger bugaboo of technology “itself” — could become smarter on its own, exceeding human intelligence and transcending the petty imperatives of organic morality, is of course the battery that powers any number of science-fiction doomsday scenarios. Over the years, fictionalizations of the emergent cybernetic predator have evolved from single mainframe computers (Colossus: The Forbin Project [1970], WarGames [1983]) to networks and metal monsters (Skynet and its time-traveling assassins in the Terminator franchise) to graphic simulations that run on our own neural wetware, seducing us through our senses (the Matrix series [1999-2003]). The electronic world established in Tron mixes elements of all three stages, adding an element of alternative storybook reality a la Oz, Neverland … or Disneyworld.

Out here in the real world, however, what runs beneath these visions of mechanical apocalypse is something closer to the Technological Singularity warned of by Ray Kurzweil and Vernor Vinge, as our movie-making machinery — in particular, the special-effects industry — approaches a point where its powers of simulation merge with its custom-designed, mass-produced dreams and nightmares. That is to say: our technologies of visualization may incubate the very futures we fear, so intimately tied to the futures we desire that it’s impossible to sort one from the other, much less to dictate which outcome we will eventually achieve.

In terms of its graphical sophistication as well as the extended forms of cultural and economic control that have come to constitute a well-engineered blockbuster, Tron: Legacy is at least 2,415 times “smarter” than its 1982 parent, and whatever else we may think of it — whatever interpretive tricks we use to reduce it to and contain it as “just a movie” — it should not escape our attention that the human/machine fusion and runaway AI at play in its narrative are surface manifestations of much more vast and far-reaching transformations: a deep structure of technological evolution whose implications only start with the idea that celluloid art has been taken over by digital spectacle.

The lightning rod for much of the anxiety over the replacement of one medium by another, the myth of film’s imminent extinction, is the synthespian or photorealistic virtual actor, which, following the logic of the preceding paragraphs, is one of Tron: Legacy’s chief selling points. Its star, Jeff Bridges, plays two roles — the first as Flynn, onetime hotshot hacker, and the second as CLU, his creation and nemesis in the electronic world. Originally doppelgangers, the two have since diverged: Flynn has aged while CLU remains unchanged, the spitting image of Flynn/Bridges circa 1982.

Except that this image doesn’t really “spit.” It stares, simmers, and smirks; occasionally shouts; knocks things off tables; and does some mean acrobatic stunts. But CLU’s fascinating weirdness is just as evident in stillness as in motion (see the top of this post), for it’s clearly not Jeff Bridges we’re looking at, but a creepy near-miss. Let’s pause for a moment on this question: why a miss at all? Why couldn’t the filmmakers have conjured up a closer approximation, erasing the line between actor and digital double? Nearly ten years after Final Fantasy: The Spirits Within, it seems that CGI should have come farther. After all, the makers of T:L weren’t bound by the aesthetic obstructions that Robert Zemeckis imposed on his recent films, a string of CG waxworks (The Polar Express [2004], Beowulf [2007], A Christmas Carol [2009], and soon — shudder — a Yellow Submarine remake) in which the inescapable wrongness of the motion-captured performances is evidently a conscious embrace of stylization rather than a failed attempt at organic verisimilitude. And if CLU were really intended to convince us, he could have been achieved through the traditional repertoire of doubling effects: split-frame mattes, body doubles in clever shot-reverse-shot arrangements, or the combination of these with motion-control cinematography as in the masterful composites of Back to the Future 2, which, made in 1989, came only seven years after the first Tron.

The answer to the apparent conundrum is this: CLU is supposed to look that way; we are supposed to notice the difference, because the effect wouldn’t be special if we didn’t. The thesis of Dan North’s excellent book Performing Illusions is that no special effect is ever perfect — we can always spot the joins, and the excitement of effects lies in their ceaseless toying with our faculties of suspicion and detection, the interpretation of high-tech dreams. Updating the argument for synthespian performances like CLU’s, we might profitably dispose of the notion that the Uncanny Valley is something to be crossed. Instead, smart special effects set up residence smack-dab in the middle.

Consider by analogy the use of Botox. Is the point of such cosmetic procedures to absolutely disguise the signs of age? Or are they meant to remain forever fractionally detectable as multivalent signifiers — of privilege and wealth, of confident consumption, of caring enough about flaws in appearance to (pretend to) hide them? Here too is evidence of Tron: Legacy’s amplified intelligence, or at least its subtle cleverness: dangling before us a CLU that doesn’t quite pass the visual Turing Test, it simultaneously sells us the diegetically crucial idea of a computer program in the shape of a human (which, in fact, it is) and in its apparent failure lulls us into overconfident susceptibility to the film’s larger tapestry of tricks. 2,415 times smarter indeed!

Part Three: The Sea of Simulation


Doubles, of course, have always abounded in the works that constitute the Tron franchise. In the first film, both protagonist (Flynn/Tron) and antagonist (Sark/MCP) exist as pairs, and are duplicated yet again in the diegetic dualism of real world/electronic world. (Interestingly, only MCP seems to lack a human manifestation — though it could be argued that Encom itself fulfills that function, since corporations are legally recognized as people.) And the hall of mirrors keeps on going. Along the axis of time, Tron and Tron: Legacy are like reflections of each other in their structural symmetry. Along the axis of media, Jeff Bridges dominates the winter movie season with performances in both T:L and True Grit, a kind of intertextual cloning. (The Dude doesn’t just abide — he multiplies.)

Amid this rapture of echoes, what matters originality? The critical disdain for Tron: Legacy seems to hinge on three accusations: its incoherent storytelling; its dependence on special effects; and the fact that it’s largely a retread of Tron ’82. I’ll deal with the first two claims below, but on the third count, T:L must surely plead “not guilty by reason of nostalgia.” The Tron ur-text is a tale about entering a world that exists alongside and within our own — indeed, that subtends and structures our reality. Less a narrative of exploration than of introspection, its metaphysics spiral inward to feed off themselves. Given these ouroboros-like dynamics, the sequel inevitably repeats the pattern laid down in the first, carrying viewers back to another embedded experience — that of encountering the first Tron — and inviting us to contrast the two, just as we enjoy comparing Flynn and CLU.

But what about those who, for reasons of age or taste, never saw the first Tron? Certainly Disney made no effort to share the original with us; their decision not to put out a Blu-ray version, or even rerelease the handsome two-disc 20th anniversary DVD, has led to conspiratorial muttering in the blogosphere about the studio’s coverup of an outdated original, whose visual effects now read as ridiculously primitive. Perhaps this is so. But then again, Disney has fine-tuned the business of selectively withholding their archive, creating rarity and hence demand for even their flimsiest products. It wouldn’t at all surprise me if the strategy of “disappearing” Tron pre-Tron: Legacy were in fact an inspired marketing move, one aimed less at monetary profit than at building discursive capital. What, after all, do fans, cineastes, academics, and other guardians of taste enjoy more than a privileged “I’ve seen it and you haven’t” relationship to a treasured text? Comic-Con has become the modern agora, where the value of geek entertainment items is set for the masses, and carefully coordinated buzz transmutes subcultural fetish into pop-culture hit.

It’s maddeningly circular, I know, to insist that it takes an appreciation of Tron to appreciate Tron: Legacy. But maybe the apparent tautology resolves if we substitute terms of evaluation that don’t have to do with blockbuster cinema. Does it take appreciation of Ozu (or Tarkovsky or Haneke or [insert name here]) to appreciate other films by the same director? Tron: Legacy is not in any classical sense an auteurist work — I couldn’t tell you who directed it without checking IMDb — but who says the brand itself can’t function as an auteur, in the sense that a sensitive reading of it depends on familiarity with tics and tropes specific to the larger body of work? Alternatively, we might think of Tron as sub-brand of a larger industrial genre, the blockbuster, whose outward accessibility belies the increasingly bizarre contours of its experience. With its diffuse boundaries (where does a blockbuster begin and end? — surely not within the running time of a single feature-length movie) and baroque textual patterns (from the convoluted commitments of transmedia continuity to rapidfire editing and slangy shorthands of action pacing), the contemporary blockbuster possesses its own exotic aesthetic, one requiring its own protocols of interpretation, its own kind of training, to properly engage. High concept does not necessarily mean non-complex.

Certainly, watching Tron: Legacy, I realized it must look like visual-effects salad to an eye untrained in sensory overwhelm. I don’t claim to enjoy everything made this way: Speed Racer made me queasy, and Revenge of the Fallen put me into an even deeper sleep than did the first Transformers. T:L, however, is much calmer in its way, perhaps because its governing look — blue, silver, and orange neon against black — keeps the frame-cramming to a minimum. (The post-1983 George Lucas committed no greater sin than deciding to pack every square inch of screen with nattering detail.) Here the sequel’s emulation of Tron‘s graphics is an accidental boon: limited memory and storage led in the original to a reliance on black to fill in screen space, a restriction reinvented in T:L as strikingly distinctive design. Our mad blockbusters may indeed be getting harder to watch and follow. But perhaps we shouldn’t see this as proof of commercially-driven intellectual bankruptcy and inept execution, but as the emergence of a new — and in its way, wonderfully difficult and challenging — mode of popular art.

T:L works for me as a movie not because its screenplay is particularly clever or original, but because it smoothly superimposes two different orders of technological performance. The first layer, contained within the film text, is the synthesis of live action and computer animation that in its intricate layering succeeds in creating a genuinely alternate reality: action-adventure seen through the kino-eye. Avatar attempted this as well, but compared to T:L, Cameron’s fantasia strikes me as disingenuous in its simulationist strategy. The lush green jungles of Pandora and glittering blue skin of the Na’vi are the most organic of surfaces in which CGI could cloak itself: a rendering challenge to be sure, but as deceptively sentimental in its way as a Thomas Kinkade painting. Avatar is the digital performing in “greenface,” sneakily dissembling about its technological core. Tron: Legacy, by contrast, takes as its representational mission simulation itself. Its tapestry of visual effects is thematically and ontologically coterminous with the world of its narrative; it is, for us and for its characters, a sea of simulation.

Many critics have missed this point, insisting that the electronic world the film portrays should have reflected the networked environment of the modern internet. But what T:L enshrines is not cyberspace as the shared social web it has lately become, but the solipsistic arena of first-person combat as we knew it in videogames of the late 1970s. As its plotting makes clear, T:L is at heart about the arcade: an ethos of rastered pyrotechnics and three-lives-for-a-quarter. The adrenaline of its faster scenes and the trances of its slower moments (many of them cued by the silver-haired Flynn’s zen koans) perfectly capture the affective dialectics of cabinet contests like Tempest or Missile Command: at once blazing with fever and stoned on flow.

The second technological performance superimposed on Tron: Legacy is, of course, the exhibition apparatus of IMAX and 3D, inscribed in the film’s planning and execution even for those who catch the print in lesser formats. In this sense, too, T:L advances the milestone planted by Avatar, beacon of an emerging mode of megafilm engineering. It seems that every year will see one such standout instance of expanded blockbuster cinema — an event built in equal parts from visual effects and pop-culture archetypes, impossible to predict but plain in retrospect. I like to imagine that these exemplars will tend to appear not in the summer season but at year’s end, as part of our annual rituals of rest and renewal: the passing of the old, the welcoming of the new. Tron: Legacy manages to be about both temporal polarities, the past and the future, at once. That it weaves such a sublime pattern on the loom of razzle-dazzle science fiction is a funny and remarkable thing.


To those who have read to the end of this essay, it’s probably clear that I dug Tron: Legacy, but it may be less clear — in the sense of “twelve words or less” — exactly why. I confess I’m not sure myself; that’s what I’ve tried to work out by writing this. I suppose in summary I would boil it down to this: watching T:L, I felt transported in a way that’s become increasingly rare as I grow older, and the list of movies I’ve seen and re-seen grows ever longer. Once upon a time, this act of transport happened automatically, without my even trying; I stumbled into the rabbit-holes of film fantasy with the ease of … well, I’ll let Laurie Anderson have the final words.

I wanted you. And I was looking for you.
But I couldn’t find you.
I wanted you. And I was looking for you all day.
But I couldn’t find you. I couldn’t find you.

You’re walking. And you don’t always realize it,
but you’re always falling.
With each step you fall forward slightly.
And then catch yourself from falling.
Over and over, you’re falling.
And then catching yourself from falling.
And this is how you can be walking and falling
at the same time.

Digital Dogsbodies

It’s funny — and perhaps, in the contagious-episteme fashion of Elisha Gray and Alexander Graham Bell filing patents for the telephone on the very same date, a bit creepy — that Dan North of Spectacular Attractions should write on the topic of dog movies while I was preparing my own post about Space Buddies. This Disney film, which pornlike skipped theaters to go straight to DVD and Blu-Ray, is one of a spate of dog-centered films that have become a crowdpleasing filmic staple of late. Dan poses the question, “What is it about today that people need so many dog movies?” and goes on to speculate that we’re collectively drowning our sorrows at the world’s ugliness with a massive infusion of cute: puppyism as cultural anodyne.

Maybe so. It seems to me, though, that another dynamic is in operation here — and with all due respect to my fellow scholar of visual effects, Dan may be letting the grumbly echoes of the Frankfurt School distract him from a fascinating nexus of technology, economics, and codes of expressive aesthetics driving the current crop of cinematic canines. Simply put, dogs make excellent cyberstars.

Think about it. Nowadays we’re used to high-profile turns by hybrids of human and digital performance: Angelina Jolie as Grendel’s goldplated mother in Beowulf, Brad Pitt as the wizened baby in The Curious Case of Benjamin Button. (Hmm, it only now strikes me that this intertextual madonna-and-child are married in real life; perhaps the nuclear family is giving way to the mo-capped one?) Such top-billed performances are based on elaborate rendering pipelines, to be sure, but their celebrity/notoriety is at least as much about the uniquely sexy and identifiable star personae attached to these magic mannequins: a higher order of compositing, a discursive special effect. It takes a ton of processing power to paint the sutured stars onto the screen, and an equivalent amount of marketing and promotion — those other, Foucauldian technologies — to situate them as a specific case of the more general Steady March Toward Viable Synthespianism. Which means, in terms of labor and capital, they’re bloody expensive. Mountains of data are moved in service of the smallest details of realism, and even then, nobody can get the eyes right.

But what of the humble cur and the scaled-down VFX needed to sell its blended performance? The five puppy stars of Space Buddies are real, indexically photographed dogs with digitally-retouched jaw movements and eyebrow expressions; child voice actors supply the final, intangible, irreplaceable proof of character and personality. (To hell with subsurface skin scatter and other appeals to our pathetically seducible eyes; the real threshold of completely virtual performance remains believable speech synthesis.) The canine cast of Beverly Hills Chihuahua, while built on similar principles, are ontologically closer to the army of Agent Smiths in The Matrix Reloaded’s burly brawl — U-Capped fur wrapped over 3D doll armatures and arrayed in Busby-Berkeleyish mass ornament. They are, in short, digital dogsbodies, and as we wring our hands over the resurrection of Fred Astaire in vacuum-cleaner ads and debate whether Ben Burtt’s sound design in Wall-E adds up to a best-actor Oscar, our screens are slowly filling with animals’ special-effects-driven stardom. How strange that we’re not treating them as the landmarks they are — despite their immense profitability, popularity, and paradoxical commonplaceness. It’s like Invasion of the Body Snatchers, only cuddly!

I don’t mean to sound alarmist — though writing about the digital’s supposed incursion into the real always seems to bring out the edge in my voice. In truth, the whole thing seems rather wonderful to me, not just because I really dug Space Buddies, but because the dogsbody has been around a long time, selling audiences on the dramatic “realism” of talking animals. From Pluto to Astro, Scooby Doo to Rowlf, Lady and the Tramp to K-9, Dynomutt, and Gromit, dogs have always been animated beyond their biological station by technologies of the screen; we accept them as narrative players far more easily than we do more elaborate and singular constructions of the monstrous and exotic. The latest digital tools for imparting expression to dogs’ mouths and muzzles were developed, of all places, in pet-food ads: clumsy stepping stones that now look as dated as poor LBJ’s posthumous lipsynching in Forrest Gump.

These days it’s the rare dog (or cat, bear, or fish) onscreen whose face hasn’t been partially augmented with virtual prosthetics. Ultimately, this is less about technological capability than the legal and monetary bottom line: unlike human actors, animal actors can’t go ballistic on the lighting guy, or write cumbersome provisions into their contracts to copyright their “aura” in the age of mechanical reproduction. Our showbiz beasts exist near the bottom of the labor pool: just below that other mass of bodies slowly being fed into the meat-grinder of digitization, stuntpeople, and just above the nameless hordes of Orcs jam-packing the horizon shots of Lord of the Rings. I think it was Jean Baudrillard, in The System of Objects, who observed that pets hold a unique status, poised perfectly between people and things. It’s a quality they happen to share with FX bodies, and for this reason I expect we’ll see menageries in the multiplex for years to come.


I look at Blade Runner as the last analog science-fiction movie made, because we didn’t have all the advantages that people have now. And I’m glad we didn’t, because there’s nothing artificial about it. There’s no computer-generated images in the film.

— David L. Snyder, Art Director

Any movie that gets a “Five-Disc Ultimate Collector’s Edition” deserves serious attention, even in the midst of a busy semester, and there are few films more integral to the genre of science fiction or the craft of visual effects than Blade Runner. (Ordinarily I’d follow the stylistic rules about which I browbeat my Intro to Film students and follow this title with the year of release, 1982. But one of the many confounding and wonderful things about Blade Runner is the way in which it resists confinement to any one historical moment. By this I refer not only to its carefully designed and brilliantly realized vision of Los Angeles in 2019 [now a mere 11 years away!] but the many-versioned indeterminacy of its status as an industrial artifact, one that has been revamped, recut, and released many times throughout the two and a half decades of its cultural existence. Blade Runner in its revisions has almost dissolved the boundaries separating preproduction, production, and postproduction — the three stages of the traditional cinematic lifecycle — to become that rarest of filmic objects, the always-being-made. The only thing, in fact, that keeps Blade Runner from sliding into the same sad abyss as the first Star Wars [an object so scribbled-over with tweaks and touch-ups that it has almost unraveled the alchemy by which it initially transmuted an archive of tin-plated pop-culture precursors into a golden original] is the auteur-god at the center of its cosmology of texts: unlike George Lucas, Ridley Scott seems willing to use words like “final” and “definitive” — charged terms in their implicit contract to stop futzing around with a collectively cherished memory.)

I grabbed the DVDs from Swarthmore’s library last week to prep a guest lecture for a seminar a friend of mine is teaching in the English Department, and in the course of plowing through the three-and-a-half-hour production documentary “Dangerous Days” came across the quote from David L. Snyder that opens this post. What a remarkable statement — all the more amazing for how quickly and easily it goes by. If there is a conceptual digestive system for ideas as they circulate through time and our ideological networks, surely this is evidence of a successfully broken-down and assimilated “truth,” one which we’ve masticated and incorporated into our perception of film without ever realizing what an odd mouthful it makes. There’s nothing artificial about it, says David Snyder. Is he referring to the live-action performances of Harrison Ford, Rutger Hauer, and Sean Young? The “retrofitted” backlot of LA 2019, packed with costumed extras and drenched in practical environmental effects from smoke machines and water sprinklers? The cars futurized according to the extrapolative artwork of Syd Mead?

No: Snyder is talking about visual effects — the virtuoso work of a small army headed by Douglas Trumbull and Richard Yuricich — a suite of shots peppered throughout the film that map the hellish, vertiginous altitudes above the drippy neon streets of Lawrence G. Paull’s production design. Snyder refers, in other words, to shots produced exclusively through falsification: miniature vehicles, kitbashed cityscapes, and painted mattes, each piece captured in multiple “passes” and composited into frames that present themselves to the eye as unified gestalts but are in fact flattened collages, mosaics of elements captured in radically different scales, spaces, and times but made to coexist through the layerings of the optical printer: an elaborate decoupage deceptively passing itself off as immediate, indexical reality.

I get what Snyder is saying. There is something natural and real about the visual effects in Blade Runner; watching them, you feel the weight and substance of the models and lighting rigs, can almost smell the smoky haze being pumped around the light sources to create those gorgeous haloes, a signature of Trumbull’s FX work matched only by his extravagant ballet of ice-cream-cone UFOs amid boiling cloudscapes and miniature mountains in Close Encounters of the Third Kind. But what no one points out is that all of these visual effects — predigital visual effects — were once considered artificial. We used to think of them as tricks, hoodwinks, illusions. Only now that the digital revolution has come and gone, turning everything into weightless, effortless CG, do we retroactively assign the fakery of the past a glorious authenticity.

Or so the story goes. As I suggest above, and have argued elsewhere, the difference between “artificial” and “actual” in filmmaking is as much a matter of ideology as industrial method; perceptions of the medium are slippery and always open to contestation. Special and visual effects have always functioned as a kind of reality pump, investing the “nonspecial” scenes and sequences around them with an air of indexical reliability which is, itself, perhaps the most profound “effect.” With vanishingly few exceptions, actors speak lines written for them; stories are stitched into seamless continuity from fragments of film shot out of order; and, inescapably, a camera is there to record what’s happening, yet never reveals its own existence. Cinema is, prior to everything else, an artifact, and special effects function discursively to misdirect our attention onto more obvious classes of manipulation.

Now the computer has arrived as the new trick in town, enabling us to rebrand everything that came before as “real.” It’s an understandable turn of mind, but one that scholars and critics ought to navigate carefully. (Case in point: Snyder speaks as though computers didn’t exist at the time of Blade Runner. Yet it is only through the airtight registration made possible by motion-control cinematography, dependent on microprocessors for precision and memory storage for repeatability, that the film’s beautiful miniatures blend so smoothly with their surroundings.) It is possible, and worthwhile, to immerse ourselves in the virtual facade of ideology’s trompe-l’oeil — a higher order of special effect — while occasionally stepping back to acknowledge the brush strokes, the slightly imperfect matte lines that seam the composited elements of our thought.

Speed Indeed

The trailer for Speed Racer has been available for a little under a week, and word of it is spreading through social channels almost as quickly as through the manifold viral vectors of information space. (The world of organic embodied communications can only stand back and shake its head in wonder at its fleet digital progeny. YouTube’s version is here; I recommend viewing it in higher quality through the official website.) I’ve watched the trailer several times myself, in increasing fascination; students and colleagues have emailed me links to it; I even overheard two students discussing it excitedly, as though it were the movie itself: It’s already out? Cool! Whatever the merits of the work-in-progress the trailer is advertising, it has certainly achieved its intended purpose, acting not so much as a preview as a demo of the full-length version that will hit theaters in May 2008. It captures the movie in miniature, scales it down to an iPod-sized burst of visual attractions and narrative beats.

I admit to being suckered (or sucker-punched) by the look of Speed Racer, a hyperreal funhouse crafted from neon candy and shot in an infinitely deep focus that would make Gregg Toland or James Wong Howe weep for joy. I guess it’s not surprising that Larry and Andy Wachowski, following up the silvery-green slickness of their Matrix trilogy, have prepared another film whose brand identity depends largely upon its visual texture: an internally consistent cinematic VR — a graphic engine in the truest sense — in which cinematography, visual effects, and mise-en-scène have flowed into each other like gooey fudge.

Actually, add editing to that mix, for the Speed Racer trailer is the first I can think of to offer a scene transition as a visual hook. The image at the top of this article shows the endpoint of a camera move: tracking around protagonist Speed (Emile Hirsch), the background blurs into a rainbow ribbon, and Hirsch’s shoulder “wipes” the next shot into existence. The moment features prominently in the trailer and in stills grabbed from it (like the one I found by Googling), yet it seems to be neither a turning point in the narrative nor a revelation of character nor a generic marker. Instead, it showcases a new “verb” in film grammar, signaling that Speed Racer will not simply tell a great story, but will tell it using an entirely new set of rules.

Yeah, right. We’ve all heard this before; cinema probably started making promises it couldn’t keep on December 29, 1895, the day after the first public screening of a motion picture. But unlike the Lumière Brothers — who called cinema “an invention without a future” — the Wachowskis have set themselves the task of forging cinema’s next epoch. Whether they can do it with Speed Racer remains to be seen. On the surface, it’s a giddy experiment in mapping anime style into live action, though I suspect the production has stretched the concept of digital animation so far that any ontological divide between it and live action has long since ceased to matter. It may end up no more successful than Ang Lee’s Hulk (2003), which also toyed with a new kind of transition, in that case a pattern of orthogonal wipes based on comic-book panels. Lee’s experiment didn’t do much to pep up that dismal movie, but something tells me that Speed Racer will fare better. Here’s hoping.