Tron: Legacy

This review is dedicated to my friends David Surman and Will Brooker.

Part One: We Have Never Been Digital


If Avatar was in fact the “gamechanger” its proselytizers claimed, then it’s fitting that the first film to surpass it is itself about games, gamers, and gaming. Arriving in theaters nearly a year to the day after Cameron’s florid epic, Tron: Legacy delivers on the promise of an expanded blockbuster cinema while paradoxically returning it to its origins.

Those origins, of course, date back to 1982, when the first Tron — brainchild of Steven Lisberger, who more and more appears to be the Harper Lee of pop SF, responsible for a single inspired act of creation whose continued cultural resonance probably doomed any hope of a career — showed us what CGI was really about. I refer not to the actual computer-generated content in that film, whose 96-minute running time contains only 15-20 minutes of CG animation (the majority of the footage was achieved through live-action plates shot in high contrast, heavily rotoscoped, and backlit to insert glowing circuit paths into the environment and costumes), but instead to the discursive aura of the digital frontier it emits: another sexy, if equally illusory, glow. Tron was the first narrative feature film to serve up “the digital” as a governing design aesthetic as well as a marketing gimmick. Audiences, sold on the film as a high-tech entertainment event, accepted Lisberger’s folly as precisely that: a time capsule from the future, coming attraction as main event. Tron taught us, in short, to mistake a hodgepodge of experiment and tradition for a more sweeping change in cinematic ontology, a spell we remain under to this day.

But the state of the art has always been a makeshift pact between industry and audience, a happy trance of “I know, but even so …” For all that it hinges on a powerful impression of newness, the self-applied declaration of vanguard status is, ironically, old hat in filmmaking, especially when it comes to the periodic eruptions of epic spectacle that punctuate cinema’s more-of-the-same equilibrium. The mutations of style and technology that mark film’s evolutionary leaps are impossible to miss, given how insistently they are promoted: go to YouTube and look at any given Cecil B. DeMille trailer if you don’t believe me. “Like nothing you’ve ever seen!” may be an irresistible hook (at least to advertisers), but it’s rarely true, if only because trailers, commercials, and other advance paratexts ensure we’ve looked at, or at least heard about, the breakthrough long before we purchase our tickets.

In the case of past breakthroughs, the situation becomes even more vexed. What do you do with a film like Tron, which certainly was cutting-edge at the time of its release, but which, over the intervening twenty-eight years, has taken on an altogether different veneer? I was 16 when I first saw it, and have frequently shown its most famous setpiece — the lightcycle chase — in courses I teach on animation and videogames. As a teenager, I found the film dreadfully inert and obvious, and rewatching it to prepare for Tron: Legacy, I braced myself for a similarly graceless experience. What I found instead was that a magical transformation had occurred. Sure, the storytelling was as clumsy as before, with exposition that somehow managed to be both overwritten and underexplained, and performances that were probably half-decent before an editor diced them into novocained amateurism. The visuals, however, had aged into something rather beautiful.

Not the CG scenes — I’d looked at those often enough to stay in touch with their primitive retrogame charm. I’m referring to the live-action scenes, or rather, the suturing of live action and animation that stands in for computer space whenever the camera moves close enough to resolve human features. In these shots, the faces of Flynn (Jeff Bridges), Tron (Bruce Boxleitner), Sark (David Warner), and the film’s other digital denizens are ovals of flickering black-and-white grain, their moving lips and darting eyes hauntingly human amid the neon cartoonage.

Peering through their windows of backlit animation, Tron’s closeups resemble those in Dreyer’s Passion of Joan of Arc — inspiration for early film theorist Béla Balázs’s lyrical musings on “The Face of Man” — but are closer in spirit to the winking magicians of Georges Méliès’s trick films, embedded in their phantasmagoria of painted backdrops, double exposures, and superimpositions. Like Lisberger, who would intercut shots of human-scaled action with tanks, lightcycles, and staple-shaped “Recognizers,” Méliès alternated his stagebound performers with vistas of pure artifice, such as animated artwork of trains leaving their tracks to shoot into space. Although Tom Gunning argues convincingly that the early cinema of attractions operated by a distinctive logic in which audiences sought not the closed verisimilar storyworlds of classical Hollywood but the heightened, knowing presentation of magical illusions, narrative frameworks are the sauce that makes the taste of spectacle come alive. Our most successful special effects have always been the ones that — in an act of bistable perception — do double duty as story.

In 1982, the buzzed-about newcomer in our fantasy neighborhoods was CGI, and at least one film that year — Star Trek II: The Wrath of Khan — featured a couple of minutes of computer animation that worked precisely because they were set off from the rest of the movie, as a special documentary interlude. Other genre entries in that banner year for SF, like John Carpenter’s remake of The Thing and Steven Spielberg’s one-two punch of E.T. and Poltergeist (the latter as producer and crypto-director), were content to push the limits of traditional effects methods: matte paintings, creature animatronics, gross-out makeup, even a touch of stop-motion animation. Blade Runner’s effects were so masterfully smoggy that we didn’t know what to make of them — or of the movie, for that matter — but we seemed to agree that they too were old school, no matter how many microprocessors may have played their own crypto-role in the production.

“Old school,” however, is another deceptively relative term, and back then we still thought of special effects as dividing neatly into categories of the practical/profilmic (which really took place in front of the camera) and optical/postproduction (which were inserted later through various forms of manipulation). That all special effects — and all cinematic “truths” — are at heart manipulation was largely ignored; even further from consciousness was the notion that soon we would redefine every “predigital” effect, optical or otherwise, as possessing an indexical authenticity that digital effects, well, don’t. (When, in 1997, George Lucas replaced some of the special-effects shots in his original Star Wars trilogy with CG do-overs, the outrage of many fans suggested that even the “fakest” products of ’70s-era filmmaking had become, like the Velveteen Rabbit, cherished realities over time.)

Tron was our first real inkling that a “new school” was around the corner — a school whose presence and implications became more visible with every much-publicized advance in digital imaging. Ron Cobb’s pristine spaceships in The Last Starfighter (1984); the stained-glass knight in Young Sherlock Holmes (1985); the watery pseudopod in The Abyss (1989); each in its own way raised the bar, until one day — somewhere around the time of Independence Day (1996), according to Michele Pierson — it simply stopped mattering whether a given special effect was digital or analog. In the same way that slang catches on, everything overnight became “CGI.” That newcomer to the neighborhood, the one who had people peering nervously through their drapes at the moving truck, had moved in and changed the suburb completely. Special-effects cinema now operated under a technological form of the one-drop rule: all it took was a dab of CGI to turn the whole thing into a “digital effects movie.” (Certain film scholars regularly use this term to refer to both Titanic [1997] and The Matrix [1999], neither of which employs more than a handful of digitally assisted shots — many of these involving intricate handoffs from practical miniatures or composited live-action elements.)

Inscribed in each frame of Tron is the idea, if not the actual presence, of the digital; it was the first full-length rehearsal of a special-effects story we’ve been telling ourselves ever since. Viewed today, what stands out about the first film is what an antique and human artifact — an analog artifact — it truly is. The arrival of Tron: Legacy, simultaneously a sequel, update, and reimagining of the original, gives us a chance to engage again with that long-ago state of the art; to appreciate the treadmill evolution of blockbuster cinema, so devoted to change yet so fixed in its aims; and to experience a fresh and vastly more potent vision of what’s around the corner. The unique lure (and trap) of our sophisticated cinematic engines is that they never quite turn that corner, never do more than freeze for an instant, in the guise of its realization, a fantasy of film’s future. In this sense — to rephrase Bruno Latour — we have never been digital.

Part Two: 2,415 Times Smarter


In getting a hold on what Tron: Legacy (hereafter T:L) both is and isn’t, I find myself thinking about a line from its predecessor. Ed Dillinger (David Warner), figurative and literal avatar of the evil corporation Encom, sits in his office — all silver slabs and glass surfaces overlooking the city’s nighttime gridglow, in the cleverest and most sustained of the thematic conceits that run throughout both films: the paralleling, to the point of indistinguishability, of our “real” architectural spaces and the electronic world inside the computer. (Two years ahead of Neuromancer and a full decade before Snow Crash, Tron invented cyberspace.)

Typing on a desk-sized touchscreen keyboard that neatly predates the iPad, Dillinger confers with the Master Control Program or MCP, a growling monitorial application devoted to locking down misbehavior in the electronic world as it extends its own reach ever outward. (The notion of fascist algorithm, policing internal imperfection while growing like a malignancy, is remapped in T:L onto CLU — another once-humble program omnivorously metastasized.) MCP complains that its plans to infiltrate the Pentagon and General Motors will be endangered by the presence of a new and independent security watchdog program, Tron. “This is what I get for using humans,” grumbles MCP, which in terms of human psychology we might well rename OCD with a touch of NPD. “Now wait a minute,” Dillinger counters, “I wrote you.” MCP replies coldly, “I’ve gotten 2,415 times smarter since then.”

The notion that software — synecdoche for the larger bugaboo of technology “itself” — could become smarter on its own, exceeding human intelligence and transcending the petty imperatives of organic morality, is of course the battery that powers any number of science-fiction doomsday scenarios. Over the years, fictionalizations of the emergent cybernetic predator have evolved from single mainframe computers (Colossus: The Forbin Project [1970], WarGames [1983]) to networks and metal monsters (Skynet and its time-traveling assassins in the Terminator franchise) to graphic simulations that run on our own neural wetware, seducing us through our senses (the Matrix series [1999-2003]). The electronic world established in Tron mixes elements of all three stages, adding an element of alternative storybook reality a la Oz, Neverland … or Disneyworld.

Out here in the real world, however, what runs beneath these visions of mechanical apocalypse is something closer to the Technological Singularity warned of by Ray Kurzweil and Vernor Vinge, as our movie-making machinery — in particular, the special-effects industry — approaches a point where its powers of simulation merge with its custom-designed, mass-produced dreams and nightmares. That is to say: our technologies of visualization may incubate the very futures we fear, so intimately tied to the futures we desire that it’s impossible to sort one from the other, much less to dictate which outcome we will eventually achieve.

In terms of its graphical sophistication as well as the extended forms of cultural and economic control that have come to constitute a well-engineered blockbuster, Tron: Legacy is at least 2,415 times “smarter” than its 1982 parent, and whatever else we may think of it — whatever interpretive tricks we use to reduce it to and contain it as “just a movie” — it should not escape our attention that the kind of human/machine fusion, not to mention the theme of runaway AI, at play in its narrative is a surface manifestation of much vaster and more far-reaching transformations: a deep structure of technological evolution whose implications only start with the idea that celluloid art has been taken over by digital spectacle.

The lightning rod for much of the anxiety over the replacement of one medium by another, the myth of film’s imminent extinction, is the synthespian or photorealistic virtual actor, which, following the logic of the preceding paragraphs, is one of Tron: Legacy’s chief selling points. Its star, Jeff Bridges, plays two roles — the first as Flynn, onetime hotshot hacker, and the second as CLU, his creation and nemesis in the electronic world. Doppelgangers originally, the two have since diverged: Flynn has aged while CLU remains unchanged, the spitting image of Flynn/Bridges circa 1982.

Except that this image doesn’t really “spit.” It stares, simmers, and smirks; occasionally shouts; knocks things off tables; and does some mean acrobatic stunts. But CLU’s fascinating weirdness is just as evident in stillness as in motion (see the top of this post), for it’s clearly not Jeff Bridges we’re looking at, but a creepy near-miss. Let’s pause for a moment on this question: why a miss at all? Why couldn’t the filmmakers have conjured up a closer approximation, erasing the line between actor and digital double? Nearly ten years after Final Fantasy: The Spirits Within, it seems that CGI should have come farther. After all, the makers of T:L weren’t bound by the aesthetic obstructions that Robert Zemeckis imposed on his recent films, a string of CG waxworks (The Polar Express [2004], Beowulf [2007], A Christmas Carol [2009], and soon — shudder — a Yellow Submarine remake) in which the inescapable wrongness of the motion-captured performances is evidently a conscious embrace of stylization rather than a failed attempt at organic verisimilitude. And if CLU were really intended to convince us, he could have been achieved through the traditional repertoire of doubling effects: split-frame mattes, body doubles in clever shot-reverse-shot arrangements, or the combination of these with motion-control cinematography as in the masterful composites of Back to the Future 2, which, made in 1989, is only seven years younger than the first Tron.

The answer to the apparent conundrum is this: CLU is supposed to look that way; we are supposed to notice the difference, because the effect wouldn’t be special if we didn’t. The thesis of Dan North’s excellent book Performing Illusions is that no special effect is ever perfect — we can always spot the joins, and the excitement of effects lies in their ceaseless toying with our faculties of suspicion and detection, the interpretation of high-tech dreams. Updating the argument for synthespian performances like CLU’s, we might profitably dispose of the notion that the Uncanny Valley is something to be crossed. Instead, smart special effects set up residence smack-dab in the middle.

Consider by analogy the use of Botox. Is the point of such cosmetic procedures to absolutely disguise the signs of age? Or are they meant to remain forever fractionally detectable as multivalent signifiers — of privilege and wealth, of confident consumption, of caring enough about flaws in appearance to (pretend to) hide them? Here too is evidence of Tron: Legacy’s amplified intelligence, or at least its subtle cleverness: dangling before us a CLU that doesn’t quite pass the visual Turing Test, it simultaneously sells us the diegetically crucial idea of a computer program in the shape of a human (which, in fact, it is) and in its apparent failure lulls us into overconfident susceptibility to the film’s larger tapestry of tricks. 2,415 times smarter indeed!

Part Three: The Sea of Simulation


Doubles, of course, have always abounded in the works that constitute the Tron franchise. In the first film, both protagonist (Flynn/Tron) and antagonist (Sark/MCP) exist as pairs, and are duplicated yet again in the diegetic dualism of real world/electronic world. (Interestingly, only MCP seems to lack a human manifestation — though it could be argued that Encom itself fulfills that function, since corporations are legally recognized as people.) And the hall of mirrors keeps on going. Along the axis of time, Tron and Tron: Legacy are like reflections of each other in their structural symmetry. Along the axis of media, Jeff Bridges dominates the winter movie season with performances in both T:L and True Grit, a kind of intertextual cloning. (The Dude doesn’t just abide — he multiplies.)

Amid this rapture of echoes, what matters originality? The critical disdain for Tron: Legacy seems to hinge on three accusations: its incoherent storytelling; its dependence on special effects; and the fact that it’s largely a retread of Tron ’82. I’ll deal with the first two claims below, but on the third count, T:L must surely plead “not guilty by reason of nostalgia.” The Tron ur-text is a tale about entering a world that exists alongside and within our own — indeed, that subtends and structures our reality. Less a narrative of exploration than of introspection, its metaphysics spiral inward to feed off themselves. Given these ouroboros-like dynamics, the sequel inevitably repeats the pattern laid down in the first, carrying viewers back to another embedded experience — that of encountering the first Tron — and inviting us to contrast the two, just as we enjoy comparing Flynn and CLU.

But what about those who, for reasons of age or taste, never saw the first Tron? Certainly Disney made no effort to share the original with us; their decision not to put out a Blu-Ray version, or even rerelease the handsome two-disc 20th anniversary DVD, has led to conspiratorial muttering in the blogosphere about the studio’s coverup of an outdated original, whose visual effects now read as ridiculously primitive. Perhaps this is so. But then again, Disney has fine-tuned the business of selectively withholding their archive, creating rarity and hence demand for even their flimsiest products. It wouldn’t at all surprise me if the strategy of “disappearing” Tron pre-Tron: Legacy were in fact an inspired marketing move, one aimed less at monetary profit than at building discursive capital. What, after all, do fans, cineastes, academics, and other guardians of taste enjoy more than a privileged “I’ve seen it and you haven’t” relationship to a treasured text? Comic-Con has become the modern agora, where the value of geek entertainment items is set for the masses, and carefully coordinated buzz transmutes subcultural fetish into pop-culture hit.

It’s maddeningly circular, I know, to insist that it takes an appreciation of Tron to appreciate Tron: Legacy. But maybe the apparent tautology resolves if we substitute terms of evaluation that don’t have to do with blockbuster cinema. Does it take appreciation of Ozu (or Tarkovsky or Haneke or [insert name here]) to appreciate other films by the same director? Tron: Legacy is not in any classical sense an auteurist work — I couldn’t tell you who directed it without checking IMDb — but who says the brand itself can’t function as an auteur, in the sense that a sensitive reading of it depends on familiarity with tics and tropes specific to the larger body of work? Alternatively, we might think of Tron as a sub-brand of a larger industrial genre, the blockbuster, whose outward accessibility belies the increasingly bizarre contours of its experience. With its diffuse boundaries (where does a blockbuster begin and end? — surely not within the running time of a single feature-length movie) and baroque textual patterns (from the convoluted commitments of transmedia continuity to rapidfire editing and slangy shorthands of action pacing), the contemporary blockbuster possesses its own exotic aesthetic, one requiring its own protocols of interpretation, its own kind of training, to properly engage. High concept does not necessarily mean non-complex.

Certainly, watching Tron: Legacy, I realized it must look like visual-effects salad to an eye untrained in sensory overwhelm. I don’t claim to enjoy everything made this way: Speed Racer made me queasy, and Revenge of the Fallen put me into an even deeper sleep than did the first Transformers. T:L, however, is much calmer in its way, perhaps because its governing look — blue, silver, and orange neon against black — keeps the frame-cramming to a minimum. (The post-1983 George Lucas committed no greater sin than deciding to pack every square inch of screen with nattering detail.) Here the sequel’s emulation of Tron’s graphics is an accidental boon: limited memory and storage led in the original to a reliance on black to fill in screen space, a restriction reinvented in T:L as strikingly distinctive design. Our mad blockbusters may indeed be getting harder to watch and follow. But perhaps we shouldn’t see this as proof of commercially driven intellectual bankruptcy and inept execution, but as the emergence of a new — and in its way, wonderfully difficult and challenging — mode of popular art.

T:L works for me as a movie not because its screenplay is particularly clever or original, but because it smoothly superimposes two different orders of technological performance. The first layer, contained within the film text, is the synthesis of live action and computer animation that in its intricate layering succeeds in creating a genuinely alternate reality: action-adventure seen through the kino-eye. Avatar attempted this as well, but compared to T:L, Cameron’s fantasia strikes me as disingenuous in its simulationist strategy. The lush green jungles of Pandora and glittering blue skin of the Na’vi are the most organic of surfaces in which CGI could cloak itself: a rendering challenge to be sure, but as deceptively sentimental in its way as a Thomas Kinkade painting. Avatar is the digital performing in “greenface,” sneakily dissembling about its technological core. Tron: Legacy, by contrast, takes as its representational mission simulation itself. Its tapestry of visual effects is thematically and ontologically coterminous with the world of its narrative; it is, for us and for its characters, a sea of simulation.

Many critics have missed this point, insisting that the electronic world the film portrays should have reflected the networked environment of the modern internet. But what T:L enshrines is not cyberspace as the shared social web it has lately become, but the solipsistic arena of first-person combat as we knew it in videogames of the late 1970s. As its plotting makes clear, T:L is at heart about the arcade: an ethos of rastered pyrotechnics and three-lives-for-a-quarter. The adrenaline of its faster scenes and the trances of its slower moments (many of them cued by the silver-haired Flynn’s zen koans) perfectly capture the affective dialectics of cabinet contests like Tempest or Missile Command: at once blazing with fever and stoned on flow.

The second technological performance superimposed on Tron: Legacy is, of course, the exhibition apparatus of IMAX and 3D, inscribed in the film’s planning and execution even for those who catch the print in lesser formats. In this sense, too, T:L advances the milestone planted by Avatar, beacon of an emerging mode of megafilm engineering. It seems the case that every year will see one such standout instance of expanded blockbuster cinema — an event built in equal parts from visual effects and pop-culture archetypes, impossible to predict but plain in retrospect. I like to imagine that these exemplars will tend to appear not in the summer season but at year’s end, as part of our annual rituals of rest and renewal: the passing of the old, the welcoming of the new. Tron: Legacy manages to be about both temporal polarities, the past and the future, at once. That it weaves such a sublime pattern on the loom of razzle-dazzle science fiction is a funny and remarkable thing.


To those who have read to the end of this essay, it’s probably clear that I dug Tron: Legacy, but it may be less clear — in the sense of “twelve words or less” — exactly why. I confess I’m not sure myself; that’s what I’ve tried to work out by writing this. I suppose in summary I would boil it down to this: watching T:L, I felt transported in a way that’s become increasingly rare as I grow older, and the list of movies I’ve seen and re-seen grows ever longer. Once upon a time, this act of transport happened automatically, without my even trying; I stumbled into the rabbit-holes of film fantasy with the ease of … well, I’ll let Laurie Anderson have the final words.

I wanted you. And I was looking for you.
But I couldn’t find you.
I wanted you. And I was looking for you all day.
But I couldn’t find you. I couldn’t find you.

You’re walking. And you don’t always realize it,
but you’re always falling.
With each step you fall forward slightly.
And then catch yourself from falling.
Over and over, you’re falling.
And then catching yourself from falling.
And this is how you can be walking and falling
at the same time.

Back to Back to the Future

Revisiting Back to the Future on Blu-Ray, it’s hard not to get sucked into an infinite regress the likes of which would probably have pleased screenwriters Robert Zemeckis and Bob Gale: the 1985 original, viewed 25 years later, has acquired layers of unintended convolution, discovery, and loss.

As with any smartly executed time travel story (see: La Jetée, Primer, Timecrimes, and “City on the Edge of Forever”), the plot gets you thinking in loops, attentive to major and minor patterns of similarity and difference, playing textual detective long before Christopher Nolan came along with the narrative games that are his auteurist signature. And BTTF is nothing if not a well-constructed mousetrap, full of cleverly self-referential production design, dropping hints and clues from its very first frames about the saga that’s about to unfold. (Panning across Doc Brown’s Rube-Goldberg-esque jumble of a laboratory, the camera lingers on a miniature clock from whose hands a figure hangs comically, in a shoutout both to Harold Lloyd in Safety Last! [1923] and to BTTF’s own climax.) In this sense, it’s a film designed for multiple viewings, though the valence of that repetition has changed over the quarter-century since the film’s first release: in the mid-eighties, BTTF was a quintessential summer blockbuster, its commercial life predicated on repeat business and the just-emerging market of home rentals. Nowadays, the fuguelike structure of BTTF lends itself perfectly to the digitalized echo chamber of what Barbara Klinger terms replay culture and the encrusted supplementation of making-of materials sparked by the random-access format of DVDs and fanned into a blaze by the enormously expanded storage of Blu-Rays. (Learning to navigate the interlocked and labyrinthine documentaries on the new Alien set is like being lost on the Nostromo.)

But in 1985, all this was yet to come, and BTTF’s bubbly yet machine-lathed feat of comic SF is all the more remarkable for maintaining its innocence across the gulf of the changing technological and economic contexts that define modern blockbuster culture. It still feels fresh, crisp, alive with possibilities. If anything, it’s somehow gotten younger and more graceful: expository setups that seemed leaden now trip off the actors’ tongues like inspired wordplay, while action setpieces that seemed unnecessarily prolonged — in particular, the elaborate climax in which a time-traveling DeLorean must intersect a lightning-bolt-fueled surge of energy while traveling at exactly 88 miles per hour — now unspool with the precision of a Fred Astaire dance routine. Perhaps the inspired lightness of BTTF is simply a matter of contrast with our current blockbusters, which have grown so dense and heavy with production design and ubiquitous visual effects, and whose chapterized storyworlds have become so entangled with encyclopedic continuity, that engaging with them feels like subscribing to a magazine — or a gym membership.

BTTF, of course, is a guilty player in this evolution. Its two sequels were filmed back-to-back, an early instance of the “threequelization” that would later result in such elephantine letdowns as the Matrix and Pirates of the Caribbean followups. But just as the story of BTTF involves traveling to an unblemished past, when sin was a tantalizing temptation to be toyed with rather than a buried regret, the reappearance of the film in 2010 allows us to revisit a lost moment of cinema, newly pure-looking in relation to the rote and tired wonders of today. For a special-effects comedy, there are surprisingly few visual tricks in evidence: ILM’s work is confined to some animated electrical discharges, a handful of tidy and unshowy matte shots, and a motion-controlled flying-car takeoff at the end that pretty much sums up the predigital state of the art. Far in the future is Robert Zemeckis’s unfortunate obsession with CGI waxworks a la The Polar Express, Beowulf, A Christmas Carol, and soon — shudder — Yellow Submarine.

As for the Blu-Ray presentation, the prosthetic wattles of old-age makeup stand out as sharply as the heartbreakingly unlined and untremored features of Michael J. Fox, then in his early 20s and a paragon of comic timing. It makes me think of how I’ve changed in the twenty-five years since I first saw the movie, at the age of 19. And somewhere in this circuit of private recollection and public viewing, I get lost again, with both sadness and joy, in the temporal play of popular culture that defines so much of my imagination and experience. The originating cleverness of BTTF’s high concept has been mirrored and folded in on itself as much by the passage of time as by Universal’s marketing strategies, so that in 2010 — once an inconceivably far-flung punchline destination for Doc Brown in the tag that closes the film and sets up its continuation in BTTF 2 — we encounter our own future-as-past, past-as-future: time travel of an equally profound, if more reversible, kind.

Watching Avatar


Apologies for taking a while to get around to writing about Avatar — befitting the film’s almost absurd graphical heft, the sheer surfeit of its spectacle, I decided to watch it a second time before putting my thoughts into words. In one way, this strategy was useful as a check on my initial enthusiasm; the blissful swoon of first viewing gave way, in the second, to a state resembling boredom during the movie’s more languorous stretches. (Banshee flight training, let’s just say, is not a lightning-fast process.) But in another way, waiting to write might not have been all that smart, since by now the movie has been discussed to death. Yet for all the hot air and cold type that’s been spent dissecting Avatar, the map of the dialogue still divides neatly into two camps: one insisting that Cameron’s movie is an instant classic of cinematic science fiction, a technological breakthrough and a grand adventure of visual imagination; the other grudgingly admitting that the film is pretty, but beyond that, a trite and obvious story lifted from Pocahontas and Dances With Wolves and populated, moreover, by a bland and predictable set of character-types.

I tend to be forgiving toward experiments as grand as Avatar, especially when they’ve done such a good job laying the groundwork of hopeful expectation. Indeed, as I walked into the theater last week, ripping open the plastic bag containing my 3D glasses, I remember thinking I’d already gotten my money’s worth simply by looking forward so intensely to the experience. There’s also the matter of auteurist precedent: James Cameron has built up an enormous amount of goodwill — and, dare I say it, faith — with his contributions of The Terminator, Terminator 2: Judgment Day, and Aliens to the pantheon of SF greatness. (I’m also a closet fan of Battle Beyond the Stars, the derivative but fun 1980 Roger Corman production on which Cameron served as art director and contributed innovative visual effects.)

So I’m not fussed about whether Avatar’s story is particularly deep or original. This is, to me, a case of the dancer over the dance; the important thing is not the tale, but Avatar’s telling of it. And I’m sympathetic to the argument that in such a technically intricate production, a relatively simple narrative gearing is required to anchor audiences and lead them, as in a rail game, along a precise path through the jungle. (That said, Cameron’s first “scriptment” was apparently a much more complex and nuanced saga, and one wonders to what degree his narrative ambitions were stripped away as the humongous physical nature of the undertaking became clear.) Cameron is correctly understood as a techno-auteur of the highest order, a man who doesn’t make films so much as build them, and if he has, post-Titanic, become complicit in fanning the flames of his own worshipful publicity, we ought to take that as simply another feat of engineering — in this instance discursive rather than digital. It would hardly be the first time (I’m looking at you, Alfred Hitchcock) and is certainly better-deserved than some (I’m looking at you, George Lucas).

Did I like Avatar? Very much so — but as I indicated above, this is practically a foregone conclusion; to disavow the thing now would be tantamount to aesthetic seppuku. Of course, in the strange numismatics of fandom, hatred is just the other side of the coin from veneration, and the raging “avatolds” (as in, You just got avatold!) of 4chan may or may not realize that, love it or hate it, we’re all playing in Cameron’s world now. And what a world it is, literally! Avatar the film is something of a delivery system for Pandora the planet (OK, moon), an act of subcreation so extensive it has generated its own wiki. The detailed landscapes we see in the movie are merely the topmost layer of a topography and ecosystem fathoms deep, an enormous bank of 3D assets and encyclopedic autotextuality that, now established as a profitable pop-culture phenomenon, stands ready for extrapolation and exploration in transmedia to come. (Ironic, then, that a launching narrative so opposed to stripmining is itself destined to be mined, or in Jason Mittell’s evocative term, drilled.)

And in this sense, I suspect, we can locate a double meaning to the idea of the avatar, or tank-grown alien body driven by human operators via direct neural link. A biological vessel designed to allow visitors to explore an alien world, the story’s avatars are but metaphors for Avatar the movie, itself a technological prosthesis for viewers hungry to experience new landscapes (and for whom the exotics of Jersey Shore don’t cut it). 3D, IMAX, and great sound systems are merely sensory upgrades for our cinematic avatarialism, and as I watched the audience around me check the little glowing squares of their cell phones, my usual dismay was mitigated by the notion that, like the human characters in the movie, they were merely augmenting their immersion with floating GUIs and HUDs.

My liking for the film isn’t entirely unalloyed, and deep down I’m still wondering by what promotional magic we have collectively agreed to see Avatar as a live-action movie with substantial CG components rather than a CG animated film (a la Up, or more analogously Final Fantasy: The Spirits Within) into which human performances have cunningly been threaded. Much has been made of the motion-capture technology by which actors Sam Worthington, Zoe Saldana, Sigourney Weaver et al performed their roles into one end of a real-time rendering apparatus while Cameron peered into a computer display — essentially his own avatarial envoy to Pandora — directing his troupe through their videogame doubles. But this is merely the latest sexing-up of an “apparatus” as old as cinema, by which virtual bodies are brought to life on an animation stand, their features and vocals synched to a dialogue track (and sometimes reference footage of the original performances).

Cameron’s nifty trick, though, has always been to frame his visual and practical effects in ways that lend them a crucial layer of believability. I’m not talking about photorealism, that unreachable horizon (unreachable precisely because it’s a moving target, a fantasized attribute we hallucinate within the imaginary body of cinema: as Lacan would put it, in you more than you). I’m talking about the way he cast Arnold Schwarzenegger as the human skin around a robotic core in the Terminator films, craftily selling an actor of limited expressiveness through the conceit of a cyborg trying to pass as human; Arnold’s stilted performance, rather than a disbelief-puncturing liability, became proof of his (diegetically) mechanoid nature, and when the cutaways to stop-motion stand-ins and Stan Winston’s animatronics took over, we accepted the endoskeleton as though it had been there all along, the real star, just waiting to be discovered. An identical if hugely more expensive logic underlies the human-inhabited Na’vi of Avatar: if Jake Sully’s alien body doesn’t register as absolutely realistic and plausible, it’s OK — for as the editing constantly reminds us, we are watching a performance within a performance, Sully playing his avatar as Worthington plays Sully, Cameron and his cronies at WETA and ILM playing us in a game of high-tech Russian nesting dolls. The biggest “special effect” in Cameron’s films is the way in which diegesis and production reality collapse into each other.

I’m not saying that Avatar isn’t revolutionary, just that amid the more colorful flora and fauna of its technological garden we should be careful to note that other layer of “movie magic,” the impression of reality that is as much a discursive and ideological production as any clump of pixels pushed through a pipeline. We submit, in other words, to Avatar’s description of itself as a step forward, an excursion into a future cinema as alien and exhilarating as anything to be found on Pandora, and that too is part of the spell the movie casts. Yet the animating spirit behind that future cinema — the ghost in the machine — remains the familiar package of hopes and beliefs we always bring to the darkened theater: the desire to escape into another body, and when the adventure is over, to wake up and go home.

Paranormal Activity


[Some broad spoilers below]

I’ve said it before: these days, seeing certain movies means coming to the endpoint of an experience, rather than its beginning; closing a door rather than opening it. Think of how something like Star Wars in 1977 seeded an entire universe of story (and franchise) possibilities, or how The Rocky Horror Picture Show ignited a subculture of ritual performance and camp remixes of genre chestnuts. By contrast, a new kind of movie, exemplified currently by Paranormal Activity, hits theaters with a conclusive thump, like the punchline of a joke or the ending of a whodunit. After you’ve watched it, there is little more to say.

Such movies sail toward us on a sea of buzz, phantom vessels that hang maddeningly at the horizon of visibility, of knowability. Experienced fannish spotters stand with their spyglasses, picking out details in the mist and relaying their interpretations back to the rest of us. Insiders leak morsels of information about the ship’s construction and configuration. Old salts grumble about the good old days. It’s the modern cinematic equivalent of augury: awaiting the movie’s arrival is like awaiting a predestined fate, and we gaze into the abyss of our own inevitable future with a mixture of horror and appetite.

It sounds like I didn’t care for Paranormal Activity, but in fact I did; it’s as spare and spooky as promised, with a core of unexpected sweetness (due mainly to the performance of Katie Featherston) and consequently a sense of loss, even tragedy, at the end. It occurs to me that we are seeing another phenomenon in low-budget, buzz-driven, scary filmmaking: a trend toward annihilation narratives. The Blair Witch Project, Open Water, Cloverfield, now Paranormal Activity — these are stories in which no one survives, and their biggest twist is that they disobey a fundamental rule of horror and suspense storytelling by which we understand that no matter how bad things get, at least one person, the hero, will make it through the gauntlet. With this principle guiding our expectations, we can affix our identifications to one or more figures, trusting them to safely convey us through the charnelhouse, evading the claws of monsters or razor-edged deathtraps.

No such comfort in the annihilation narrative, which blends the downbeat endings of early-70s New Hollywood with the clinical finality of the snuff film or autopsy report. Such brutal endings are encouraged by the casting of unknown or non-actors, whose public and intertextual lives presumably won’t be harmed by seeing them dispatched onscreen — though the more important factor, I suspect, is the blurring of the line between the character’s ontological existence and their own.

The usual symptom of this is identical first names: Daniel Travis plays Daniel Kintner in Open Water; Heather, Josh, and Michael are all “themselves” in Blair Witch; Paranormal’s Katie and Micah are played by actors named Katie and Micah. There is, in other words, no supervening celebrity identity, no star persona, to yank us out of the fiction, to remind us simply by gravitational necessity that there must be a reality outside the fiction. The collapse of actor and character corresponds to the mockumentary mode that all these films share — a mode that itself depends on handheld cameras, recognizable, nonexotic settings, and an absence of standard continuity editing and background scoring.

Taken together, these factors (no-name actors, conscientiously unadorned and “unprofessional” filmmaking) would seem to recall Italian neorealism. But this being Hollywood, the goal is to tell stories that fit into familiar genres while reinventing them: horror seems to be the order of the day. A more subtle point is that, with the exception of Cloverfield’s sophisticated matchmoving of digital monsters into shakycammed cityscapes, movies in this emerging genre cost almost nothing to make. The budget for Paranormal Activity was $11,000, a datum I didn’t even have to look up, because it’s been foregrounded so relentlessly in the film’s publicity. Oddly, these facts of the film’s manufacture don’t seem to detract from the envelope of “reality” in which its thrills are delivered; for all the textual (non)labor that goes into assuring us this really happened, we are just as entertained by the saga of scrappy Oren Peli and his sudden success as by the thumpings and scarrings inflicted on poor Micah and Katie.

And we are entertained, I think, by our own entertainment — the way in which we willingly give ourselves over to a machine whose cold operations we understand very well. I certainly felt this way as I took my seat at one of the few remaining non-multiplexed moviehouses in Ann Arbor, the tawdry but venerable State Theater. The 7 p.m. crowd was a throng of University of Michigan students, a few clusters of friends packed in with lots and lots of couples. Paranormal Activity is the kind of movie where you want to be able to clutch somebody. More to the point, it’s a genuine group experience: scares are amplified by a factor of ten when people around you are screaming.

Which brings me back to my opening point: we all knew what we were there for, even as the movie’s central mysteries — from the exact nature of its big bad to the specific escalating sequence of its scares — awaited discovery like painted eggs on an Easter-day hunt. (The film’s discretely doled-out shocks, which get us watching the screen with hypnotic attentiveness, are reminiscent of the animated GIFs one finds on the /x/ board of 4chan.) We were there for the movie, certainly, but we were also there for each other, enjoying the echo chamber of each other’s emotions and performative displays of fear. And we were there for ourselves, reverberating happily within the layers of our knowing and not-knowing, our simultaneous awareness of the film as cunning construct and as rivetingly believable bedtime story, our innocence and cynicism so expertly shaped by months of hype and misdirection, viral marketing, rumors and debunkings, word of mouth.

All of which constitutes, of course, the real paranormal activity: a mediascape that haunts and taunts us, foreshadowing our worst fears as well as our fiercest pleasures.

Predestination Paradox


It would be nice if ABC’s new series, Flashforward, didn’t stylistically model itself quite so slavishly on Lost — which is not to deny a legitimate familial relationship between the two shows. Indeed, it’s largely thanks to Lost that broadcast television now periodically risks acts of serial storytelling with genuine intricacy and depth, sizeable and interesting casts of characters, and generic inflections that flirt with science fiction and fantasy without ever quite falling into the proud but doomed ghetto of, say, Virtuality and Firefly. Nowadays we seem to prefer our fantastic extrapolations blended with a strong tincture of “reality”; while I might privately consider series such as Mad Men and Jericho to be as bizarre in setting and plot machination as Farscape ever was, the truth is it will be a long time before we see a space-set show lasting more than a season or two. (And before you ask, no, I haven’t gotten around to watching Defying Gravity, though some trusted friends have been telling me to give it a try.)

So Flashforward clearly owes a debt to Lost for tutoring audiences in the procedures and pleasures of the complex narratives so deftly dissected by Jason Mittell: in this specific case, the shuttling game of memory and induction by which viewers stitch together a tale told in pieces. Where 24 builds itself around the synchronic, crosscutting among simultaneous story-streams until the very concept of a pause, of downtime, is squeezed out of existence, Lost and Flashforward take as their structuring principle the diachronic, bouncing us backwards and forwards through time until one can no longer tell present from backstory. (I will admit that the most recent season of Lost finally threw off this faithful viewer like a bucking bronco; while I’m all for time-traveling back to the glory years of the 1970s, the show’s intertitled convolutions have become too much for me to keep up with, especially when further diced and sliced by the timeshifting mechanism of my DVR.)

No wonder, then, that David S. Goyer (late of Blade) and Brannon Braga (who in the 1990s both saved and ruined the Star Trek franchise, IMO) felt the moment was ripe to adapt Robert J. Sawyer’s novel for TV. (Apparently there’s a history involving HBO and a tug-of-war over rights; perhaps a branching feature on the show’s eventual box-set release will, as a deconstructive extra, interweave this additional knotted plotting, an industrial Z-axis, into the general mayhem.) I remember reading Flashforward-the-book when it first came out, but it took Wikipedia to remind me how it all ended. Now that original ending has of course been jettisoned, in the process of retrofitting the story to serial form.

And a clever adaptation it looks to be. By moving up the collective “flashforward” experienced by the entire human race from twenty-odd years to six months, the TV show embeds its own climax within a different kind of foreseeable future: the conclusion of season one. That is, as the characters catch up with their own precognated fates on April 29, 2010 (in show-reality), so will we the watchers (in audience-reality), making for what I expect to be a delicious and delirious moment of suture. Like the first season of Heroes, Flashforward constructs itself around its own endpoint, arriving like clockwork twenty-odd episodes from now.

Clever, but maybe not smart. Look what happened to Heroes, which did great until collapsing into meaningless narratorhea with the start of its second season. I can think of countless TV series done in by their own cruelly relentless seriality, overstaying their welcome, swapping in cast members and increasingly baroque storytelling gimmicks until the final result is a ghoulish, cyborged facsimile of the show we once knew and loved. People speak of “jumping the shark,” but the truth of a TV show that’s lost its soul is something much more depressing: an elderly parent babbling in the grip of Alzheimer’s, a friend lost to dementia, a young and innocent heart curdled by prostitution or drug addiction. The excitement of Flashforward will consist of watching as it knowingly exploits the feints and deferrals of serial form, doling out clues and red herrings that keep us guessing even as the destination comes inexorably into greater focus — a finale that, when it finally arrives, will appear perfectly logical. Good storytelling gets us to the expected endpoint by unexpected means, and I wonder if Flashforward has it in itself to pull off the trick more than once.

In the meantime, let’s sit back and appreciate the tapestry as it emerges for the first, unrepeatable time. The characters have already begun to build a “conspiracy wall,” tacking up photos, scribbled notes, and lengths of string to make a tableau that simultaneously constructs the future as solution while decoding it as mystery. And don’t forget the wonderful opportunity for meta-reflection on the existential whys and wherefores of TV as the first episode ends with another kind of “flashforward” — this one a promotional montage enticing us with glimpses of the season to come. In this sense, of course, the show is a perfect commercial animal, advertising itself and its high concept with every beat of its crass and calculated heart. But in another, purer sense, it is a kind of koan, an invitation to meditate on the deeper patterns of the stories we tell; the time in which we experience them; the nature of narrative consciousness itself.

Flashforward may be, in short, one last chance to live in the media present (even as its central conceit destroys any sense of simple present-ness). Here’s to enjoying the experience before the show is ruined by its own need to respawn in 2010-2011, by the ongoing efforts of the spoiler community and devout Wiki priesthood, or by the aforementioned box sets, downloads, and torrents. A series like this is perfectly engineered for its time, which is to say, paced to the week-by-week parceling of information, the micro-gaps of commercial breaks and the macro-gaps between episodes.

Yet even as we put a name to the temporality of TV, it is already past. For all such gaps are dissolving in the quick waters of new media, and with them the gaps in knowledge (precisely-lathed questions with carefully-choreographed answers) on which a show like Flashforward, and by extension all serial storytelling, thrives. We who are “lucky” enough to straddle this historical juncture — at which the digital is reworking the media forms with which we grew up — face our own version of the predestination paradox: knowing full well where we’re going, yet helpless before the forces that deliver us there.

Watchmen: Stuck in the Uncanny Valley

[Warning: this review contains spoilers — and at the end, a blue penis.]

One wonders if Masahiro Mori, the roboticist who introduced the term “uncanny valley” in 1970, now wishes he’d had the foresight to trademark it; after lying largely dormant for a couple of decades, the concept came into its own, bigtime, with the advent of photorealistic CG. Or perhaps I should say CG posing as photorealistic, for what nearly passes muster in one year — the liquid pseudopod in The Abyss (1989), the lipsynched LBJ in Forrest Gump (1994), the entire casts of Final Fantasy: The Spirits Within (2001), The Polar Express (2004), and Beowulf (2007) — lapses into reassuringly spottable artifice the next. The only strategy by which CG convincingly and sustainably replicates organic life, in fact, is Pixar’s, and their method is simultaneously a cheat and a transcendent knight’s-move of FX design. Engaging in what Andrew Darley has called second-order realism, Pixar’s characters wear their manufactured status on their skin, er, surface (toys, bugs, fish, cars) while drawing on expressive traditions derived from cel- and stop-motion animation to deliver believable, inhabited performances and make us forget that what we’re watching are essentially, with their sandwiching of organic and synthetic elements, cyborgs.

Of course, by casting its films in this manner, Pixar retreats from true uncanniness. Humanity has always been an easier sell when it comes to cartoonish abstractions; ask anyone from Pac-Man to Punch and Judy. The uncanny valley actually kicks in when a simulation comes close enough to almost fool us, only to fall back into uncomfortable, irreducible alterity. Such is the fate, I think, of Watchmen.

That calm, urbane salon that is the internet is already abuzz with evaluations of the movie adaptation of Alan Moore’s and Dave Gibbons’s graphic novel — actually, it’s been buzzing for months, even years, another instance of what I have elsewhere termed the cometary halo that precedes any hotly-anticipated media property. Fans have been speculating, cogitating, and arguing about the whys and wherefores of overnight techno-auteur Zack Snyder’s approach for so long that the arrival of the film itself marks the conversation’s end rather than its beginning. The Christmas present has been unwrapped; Schrödinger’s Cat is out of the box; we’ve traveled into the Forbidden Zone only to learn we were on Earth the whole time.

My own take on Watchmen is that it’s an impressive feat of engineering: detailed, intricate, and surprisingly unified in tone. Beyond a splendid opening-credit sequence, however, it isn’t particularly invigorating or dramatic — hell, I’ll just say it, the thing’s boring for long stretches. As Alexandra DuPont, my absolute favorite reviewer on Ain’t-It-Cool News, points out, the boring bits are unfortunately more prevalent toward the end, giving the movie as a whole the feeling of a party that peters out once the fun people leave, or a hot date that takes a wrong turn when someone brings up religion. (And while I’m linking recommendations, let me encourage you toward the smartest movie site out there.) Watchmen is still a rather miraculous object, an oddly introverted and idiosyncratic epic whose very existence lends support to the idea that fans have become an audience important enough to warrant their own blockbuster. Tastewise, the periphery has become the center; the niche the norm.

For Watchmen, love it or hate it, is fanservice with a $120 million budget. Let’s remember that to hardcore fandom, love and hate are as difficult to disarticulate as the tattoos on Harry Powell’s knuckles; the object is never simply accepted or rejected outright (that’s a mark of the fickle mainstream, whose media engagements resemble one-night stands or trips to the drive-through) but instead studied and anatomized with scholarly rigor, its faults and achievements tabulated and ranked with an accountant’s thoroughness. Intimacy is the name of the game — that and passing the object from hand to hand until it is worn smooth as a worry bead. I’ve no doubt that Watchmen will be worried to death in coming weeks, the only thing keeping it from complete erasure the periodic infusion of new material: transmedia expansions like Tales of the Black Freighter, or the four-hour director’s cut rumored to be lurking on Snyder’s hard drive.

The principal focus of all this discussion will undoubtedly be the pros and cons of adaptation, for that is the process which Watchmen foregrounds in all its contradiction and mystery. Viewing the film, I thought irritably of all the adaptations to which we give a free pass, the ones that don’t get scrutinized at a subatomic level: endless versions of Pride and Prejudice, and let’s not forget that little gift that keeps on giving, Romeo and Juliet. Shakespeare, it seems to me, is as viral as it gets, and Masterpiece Theater a breeding ground that could compete with Nadya Suleman’s womb. Watchmen simply takes faithfulness and fidelity to a cosmic degree, its mise-en-scene a mimetic map of the printed panels that were its source.

Or source code; for what Snyder has achieved is not so much adaptation as transcription, operationalization; a phenotypic readout of a genetic program, a “run” in cinematic hardware of an underlying instruction set. Watchmen verges, that is, on emulation, and its spiritual fathers are not Moore and Gibbons but Turing and von Neumann. Snyder probably thought his hands were tied; there’s no transposing Watchmen to a new setting without disrupting its elaborate weavework of political and pop-cultural signifiers. The origin of the story in graphic form means that cinema’s primary ability, visualization, had already been usurped; faced with a publicly-available reference archive and a legion of fans ready to apply it, Snyder may have felt his only option was to replicate down to the tiniest prop and wardrobe detail what’s shown in the panel. Next level up is the determining rhythm of Moore’s scripting: storytelling and dialogue have been similarly transposed from printed page to filmed frame, and while some critics laughingly excoriate Rorschach’s purple prose, his overheated voice-overs sounded fine to me (Rorschach’s a rather self-important character, after all — as narcissistic and monomaniacal in his way as the ostensible villain Ozymandias). Editing, too, copies over with surprising fluidity: some of the most effective sequences, like the Comedian’s funeral interwoven with flashbacks to his ignoble career, or Jon Osterman’s tragic temporal tapestry of an origin story, are crosscut almost precisely as laid out on the page.

What’s left to Snyder and the cinematic signifier, then, is a handful of sensory registers, deployed sometimes with subtlety and sometimes an almost slapstick obviousness. Music plays a crucial role in the film, as does the casting; some choices are dead-on effective, others (Malin Akerman, I’m looking at you — with sympathy) not so much. The most talked-about aspect of Snyderian style is probably his use of variable speed or “ramping,” and on this front I’m with DuPont: the effect of all the slow-mo is to suggest something of the fascinated readerly gaze we bring to comic books, lingering over splash pages, reconstituting in our internal perceptions the hieroglyphic symbolia of speed lines and large-fonted “WHOOSHES.”

But back to the uncanny valley. The world of Watchmen is undoubtedly digital in ways we can’t even detect; there are certainly some showstopping visual moments, but I’d argue that more important to the movie’s cumulative immersive impact are the framing, composition, and patterns of hue, saturation, and texture that only a digital intermediate makes possible. It’s less garish than Sin City, and nowhere near the green-screened limbo of Sky Captain and the World of Tomorrow, but all the exterior shots and real-world sets shouldn’t blind us to the essential constructedness of what we’re seeing.

And here’s where the real uncanniness resides. We’re often hoodwinked into thinking that the visual (indeed, existential) crisis of our times is the rapidly closing gap between profilmic truth and what’s been simulated with computer graphics. But CG is merely the latest offspring of a vast heritage of manipulation, a tradition of trickery indistinguishable from cinema itself. Watchmen is uncanny not because of its visual effects, but because it comes precariously close to convincing us that we are seeing Moore’s and Gibbons’s graphic novel preserved intact, when, after all, it is only a copy — and a lossy one at that. In flashes, the film fools us into forgetting that another version exists; but then the knowledge of an original, an other, comes crashing back in to sour the experience. It is not reality and its digital double whose narrowing difference freaks us out, but the aesthetic convergence between two media, threatening to collapse into each other through the use of ever more elaborate production tools and knowing appeals to fannish competencies. At stake: the very grounds of authenticity — the epistemic rules by which we recognize our originals.

I’ll conclude by noting that the character who most fascinated me in the graphic novel is also the one I couldn’t tear my eyes from onscreen: Dr. Manhattan. He’s “played” by Billy Crudup, and as I noted in my post on Space Buddies, the actor’s voice is our central means of accepting Manhattan as a living character. Equally magnificent, though, is the physical performance supplied by Manhattan’s digital surface, an iridescent azure body hanging from Crudup’s motion-captured face. Whether intentionally or through limits in the technology, Dr. Manhattan never quite fits into his surroundings, and that’s exactly as it should be; as conceived by Moore, he’s a buzzing collection of hyperparticles, a quantum ghost, and Snyder uses digital effects to nail Manhattan’s transhuman ontology. (He is, both diegetically and non-, a walking visual effect.) Presciently, the print version of Watchmen — published between 1986 and 1987, when CG characters were just starting to creep into movies (see Young Sherlock Holmes) — gave us in Dr. Manhattan our first viable personification of digital technology. The metaphysical underpinning and metaphorical implications of the print Manhattan, of course, are radioactivity and the atomic age, not digitality and the information revolution. But in the notion of an otherworldly force, decanted into a man-shaped vessel but capable of manipulating the very fabric of reality, they add up to much the same: Dr. Manhattan — synthespian avant la lettre.

Singing Along with Dr. Horrible

Duration, when it comes to media, is a funny thing. Dr. Horrible’s Sing-Along Blog, the web-distributed musical (official site here; Wiki here), runs a tad over 42 minutes in all, or about the length of an average hour-long block of TV entertainment with commercials properly interspersed. But my actual experience of it was trisected into chunks of approximately 15 minutes, for like your standard block of TV programming (at least in the advertising-supported format favored in the U.S.), Dr. Horrible is subdivided into acts, an exigence which shapes the ebb and flow of its dramatic humours while doing double service as a natural place to pause and reflect on what you’ve seen — or to cut yourself another slice of ice-cream cake left over from your Dairy-Queen-loving relatives’ visit.

That last would be a blatantly frivolous digression, except in this key sense: working my way through the three acts of Dr. Horrible was much like consuming thick slices of icy sweetness: each individual slab almost sickeningly creamy and indulgent, yet laced throughout with a tantalizingly bitter layer of crisp chocolate waferage. Like the cake, each segment of the show left me a little swoony, even nauseated, aesthetic sugar cascading through my affective relays. After each gap, however, I found myself hungry for more. Now, in the wake of the total experience, I find myself contemplating (alongside the concentrated coolness of the show itself) the changing nature of TV in a digital moment in which the forces of media evolution — and more properly convergence — have begun to produce fascinating cryptids: crossbred entities in which the parent influences, harmoniously combined though they might be, remain distinct. Sweet cream, bitter fudge: before everything melts together to become the soupy unremarkable norm, a few observations.

Ultimately, it took me more than two weeks to finish Dr. Horrible. I watched the first two acts over two nights with my wife, then finished up on my own late last week. (For her part, Katie was content to have the ending spoiled by an online forum she frequents: a modern Cliffs Notes for media-surfers in a hurry to catch the next wave.) So another durative axis enters the picture — the runtime of idiosyncratic viewing schedules interlacing the runtime of actual content, further macerated by multiple pausings and rewindings of the iPod Touch that was the primary platform, the postage-stamp proscenium, for my download’s unspooling. Superstring theorists think they have things hard with their 10, 11, or 26 dimensions!

Consumed in pieces like this, Horrible’s cup of video sherbet was the perfect palate-cleanser between rounds of my other summer viewing mission — all five seasons of The Wire. I’m racing to get that series watched before the school year (another arbitrary temporal framework) resumes in three weeks; enough of my students are Wireheads that I want to be able to join in their conversations, or at least not have to fake my knowing nods or shush the conversation before endings can be ruined. On that note, two advisories about the suspense of watching The Wire. First, be careful on IMDb. Hunting down members of the exceptionally large and splendid cast risks exposing you to their characters’ lifespans: finding out that such-and-such exits the series after 10, 11, or 26 episodes is a pretty sure clue as to when they’ll take a bullet in the head or OD on a needle. Second, and relatedly, it’s not lost on this lily-white softy of an academic that I would not last two frickin’ seconds on the streets of Baltimore — fighting on either side of the drug wars.

Back to Dr. Horrible. Though other creators hold a somewhat higher place in my Pantheon of Showrunners (Ronald D. Moore with Battlestar Galactica, Matt Weiner with Mad Men, and above them all, of course, Trek’s Great Bird of the Galaxy, Gene Roddenberry), Joss Whedon gets mad props for everything from Buffy the Vampire Slayer to Firefly/Serenity and for fighting his way Dante Alighieri-like through the development hell of Alien Resurrection. I was only so-so about the turn toward musical comedy Whedon demonstrated in “Once More with Feeling,” the BtVS episode in which a spell forced everyone to sing their parts; I always preferred Buffy when the beating of its heart of darkness drowned out its antic, cuddly lullabies.

But Dr. Horrible, in a parallel but separate universe of its own, is free to mix its ugliness and frills in a fresh ratio, and the (re)combination of pathos and hummable tunes works just fine for me. Something of an inversion of High School Musical, Dr. Horrible is one for all the kids who didn’t grow up pretty and popular. Moreover, its rather lonesome confidence in superweaponry and cave lairs suggests a masculine sensibility: Goth Guy rather than Gossip Girl. Its characters are presented as grownups, but they’re teenagers at the core, and the genius is in the indeterminacy of their true identities; think of Superman wearing both his blue tights and Clark Kent’s blue business suit and still not winning Lois Lane’s heart. My own preteen crush on Sabrina Duncan (Kate Jackson) of Charlie’s Angels notwithstanding, I first fell truly in love in high school, and it’s gratifying to see Dr. Horrible follow the arc of unrequited love, with laser-guided precision, to its accurate, apocalyptically heartbreaking conclusion.

What of the show as a media object, which is to say, a packet-switched quantum of graphic data in which culture and technology mingle undecidably like wave and particle? NPR hailed it as the first genuine flowering of TV in a digital milieu, and perhaps they’re right; the show looks and acts like an episode of something larger, yet it’s sui generis, a serial devoid of seriality. It may remain a kind of mule, giving rise to nothing beyond the incident of itself, or it may reproduce wildly within the corporate cradle of Whedon’s Mutant Enemy and in the slower, rhizomatic breeding beds of fanfic and fanvids. It’s exciting to envision a coming world in which garage- and basement-based production studios generate in plenty their own Dr. Horribles for grassroots dissemination; among the villains who make up the Evil League of Evil, foot-to-hoof with Bad Horse, there must surely stand an Auteur of Doom or two.

In the mise-en-abyme of digital networks, long tails, and the endlessly generative matrix of comic books and musical comedy, perhaps we will all one day turn out to be mad scientists, conquering the world only to find we have sacrificed the happy dreams that started it all.

Planet of the Apes

As my attention shifts to one of the major goals of the summer — drafting a proposal for my book on special and visual effects — I’ve started to augment my movie-a-day habit with some classic FX titles. These are films I’ve seen before, in some cases many times, but which need revisiting. Seeing them now can be a corrective shock, revealing my memory for the sloppy generalizing mechanism it is. Impressions of movies watched in childhood blend together, in the adult mind, like ingredients of a stew, a delicious melange that is nevertheless a kind of monotaste: a tidy averaging of visual and narrative pleasures that, with a fresh viewing, shatter back into discrete components. The movie again becomes a complex terrain rather than a distant map, a succession of contrasting images rather than a single iconic poster still, a cascade of rediscovered characters, tableaux, action setpieces, and lines of dialogue. It’s like opening a box of forgotten photographs.

In the case of Planet of the Apes — Franklin J. Schaffner’s 1968 original, not Tim Burton’s lousy 2001 remake — I was stunned to find a film far more stark, confident, somber, chilling, and stylish than the simplistic caricature to which I’d reduced it. My first encounter with Planet of the Apes came sometime in the mid-1970s, when it ran as part of “Ape Week” on our local ABC affiliate’s Four-O’Clock Movie. I’d get home from school in time to watch an hour or so of cartoons before the feature came on; Ape Week was just one of several themed lineups I looked forward to eagerly, including “James Bond Week” and “Monster Week” (a string of Eiji Tsuburaya‘s Godzilla and Mothra movies).

The Apes series was a perfect fit for the Four-O’Clock Movie because there was one for every day of the week: from Monday’s installment of the first film through Beneath the Planet of the Apes (1970) on Tuesday, Escape from the Planet of the Apes (1971) on Wednesday, Conquest of the Planet of the Apes (1972) on Thursday, and Battle for the Planet of the Apes (1973) on Friday. The end of the week didn’t mean an end to Apes, though. Right about that time, a live-action TV series aired, followed by an animated counterpart on Saturday mornings. It would be thirty years before I heard the term transmedia franchise, but — along with daily reruns of the original Star Trek series — Apes was my inaugural passport to the labyrinthine landscape of distributed science-fiction storyworlds.

What I loved about Planet of the Apes back then, and what has stayed with me over the years, can be summarized in two images that sent me into an ecstasy of eeriness: the ape makeup designed by John Chambers and applied by Ben Nye’s makeup team; and the famous final shot, in which the hero Taylor (Charlton Heston) stumbles across the ruins of the Statue of Liberty and realizes he’s been on Earth — not an alien world, but his own home — all this time. The frame is below; a grainy YouTube version can be found here.

It’s one of the great twist endings in SF — contributed, fittingly enough, by Rod Serling. But its unfortunate effect was to instantly reduce the movie to a grand cliche, a semiotic Shrinky-Dink, source of endless quotations and parodies in the decades that followed. The sad truth about twist endings is that they follow a logic opposite that of genre (in which the same patterns reappear over and over again without anyone taking offense; we applaud them, in fact, for their iterative familiarity): once given its Big Reveal, a twist shrivels on the vine, spoiled by critics, lampooned for its very memorability. Citizen Kane’s Rosebud, The Sixth Sense’s dead psychiatrist, St. Elsewhere’s world-in-a-snowglobe — each exists, like Taylor’s final, horrible epiphany, as the ultimate self-annihilating closure, shutting down not just a particular narrative instance, but the possibility of its own resurrection in anything but smirkily insincere form. Shots like the one that concludes Planet of the Apes are, to me, a perfect example of Lacanian captation: they arrest and hold us in an escape-proof hermetic prison of the imaginary.

OK, psychoanalytic blather aside, what was so great about watching Planet of the Apes again? I suppose my answer is yet more Lacan, for both the apes and humans are trapped by and within their own misrecognitions. Taylor and his fellow astronauts firmly believe themselves to be on an alien planet, despite evidence to the contrary (the apes speak English); for their part, the apes see the humans as completely Other and cannot countenance any notion that there is an evolutionary link between them. It’s a comedy of evolutionary errors, the Scopes Trial replayed simultaneously as farce and deadpan drama. The truth of the situation is hidden, like the purloined letter, in plain sight; it is not until the end, in a traumatic confrontation with the Real, that Taylor traverses his fantasy. (Maybe that’s why the joke has been replayed so frequently in pop culture, from Spaceballs to The Simpsons; what is repetition but the insistent revisiting of trauma?) Of course, as often occurs in science fiction, the meta-misrecognition that operates here is failing to see in the portrayal of a “future” the actual representation of a “present.” Eric Greene’s Planet of the Apes as American Myth explores this aspect of the film and its sequels, arguing that Apes is a funhouse mirror held up to racial politics in the United States.

Bringing this all back home to the movie and its special effects, I see two kinds of misrecognition at play in the visuals, both of them integral to the suspension of disbelief by astronauts, apes, and audiences alike. First, of course, are the actual human beings (Roddy McDowall, Kim Hunter, Maurice Evans) beneath the prosthetics and hair appliances. The makeup and costumes that turn these actors into sentient, speaking apes do not mask or muffle the performances, but rather estrange and amplify them: we watch and listen for nuances of emotion, an amused glint in the eye, a subtle shift in intonation, precisely because they are cloaked in filmmaking technology. At first glance the masquerade is comical, almost grotesque, but it quickly gives way to some remarkably graceful performances. Our twinned awareness of the trickery and investment in the fantasy reflects the knife-edge calibration of disbelief attending the finest FX work.

But there’s a second register of misrecognition here, one I would have missed completely if I hadn’t been watching a pristine widescreen transfer of the film. The first act of Planet of the Apes consists of Taylor and his fellow astronauts trekking across the forbidding but beautiful scenery of Arizona and Utah — in particular, the area of the Colorado River known as Lake Powell:

I was dumbstruck by this natural backdrop of mountains, deserts, and water, as gorgeously alien as anything in Nicolas Roeg’s Walkabout (1971). It occurred to me that the genius of this portion of the movie — an opening thirty minutes before the apes even show up — is that it places the spectator in a homologous position to the stranded astronauts. Like them, we stare at a world that is at once ours and another’s; a landscape both earthly and unearthly. Like the ape makeup, the cinematography forces us into sublime attentiveness, consuming every detail of a setting made familiar by our experience with terrestrial features, then unfamiliar through a storyline that presents it as an alien world, then familiar again in the final beachside revelation.

I guess what I’m saying with all this is that Planet of the Apes stands out to me as much for its planet as for its apes; and that in both constructs (and our response to them) we glimpse something of the multitiered, shuttling structure of belief and disavowal that great special effects provoke.

Soul of a New Machine

Not much to add to the critical consensus around WALL-E; trusted voices such as Tim Burke’s, as well as the distributed hive mind of Rotten Tomatoes, agree that it’s great. Having seen the movie yesterday (a full two days after its release, which feels like an eternity by the clockspeed of media blogging), I concur — and leave as given my praise for its instantly empathetic characters, striking environments, and balletic storytelling. It’s the first time in a while that tears have welled in my eyes just at the beautiful precision of the choices being made 24 times per second up on the big screen; a happy recognition that Pixar, over and over, is somehow nailing it at both the fine level of frame generation and the macro levels of marketplace logic and movie history. We are in the midst of a classic run.

Building on my comments on Tim’s post, I’m intrigued by the trick Pixar has pulled off in positioning itself amid such turbulent crosscurrents of technological change and cinematic evolution: rapids aboil with mixed feelings about nostalgia for golden age versus the need to stay new and fresh. The movies’ mental market share — the grip in which the cinematic medium holds our collective imaginary — is premised on an essential contradiction between the pleasures of the familiar and the equally strong draw of the unfamiliar. That dialectic is visible in every mainstream movie as a tension between the predictability of genre patterns and the discrete deformations we systematize and label as style.

But nowadays this split has taken on a new visibility, even a certain urgency, as we confront a cinema that seems suddenly digital to its roots. Hemingway (or maybe it was Fitzgerald) wrote that people go bankrupt in two ways: first gradually, then all at once. The same seems true of computer technology’s encroachment on traditional filmmaking practices. We thought it was creeping up on us, but in a seeming eyeblink, it’s everywhere. Bouncing around inside the noisy carnival of the summer movie season, careening from the waxy simulacrum of Indiana Jones into the glutinous candied nightmare of Speed Racer, it’s easy to feel we’re waking up the morning after an alien invasion, to find ourselves lying in bed with an uncanny synthetic replacement of our spouse.

Pixar’s great and subtle achievement is that it makes the digital/cinema pod-people scenario seem like a simple case of Capgras Syndrome, a fleeting patch of paranoia in which we peer suspiciously at our movies and fail to recognize them as being the same lovable old thing as always. With its unbroken track record of releases celebrated for their “heart,” Pixar is marking out a strategy for the successful future of a fully digital cinema. The irony, of course, is that the studio is doing so by shrugging off its own cutting-edge nature, making high-tech products with low-tech content.

Which is not to say that WALL-E lacks technological sublimity. On the contrary, it’s a ringing hymn to what machines can do, both in front of and behind the camera. More so than the plastic baubles of Toy Story, the chitinous carapaces of A Bug’s Life, the scales and fins of Finding Nemo or the polished chassis of Cars, the performers in WALL-E capture the fundamental gadgety wonder of a CG character: they look like little robots, but in another, more inclusive sense they are robots — cyborged 2D sandwiches of actors’ voices, animators’ keyframes, and procedural rendering. There’s a longstanding trope in Pixar films that the coldly inorganic can be brought to life; think of the wooden effigy of a bird built by the heroes of A Bug’s Life, or the existential yearnings of Woody and Buzz Lightyear in the Toy Story films. WALL-E, however, calibrates a much narrower metaphorical gap between its subject matter and its underlying mode of production. Its sweetly comic drama of machines whose preprogrammed functionalities are indistinguishable from their lifeforce is like a reassuring parable of cinema’s future: whether the originating matrix is silicon or celluloid, our virtual pleasures will reflect (even enshrine) an enduring humanity.

I’ll forgo commentary on the story and its rich webwork of themes, except to note a felicitous convergence of technology’s hetero gendering and competing design aesthetics that remap the Macintosh’s white curves onto the eggy life-incubator of EVE — juxtaposed with a masculine counterpart in the ugly-handsome boxiness of PC and Linux worlds. I delighted in the film’s vision of an interstellar cruise liner populated by placid chubbies, but was also impressed by the opening 30-40 minutes set amid the ruins of civilization. It says something that for the second time this year, a mainstream science-fiction film has enticed us to imagine ourselves the lone survivor of a decimated earth, portraying this situation on one level as a prison of loneliness and on another as an extended vacation: tourists of the apocalypse. I refer here of course to the better-than-expected I Am Legend, whose vistas of a plague-depopulated Manhattan unfold in loving extended takes that invite Bazinian immersion and contemplation:

Beyond these observations, what stands out to me among the many pleasures of WALL-E are the bumper materials on either side of the feature: the short “Presto,” which precedes the main film, and the credit sequence that closes the show. Such paratexts are always meaningful in a Pixar production, but tend to receive less commentary than the “meat” of the movie. Tim points out accurately that “Presto” is the first time a Pixar short has captured the antic Dionysian spirit of a Tex Avery cartoon (though I’d add that Avery’s signature eruption of the id, that curvaceous caricature of womanhood Red, was preemptively foregrounded by Jessica Rabbit in 1988’s Who Framed Roger Rabbit; such sex-doll humor seems unlikely to be emulated any time soon in Pixar’s family-friendly universe — though the Wolf could conceivably make an appearance). What I like about “Presto” is the short’s reliance on “portal logic” — the manifold possibilities for physical comedy and agonistic drama in the phenomenon of spatial bilocation, so smartly operationalized in the Valve videogame Portal.

As for the end credits of WALL-E, they are unexpectedly daring in scope, recapitulating the history of illustration itself — compressing thousands of years of representational practices in a span of minutes. As the first names appear onscreen, cave drawings coalesce, revealing what happens as robots and humans work together to repopulate the earth and nurse its ecosystem back to health. The cave drawings give way to Egyptian-style hieroglyphs and profiled 2D portraiture, Renaissance perspective drawings, a succession of painterly styles. Daring, then subversive: from Seurat’s pointillism, Monet’s impressionism, and Van Gogh’s loony swirls, the credits leap to 8-bit computer graphics circa the early 1980s — around the time, as told in David A. Price’s enjoyable history of the studio, that Pixar itself came into existence. WALL-E and his friends cavort in the form of jagged sprites, the same as you’d find in any Atari 2600 game, or perhaps remediated on the tiny screens of cell phones or the Wii’s retrographics.

I’m not sure what WALL-E’s credits are “saying” with all this, but surely it provides a clue to the larger logic of technological succession as it is being subtextually narrated by Pixar. Note, for example, that photography as a medium appears nowhere in the credits’ graphic roll call; more scandalously, neither does cinematography — nor animation. In Pixar’s restaging of its own primal scene, the digital emerges from another tradition entirely: one more ludic, more subjective and individualistic, more of an “art.” Like all ideologies, the argument is both transparently graspable and fathoms deep. Cautionary tale, recuperative fantasy, manufactured history doubling as road map for an uncertain digital future: Pixar’s movies, none more so than WALL-E, put it all over at once.


It’s not hard to see what Doug Liman intended Jumper (2008) to be: a slick, stylish action-adventure, paced to the quick rhythm of its protagonist’s wormhole-assisted leaps through space. In terms of emotional tone, something a bit less serious than The Bourne Identity (2002) and more serious than Mr. and Mrs. Smith (2005), with a touch of the structural experimentation of 1999’s Go (probably my favorite of Liman’s movies, even over Swingers [1996], which, while funny, wore its prefab-indie-classic branding a little too emphatically).

But Jumper turns out to be an anemic misfire, its frictionless construction (which might, during preproduction, have seemed a strength) resulting in something like those Olestra Doritos I used to eat: tasty, low in calories, and passing with liquid brevity through the digestive system. Or — a better metaphor — like David Rice (Hayden Christensen) himself, a young man who through some never-explained and never-sweated mutation of genetics, neurochemistry, or both, can teleport instantly from one place on earth to another. Building a plot around a person unbound by basic physical laws is always risky. First, there’s the problem of identification: truly super superbeings are impossible to empathize with, a notion explored brilliantly through the figure of Doctor Manhattan in Alan Moore’s Watchmen. Second, it’s hard to embed super-powerful beings in dramatic situations that offer any real challenge or suspense. Think of the “burly brawl” in The Matrix Reloaded (2003): Neo’s hyperkungfu turned his showdown with hundreds of Agent Smiths into an inadvertently funny dance number, spectacular in the manner of Busby Berkeley musicals and charming in the manner of Buster Keaton slapstick — but never exciting, because nothing was at stake.

To get around this dilemma, our fantasies of superpower have yoked the anomalous beings at their center to various forms of existential and psychological ennui. DC got it right with Superman and Batman, both orphans, one an extraterrestrial “stranger in a strange land” and one a PTSD-afflicted vigilante. Superman’s love for Lois Lane is ultimately a hobbling force, locking him to a human scale of emotions and practical concerns (why else would he need to take a 9-5 job at the Daily Planet?). In literature, Billy Pilgrim — the haunted hero of Kurt Vonnegut’s Slaughterhouse-Five (1969) — travels through time, but not under his own direction; instead he revisits traumatic moments of war and family life, adrift in a temporal ocean. (A similar theme organizes the lovely, tearjerking tapestry of Audrey Niffenegger’s 2003 novel The Time Traveler’s Wife.)

Jumper would have been more interesting if David’s teleporting ability took him only to places where he’d fallen in love or feared for his life — or if he compulsively returned to the same locations over and over, without meaning to: a Freudian trip. The film’s mystery might then have resonated as much inwardly as outwardly. As it is, the situation with which the screenwriters have saddled David, and us, is both needlessly elaborate and absurdly simplistic. Bad guys called Paladins hunt those who can teleport, using a range of electrified devices (again, both baroque and silly in their design) to anchor, trap, and ultimately kill the jumpers. It may not be a sin that the Paladins’ motivation isn’t explained in more detail — the opening crawl of Star Wars taught a generation of filmgoers the value of ruthlessly boiling down exposition — but it would have been nice to learn even a little bit about how their “civil war” with the jumpers has played out over history. (Since jumpers must use images to target their more exotic jumps, how would they have functioned in a pre-photographic era?)

Ah well. The film is more interested in portraying David as a kind of supertourist, someone who can go wherever he wants, whenever he wants — enjoying a picnic atop the head of the Sphinx, followed by surfing in Thailand. This has the effect of equating teleportation with ownership of a really good credit card, a consumerist fantasy of total access and freedom. A shame, because the charm of Steven Gould’s 1992 source novel is in showing how David learns his way gradually into his power, staged as a series of plausibly awkward experiments and epiphanies. The book, that is, eases us into a superhuman life by showing us each incremental point on the hero’s journey. The movie, by contrast, skips all that — “jumps” past it — giving us a protagonist who seems petulant rather than plaintive, arrogant rather than awesome. (The fact that he is played by the same piece of plastic who sank the Star Wars prequels doesn’t help.)

Liman’s comments to the contrary, Jumper is very much a visual-effects film; the teleportation effect is as much the movie star as Christensen. According to the DVD extras (and here’s a tip: if the FX get their own doc, it’s a safe bet the picture was bankrolled on the basis of them), considerable R&D went into getting the jumps just right. Wikipedia records over 100 jumps in the film, each subtly adjusted to reflect the distance traveled and the emotional state of the jumper. The effect itself is really a package of techniques. Characters appear and disappear in a swirl of particles, as though they’ve turned into ash and blown away; the local environment is stirred as though by a strong wind, papers fluttering, doors slamming; and hazy, prismatic “jump scars” remain in their wake, marking the point at which spacetime has conveniently ruptured. Often all of this is accompanied by offhand flicks of the camera, as though following the body’s transit through hyperspace; a shot might begin with a quick pan downward to street level, an instant before the jumper appears.

For the most part, we witness jumps from the “outside,” that is, with a body popping out of and back into presence without traveling the intervening distance. (Some of the most pleasing instances are long, motion-controlled takes in which a body — or car! — might appear five or six times, dancing from spot to spot in the frame.) But every once in a while, the camera goes virtual and follows David through a jump, as in the example below, in which he shifts himself from an icy lake to a local library:

All this jumping about is undeniably fun, a kind of cat-and-mouse in which each leap arrives slightly before or after the audience predicts. It looks stylish, as though the jumpers are being sucked away by concealed vacuums. But ultimately, it doesn’t add up to anything more than itself: in an act of accidental self-referentiality, Jumper the movie is a movie that Jumps, and that’s all.