Jurassic World

Driving home from last night’s screening of Jurassic World, I kept thinking back to a similarly humid summer evening in 1997, when I locked horns with a friend over the merits of an earlier film in the franchise, The Lost World: Jurassic Park. We were lingering outside the theater before heading to our cars, digesting the experience of the movie we had just watched together. He’d disliked The Lost World, which (if I remember his stance accurately) he saw as an empty commercial grab, a cynical attempt by Steven Spielberg to repeat the success of Jurassic Park (1993)—a film we agreed was a masterpiece of blockbuster storytelling and spectacle—but possessing none of the spark and snap of the original. As for my position, all I can reconstruct these 18 years later is that I appreciated The Lost World’s promotion of Ian Malcolm to primary protagonist—after The Fly (1986), any movie that put Jeff Goldblum in the driver’s seat was golden in my book—as well as its audacious final sequence in which a Tyrannosaurus rex runs rampant through the streets and suburbs of San Diego.

Really what it came down to, though, were two psychological factors, facets of a subjectivity forged in the fantastic fictions and film frames of science-fiction media, my own version of Scott Bukatman’s “terminal identity.” The first, which I associate with my fandom, was that in those days I never backed down from a good disagreement, whether on picayune questions like Kirk versus Picard or loftier matters such as the meaning of the last twenty minutes of 2001: A Space Odyssey (1968). (Hmm, both of those links go to WhatCulture.com—maybe I should add it to my reading list.)

The second reason for my defense of The Lost World was simply my overriding wish that it not be a piece of crap: that it be, in fact, great. Because even then I could feel the tug of loyalty tying me to an emergent franchise, the sense that I’d signed on for a serial experience that might well stretch into decades, demanding fealty no matter how debased its future installments might become. I hadn’t yet read the work of Jacques Lacan—that would come a couple of years later, when I became a graduate student—but I see now that I was thinking in the futur antérieur, peering back on myself from a fantasized, later point of view: “I will have done …” It’s a twisty trick of temporality, and if I no longer stress about contradictions in my viewing practice the way I once did (following the pleasurable trauma of Abrams’s reboot, I have accepted the death of the Star Trek I grew up with), I am still haunted by a certain anxiety of the unarrived regarding my scholarly predilections and predictions. (I’d hate to be the kind of academic whom future historians tsk-ingly agree got things wrong.)

But the tidal pull of the franchise commitment persists, which is why I’m having a hard time deciding whether Jurassic World succeeded or sucked. Objectively I’m pretty sure the film is a muddle, certainly worse in its money-grabbing motivations and listless construction than either The Lost World or its follow-up, the copy-of-a-copy Jurassic Park III (2001). Anthony Lane in the New Yorker correctly bisects Jurassic World: “doggedly dull for the first hour and beefy with basic thrills for most of the second,” to which I’d add that most of that first hour is rushed, graceless, and elliptical to the point of incoherence. One of my favorite movie critics, Walter Chaw of Film Freak Central, damns that ramshackle execution with faint praise, writing: “Jurassic World is Dada. It is anti-art, anti-sense—willfully, defiantly, some would say exuberantly, meaningless. In its feckless anarchy, find mute rebellion against narrative convention. You didn’t come for the story, it says, you came for the set-ups and pay-offs.”

Perhaps Chaw is right, but seeing the preview for this fall’s Bridge of Spies reminded me what an effortless composer of the frame Spielberg can be–an elegance absent from Jurassic World save for one shot, which I’ll get to in a minute–and rewatching the opening minutes of Jurassic Park before tackling this review reminded me how gifted the man is at pacing. In particular, at putting a story’s elements in place: think of the careful build of the first twenty minutes of Close Encounters of the Third Kind (1977), laying out its globe-jumping puzzle pieces; or of Jaws (1975), as the sleepy beach town of Amity slowly wakes to the horror prowling its waters. Credit too the involvement of the late, great Michael Crichton. His early technothrillers–especially 1969’s The Andromeda Strain and 1973’s Westworld, both of which feed directly into Jurassic Park–might be remembered for their high concepts (and derided for their thin characterizations), but what made him such a perfect collaborator for Spielberg was the clear pleasure he took in building narrative mousetraps, one brief chapter at a time. (Nowadays someone like Dan Brown is probably seen as Crichton’s heir apparent, though I vastly prefer the superb half-SF novels of Daniel Suarez.)

I delve into these influences and inheritances because ancestry and lineage seem to be much on the mind of Jurassic World. DNA and its Pandora’s box of wonders/perils have always been a fascination of the Jurassic franchise. In the first film it was mosquitoes in amber; in Jurassic World it’s the 1993 movie that’s being resurrected. Something old, something new, something borrowed, something blue: in front of the camera you’ve got B. D. Wong and a crapload of production design tying the humble beginnings of Isla Nublar to its modern, Disneyesque metastasis, while “behind the scenes” (a phrase I put in scare quotes because, let’s face it, if the material were really meant to stay behind the scenes, we wouldn’t be discussing it), videos like this one work to reassure us of a meaningful connection between the original and its copy:

The “Classic Jurassic Crew” profiled here might seem a little reachy (lead greensman? boom operator?), but the key names are obviously Jack Horner and Phil Tippett, “Paleontology Consultant” and “Dinosaur Consultant” respectively: the former conferring scientific legitimacy upon the proceedings, the latter marking a tie to traditions of special effects that predate the digital. Tippett, of course, made his name in stop-motion animation–the Imperial Walkers in The Empire Strikes Back’s Hoth battle are largely his handiwork–and at first was approached by Spielberg to provide similar animation for Jurassic Park. But when Tippett’s “go motion” technique was superseded by the computer-generated dinosaurs being developed by Dennis Muren, Tippett became a crucial figure, both technologically and rhetorically, in the transition from analog to digital eras. In the words of his Wikipedia entry:

Far from being extinct, Tippett evolved as stop motion animation gave way to Computer-generated imagery or CGI, and because of Phil’s background and understanding of animal movement and behavior, Spielberg kept Tippett on to supervise the animation on 50 dinosaur shots for Jurassic Park.

Tippett’s presence in the film’s promotional field of play thus betrays World’s interest in establishing a certain “real” at the core of its operations, inoculating itself against the argument that it is merely a simulacrum of what came before. It’s a challenge faced by every “reboot event” within the ramifying textualities of a long-running blockbuster franchise, forced by marketplace demands to periodically reinvent itself while (and this is the trick) preserving the recognizable essence that made its forerunner(s) successful. In the case of Jurassic World, that pressure surfaces symptomatically in the discourse around the movie’s visual effects–albeit in a fashion that ironically inverts the test Jurassic Park met and mastered all those years ago. Park’s dinosaurs were sold as breakthroughs in CGI, notwithstanding the brevity of their actual appearance: of the movie’s 127-minute running time, only six minutes contained digital elements, with the rest of the creature performances supplied by Tippett’s animation and Stan Winston’s animatronics. Those old-school techniques were largely elided in the attention given to Park’s cutting-edge computer graphics.

Jurassic World, by contrast, arrives long after the end of what Michele Pierson has called CGI’s “wonder years”; inured to Hollywood’s production of digital spectacle by its sheer superfluity, audiences now seek the opposite lure, the promise of something solid, profilmic, touchable. This explains the final person featured in the video: John Rosengrant of the fittingly named Legacy Effects, seen operating a dying apatosaurus in one of Jurassic World’s few languid moments. The dinosaur in that scene is a token of the analog era, offered up as emblem of a practical-effects integrity the filmmakers hope will generalize to their entire project. It’s an increasingly common move among makers of fantastic media, one that critics, like this writer for Grantland, are all too happy to reinforce:

J. J. Abrams got a big cheer at the recent Star Wars Celebration Anaheim when he said he was committed to practical effects. George Miller won plaudits for sticking real trucks in the desert in Mad Max: Fury Road. Similarly, [Jurassic World director Colin] Trevorrow gestured to the Precambrian world of special effects by filling his movie with rubber dinos, an old View-Master, and going to the mat to force Universal to pony up for an animatronic apatosaurus. Chris Pratt’s Owen Grady tenderly ministers to the old girl before her death — a symbolic death of the practical effect under the rampage of CGI.

I hope to say more about this phenomenon, in which the digital camouflages itself in a lost analog authenticity, in a future post. For now I will simply note that “Chris Pratt’s Owen Grady” might be the most real thing within Jurassic World. Pratt’s performance here is strikingly different from the jokester he played so winningly in Guardians of the Galaxy: he’s tougher, harsher, more brutish. Spielberg is rumored to want him to play Indiana Jones, and I can see how that would work: like the young Harrison Ford, Pratt can convincingly anchor a fantasy scenario without seeming like he’s playing dress-up. But the actor Pratt reminds me of even more than Harrison Ford is John Wayne: his first shot in Jurassic World, dazzlingly silhouetted against sunlight, recalls that introductory dolly in Stagecoach (1939) when the camera rushes up to Wayne’s face as though helpless to resist his primordial photogénie.

As for the rest of Jurassic World, I enjoyed some of it and endured most of it, borne along by the movie’s fitful but generally successful invocations of the 1993 original. “Assets” are one of the screenplay’s keywords, and they apply not only to the dinosaurs that are Isla Nublar’s central attractions but to the swarm of intellectual property that constitutes the franchise’s brand: the logos, the score, the velociraptors’ screeches and the T-Rex’s roar. (Sound libraries, like genetically engineered dinosaurs, are constructed and owned things too.) Anthony Lane jeers that there is “something craven and constricting in the attitude of the new film to the old,” but I found the opposite: it’s when World is most clearly conscious of Park that it works the best, which is probably why I enjoyed its final half hour–built, as in Park, as an escalating series of action beats, culminating in a satisfying final showdown–the most.

But that might just be my franchise loyalty talking again. As with The Lost World in that 1997 argument outside the theater, I may be talking myself into liking something not because of its actual qualities, but because it’s part of my history–my identity.

It’s in my DNA.

Making Mine Marvel


Reading Sean Howe’s Marvel Comics: The Untold Story (New York: Harper Perennial, 2013) I am learning all sorts of things. Or rather, some things I am learning and some things I am relearning, as Marvel’s publications are woven into my life as intimately as are Star Trek and Star Wars: other franchises of the fantastic whose fecundity — the sheer volume of media they’ve spawned over the years — means that at any given stage of my development they have been present in some form. Our biographies overlap; even when I wasn’t actively reading or watching them, they served at least as a backdrop. I would rather forget that The Phantom Menace or Enterprise happened, but I know precisely where I was in my life when they did.

Star Wars, of course, dates back to 1977, which means my first eleven years were unmarked by George Lucas’s galvanic territorialization of the pop-culture imaginary. Trek, on the other hand, went on the air in 1966, the same year I was born. Save for a three-month gap between my birthday in June and the series premiere in September, Kirk, Spock and the universe(s) they inhabit have been as fundamental and eternal as my own parents. Marvel predates both of them, coming into existence in 1961 as the descendant of Timely and Atlas. This makes it about as old as James Bond (at least in his movie incarnation) and slightly older than Doctor Who, arriving via TARDIS, er, TV in 1963.

My chronological preamble is in part an attempt to explain why so much of Howe’s book feels familiar even as it keeps surprising me by crystallizing things about Marvel I kind of already knew, because Marvel itself — avatarized in editor/writer Stan Lee — was such an omnipresent engine of discourse, a flow of interested language not just through dialogue bubbles and panel captions but also through the nondiegetic artists’ credits and editorial inserts (“See Tales of Suspense #53! — Ed.”) as well as paratextual spaces like the Bullpen Bulletins and Stan’s Soapbox. Marvel in the 1960s, its first decade of stardom, was very, very good not just at putting out comic books but at inventing itself as a place and even a kind of person — a corporate character — spending time with whom was always the unspoken emotional framework supporting my issue-by-issue excursions into the subworlds of Spider-Man, the Fantastic Four, and Dr. Strange.

Credit Howe, then, with taking all of Marvel’s familiar faces, fictional and otherwise, and casting each in its own subtly new light: Stan Lee as a liberal, workaholic jack-in-the-box in his 40s rather than the wrinkled avuncular cameo-fixture of recent Marvel movies; Jack Kirby as a father of four, turning out pages at breakneck speed at home in his basement studio with a black-and-white TV for company; Steve Ditko as — and this genuinely took me by surprise — a follower of Ayn Rand who increasingly infused his signature title, The Amazing Spider-Man, with Objectivist philosophy.

It’s also interesting to see Marvel’s transmedial tendencies already present in embryo as Lee, Kirby, and Ditko shared their superhero assets across books: Howe writes, “Everything was absorbed into the snowballing Marvel Universe, which expanded to become the most intricate fictional narrative in the history of the world: thousands upon thousands of interlocking characters and episodes. For generations of readers, Marvel was the great mythology of the modern world.” (Loc 125 — reading it on my Kindle app). Of course, as with any mythology of sufficient popular mass, it becomes impossible to read history as anything but a teleologically overdetermined origin story, so perhaps Howe overstates the case. Still, it’s hard to resist the lure of reading marketing decisions as prescient acts of worldbuilding: “It was canny cross-promotion, sure, but more important, it had narrative effects that would become a Marvel Comics touchstone: the idea that these characters shared a world, that the actions of each had repercussions on the others, and that each comic was merely a thread of one Marvel-wide mega-story.” (Loc 769)

I like too the way Untold Story paints comic-book fandom in the 1960s as a movement of adults, or at least teenagers and college students, rather than the children so often caricatured as typical comic readers; Howe notes July 27, 1964, as the date of “the first comic convention” at which “a group of fans rented out a meeting hall near Union Square and invited writers, artists, and collectors (and one dealer) of old comic books to meet.” (Loc 876) The company’s self-created fan club, the Merry Marvel Marching Society or M.M.M.S., was in Howe’s words

an immediate smash; chapters opened at Princeton, Oxford, and Cambridge. … The mania wasn’t confined to the mail, either — teenage fans started calling the office, wanting to have long telephone conversations with Fabulous Flo Steinberg, the pretty young lady who’d answered their mail so kindly and whose lovely picture they’d seen in the comics. Before long, they were showing up in the dimly lit hallways of 625 Madison, wanting to meet Stan and Jack and Steve and Flo and the others. (Loc 920)

A forcefully engaged and exploratory fandom, then, already making its media pilgrimages to the hallowed sites of production, which Lee had so skillfully established in the fannish imaginary as coextensive with, or at least intersecting, the fictional overlay of Manhattan through which Spider-Man swung and the Fantastic Four piloted their Fantasticar. In this way the book’s first several chapters offhandedly map the genesis of contemporary, serialized, franchised worldbuilding and the emergent modern fandoms that were both those worlds’ matrix and their ideal sustaining receivers.

Howe is attentive to these resonances without overstating them: Lee, Kirby and others are allowed to be superheroes (flawed and bickering in true Marvel fashion) while still retaining their earthbound reality. And through his book, so far, I am reexperiencing my own past in heightened, colorful terms, remembering how the media to which I was exposed when young mutated me, gamma-radiation-like, into the man I am now.

Tilt-Shifting Pacific Rim

Two-thirds of the way through Pacific Rim — just after an epic battle in, around, and ultimately over Hong Kong that’s one of the best-choreographed setpieces of cinematic SF mayhem I have ever witnessed — I took advantage of a lull in the storytelling to run to the restroom. In the air-conditioned chill of the multiplex, the lobby with its concession counters and videogames seemed weirdly cramped and claustrophobic, a doll’s-house version of itself I’d entered after accidentally stumbling into the path of a shrink-ray, and I realized for the first time that Guillermo del Toro’s movie had done a phenomenological number on me, retuning my senses to the scale of the very, very big and rendering the world outside the theater, by contrast, quaintly toylike.

I suspect that much of PR’s power, not to mention its puppy-dog, puppy-dumb charm, lies in just this scalar play. The cinematography has a way of making you crane your gaze upwards even in shots that don’t feature those lumbering, looming mechas and kaiju. The movie recalls the pleasures of playing with LEGO, model kits, action figures, even plain old Matchbox Cars, taking pieces of the real (or made-up) world and shrinking them down to something you can hold in your hand — and, just as importantly, look up at. As the father of a two-year-old, I often find myself lying on the floor, my eyes situated inches off the carpet and so near the plastic dump trucks, excavators, and fire engines in my son’s fleet that I have to take my glasses off to properly focus on them. At this proximity, toys regain some of their large-scale referent’s visual impact without ever quite giving up their smallness: the effect is a superimposition of slightly dissonant realities, or in the words of my friend Randy (with whom I saw Pacific Rim), a “sized” version of the uncanny valley.

This scalar unheimlich is clearly on the culture’s mind lately, literalized — iconized? — in tilt-shift photography, which takes full-sized scenes and optically transforms them into images that look like dioramas or models. A subset of the larger (no pun intended) practice of miniature faking, tilt-shift updates Walter Benjamin’s concept of the optical unconscious for the networked antheap of contemporary digital and social media, in which nothing remains unconscious (or unspoken or unexplored) for long but instead swims to prominence through an endless churn of collective creation, commentary, and sharing. Within the ramifying viralities of Facebook, Twitter, Tumblr, Reddit, and 4chan, in which memes boil reality into existence like so much quantum foam, the fusion of lens-perception and human vision — what the formalist Soviet pioneers called the kino-eye — becomes just another Instagram filter.

The giant robots fighting giant monsters in Pacific Rim, of course, are toyetic in a more traditional sense: where tilt-shift turns the world into a miniature, PR uses miniatures to make a world, because that is what cinematic special effects do. The story’s flimsy romance, between Raleigh Becket (Charlie Hunnam) and Mako Mori (Rinko Kikuchi), makes more sense when viewed as a symptomatic expression of the national and generic tropes the movie is attempting to marry: the mind-meldly “drift” at the production level fuses traditions of Japanese rubber-monster movies like Gojira and anime like Neon Genesis Evangelion with a visual-effects infrastructure which, while a global enterprise, draws its guiding spirit (the human essence inside its mechanical body, if you will) from Industrial Light and Magic and the decades of American fantasy and SF filmmaking that led to our current era of Brobdingnagian blockbusters.

Pacific Rim succeeds handsomely in channeling those historical and generic traces, paying homage to the late, great Ray Harryhausen along the way, but evidently its mission of magnifying ’50s-era monster movies to 21st-century technospectacle was an indulgence of giantizing impulses unsuited to U.S. audiences, at least; in its opening weekend, PR was trounced by Grown Ups 2 and Despicable Me 2, comedies offering membership in a franchise where PR could offer only membership in a family. The dismay of fans, who rightly recognize Pacific Rim as among the best of the summer season and likely deserving of a place in the pantheon of revered SF films with long ancillary afterlives, should remind us of other scalar disjunctions in play: for all their power and reach (see: the just-concluded San Diego Comic Con), fans remain a subculture, their beloved visions, no matter how expansive, dwarfed by the relentless output of a mainstream-oriented culture industry.

Hugo

Count me among those few unfortunates who didn’t “get” Hugo. Not that I failed to see what Martin Scorsese’s opulent adaptation of Brian Selznick’s graphic novel was trying to accomplish; but for me the movie’s attempt to channel and amplify a certain magic succeeded merely in bringing it leadenly to earth, burying a fragile tapestry of early-cinema memories beneath a giant cathedral (or in this case, a train station) of digital visual effects and overly literal storytelling. Some of my disaffection was likely due to the viewing situation: I watched it flat and small, at home on my (HD)TV, rather than big and deep, and the lack of 3D glasses or IMAX overwhelm surely contributed to the lackluster experience. Yet I resist this bullying-by-format, and find it at odds with what Scorsese, through his marvelous machinery, appeared to be promoting: the idea that “movie magic” can blossom on the squarest of screens, the grainiest and most monochromatic of film stocks, not even needing synchronized sound to work its wonders on an audience.

I watched Hugo for a class I am co-teaching this term on the varieties of realism and “reality” deployed across multiple media, starting with early cinema’s dazzling blend of spectacularized “actuality” and suddenly plentiful illusion. Often reduced to a binary opposition between the work of Louis and Auguste Lumière on the one hand and Georges Méliès on the other, the era was more like the first femtoseconds following the Big Bang, in which cinema’s principal forces had not yet unfolded themselves from the superheated plasma of possibility. Hugo tries to take us back to that primordial celluloid soup, or more precisely to a fantasized moment — itself still hauntingly young — when the medium of film began to be aware of its own past, that is, to recognize itself as a medium and not, as one Louis Lumière quote would have it, “an invention without a future.” Through an overwrought relay of intermediaries that includes a young boy and girl, a nascent cinema scholar, a dead father, a semifunctional automaton, and a devoted wife, that past is resurrected in the form of Méliès himself, cunningly impersonated by Ben Kingsley, and it is the old magician’s backstory, dominating the final third of the film, that jolted both me and my students back to full, charmed attention.

I suppose there is something smart and reflexive about the way Hugo buries its lead, staging the delights of scholarly and aesthetic discovery through a turgid narrative that only slowly unveils its embedded wonders. But for me, too much of the movie felt like a windup toy, a rusty clockwork, a train with a coal-burning engine in dire need of stoking.

The Grey

It’s always interesting to witness the birth of an articulation between actor and genre — the moment when Hollywood’s endlessly spinning tumblers click into place with a tight and seemingly inevitable fit, fusing star and vehicle into a single, satisfying commercial package. For Liam Neeson, that grave and manly icon, the slot machine hit its combinatorial jackpot with 2008’s Taken, a thriller whose narrative economy was as ruthlessly single-minded as its protagonist; Neeson played a former CIA agent whose mission to rescue his kidnapped daughter lent the bloody body count a moral justification, like a perfectly balanced throwing knife. Next came Unknown, less successful in leaping its logical gaps, but centered nonetheless by Neeson’s morose, careworn toughness.

The Grey gives us Neeson as yet another hardened but sensitive man, another action hero whose uncompromising competence in the face of disaster is saved from implausible superhumanity by virtue of the fact that he seems so reluctant to be going through it all; you sense that Neeson’s characters really wish they were in another kind of movie. It makes him just right for this film, in which a plane full of grizzled and boastful oil-drilling workers crashes in remote Alaskan territory, on what turns out to be the hunting and nesting grounds of a community of wolves. None of these guys wants to be there, especially after a crash scene so stroboscopically wrenching and whiplashy that it made my wife leave the room. Shortly after Neeson’s character makes his way to the smoking debris and discovers a handful of survivors, there is an exquisite scene in which he coaches a terminally wounded man through the final seconds of his life. “You’re going to die,” Neeson gently and compassionately growls, as around him the other tough guys weep and turn away.

Nothing else that happens in the story quite matches that moment, and since the rest of the film is essentially one long death scene, I found myself wishing that someone like Neeson could help put the movie out of its misery. Not that the action of being hunted by wolves isn’t gripping — but as the team’s numbers dwindle and it becomes clearer that no one is going to survive this thing, the tone becomes so meditative that I found myself numbing out, as though frostbitten. I’ve written before about annihilation narratives, and I continue to respond to the mordant pleasures of their zero-sum games, which appeal, I suspect, to the same part of me that likes washing every dish in the kitchen and then hosing the sink clean while the garbage disposal whirrs. (Scott Smith’s book The Ruins is perhaps my favorite recent instance of the tale from which no one escapes.) The Grey’s slow trudge to its concluding frames induced a trance more than a chill, but the experience stayed with me for days afterward.

Don’t Be Afraid of the Dark

It’s hard to pinpoint the primal potency of the original Don’t Be Afraid of the Dark, the 1973 telefilm about a woman stalked by hideous little troll monsters in the shadowy old house where she lives with her husband. The story itself, a wisp of a thing, has the unexplained purity of a nightmare piped directly from a fevered mind: both circuitously mazelike and stiflingly linear, it’s like watching someone drown in a room slowly filling with water. As with contemporaries Rosemary’s Baby and The Stepford Wives, it’s a parable of domestic disempowerment built around a woman whose isolation and vulnerability grow in nastily direct proportion to her suspicion that she is being hunted by dark forces. All three movies conclude in acts of spiritual (if not quite physical) devouring and rebirth: housewives eaten by houses. To the boy I was then, Don’t Be Afraid of the Dark provoked a delicious, vertiginous sliding of identification, repulsion, and desire: the doomed protagonist, played by Kim Darby, merged the cute girl from the Star Trek episode “Miri” with the figure of my own mother, whose return to full-time work as a public-school librarian, I see now, had triggered tectonic shifts in my parents’ relationship and the continents of authority and affection on which I lived out my childhood. These half-repressed terrors came together in the beautiful, grotesque design of the telefilm’s creatures: prunelike, whispery-voiced gnomes creeping behind walls and reaching from cupboards to slice with razors and switch off the lights that are their only weakness.

The 2011 remake, which has nothing of the original’s power, is nevertheless valuable as a lesson in the danger of upgrading, expanding, complicating, and detailing a text whose low-budget crudeness in fact constitutes televisual poetry. Produced by Guillermo del Toro, the movie reminds me of the dreadful watering-down that Steven Spielberg experienced when he shifted from directing to producing in the 1980s, draining the life from his own brand (and is there not a symmetry between this industrial outsourcing of artistry and the narrative’s concern with soul-sucking?). The story has been tampered with disastrously, introducing a little girl to whom the monsters (now framed as vaguely simian “tooth fairies”) are drawn; the woman of the house, played by a bloodless Katie Holmes, still succumbs in the end to the house’s demonic sprites, but the addition of a maternal function forces us to read her demise as noble sacrifice rather than nihilistic defeat, and when the father (Guy Pearce) walks off with his daughter in the closing beat, it comes unforgivably close to a happy ending. As for the monsters, now more infestation than insidiousness, they skitter and leap in weightless CGI balletics, demonstrating that, as with zombies, faster does not equal more frightening. But for all its evacuation of purpose and punch, the remake is useful in locating a certain indigestible blockage in Hollywood’s autocannibalistic churn, enshrining and immortalizing — through its very failure to reproduce it — the accidental artwork of the grainy, blunt, wholly sublime original.

The Ides of March

Ryan Gosling is something of a Rorschach blot for me; he makes an entirely different impression from film to film, skeevy-charming in Lars and the Real Girl, dopey and dense in Crazy, Stupid, Love, stonily heroic in Drive (where he brings to mind both Gary Cooper and that “axiom of the cinema” Charlton Heston), pathetic to the point of unwatchability in Blue Valentine. Good movies all — Drive is actually great — and I suppose good performances, but Gosling’s particular brand of method acting creates a hollow at their center.

The Ides of March, directed by George Clooney and adapted by Clooney and his frequent collaborator Grant Heslov from the play “Farragut North,” continues this trend, giving us Gosling as Stephen Meyers, a slick political operator working for Democratic governor Mike Morris (Clooney) in his bid for the presidential nomination. The story hinges on Meyers’s loss of innocence, a pivot from sincere belief in his candidate’s values and virtues to a cynical acceptance of the actual machinations of electoral campaigns. The movie channels the bleak paranoia of any number of ’70s thrillers and satires, from All the President’s Men to The Candidate, but ultimately feels slight and underdeveloped; there are too few moving parts to the plot, and the betrayals and reversals are spottable well in advance.

The largest problem, though, is that Gosling’s character barely seems to change, even as the narrative describes his fall from grace. Is this because of an overly economical script that follows too closely the limited staging of its theatrical existence? The film’s portrayal of a political scandal that seems almost quaint compared to the hateful and incoherent hunger games being played out by the current crop of candidates for the Republican presidential ticket? Or is it because Gosling himself, as the apotheosis of a certain kind of absent acting style, registers from the start as someone who’s merely going through the motions?

Super 8: The Past Through Tomorrow

Ordinarily I’d start this with a spoiler warning, but under our current state of summer siege — one blockbuster after another, each week a mega-event (or three), movies of enormous proportion piling up at the box office like the train-car derailment that is Super 8’s justly lauded setpiece of spectacle — the half-life of secrecy decays quickly. If you haven’t watched Super 8 and wish to experience its neato if limited surprises as purely as possible, read no further.

This seems an especially important point to make in relation to J. J. Abrams, who has demonstrated himself a master of, if not precisely authorship as predigital literary theory would recognize it, then a kind of transmedia choreography, coaxing a miscellany of texts, images, influences, and brands into pleasing commercial alignment with sufficient regularity to earn himself his own personal brand as auteur. As I noted a few years back in a pair of posts before and after seeing Cloverfield, the truism expounded by Thomas Elsaesser, Jonathan Gray, and others — that in an era of continuous marketing and rambunctiously recirculated information, we see most blockbusters before we see them — has evolved under Abrams into an artful game of hide and seek, building anticipation by highlighting in advance what we don’t know, first kindling then selling back to us our own sense of lack. More important than the movies and TV shows he creates are the blackouts and eclipses he engineers around them, dark-matter veins of that dwindling popular-culture resource: genuine excitement for a chance to encounter the truly unknown. The deeper paradox of Abrams’s craft registered with me the other night when, in an interview with Charlie Rose, he explained his insistence on secrecy while his films are in production not in terms of savvy marketing but as a peace offering to a data-drowned audience, a merciful respite from the Age of Wikipedia and TMZ. No less clever for their apparent lack of guile, Abrams’s feats of paratextual prestidigitation mingle the pleasures of nostalgia with paranoia for the present, allying his sunny simulations of a pop past with the bilious mutterings of information-overload critics. (I refuse to use Bing until they change their “too much information makes you stupid” campaign, whose head-in-the-sand logic seems so like that of Creationism.)

The other caveat to get out of the way is that Abrams and his work have proved uniquely challenging for me. I’ve never watched Felicity or Alias apart from the bits and pieces that circulated around them, but I was a fan of LOST (at least through the start of season four), and enjoyed Mission: Impossible III — in particular one extended showoff shot revolving around Tom Cruise as his visage is rebuilt into that of Philip Seymour Hoffman, of which no better metaphor for Cruise’s lifelong pursuit of acting cred can be conceived. But when Star Trek came out in 2009, it sort of short-circuited my critical faculties. (It was around that time I began taking long breaks from this blog.) At once a perfectly made pop artifact and a wholesale desecration of my childhood, Abrams’s Trek did uninvited surgery upon my soul, an amputation no less traumatic for being so lovingly performed. My refusal to countenance Abrams’s deft reboot of Gene Roddenberry’s vision is surely related to my inability to grasp my own death — an intimation of mortality, yes, but also of generationality, the stabbing realization that something which had defined me for so many years as stable subject and member of a collective, my herd identity, had been reassigned to the cohort behind me: a cohort whose arrival, by calling into existence “young” as a group to which I no longer belonged, made me — in a word — old. Just as if I had found Roddenberry in bed with another lover, I must encounter the post-Trek Abrams from within the defended lands of the ego, a continent whose troubled topography was sculpted not by physical law but by tectonics of desire, drive, and discourse, and whose Lewis and Clark were Freud and Lacan. (Why I’m so touchy about border disputes is better left for therapy than blogging.)

Given my inability to see Abrams’s work through anything other than a Bob-shaped lens, I thought I would find Super 8 unbearable, since, like Abrams, I was born in 1966 (our birthdays are only five days apart!), and, like Abrams, I spent much of my adolescence making monster movies and worshipping Steven Spielberg. So much of my kid-life is mirrored in Super 8, in fact, that at times it was hard to distinguish it from my family’s 8mm home movies, which I recently digitized and turned into DVDs. That gaudy Kodachrome imagery, adance with grain, is like peering through the wrong end of a telescope into a postage-stamp Mad Men universe where it is still 1962, 1965, 1967: my mother and father younger than I am today, my brothers and sisters a blond gaggle of grade-schoolers, me a cheerful, big-headed blob (who evidently loved two things above all else: food and attention) showing up in the final reels to toddle hesitantly around the back yard.

Fast-forward to the end of the 70s, and I could still be found in our back yard (as well as our basement, our garage, and the weedy field behind the houses across the street), making movies with friends at age twelve or thirteen. A fifty-foot cartridge of film was a block of black plastic that held about 3 minutes of reality-capturing substrate. As with the Lumiere cinematographe, the running time imposed formal restraints on the stories one could tell; unless or until you made the Griffithian breakthrough to continuity editing, scenarios were envisioned and executed based on what was achievable in-camera. (In amateur cinema as in embryology, ontogeny recapitulates phylogeny.) For us, this meant movies built around the most straightforward of special effects — spaceship models hung from thread against the sky or wobbled past a painted starfield, animated cotillions of Oreo cookies that stacked and unstacked themselves, alien invaders made from friends wrapped in winter parkas with charcoal shadows under the eyes and (for some reason) blood dripping from their mouths — and titles that seemed original at the time but now announce how occupied our processors were with the source code of TV and movie screens: Star Cops, Attack of the Killer Leaves, No Time to Die (a spy thriller containing a stunt in which my daredevil buddy did a somersault off his own roof).

Reconstructing even that much history reveals nothing so much as how trivial it all was, this scaling-down of genre tropes and camera tricks. And if cultural studies says never to equate the trivial with the insignificant or the unremarkable, a similar rule could be said to guide production of contemporary summer blockbusters, which mine the detritus of childhood for minutiae to magnify into $150 million franchises. Compared to the comic-book superheroes who have been strutting their Nietzschean catwalk through the multiplex this season (Thor, Green Lantern, Captain America et-uber-al), Super 8 mounts its considerable charm offensive simply by embracing the simple, giving screen time to the small.

I’m referring here less to the protagonists than to the media technologies around which their lives center, humble instruments of recording and playback which, as A. O. Scott points out, contrast oddly with the high-tech filmmaking of Super 8 itself as well as with the more general experience of media in 2011. I’m not sure what ideology informs this particular retelling of modern Genesis, in which the Sony Walkman begat the iPod and the VCR begat the DVD; neither Super 8’s screenplay nor its editing develops the idea into anything like a commentary, ironic or otherwise, leaving us with only the echo to mull over in search of meaning.

A lot of the movie is like that: traces and glimmers that rely on us to fill in the gaps. Its backdrop of dead mothers, emotionally-checked-out fathers, and Area 51 conspiracies is as economical in its gestures as the overdetermined iconography of its Drew Struzan poster, an array of signifiers like a subway map to my generation’s collective unconscious. Poster and film alike are composed and lit to summon a linkage of memories some thirty years long, all of which arrive at their noisy destination — there’s that train derailment again — in Super 8.

I don’t mind the sketchiness of Super 8’s plot any more than I mind its appropriation of 1970s cinematography, which trades the endlessly trembling camerawork of Cloverfield and Star Trek for the multiplane composition, shallow focus, and cozily cluttered frames of Close Encounters of the Third Kind. (Abrams’s film is more intent on remediating CE3K’s rec-room mise-en-scène than its Douglas Trumbull lightshows.) To accuse Super 8 of vampirizing the past is about as productive as dropping by the castle of that weird count from Transylvania after dark: if the full moon and howling wolves haven’t clued you in, you deserve whatever happens, and to bring anything other than a Spielberg-trained sensibility to a screening of Super 8 is like complaining when Pop Rocks make your mouth feel funny.

What’s exceptional, in fact, about Super 8 is the way it intertextualizes two layers of history at once: Spielberg films and the experience of watching Spielberg films. It’s not quite Las Meninas, but it does get you thinking about the inextricable codependence between a text and its reader, or in the case of Spielberg and Abrams, a mainstream “classic” and the reverential audience that is its condition of possibility. With its ingenious hook of embedding kid moviemakers in a movie their own creative efforts would have been inspired by (and ripped off of), Super 8 threatens to transcend itself, simply through the squaring and cubing of realities: meta for the masses.

Threatens, but never quite delivers. I agree with Scott’s assessment that the film’s second half fails to measure up to its first — “The machinery of genre … so ingeniously kept to a low background hum for so long, comes roaring to life, and the movie enacts its own loss of innocence” — and blame this largely on the alien, a digital MacGuffin all too similar to Cloverfield’s monster and, now that I think about it, the thing that tried to eat Kirk on the ice planet. Would that Super 8’s filmmakers had had the chutzpah to build their ode to creature features around a true evocation of ’70s and ’80s special effects, recreating the animatronics of Carlo Rambaldi or Stan Winston, the goo and latex of Rick Baker or Rob Bottin, the luminous optical-printing of Trumbull or Robert Abel, even the flickery stop motion of Jim Danforth and early James Cameron! It might have granted Super 8’s Blue-Fairy wish with the transmutation the film seems so desperately to desire — that of becoming a Spielberg joint from the early 1980s (or at least time-traveling to climb into its then-youthful body like Sam Beckett or Marty McFly).

Had Super 8’s closely guarded secret turned out to be our real analog past instead of a CGI cosmetification of it, the movie would be profound where it is merely pretty. Super 8 opens with a wake; sour old man that I am, I wish it had had the guts to actually disinter the corpse of a dead cinema, instead of just reminiscing pleasantly beside the grave.


Paranormal Activity 2

I was surprised to find myself eager, even impatient, to watch Paranormal Activity 2, the followup to 2007’s no-budget breakthrough in surveillance horror. I wrote of the first movie that it delivered a satisfactory double-action of filmic and commercial engineering, chambering and firing its purchased scares in a way that felt requisitely unexpected and, at the same time, reassuringly predictable. The bonus of seeing it in a theater (accompanied by my mom, and my therapist would like to know why I chose to omit that fact from the post) was a happy reminder that crowds can improve, rather than degrade, the movie experience.

PA2 I took in at home after my wife had gone to bed. I lay on a couch in a living room dark except for the low light of a small lamp: a setting well-suited to a drama that takes place largely in domestic spaces at night. My complete lack of fear or even the faint neck-tickle of eeriness probably just proves the truism that some movies work best with an audience — but let’s not forget that cinema does many kinds of work, and offers many varieties of pleasure. This is perhaps especially true of horror, whose glandular address of our viscera places it among the body genres of porn and grossout comedy (1), and whose narratives of inevitable peril and failed protection offer a plurality of identifications where X marks the intersection of the boy-girl lines of gender and the killer-victim lines of predation (2).

I’m not sure what Carol Clover would make of Paranormal Activity 2 or its predecessor (though see here for a nice discussion), built as they are on the conceit of a gaze so disinterested it has congealed into the pure alienness of technology. Shuffled among the mute witnesses of a bank of home-security cameras, we are not in the heads of Alfred Hitchcock, Stanley Kubrick, or even Gaspar Noé, but instead the sensorium — and sensibility — of HAL. A good part of the uncanniness, and hence the fun, comes from the way this setup eschews the typical constructions of cinematography: conventions of framing and phrasing that long ago (with the rise of classical film style) achieved their near-universal legibility at the cost of their willingness to truly disrupt and disturb. PA2 is grindhouse Dogme, wringing chills from its formal obstructions.

Rather than situating us securely in narrative space through establishing shots and analytic closeups, shot-reverse-shot, and point-of-view editing, PA2 either suspends us outside the action, hovering at the ceiling over kitchens and family rooms rendered vast as landscapes by a wide-angle lens, or throws us into the action in handheld turmoil that even in mundane and friendly moments feels violent. The visuals and their corresponding “spatials” position viewers as ghosts themselves, alternately watching from afar in building frustration and hunger, then taking possession of bodies for brief bouts of hot corporeality. Plotwise, we may remain fuzzy on what the spectral force in question (demon? haunted house? family curse?) finally wants, but on the level of spectatorial empathy, it is easy to grasp why it both hates and desires its victims.

Along with von Trier, other arty analogs for PA2 might be Chantal Akerman’s Jeanne Dielman or Laura Mulvey and Peter Wollen’s Riddles of the Sphinx, which similarly locate us both inside and outside domestic space to reveal how houses can be “haunted” by gender and power. They share, that is, a clinical interest in the social and ideological compartmentalization of women, though in the Paranormal Activity films the critique remains mostly dormant, waiting to be activated in the readings of brainy academics. (Certainly one could write a paper on PA2’s imbrication of marriage, maternity, and hysterical “madness,” or on the role of technological prophylaxis in protecting the white bourgeois from an Other coded not just as female but as ethnic.)

But for this brainy academic, what’s most interesting about PA2 is the way it weaves itself into the first film. Forming an alternate “flightpath” through the same set of events, the story establishes a tight set of linkages to the story of Micah and Katie, unfolding before, during, and after their own deadly misadventure of spirit photography gone wrong. It is simultaneously prequel, sequel, and expansion to Paranormal Activity, and hence an example — if a tightly circumscribed one — of transmedia storytelling, in which a fictional world, its characters, and events can be visited and revisited in multiple tellings. In comments on my post on 2008’s Cloverfield, Martyn Pedler pointed out that film’s transmedia characteristics, and I suggested at the time that “Rather than continuing the story of Cloverfield, future installments might choose to tell parallel or simultaneous stories, i.e. the experiences of other people in the city during the attack.”

Paranormal Activity 2 does precisely that for its tiny, spooky universe. It may not be the scariest movie I’ve seen lately, but for what it implies about the evolving strategies of genre cinema in an age of new media, it’s one of the more intriguing.

Works Cited

1. Linda Williams, “Film Bodies: Gender, Genre, and Excess,” Film Quarterly 44:4 (Summer 1991), 2-13.

2. Carol J. Clover, Men, Women and Chainsaws: Gender in the Modern Horror Film (Princeton: Princeton University Press, 1993).

Tron: Legacy

This review is dedicated to my friends David Surman and Will Brooker.

Part One: We Have Never Been Digital

***

If Avatar was in fact the “gamechanger” its proselytizers claimed, then it’s fitting that the first film to surpass it is itself about games, gamers, and gaming. Arriving in theaters nearly a year to the day after Cameron’s florid epic, Tron: Legacy delivers on the promise of an expanded blockbuster cinema while paradoxically returning it to its origins.

Those origins, of course, date back to 1982, when the first Tron — brainchild of Steven Lisberger, who more and more appears to be the Harper Lee of pop SF, responsible for a single inspired act of creation whose continued cultural resonance probably doomed any hope of a career — showed us what CGI was really about. I refer not to the actual computer-generated content in that film, whose 96-minute running time contains only 15-20 minutes of CG animation (the majority of the footage was achieved through live-action plates shot in high contrast, heavily rotoscoped, and backlit to insert glowing circuit paths into the environment and costumes), but instead to the discursive aura of the digital frontier it emits: another sexy, if equally illusory, glow. Tron was the first narrative feature film to serve up “the digital” as a governing design aesthetic as well as a marketing gimmick. Sold a high-tech entertainment event, audiences accepted Lisberger’s folly as precisely that: a time capsule from the future, coming attraction as main event. Tron taught us, in short, to mistake a hodgepodge of experiment and tradition for a more sweeping change in cinematic ontology, a spell we remain under to this day.

But the state of the art has always been a makeshift pact between industry and audience, a happy trance of “I know, but even so …” For all that it hinges on a powerful impression of newness, the self-applied declaration of vanguard status is, ironically, old hat in filmmaking, especially when it comes to the periodic eruptions of epic spectacle that punctuate cinema’s more-of-the-same equilibrium. The mutations of style and technology that mark film’s evolutionary leaps are impossible to miss, given how insistently they are promoted: go to YouTube and look at any given Cecil B. DeMille trailer if you don’t believe me. “Like nothing you’ve ever seen!” may be an irresistible hook (at least to advertisers), but it’s rarely true, if only because trailers, commercials, and other advance paratexts ensure we’ve looked at, or at least heard about, the breakthrough long before we purchase our tickets.

In the case of past breakthroughs, the situation becomes even more vexed. What do you do with a film like Tron, which certainly was cutting-edge at the time of its release, but which, over the intervening twenty-eight years, has taken on an altogether different veneer? I was 16 when I first saw it, and have frequently shown its most famous setpiece — the lightcycle chase — in courses I teach on animation and videogames. As a teenager, I found the film dreadfully inert and obvious, and rewatching it to prepare for Tron: Legacy, I braced myself for a similarly graceless experience. What I found instead was that a magical transformation had occurred. Sure, the storytelling was as clumsy as before, with exposition that somehow managed to be both overwritten and underexplained, and performances that were probably half-decent before an editor diced them into novocained amateurism. The visuals, however, had aged into something rather beautiful.

Not the CG scenes — I’d looked at those often enough to stay in touch with their primitive retrogame charm. I’m referring to the live-action scenes, or rather, the suturing of live action and animation that stands in for computer space whenever the camera moves close enough to resolve human features. In these shots, the faces of Flynn (Jeff Bridges), Tron (Bruce Boxleitner), Sark (David Warner), and the film’s other digital denizens are ovals of flickering black-and-white grain, their moving lips and darting eyes hauntingly human amid the neon cartoonage.

Peering through their windows of backlit animation, Tron’s closeups resemble those in Dreyer’s Passion of Joan of Arc — inspiration for early film theorist Béla Balázs’s lyrical musings on “The Face of Man” — but are closer in spirit to the winking magicians of Georges Méliès’s trick films, embedded in their phantasmagoria of painted backdrops, double exposures, and superimpositions. Like Lisberger, who would intercut shots of human-scaled action with tanks, lightcycles, and staple-shaped “Recognizers,” Méliès alternated his stagebound performers with vistas of pure artifice, such as animated artwork of trains leaving their tracks to shoot into space. Although Tom Gunning argues convincingly that the early cinema of attractions operated by a distinctive logic in which audiences sought not the closed verisimilar storyworlds of classical Hollywood but the heightened, knowing presentation of magical illusions, narrative frameworks are the sauce that makes the taste of spectacle come alive. Our most successful special effects have always been the ones that — in an act of bistable perception — do double duty as story.

In 1982, the buzzed-about newcomer in our fantasy neighborhoods was CGI, and at least one film that year — Star Trek II: The Wrath of Khan — featured a couple of minutes of computer animation that worked precisely because they were set off from the rest of the movie, as special documentary interlude. Other genre entries in that banner year for SF, like John Carpenter’s remake of The Thing and Steven Spielberg’s one-two punch of E.T. and Poltergeist (the latter as producer and crypto-director), were content to push the limits of traditional effects methods: matte paintings, creature animatronics, gross-out makeup, even a touch of stop-motion animation. Blade Runner‘s effects were so masterfully smoggy that we didn’t know what to make of them — or of the movie, for that matter — but we seemed to agree that they too were old school, no matter how many microprocessors may have played their own crypto-role in the production.

“Old school,” however, is another deceptively relative term, and back then we still thought of special effects as dividing neatly into categories of the practical/profilmic (which really took place in front of the camera) and optical/postproduction (which were inserted later through various forms of manipulation). That all special effects — and all cinematic “truths” — are at heart manipulation was largely ignored; even further from consciousness was the notion that soon we would redefine every “predigital” effect, optical or otherwise, as possessing an indexical authenticity that digital effects, well, don’t. (When, in 1997, George Lucas replaced some of the special-effects shots in his original Star Wars trilogy with CG do-overs, the outrage of many fans suggested that even the “fakest” products of ’70s-era filmmaking had become, like the Velveteen Rabbit, cherished realities over time.)

Tron was our first real inkling that a “new school” was around the corner — a school whose presence and implications became more visible with every much-publicized advance in digital imaging. Ron Cobb’s pristine spaceships in The Last Starfighter (1984); the stained-glass knight in Young Sherlock Holmes (1985); the watery pseudopod in The Abyss (1989); each in its own way raised the bar, until one day — somewhere around the time of Independence Day (1996), according to Michele Pierson — it simply stopped mattering whether a given special effect was digital or analog. In the same way that slang catches on, everything overnight became “CGI.” That newcomer to the neighborhood, the one who had people peering nervously through their drapes at the moving truck, had moved in and changed the suburb completely. Special-effects cinema now operated under a technological form of the one-drop rule: all it took was a dab of CGI to turn the whole thing into a “digital effects movie.” (Certain film scholars regularly use this term to refer to both Titanic [1997] and The Matrix [1999], neither of which employs more than a handful of digitally assisted shots — many of these involving intricate handoffs from practical miniatures or composited live-action elements.)

Inscribed in each frame of Tron is the idea, if not the actual presence, of the digital; it was the first full-length rehearsal of a special-effects story we’ve been telling ourselves ever since. Viewed today, what stands out about the first film is what an antique and human artifact — an analog artifact — it truly is. The arrival of Tron: Legacy, simultaneously a sequel, update, and reimagining of the original, gives us a chance to engage again with that long-ago state of the art; to appreciate the treadmill evolution of blockbuster cinema, so devoted to change yet so fixed in its aims; and to experience a fresh and vastly more potent vision of what’s around the corner. The unique lure (and trap) of our sophisticated cinematic engines is that they never quite turn that corner, never do more than freeze a fantasy of film’s future for an instant, in the guise of its realization. In this sense — to rephrase Bruno Latour — we have never been digital.

Part Two: 2,415 Times Smarter

***

In getting a hold on what Tron: Legacy (hereafter T:L) both is and isn’t, I find myself thinking about a line from its predecessor. Ed Dillinger (David Warner), figurative and literal avatar of the evil corporation Encom, sits in his office — all silver slabs and glass surfaces overlooking the city’s nighttime gridglow — in the cleverest and most sustained of the thematic conceits that run throughout both films: the paralleling, to the point of indistinguishability, of our “real” architectural spaces and the electronic world inside the computer. (Two years ahead of Neuromancer and a full decade before Snow Crash, Tron invented cyberspace.)

Typing on a desk-sized touchscreen keyboard that neatly anticipates the iPad, Dillinger confers with the Master Control Program or MCP, a growling monitorial application devoted to locking down misbehavior in the electronic world as it extends its own reach ever outward. (The notion of a fascist algorithm, policing internal imperfection while growing like a malignancy, is remapped in T:L onto CLU — another once-humble program omnivorously metastasized.) MCP complains that its plans to infiltrate the Pentagon and General Motors will be endangered by the presence of a new and independent security watchdog program, Tron. “This is what I get for using humans,” grumbles MCP, which in terms of human psychology we might well rename OCD with a touch of NPD. “Now wait a minute,” Dillinger counters, “I wrote you.” MCP replies coldly, “I’ve gotten 2,415 times smarter since then.”

The notion that software — synecdoche for the larger bugaboo of technology “itself” — could become smarter on its own, exceeding human intelligence and transcending the petty imperatives of organic morality, is of course the battery that powers any number of science-fiction doomsday scenarios. Over the years, fictionalizations of the emergent cybernetic predator have evolved from single mainframe computers (Colossus: The Forbin Project [1970], WarGames [1983]) to networks and metal monsters (Skynet and its time-traveling assassins in the Terminator franchise) to graphic simulations that run on our own neural wetware, seducing us through our senses (the Matrix series [1999-2003]). The electronic world established in Tron mixes elements of all three stages, adding a dash of alternative storybook reality à la Oz, Neverland … or Disney World.

Out here in the real world, however, what runs beneath these visions of mechanical apocalypse is something closer to the Technological Singularity prophesied by Ray Kurzweil and Vernor Vinge, as our movie-making machinery — in particular, the special-effects industry — approaches a point where its powers of simulation merge with its custom-designed, mass-produced dreams and nightmares. That is to say: our technologies of visualization may incubate the very futures we fear, so intimately tied to the futures we desire that it’s impossible to sort one from the other, much less to dictate which outcome we will eventually achieve.

In terms of its graphical sophistication as well as the extended forms of cultural and economic control that have come to constitute a well-engineered blockbuster, Tron: Legacy is at least 2,415 times “smarter” than its 1982 parent, and whatever else we may think of it — whatever interpretive tricks we use to reduce it to and contain it as “just a movie” — it should not escape our attention that the kind of human/machine fusion, not to mention the theme of runaway AI, at play in its narrative is a surface manifestation of much vaster and more far-reaching transformations: a deep structure of technological evolution whose implications only start with the idea that celluloid art has been taken over by digital spectacle.

The lightning rod for much of the anxiety over the replacement of one medium by another, the myth of film’s imminent extinction, is the synthespian or photorealistic virtual actor, which, following the logic of the preceding paragraphs, is one of Tron: Legacy’s chief selling points. Its star, Jeff Bridges, plays two roles — the first as Flynn, onetime hotshot hacker, and the second as CLU, his creation and nemesis in the electronic world. Doppelgangers originally, the two have since diverged: Flynn has aged while CLU remains unchanged, the spitting image of Flynn/Bridges circa 1982.

Except that this image doesn’t really “spit.” It stares, simmers, and smirks; occasionally shouts; knocks things off tables; and does some mean acrobatic stunts. But CLU’s fascinating weirdness is just as evident in stillness as in motion (see the top of this post), for it’s clearly not Jeff Bridges we’re looking at, but a creepy near-miss. Let’s pause for a moment on this question: why a miss at all? Why couldn’t the filmmakers have conjured up a closer approximation, erasing the line between actor and digital double? Nearly ten years after Final Fantasy: The Spirits Within, it seems that CGI should have come farther. After all, the makers of T:L weren’t bound by the aesthetic obstructions that Robert Zemeckis imposed on his recent films, a string of CG waxworks (The Polar Express [2004], Beowulf [2007], A Christmas Carol [2009], and soon — shudder — a Yellow Submarine remake) in which the inescapable wrongness of the motion-captured performances is evidently a conscious embrace of stylization rather than a failed attempt at organic verisimilitude. And if CLU were really intended to convince us, he could have been achieved through the traditional repertoire of doubling effects: split-frame mattes, body doubles in clever shot-reverse-shot arrangements, or the combination of these with motion-control cinematography, as in the masterful composites of Back to the Future 2, made in 1989, only seven years after the first Tron.

The answer to the apparent conundrum is this: CLU is supposed to look that way; we are supposed to notice the difference, because the effect wouldn’t be special if we didn’t. The thesis of Dan North’s excellent book Performing Illusions is that no special effect is ever perfect — we can always spot the joins, and the excitement of effects lies in their ceaseless toying with our faculties of suspicion and detection, the interpretation of high-tech dreams. Updating the argument for synthespian performances like CLU’s, we might profitably dispose of the notion that the Uncanny Valley is something to be crossed. Instead, smart special effects set up residence smack-dab in the middle.

Consider by analogy the use of Botox. Is the point of such cosmetic procedures to absolutely disguise the signs of age? Or are they meant to remain forever fractionally detectable as multivalent signifiers — of privilege and wealth, of confident consumption, of caring enough about flaws in appearance to (pretend to) hide them? Here too is evidence of Tron: Legacy’s amplified intelligence, or at least its subtle cleverness: dangling before us a CLU that doesn’t quite pass the visual Turing Test, it simultaneously sells us the diegetically crucial idea of a computer program in the shape of a human (which, in fact, it is) and in its apparent failure lulls us into overconfident susceptibility to the film’s larger tapestry of tricks. 2,415 times smarter indeed!

Part Three: The Sea of Simulation

***

Doubles, of course, have always abounded in the works that constitute the Tron franchise. In the first film, both protagonist (Flynn/Tron) and antagonist (Sark/MCP) exist as pairs, and are duplicated yet again in the diegetic dualism of real world/electronic world. (Interestingly, only MCP seems to lack a human manifestation — though it could be argued that Encom itself fulfills that function, since corporations are legally recognized as people.) And the hall of mirrors keeps on going. Along the axis of time, Tron and Tron: Legacy are like reflections of each other in their structural symmetry. Along the axis of media, Jeff Bridges dominates the winter movie season with performances in both T:L and True Grit, a kind of intertextual cloning. (The Dude doesn’t just abide — he multiplies.)

Amid this rapture of echoes, what matters originality? The critical disdain for Tron: Legacy seems to hinge on three accusations: its incoherent storytelling; its dependence on special effects; and the fact that it’s largely a retread of Tron ’82. I’ll deal with the first two claims below, but on the third count, T:L must surely plead “not guilty by reason of nostalgia.” The Tron ur-text is a tale about entering a world that exists alongside and within our own — indeed, that subtends and structures our reality. Less a narrative of exploration than of introspection, its metaphysics spiral inward to feed off themselves. Given these ouroboros-like dynamics, the sequel inevitably repeats the pattern laid down in the first, carrying viewers back to another embedded experience — that of encountering the first Tron — and inviting us to contrast the two, just as we enjoy comparing Flynn and CLU.

But what about those who, for reasons of age or taste, never saw the first Tron? Certainly Disney made no effort to share the original with us; their decision not to put out a Blu-ray version, or even rerelease the handsome two-disc 20th-anniversary DVD, has led to conspiratorial muttering in the blogosphere about the studio’s cover-up of an outdated original, whose visual effects now read as ridiculously primitive. Perhaps this is so. But then again, Disney has fine-tuned the business of selectively withholding their archive, creating rarity and hence demand for even their flimsiest products. It wouldn’t at all surprise me if the strategy of “disappearing” Tron pre-Tron: Legacy were in fact an inspired marketing move, one aimed less at monetary profit than at building discursive capital. What, after all, do fans, cineastes, academics, and other guardians of taste enjoy more than a privileged “I’ve seen it and you haven’t” relationship to a treasured text? Comic-Con has become the modern agora, where the value of geek entertainment items is set for the masses, and carefully coordinated buzz transmutes subcultural fetish into pop-culture hit.

It’s maddeningly circular, I know, to insist that it takes an appreciation of Tron to appreciate Tron: Legacy. But maybe the apparent tautology resolves if we substitute terms of evaluation that don’t have to do with blockbuster cinema. Does it take an appreciation of Ozu (or Tarkovsky or Haneke or [insert name here]) to appreciate other films by the same director? Tron: Legacy is not in any classical sense an auteurist work — I couldn’t tell you who directed it without checking IMDb — but who says the brand itself can’t function as an auteur, in the sense that a sensitive reading of it depends on familiarity with tics and tropes specific to the larger body of work? Alternatively, we might think of Tron as a sub-brand of a larger industrial genre, the blockbuster, whose outward accessibility belies the increasingly bizarre contours of its experience. With its diffuse boundaries (where does a blockbuster begin and end? — surely not within the running time of a single feature-length movie) and baroque textual patterns (from the convoluted commitments of transmedia continuity to the rapid-fire editing and slangy shorthands of action pacing), the contemporary blockbuster possesses its own exotic aesthetic, one requiring its own protocols of interpretation, its own kind of training, to properly engage. High concept does not necessarily mean non-complex.

Certainly, watching Tron: Legacy, I realized it must look like visual-effects salad to an eye untrained in sensory overwhelm. I don’t claim to enjoy everything made this way: Speed Racer made me queasy, and Revenge of the Fallen put me into an even deeper sleep than did the first Transformers. T:L, however, is much calmer in its way, perhaps because its governing look — blue, silver, and orange neon against black — keeps the frame-cramming to a minimum. (The post-1983 George Lucas committed no greater sin than deciding to pack every square inch of screen with nattering detail.) Here the sequel’s emulation of Tron’s graphics is an accidental boon: limited memory and storage forced the original to rely on black to fill in screen space, a restriction reinvented in T:L as strikingly distinctive design. Our mad blockbusters may indeed be getting harder to watch and follow. But perhaps we shouldn’t see this as proof of commercially driven intellectual bankruptcy and inept execution, but as the emergence of a new — and in its way, wonderfully difficult and challenging — mode of popular art.

T:L works for me as a movie not because its screenplay is particularly clever or original, but because it smoothly superimposes two different orders of technological performance. The first layer, contained within the film text, is the synthesis of live action and computer animation that in its intricate layering succeeds in creating a genuinely alternate reality: action-adventure seen through the kino-eye. Avatar attempted this as well, but compared to T:L, Cameron’s fantasia strikes me as disingenuous in its simulationist strategy. The lush green jungles of Pandora and glittering blue skin of the Na’vi are the most organic of surfaces in which CGI could cloak itself: a rendering challenge to be sure, but as deceptively sentimental in its way as a Thomas Kinkade painting. Avatar is the digital performing in “greenface,” sneakily dissembling about its technological core. Tron: Legacy, by contrast, takes as its representational mission simulation itself. Its tapestry of visual effects is thematically and ontologically coterminous with the world of its narrative; it is, for us and for its characters, a sea of simulation.

Many critics have missed this point, insisting that the electronic world the film portrays should have reflected the networked environment of the modern internet. But what T:L enshrines is not cyberspace as the shared social web it has lately become, but the solipsistic arena of first-person combat as we knew it in videogames of the late 1970s. As its plotting makes clear, T:L is at heart about the arcade: an ethos of rastered pyrotechnics and three-lives-for-a-quarter. The adrenaline of its faster scenes and the trances of its slower moments (many of them cued by the silver-haired Flynn’s zen koans) perfectly capture the affective dialectics of cabinet contests like Tempest or Missile Command: at once blazing with fever and stoned on flow.

The second technological performance superimposed on Tron: Legacy is, of course, the exhibition apparatus of IMAX and 3D, inscribed in the film’s planning and execution even for those who catch the print in lesser formats. In this sense, too, T:L advances the milestone planted by Avatar, beacon of an emerging mode of megafilm engineering. It seems likely that every year will see one such standout instance of expanded blockbuster cinema — an event built in equal parts from visual effects and pop-culture archetypes, impossible to predict but plain in retrospect. I like to imagine that these exemplars will tend to appear not in the summer season but at year’s end, as part of our annual rituals of rest and renewal: the passing of the old, the welcoming of the new. Tron: Legacy manages to be about both temporal polarities, the past and the future, at once. That it weaves such a sublime pattern on the loom of razzle-dazzle science fiction is a funny and remarkable thing.

***

To those who have read to the end of this essay, it’s probably clear that I dug Tron: Legacy, but it may be less clear — in the sense of “twelve words or less” — exactly why. I confess I’m not sure myself; that’s what I’ve tried to work out by writing this. I suppose in summary I would boil it down to this: watching T:L, I felt transported in a way that’s become increasingly rare as I grow older, and the list of movies I’ve seen and re-seen grows ever longer. Once upon a time, this act of transport happened automatically, without my even trying; I stumbled into the rabbit-holes of film fantasy with the ease of … well, I’ll let Laurie Anderson have the final words.

I wanted you. And I was looking for you.
But I couldn’t find you.
I wanted you. And I was looking for you all day.
But I couldn’t find you. I couldn’t find you.

You’re walking. And you don’t always realize it,
but you’re always falling.
With each step you fall forward slightly.
And then catch yourself from falling.
Over and over, you’re falling.
And then catching yourself from falling.
And this is how you can be walking and falling
at the same time.