Watching Avatar


Apologies for taking a while to get around to writing about Avatar — befitting the film’s almost absurd graphical heft, the sheer surfeit of its spectacle, I decided to watch it a second time before putting my thoughts into words. In one way, this strategy was useful as a check on my initial enthusiasm; the blissful swoon of first viewing gave way, in the second, to a state resembling boredom during the movie’s more languorous stretches. (Banshee flight training, let’s just say, is not a lightning-fast process.) But in another way, waiting to write might not have been all that smart, since by now the movie has been discussed to death. Yet for all the hot air and cold type that’s been spent dissecting Avatar, the map of the dialogue still divides neatly into two camps: one insisting that Cameron’s movie is an instant classic of cinematic science fiction, a technological breakthrough and a grand adventure of visual imagination; the other grudgingly admitting that the film is pretty, but beyond that, a trite and obvious story lifted from Pocahontas and Dances With Wolves and populated, moreover, by a bland and predictable set of character-types.

I tend to be forgiving toward experiments as grand as Avatar, especially when they’ve done such a good job laying the groundwork of hopeful expectation. Indeed, as I walked into the theater last week, ripping open the plastic bag containing my 3D glasses, I remember thinking I’d already gotten my money’s worth simply by looking forward so intensely to the experience. There’s also the matter of auteurist precedent: James Cameron has built up an enormous amount of goodwill — and, dare I say it, faith — with his contributions of Terminator, Terminator 2: Judgment Day, and Aliens to the pantheon of SF greatness. (I’m also a closet fan of Battle Beyond the Stars, the derivative but fun 1980 Roger Corman production on which Cameron served as art director and contributed innovative visual effects.)

So I’m not fussed about whether Avatar’s story is particularly deep or original. This is, to me, a case of the dancer over the dance; the important thing is not the tale, but Avatar’s telling of it. And I’m sympathetic to the argument that in such a technically intricate production, a relatively simple narrative gearing is required to anchor audiences and lead them, as in a rail game, along a precise path through the jungle. (That said, Cameron’s first “scriptment” was apparently a much more complex and nuanced saga, and one wonders to what degree his narrative ambitions were stripped away as the humongous physical nature of the undertaking became clear.) Cameron is correctly understood as a techno-auteur of the highest order, a man who doesn’t make films so much as build them, and if he has, post-Titanic, become complicit in fanning the flames of his own worshipful publicity, we ought to take that as simply another feat of engineering — in this instance discursive rather than digital. It would hardly be the first time (I’m looking at you, Alfred Hitchcock) and is certainly better-deserved than some (I’m looking at you, George Lucas).

Did I like Avatar? Very much so — but as I indicated above, this is practically a foregone conclusion; to disavow the thing now would be tantamount to aesthetic seppuku. Of course, in the strange numismatics of fandom, hatred is just the other side of the coin from veneration, and the raging “avatolds” (as in, You just got avatold!) of 4chan may or may not realize that, love it or hate it, we’re all playing in Cameron’s world now. And what a world it is, literally! Avatar the film is something of a delivery system for Pandora the planet (OK, moon), an act of subcreation so extensive it has generated its own wiki. The detailed landscapes we see in the movie are merely the topmost layer of a topography and ecosystem fathoms deep, an enormous bank of 3D assets and encyclopedic autotextuality that, now established as a profitable pop-culture phenomenon, stands ready for extrapolation and exploration in transmedia to come. (Ironic, then, that a launching narrative so opposed to stripmining is itself destined to be mined, or in Jason Mittell’s evocative term, drilled.)

And in this sense, I suspect, we can locate a double meaning to the idea of the avatar, or tank-grown alien body driven by human operators via direct neural link. A biological vessel designed to allow visitors to explore an alien world, the story’s avatars are but metaphors for Avatar the movie, itself a technological prosthesis for viewers hungry to experience new landscapes (and for whom the exotics of Jersey Shore don’t cut it). 3D, IMAX, and great sound systems are merely sensory upgrades for our cinematic avatarialism, and as I watched the audience around me check the little glowing squares of their cell phones, my usual dismay was mitigated by the notion that, like the human characters in the movie, they were merely augmenting their immersion with floating GUIs and HUDs.

My liking for the film isn’t entirely unalloyed, and deep down I’m still wondering by what promotional magic we have collectively agreed to see Avatar as a live-action movie with substantial CG components rather than a CG animated film (a la Up, or more analogously Final Fantasy: The Spirits Within) into which human performances have cunningly been threaded. Much has been made of the motion-capture technology by which actors Sam Worthington, Zoe Saldana, Sigourney Weaver et al performed their roles into one end of a real-time rendering apparatus while Cameron peered into a computer display — essentially his own avatarial envoy to Pandora — directing his troupe through their videogame doubles. But this is merely the latest sexing-up of an “apparatus” as old as cinema, by which virtual bodies are brought to life on an animation stand, their features and vocals synched to a dialogue track (and sometimes reference footage of the original performances).

Cameron’s nifty trick, though, has always been to frame his visual and practical effects in ways that lend them a crucial layer of believability. I’m not talking about photorealism, that unreachable horizon (unreachable precisely because it’s a moving target, a fantasized attribute we hallucinate within the imaginary body of cinema: as Lacan would put it, in you more than you). I’m talking about the way he cast Arnold Schwarzenegger as the human skin around a robotic core in the Terminator films, craftily selling an actor of limited expressiveness through the conceit of a cyborg trying to pass as human; Arnold’s stilted performance, rather than a disbelief-puncturing liability, became proof of his (diegetically) mechanoid nature, and when the cutaways to stop-motion stand-ins and Stan Winston’s animatronics took over, we accepted the endoskeleton as though it had been there all along, the real star, just waiting to be discovered. An identical if hugely more expensive logic underlies the human-inhabited Na’vi of Avatar: if Jake Sully’s alien body doesn’t register as absolutely realistic and plausible, it’s OK — for as the editing constantly reminds us, we are watching a performance within a performance, Sully playing his avatar as Worthington plays Sully, Cameron and his cronies at WETA and ILM playing us in a game of high-tech Russian nesting dolls. The biggest “special effect” in Cameron’s films is the way in which diegesis and production reality collapse into each other.

I’m not saying that Avatar isn’t revolutionary, just that amid the more colorful flora and fauna of its technological garden we should be careful to note that other layer of “movie magic,” the impression of reality that is as much a discursive and ideological production as any clump of pixels pushed through a pipeline. We submit, in other words, to Avatar’s description of itself as a step forward, an excursion into a future cinema as alien and exhilarating as anything to be found on Pandora, and that too is part of the spell the movie casts. Yet the animating spirit behind that future cinema — the ghost in the machine — remains the familiar package of hopes and beliefs we always bring to the darkened theater: the desire to escape into another body, and when the adventure is over, to wake up and go home.

Paranormal Activity


[Some broad spoilers below]

I’ve said it before: these days, seeing certain movies means coming to the endpoint of an experience, rather than its beginning; closing a door rather than opening it. Think of how something like Star Wars in 1977 seeded an entire universe of story (and franchise) possibilities, or how The Rocky Horror Picture Show ignited a subculture of ritual performance and camp remixes of genre chestnuts. By contrast, a new kind of movie, exemplified currently by Paranormal Activity, hits theaters with a conclusive thump, like the punchline of a joke or the ending of a whodunit. After you’ve watched it, there is little more to say.

Such movies sail toward us on a sea of buzz, phantom vessels that hang maddeningly at the horizon of visibility, of knowability. Experienced fannish spotters stand with their spyglasses, picking out details in the mist and relaying their interpretations back to the rest of us. Insiders leak morsels of information about the ship’s construction and configuration. Old salts grumble about the good old days. It’s the modern cinematic equivalent of augury: awaiting the movie’s arrival is like awaiting a predestined fate, and we gaze into the abyss of our own inevitable future with a mixture of horror and appetite.

It sounds like I didn’t care for Paranormal Activity, but in fact I did; it’s as spare and spooky as promised, with a core of unexpected sweetness (due mainly to the performance of Katie Featherston) and consequently a sense of loss, even tragedy, at the end. It occurs to me that we are seeing another phenomenon in low-budget, buzz-driven, scary filmmaking: a trend toward annihilation narratives. The Blair Witch Project, Open Water, Cloverfield, now Paranormal Activity — these are stories in which no one survives, and their biggest twist is that they disobey a fundamental rule of horror and suspense storytelling by which we understand that no matter how bad things get, at least one person, the hero, will make it through the gauntlet. With this principle guiding our expectations, we can affix our identifications to one or more figures, trusting them to safely convey us through the charnelhouse, evading the claws of monsters or razor-edged deathtraps.

No such comfort in the annihilation narrative, which blends the downbeat endings of early-70s New Hollywood with the clinical finality of the snuff film or autopsy report. Such brutal endings are encouraged by the casting of unknown or non-actors, whose public and intertextual lives presumably won’t be harmed by seeing them dispatched onscreen — though the more important factor, I suspect, is the blurring of the line between the character’s ontological existence and their own.

The usual symptom of this is identical first names: Daniel Travis plays Daniel Kintner in Open Water; Heather, Josh, and Michael are all “themselves” in Blair Witch; Paranormal’s Katie and Micah are played by actors named Katie and Micah. There is, in other words, no supervening celebrity identity, no star persona, to yank us out of the fiction, to remind us simply by gravitational necessity that there must be a reality outside the fiction. The collapse of actor and character corresponds to the mockumentary mode that all these films share — a mode that itself depends on handheld cameras, recognizable, nonexotic settings, and an absence of standard continuity editing and background scoring.

Taken together, these factors (no-name actors, conscientiously unadorned and “unprofessional” filmmaking) would seem to recall Italian neorealism. But this being Hollywood, the goal is to tell stories that fit into familiar genres while reinventing them: horror seems to be the order of the day. A more subtle point is that, with the exception of Cloverfield’s sophisticated matchmoving of digital monsters into shakycammed cityscapes, movies in this emerging genre cost almost nothing to make. The budget for Paranormal Activity was $11,000, a datum I didn’t even have to look up, because it’s been foregrounded so relentlessly in the film’s publicity. Oddly, these facts of the film’s manufacture don’t seem to detract from the envelope of “reality” in which its thrills are delivered; for all the textual (non)labor that goes into assuring us this really happened, we are just as entertained by the saga of scrappy Oren Peli and his sudden success as by the thumpings and scarrings inflicted on poor Micah and Katie.

And we are entertained, I think, by our own entertainment — the way in which we willingly give ourselves over to a machine whose cold operations we understand very well. I certainly felt this way as I took my seat at one of the few remaining non-multiplexed moviehouses in Ann Arbor, the tawdry but venerable State Theater. The 7 p.m. crowd was a throng of University of Michigan students, a few clusters of friends packed in with lots and lots of couples. Paranormal Activity is the kind of movie where you want to be able to clutch somebody. More to the point, it’s a genuine group experience: scares are amplified by a factor of ten when people around you are screaming.

Which brings me back to my opening point: we all knew what we were there for, even as the movie’s central mysteries — from the exact nature of its big bad to the specific escalating sequence of its scares — awaited discovery like painted eggs on an Easter-day hunt. (The film’s discretely doled-out shocks, which get us watching the screen with hypnotic attentiveness, are reminiscent of the animated GIFs one finds on the /x/ board of 4chan.) We were there for the movie, certainly, but we were also there for each other, enjoying the echo chamber of each other’s emotions and performative displays of fear. And we were there for ourselves, reverberating happily within the layers of our knowing and not-knowing, our simultaneous awareness of the film as cunning construct and as rivetingly believable bedtime story, our innocence and cynicism so expertly shaped by months of hype and misdirection, viral marketing, rumors and debunkings, word of mouth.

All of which constitutes, of course, the real paranormal activity: a mediascape that haunts and taunts us, foreshadowing our worst fears as well as our fiercest pleasures.

Predestination Paradox


It would be nice if ABC’s new series, Flashforward, didn’t stylistically model itself quite so slavishly on Lost — which is not to deny a legitimate familial relationship between the two shows. Indeed, it’s largely thanks to Lost that broadcast television now periodically risks acts of serial storytelling with genuine intricacy and depth, sizeable and interesting casts of characters, and generic inflections that flirt with science fiction and fantasy without ever quite falling into the proud but doomed ghetto of, say, Virtuality and Firefly. Nowadays we seem to prefer our fantastic extrapolations blended with a strong tincture of “reality”; while I might privately consider series such as Mad Men and Jericho to be as bizarre in setting and plot machination as Farscape ever was, the truth is it will be a long time before we see a space-set show lasting more than a season or two. (And before you ask, no, I haven’t gotten around to watching Defying Gravity, though some trusted friends have been telling me to give it a try.)

So Flashforward clearly owes a debt to Lost for tutoring audiences in the procedures and pleasures of the complex narratives so deftly dissected by Jason Mittell: in this specific case, the shuttling game of memory and induction by which viewers stitch together a tale told in pieces. Where 24 builds itself around the synchronic, crosscutting among simultaneous story-streams until the very concept of a pause, of downtime, is squeezed out of existence, Lost and Flashforward take as their structuring principle the diachronic, bouncing us backwards and forwards through time until one can no longer tell present from backstory. (I will admit that the most recent season of Lost finally threw off this faithful viewer like a bucking bronco; while I’m all for time-traveling back to the glory years of the 1970s, the show’s intertitled convolutions have become too much for me to keep up with, especially when further diced and sliced by the timeshifting mechanism of my DVR.)

No wonder, then, that David S. Goyer (late of Blade) and Brannon Braga (who in the 1990s both saved and ruined the Star Trek franchise, IMO) felt the moment was ripe to adapt Robert J. Sawyer’s novel for TV. (Apparently there’s a history involving HBO and a tug-of-war over rights; perhaps a branching feature on the show’s eventual box-set release will, as a deconstructive extra, interweave this additional knotted plotting, an industrial Z-axis, into the general mayhem.) I remember reading Flashforward-the-book when it first came out, but it took Wikipedia to remind me how it all ended. Now that original ending has of course been jettisoned, in the process of retrofitting the story to serial form.

And a clever adaptation it looks to be. By moving up the collective “flashforward” experienced by the entire human race from twenty-odd years to six months, the TV show embeds its own climax within a different kind of foreseeable future: the conclusion of season one. That is, as the characters catch up with their own precognated fates on April 29, 2010 (in show-reality), so will we the watchers (in audience-reality), making for what I expect to be a delicious and delirious moment of suture. Like the first season of Heroes, Flashforward constructs itself around its own endpoint, arriving like clockwork twenty-odd episodes from now.

Clever, but maybe not smart. Look what happened to Heroes, which did great until collapsing into meaningless narratorhea with the start of its second season. I can think of countless TV series done in by their own cruelly relentless seriality, overstaying their welcome, swapping in cast members and increasingly baroque storytelling gimmicks until the final result is a ghoulish, cyborged facsimile of the show we once knew and loved. People speak of “jumping the shark,” but the truth of a TV show that’s lost its soul is something much more depressing: an elderly parent babbling in the grip of Alzheimer’s, a friend lost to dementia, a young and innocent heart curdled by prostitution or drug addiction. The excitement of Flashforward will consist of watching as it knowingly exploits the feints and deferrals of serial form, doling out clues and red herrings that keep us guessing even as the destination comes inexorably into greater focus — a finale that, when it finally arrives, will appear perfectly logical. Good storytelling gets us to the expected endpoint by unexpected means, and I wonder if Flashforward has it in itself to pull off the trick more than once.

In the meantime, let’s sit back and appreciate the tapestry as it emerges for the first, unrepeatable time. The characters have already begun to build a “conspiracy wall,” tacking up photos, scribbled notes, and lengths of string to make a tableau that simultaneously constructs the future as solution while decoding it as mystery. And don’t forget the wonderful opportunity for meta-reflection on the existential whys and wherefores of TV as the first episode ends with another kind of “flashforward” — this one a promotional montage enticing us with glimpses of the season to come. In this sense, of course, the show is a perfect commercial animal, advertising itself and its high concept with every beat of its crass and calculated heart. But in another, purer sense, it is a kind of koan, an invitation to meditate on the deeper patterns of the stories we tell; the time in which we experience them; the nature of narrative consciousness itself.

Flashforward may be, in short, one last chance to live in the media present (even as its central conceit destroys any sense of simple present-ness). Here’s to enjoying the experience before the show is ruined by its own need to respawn in 2010-2011, by the ongoing efforts of the spoiler community and devout Wiki priesthood, or by the aforementioned box sets, downloads, and torrents. A series like this is perfectly engineered for its time, which is to say, paced to the week-by-week parceling of information, the micro-gaps of commercial breaks and the macro-gaps between episodes.

Yet even as we put a name to the temporality of TV, it is already past. For all such gaps are dissolving in the quick waters of new media, and with them the gaps in knowledge (precisely-lathed questions with carefully-choreographed answers) on which a show like Flashforward, and by extension all serial storytelling, thrives. We who are “lucky” enough to straddle this historical juncture — at which the digital is reworking the media forms with which we grew up — face our own version of the predestination paradox: knowing full well where we’re going, yet helpless before the forces that deliver us there.

Watchmen: Stuck in the Uncanny Valley

[Warning: this review contains spoilers — and at the end, a blue penis.]

One wonders if Masahiro Mori, the roboticist who introduced the term “uncanny valley” in 1970, now wishes he’d had the foresight to trademark it; after lying largely dormant for a couple of decades, the concept came into its own, bigtime, with the advent of photorealistic CG. Or perhaps I should say CG posing as photorealistic, for what nearly passes muster in one year — the liquid pseudopod in The Abyss (1989), the lipsynched LBJ in Forrest Gump (1994), the entire casts of Final Fantasy: The Spirits Within (2001), The Polar Express (2004), and Beowulf (2007) — lapses into reassuringly spottable artifice the next. The only strategy by which CG convincingly and sustainably replicates organic life, in fact, is Pixar’s, and their method is simultaneously a cheat and a transcendent knight’s-move of FX design. Engaging in what Andrew Darley has called second-order realism, Pixar’s characters wear their manufactured status on their skin, er, surface (toys, bugs, fish, cars) while drawing on expressive traditions derived from cel- and stop-motion animation to deliver believable, inhabited performances and make us forget that what we’re watching are essentially, with their sandwiching of organic and synthetic elements, cyborgs.

Of course, by casting its films in this manner, Pixar retreats from true uncanniness. Humanity has always been an easier sell when it comes to cartoonish abstractions; ask anyone from Pac-Man to Punch and Judy. The uncanny valley actually kicks in when a simulation comes close enough to almost fool us, only to fall back into uncomfortable, irreducible alterity. Such is the fate, I think, of Watchmen.

That calm, urbane salon that is the internet is already abuzz with evaluations of the movie adaptation of Alan Moore’s and Dave Gibbons’s graphic novel — actually, it’s been buzzing for months, even years, another instance of what I have elsewhere termed the cometary halo that precedes any hotly-anticipated media property. Fans have been speculating, cogitating, and arguing about the whys and wherefores of overnight techno-auteur Zack Snyder’s approach for so long that the arrival of the film itself marks the conversation’s end rather than its beginning. The Christmas present has been unwrapped; Schrödinger’s Cat is out of the box; we’ve traveled into the Forbidden Zone only to learn we were on Earth the whole time.

My own take on Watchmen is that it’s an impressive feat of engineering: detailed, intricate, and surprisingly unified in tone. Beyond a splendid opening-credit sequence, however, it isn’t particularly invigorating or dramatic — hell, I’ll just say it, the thing’s boring for long stretches. As Alexandra DuPont, my absolute favorite reviewer on Ain’t It Cool News, points out, the boring bits are unfortunately more prevalent toward the end, giving the movie as a whole the feeling of a party that peters out once the fun people leave, or a hot date that takes a wrong turn when someone brings up religion. (And while I’m linking recommendations, let me encourage you toward the smartest movie site out there.) Watchmen is still a rather miraculous object, an oddly introverted and idiosyncratic epic whose very existence lends support to the idea that fans have become an audience important enough to warrant their own blockbuster. Tastewise, the periphery has become the center; the niche the norm.

For Watchmen, love it or hate it, is fanservice with a $120 million budget. Let’s remember that to hardcore fandom, love and hate are as difficult to disarticulate as the tattoos on Harry Powell’s knuckles; the object is never simply accepted or rejected outright (that’s a mark of the fickle mainstream, whose media engagements resemble one-night stands or trips to the drive-through) but instead studied and anatomized with scholarly rigor, its faults and achievements tabulated and ranked with an accountant’s thoroughness. Intimacy is the name of the game — that and passing the object from hand to hand until it is worn smooth as a worry bead. I’ve no doubt that Watchmen will be worried to death in coming weeks, the only thing keeping it from complete erasure the periodic infusion of new material: transmedia expansions like Tales of the Black Freighter, or the four-hour director’s cut rumored to be lurking on Snyder’s hard drive.

The principal focus of all this discussion will undoubtedly be the pros and cons of adaptation, for that is the process which Watchmen foregrounds in all its contradiction and mystery. Viewing the film, I thought irritably of all the adaptations to which we give a free pass, the ones that don’t get scrutinized at a subatomic level: endless versions of Pride and Prejudice, and let’s not forget that little gift that keeps on giving, Romeo and Juliet. Shakespeare, it seems to me, is as viral as it gets, and Masterpiece Theater a breeding ground that could compete with Nadya Suleman’s womb. Watchmen simply takes faithfulness and fidelity to a cosmic degree, its mise-en-scène a mimetic map of the printed panels that were its source.

Or source code; for what Snyder has achieved is not so much adaptation as transcription, operationalization; a phenotypic readout of a genetic program, a “run” in cinematic hardware of an underlying instruction set. Watchmen verges, that is, on emulation, and its spiritual fathers are not Moore and Gibbons but Turing and von Neumann. Snyder probably thought his hands were tied; there’s no transposing Watchmen to a new setting without disrupting its elaborate weavework of political and pop-cultural signifiers. The origin of the story in graphic form means that cinema’s primary ability, visualization, had already been usurped; faced with a publicly-available reference archive and a legion of fans ready to apply it, Snyder may have felt his only option was to replicate down to the tiniest prop and wardrobe detail what’s shown in the panel. Next level up is the determining rhythm of Moore’s scripting: storytelling and dialogue have been similarly transposed from printed page to filmed frame, and while some critics laughingly excoriate Rorschach’s purple prose, his overheated voice-overs sounded fine to me (Rorschach’s a rather self-important character, after all — as narcissistic and monomaniacal in his way as the ostensible villain Ozymandias). Editing, too, copies over with surprising fluidity: some of the most effective sequences, like the Comedian’s funeral interwoven with flashbacks to his ignoble career, or Jon Osterman’s tragic temporal tapestry of an origin story, are crosscut almost precisely as laid out on the page.

What’s left to Snyder and the cinematic signifier, then, is a handful of sensory registers, deployed sometimes with subtlety and sometimes an almost slapstick obviousness. Music plays a crucial role in the film, as does the casting; some choices are dead-on effective, others (Malin Akerman, I’m looking at you — with sympathy) not so much. The most talked-about aspect of Snyderian style is probably his use of variable speed or “ramping,” and on this front I’m with DuPont: the effect of all the slow-mo is to suggest something of the fascinated readerly gaze we bring to comic books, lingering over splash pages, reconstituting in our internal perceptions the hieroglyphic symbolia of speed lines and large-fonted “WHOOSHES.”

But back to the uncanny valley. The world of Watchmen is undoubtedly digital in ways we can’t even detect; there are certainly some showstopping visual moments, but I’d argue that more important to the movie’s cumulative immersive impact are the framing, composition, and patterns of hue, saturation, and texture that only a digital intermediate makes possible. It’s less garish than Sin City, and nowhere near the green-screened limbo of Sky Captain and the World of Tomorrow, but all the exterior shots and real-world sets shouldn’t blind us to the essential constructedness of what we’re seeing.

And here’s where the real uncanniness resides. We’re often hoodwinked into thinking that the visual (indeed, existential) crisis of our times is the rapidly closing gap between profilmic truth and what’s been simulated with computer graphics. But CG is merely the latest offspring of a vast heritage of manipulation, a tradition of trickery indistinguishable from cinema itself. Watchmen is uncanny not because of its visual effects, but because it comes precariously close to convincing us that we are seeing Moore’s and Gibbons’s graphic novel preserved intact, when, after all, it is only a copy — and a lossy one at that. In flashes, the film fools us into forgetting that another version exists; but then the knowledge of an original, an other, comes crashing back in to sour the experience. It is not reality and its digital double whose narrowing difference freaks us out, but the aesthetic convergence between two media, threatening to collapse into each other through the use of ever more elaborate production tools and knowing appeals to fannish competencies. At stake: the very grounds of authenticity — the epistemic rules by which we recognize our originals.

I’ll conclude by noting that the character who most fascinated me in the graphic novel is also the one I couldn’t tear my eyes from onscreen: Dr. Manhattan. He’s “played” by Billy Crudup, and as I noted in my post on Space Buddies, the actor’s voice is our central means of accepting Manhattan as a living character. Equally magnificent, though, is the physical performance supplied by Manhattan’s digital surface, an iridescent azure body hanging from Crudup’s motion-captured face. Whether intentionally or through limits in the technology, Dr. Manhattan never quite fits into his surroundings, and that’s exactly as it should be; as conceived by Moore, he’s a buzzing collection of hyperparticles, a quantum ghost, and Snyder uses digital effects to nail Manhattan’s transhuman ontology. (He is, both diegetically and non-, a walking visual effect.) Presciently, the print version of Watchmen — published between 1986 and 1987, when CG characters were just starting to creep into movies (see Young Sherlock Holmes) — gave us in Dr. Manhattan our first viable personification of digital technology. The metaphysical underpinning and metaphorical implications of the print Manhattan, of course, are radioactivity and the atomic age, not digitality and the information revolution. But in the notion of an otherworldly force, decanted into a man-shaped vessel but capable of manipulating the very fabric of reality, they add up to much the same: Dr. Manhattan — synthespian avant la lettre.

Singing Along with Dr. Horrible

Duration, when it comes to media, is a funny thing. Dr. Horrible’s Sing-Along Blog, the web-distributed musical (official site here; Wiki here), runs a tad over 42 minutes in all, or about the length of an average hour-long block of TV entertainment with commercials properly interspersed. But my actual experience of it was trisected into chunks of approximately 15 minutes, for like your standard block of TV programming (at least in the advertising-supported format favored in the U.S.), Dr. Horrible is subdivided into acts, an exigence which shapes the ebb and flow of its dramatic humours while doing double service as a natural place to pause and reflect on what you’ve seen — or to cut yourself another slice of ice-cream cake left over from your Dairy-Queen-loving relatives’ visit.

That last would be a blatantly frivolous digression, except in this key sense: working my way through the three acts of Dr. Horrible was much like consuming thick slices of icy sweetness: each individual slab almost sickeningly creamy and indulgent, yet laced throughout with a tantalizingly bitter layer of crisp chocolate waferage. Like the cake, each segment of the show left me a little swoony, even nauseated, aesthetic sugar cascading through my affective relays. After each gap, however, I found myself hungry for more. Now, in the wake of the total experience, I find myself contemplating (alongside the concentrated coolness of the show itself) the changing nature of TV in a digital moment in which the forces of media evolution — and more properly convergence — have begun to produce fascinating cryptids: crossbred entities in which the parent influences, harmoniously combined though they might be, remain distinct. Sweet cream, bitter fudge: before everything melts together to become the soupy unremarkable norm, a few observations.

Ultimately, it took me more than two weeks to finish Dr. Horrible. I watched the first two acts over two nights with my wife, then finished up on my own late last week. (For her part, Katie was content to have the ending spoiled by an online forum she frequents: a modern Cliffs Notes for media-surfers in a hurry to catch the next wave.) So another durative axis enters the picture — the runtime of idiosyncratic viewing schedules interlacing the runtime of actual content, further macerated by multiple pausings and rewindings of the iPod Touch that was the primary platform, the postage-stamp proscenium, for my download’s unspooling. Superstring theorists think they have things hard with their 10, 11, or 26 dimensions!

As such, Horrible‘s cup of video sherbet was the perfect palate-cleanser between rounds of my other summer viewing mission — all five seasons of The Wire. I’m racing to get that series watched before the school year (another arbitrary temporal framework) resumes in three weeks; enough of my students are Wireheads that I want to be able to join in their conversations, or at least not have to fake my knowing nods or shush the conversation before endings can be ruined. On that note, two advisories about the suspense of watching The Wire. First, be careful on IMDb. Hunting down members of the exceptionally large and splendid cast risks exposing you to their characters’ lifespans: finding out that such-and-such exits the series after 10, 11, or 26 episodes is a pretty sure clue as to when they’ll take a bullet in the head or OD on a needle. Second, and relatedly, it’s not lost on this lily-white softy of an academic that I would not last two frickin’ seconds on the streets of Baltimore — fighting on either side of the drug wars.

Back to Dr. Horrible. Though other creators hold a somewhat higher place in my Pantheon of Showrunners (Ronald D. Moore with Battlestar Galactica, Matt Weiner with Mad Men, and above them all, of course, Trek‘s Great Bird of the Galaxy, Gene Roddenberry), Joss Whedon gets mad props for everything from Buffy the Vampire Slayer to Firefly/Serenity and for fighting his way Dante Alighieri-like through the development hell of Alien Resurrection. I was only so-so about the turn toward musical comedy Whedon demonstrated in “Once More with Feeling,” the BtVS episode in which a spell forced everyone to sing their parts; I always preferred Buffy when the beating of its heart of darkness drowned out its antic, cuddly lullabies.

But Dr. Horrible, in a parallel but separate universe of its own, is free to mix its ugliness and frills in a fresh ratio, and the (re)combination of pathos and hummable tunes works just fine for me. Something of an inversion of High School Musical, Dr. Horrible is one for all the kids who didn’t grow up pretty and popular. Moreover, its rather lonesome confidence in superweaponry and cave lairs suggests a masculine sensibility: Goth Guy rather than Gossip Girl. Its characters are presented as grownups, but they’re teenagers at the core, and the genius is in the indeterminacy of their true identities; think of Superman wearing both his blue tights and Clark Kent’s blue business suit and still not winning Lois Lane’s heart. My own preteen crush on Sabrina Duncan (Kate Jackson) of Charlie’s Angels notwithstanding, I first fell truly in love in high school, and it’s gratifying to see Dr. Horrible follow the arc of unrequited love, with laser-guided precision, to its accurate, apocalyptically heartbreaking conclusion.

What of the show as a media object, which is to say, a packet-switched quantum of graphic data in which culture and technology mingle undecidably like wave and particle? NPR hailed it as the first genuine flowering of TV in a digital milieu, and perhaps they’re right; the show looks and acts like an episode of something larger, yet it’s sui generis, a serial devoid of seriality. It may remain a kind of mule, giving rise to nothing beyond the incident of itself, or it may reproduce wildly within the corporate cradle of Whedon’s Mutant Enemy and in the slower, rhizomatic breeding beds of fanfic and fanvids. It’s exciting to envision a coming world in which garage- and basement-based production studios generate in plenty their own Dr. Horribles for grassroots dissemination; among the villains who make up the Evil League of Evil, foot-to-hoof with Bad Horse, there must surely stand an Auteur of Doom or two.

In the mise-en-abyme of digital networks, long tails, and the endlessly generative matrix of comic books and musical comedy, perhaps we will all one day turn out to be mad scientists, conquering the world only to find we have sacrificed the happy dreams that started it all.

Planet of the Apes

As my attention shifts to one of the major goals of the summer — drafting a proposal for my book on special and visual effects — I’ve started to augment my movie-a-day habit with some classic FX titles. These are films I’ve seen before, in some cases many times, but which need revisiting. Seeing them now can be a corrective shock, revealing my memory for the sloppy generalizing mechanism it is. Impressions of movies watched in childhood blend together, in the adult mind, like ingredients of a stew, a delicious melange that is nevertheless a kind of monotaste: a tidy averaging of visual and narrative pleasures that, with a fresh viewing, shatter back into discrete components. The movie again becomes a complex terrain rather than a distant map, a succession of contrasting images rather than a single iconic poster still, a cascade of rediscovered characters, tableaux, action setpieces, and lines of dialogue. It’s like opening a box of forgotten photographs.

In the case of Planet of the Apes — Franklin J. Schaffner’s 1968 original, not Tim Burton’s lousy 2001 remake — I was stunned to find a film far more stark, confident, somber, chilling, and stylish than the simplistic caricature to which I’d reduced it. My first encounter with Planet of the Apes came sometime in the mid-1970s, when it ran as part of “Ape Week” on our local ABC affiliate’s Four-O’Clock Movie. I’d get home from school in time to watch an hour or so of cartoons before the feature came on; Ape Week was just one of several themed lineups I looked forward to eagerly, including “James Bond Week” and “Monster Week” (a string of Eiji Tsuburaya‘s Godzilla and Mothra movies).

The Apes series was a perfect fit for the Four-O’Clock Movie because there was one for every day of the week: from Monday’s installment of the first film through Beneath the Planet of the Apes (1970) on Tuesday, Escape from the Planet of the Apes (1971) on Wednesday, Conquest of the Planet of the Apes (1972) on Thursday, and Battle for the Planet of the Apes (1973) on Friday. The end of the week didn’t mean an end to Apes, though. Right about that time, a live-action TV series aired, followed by an animated counterpart on Saturday mornings. It would be thirty years before I heard the term transmedia franchise, but — along with daily reruns of the original Star Trek series — Apes was my inaugural passport to the labyrinthine landscape of distributed science-fiction storyworlds.

What I loved about Planet of the Apes back then, and what has stayed with me over the years, can be summarized in two images that sent me into an ecstasy of eeriness: the ape makeup designed by John Chambers; and the famous final shot, in which the hero Taylor (Charlton Heston) stumbles across the ruins of the Statue of Liberty and realizes he’s been on Earth — not an alien world, but his own home — all this time. The frame is below; a grainy YouTube version can be found here.

It’s one of the great twist endings in SF — contributed, fittingly enough, by Rod Serling. But its unfortunate effect was to instantly reduce the movie to a grand cliche, a semiotic Shrinky-Dink, source of endless quotations and parodies in the decades that followed. The sad truth about twist endings is that they follow a logic opposite that of genre (in which the same patterns reappear over and over again without anyone taking offense; we applaud them, in fact, for their iterative familiarity): once given its Big Reveal, a twist shrivels on the vine, spoiled by critics, lampooned for its very memorability. Citizen Kane‘s Rosebud, The Sixth Sense‘s dead psychiatrist, St. Elsewhere‘s world-in-a-snowglobe — each exists, like Taylor’s final, horrible epiphany, as the ultimate self-annihilating closure, shutting down not just a particular narrative instance, but the possibility of its own resurrection in anything but smirkily insincere form. Shots like the one that concludes Planet of the Apes are, to me, a perfect example of Lacanian captation: they arrest and hold us in an escape-proof hermetic prison of the imaginary.

OK, psychoanalytic blather aside, what was so great about watching Planet of the Apes again? I suppose my answer is yet more Lacan, for both the apes and humans are trapped by and within their own misrecognitions. Taylor and his fellow astronauts firmly believe themselves to be on an alien planet, despite evidence to the contrary (the apes speak English); for their part, the apes see the humans as completely Other and cannot countenance any notion that there is an evolutionary link between them. It’s a comedy of evolutionary errors, the Scopes Trial replayed simultaneously as farce and deadpan drama. The truth of the situation is hidden, like the purloined letter, in plain sight; it is not until the end, in a traumatic confrontation with the Real, that Taylor traverses his fantasy. (Maybe that’s why the joke has been replayed so frequently in pop culture, from Spaceballs to The Simpsons; what is repetition but the insistent revisiting of trauma?) Of course, as often occurs in science fiction, the meta-misrecognition that operates here is failing to see in the portrayal of a “future” the actual representation of a “present.” Eric Greene’s Planet of the Apes as American Myth explores this aspect of the film and its sequels, arguing that Apes is a funhouse mirror held up to racial politics in the United States.

Bringing this all back home to the movie and its special effects, I see two kinds of misrecognition at play in the visuals, both of them integral to the suspension of disbelief by astronauts, apes, and audiences alike. First, of course, are the actual human beings (Roddy McDowall, Kim Hunter, Maurice Evans) beneath the prosthetics and hair appliances. The makeup and costumes that turn these actors into sentient, speaking apes do not mask or muffle the performances, but rather estrange and amplify them: we watch and listen for nuances of emotion, an amused glint in the eye, a subtle shift in intonation, precisely because they are cloaked in filmmaking technology. At first glance the masquerade is comical, almost grotesque, but it quickly gives way to some remarkably graceful performances. Our twinned awareness of the trickery and investment in the fantasy reflects the knife-edge calibration of disbelief attending the finest FX work.

But there’s a second register of misrecognition here, one I would have missed completely if I hadn’t been watching a pristine widescreen transfer of the film. The first act of Planet of the Apes consists of Taylor and his fellow astronauts trekking across the forbidding but beautiful scenery of Arizona and Utah — in particular, the area of the Colorado River known as Lake Powell:

I was dumbstruck by this natural backdrop of mountains, deserts, and water, as gorgeously alien as anything in Nicolas Roeg’s Walkabout (1971). It occurred to me that the genius of this portion of the movie — an opening thirty minutes before the apes even show up — is that it places the spectator in a homologous position to the stranded astronauts. Like them, we stare at a world that is at once ours and another’s; a landscape both earthly and unearthly. Like the ape makeup, the cinematography forces us into sublime attentiveness, consuming every detail of a setting made familiar by our experience with terrestrial features, then unfamiliar through a storyline that presents it as an alien world, then familiar again in the final beachside revelation.

I guess what I’m saying with all this is that Planet of the Apes stands out to me as much for its planet as for its apes; and that in both constructs (and our response to them) we glimpse something of the multitiered, shuttling structure of belief and disavowal that great special effects provoke.

Soul of a New Machine

Not much to add to the critical consensus around WALL-E; trusted voices such as Tim Burke’s, as well as the distributed hive mind of Rotten Tomatoes, agree that it’s great. Having seen the movie yesterday (a full two days after its release, which feels like an eternity by the clockspeed of media blogging), I concur — and leave as given my praise for its instantly empathetic characters, striking environments, and balletic storytelling. It’s the first time in a while that tears have welled in my eyes just at the beautiful precision of the choices being made 24 times per second up on the big screen; a happy recognition that Pixar, over and over, is somehow nailing it at both the fine level of frame generation and the macro levels of marketplace logic and movie history. We are in the midst of a classic run.

Building on my comments on Tim’s post, I’m intrigued by the trick Pixar has pulled off in positioning itself amid such turbulent crosscurrents of technological change and cinematic evolution: rapids aboil with mixed feelings about nostalgia for golden age versus the need to stay new and fresh. The movies’ mental market share — the grip in which the cinematic medium holds our collective imaginary — is premised on an essential contradiction between the pleasures of the familiar and the equally strong draw of the unfamiliar. That dialectic is visible in every mainstream movie as a tension between the predictability of genre patterns and the discrete deformations we systematize and label as style.

But nowadays this split has taken on a new visibility, even a certain urgency, as we confront a cinema that seems suddenly digital to its roots. Hemingway (or maybe it was Fitzgerald) wrote that people go bankrupt in two ways: first gradually, then all at once. The same seems true of computer technology’s encroachment on traditional filmmaking practices. We thought it was creeping up on us, but in a seeming eyeblink, it’s everywhere. Bouncing around inside the noisy carnival of the summer movie season, careening from the waxy simulacrum of Indiana Jones into the glutinous candied nightmare of Speed Racer, it’s easy to feel we’re waking up the morning after an alien invasion, to find ourselves lying in bed with an uncanny synthetic replacement of our spouse.

Pixar’s great and subtle achievement is that it makes the digital/cinema pod-people scenario seem like a simple case of Capgras Syndrome, a fleeting patch of paranoia in which we peer suspiciously at our movies and fail to recognize them as being the same lovable old thing as always. With its unbroken track record of releases celebrated for their “heart,” Pixar is marking out a strategy for the successful future of a fully digital cinema. The irony, of course, is that the studio is doing so by shrugging off its own cutting-edge nature, making high-tech products with low-tech content.

Which is not to say that WALL-E is lacking in technological sublimity. On the contrary, it’s a ringing hymn to what machines can do, both in front of and behind the camera. More so than the plastic baubles of Toy Story, the chitinous carapaces of A Bug’s Life, the scales and fins of Finding Nemo or the polished chassis of Cars, the performers in WALL-E capture the fundamental gadgety wonder of a CG character: they look like little robots, but in another, more inclusive sense they are robots — cyborged 2D sandwiches of actors’ voices, animators’ keyframes, and procedural rendering. There’s a longstanding trope in Pixar films that the coldly inorganic can be brought to life; think of the wooden effigy of a bird built by the heroes of A Bug’s Life, or the existential yearnings of Woody and Buzz Lightyear in the Toy Story films. WALL-E, however, calibrates a much narrower metaphorical gap between its subject matter and its underlying mode of production. Its sweetly comic drama of machines whose preprogrammed functionalities are indistinguishable from their lifeforce is like a reassuring parable of cinema’s future: whether the originating matrix is silicon or celluloid, our virtual pleasures will reflect (even enshrine) an enduring humanity.

I’ll forgo commentary on the story and its rich webwork of themes, except to note a felicitous convergence of technology’s hetero gendering and competing design aesthetics that remaps the Macintosh’s white curves onto the eggy life-incubator of EVE — juxtaposed with a masculine counterpart in the ugly-handsome boxiness of the PC and Linux worlds. I delighted in the film’s vision of an interstellar cruise liner populated by placid chubbies, but was also impressed by the opening 30-40 minutes set amid the ruins of civilization. It says something that for the second time this year, a mainstream science-fiction film has enticed us to imagine ourselves the lone survivor of a decimated earth, portraying this situation on one level as a prison of loneliness and on another as an extended vacation: tourists of the apocalypse. I refer here of course to the better-than-expected I Am Legend, whose vistas of a plague-depopulated Manhattan unfold in loving extended takes that invite Bazinian immersion and contemplation:

Beyond these observations, what stands out to me among the many pleasures of WALL-E are the bumper materials on either side of the feature: the short “Presto,” which precedes the main film, and the credit sequence that closes the show. Such paratexts are always meaningful in a Pixar production, but tend to receive less commentary than the “meat” of the movie. Tim points out accurately that “Presto” is the first time a Pixar short has captured the antic Dionysian spirit of a Tex Avery cartoon (though I’d add that Avery’s signature eruption of the id, that curvaceous caricature of womanhood Red, was preemptively foregrounded by Jessica Rabbit in 1988’s Who Framed Roger Rabbit; such sex-doll humor seems unlikely to be emulated any time soon in Pixar’s family-friendly universe — though the Wolf could conceivably make an appearance). What I like about “Presto” is the short’s reliance on “portal logic” — the manifold possibilities for physical comedy and agonistic drama in the phenomenon of spatial bilocation, so smartly operationalized in the Valve videogame Portal.

As for the end credits of WALL-E, they are unexpectedly daring in scope, recapitulating the history of illustration itself — compressing thousands of years of representational practices in a span of minutes. As the first names appear onscreen, cave drawings coalesce, revealing what happens as robots and humans work together to repopulate the earth and nurse its ecosystem back to health. The cave drawings give way to Egyptian-style hieroglyphs and profiled 2D portraiture, Renaissance perspective drawings, a succession of painterly styles. Daring, then subversive: from Seurat’s pointillism, Monet’s impressionism, and Van Gogh’s loony swirls, the credits leap to 8-bit computer graphics circa the early 1980s — around the time, as told in David A. Price’s enjoyable history of the studio, that Pixar itself came into existence. WALL-E and his friends cavort in the form of jagged sprites, the same as you’d find in any Atari 2600 game, or perhaps remediated on the tiny screens of cell phones or the Wii’s retrographics.

I’m not sure what WALL-E‘s credits are “saying” with all this, but surely it provides a clue to the larger logic of technological succession as it is being subtextually narrated by Pixar. Note, for example, that photography as a medium appears nowhere in the credits’ graphic roll call; more scandalously, neither does cinematography — nor animation. In Pixar’s restaging of its own primal scene, the digital emerges from another tradition entirely: one more ludic, more subjective and individualistic, more of an “art.” Like all ideologies, the argument is both transparently graspable and fathoms deep. Cautionary tale, recuperative fantasy, manufactured history doubling as road map for an uncertain digital future: Pixar’s movies, none more so than WALL-E, put it all over at once.

Jumper

It’s not hard to see what Doug Liman intended Jumper (2008) to be: a slick, stylish action-adventure, paced to the quick rhythm of its protagonist’s wormhole-assisted leaps through space. In terms of emotional tone, something a bit less serious than The Bourne Identity (2002) and more serious than Mr. and Mrs. Smith (2005), with a touch of the structural experimentation of 1999’s Go (probably my favorite of Liman’s movies, even over Swingers [1996], which, while funny, wore its prefab-indie-classic branding a little too emphatically).

But Jumper turns out to be an anemic misfire, its frictionless construction (which might, during preproduction, have seemed a strength) resulting in something like those Olestra Doritos I used to eat: tasty, low in calories, and passing with liquid brevity through the digestive system. Or — a better metaphor — like David Rice (Hayden Christensen) himself, a young man who through some never-explained and never-sweated mutation of genetics, neurochemistry, or both, can teleport instantly from one place on earth to another. Building a plot around a person unbound by basic physical laws is always risky. First, there’s the problem of identification: truly super superbeings are impossible to empathize with, a notion explored brilliantly through the figure of Doctor Manhattan in Alan Moore’s Watchmen. Second, it’s hard to embed super-powerful beings in dramatic situations that offer any real challenge or suspense. Think of the “burly brawl” in The Matrix Reloaded (2003): Neo’s hyperkungfu turned his showdown with hundreds of Agent Smiths into an inadvertently funny dance number, spectacular in the manner of Busby Berkeley musicals and charming in the manner of Buster Keaton slapstick — but never exciting, because nothing was at stake.

To get around this dilemma, our fantasies of superpower have yoked the anomalous beings at their center to various forms of existential and psychological ennui. DC got it right with Superman and Batman, both orphans, one an extraterrestrial “stranger in a strange land” and one a PTSD-afflicted vigilante. Superman’s love for Lois Lane is ultimately a hobbling force, locking him to a human scale of emotions and practical concerns (why else would he need to take a 9-5 job at the Daily Planet?). In literature, Billy Pilgrim — the haunted hero of Kurt Vonnegut’s Slaughterhouse-Five (1969) — travels through time, but not under his own direction; instead he revisits traumatic moments of war and family life, adrift in a temporal ocean. (A similar theme organizes the lovely, tearjerking tapestry of Audrey Niffenegger’s 2003 novel The Time Traveler’s Wife.)

Jumper would have been more interesting if David’s teleporting ability took him only to places where he’d fallen in love or feared for his life — or if he compulsively returned to the same locations over and over, without meaning to: a Freudian trip. The film’s mystery might then have resonated as much inwardly as outwardly. As it is, the situation with which the screenwriters have saddled David, and us, is both needlessly elaborate and absurdly simplistic. Bad guys called Paladins hunt those who can teleport, using a range of electrified devices (again, both baroque and silly in their design) to anchor, trap, and ultimately kill the jumpers. It may not be a sin that the Paladins’ motivation isn’t explained in more detail — the opening crawl of Star Wars taught a generation of filmgoers the value of ruthlessly boiling down exposition — but it would have been nice to learn even a little bit about how their “civil war” with the jumpers has played out over history. (Since jumpers must use images to target their more exotic jumps, how would they have functioned in a pre-photographic era?)

Ah well. The film is more interested in portraying David as a kind of supertourist, someone who can go wherever he wants, whenever he wants — enjoying a picnic atop the head of the Sphinx, followed by surfing in Thailand. This has the effect of equating teleportation with ownership of a really good credit card, a consumerist fantasy of total access and freedom. A shame, because the charm of Steven Gould’s 1992 source novel is in showing how David learns his way gradually into his power, staged as a series of plausibly awkward experiments and epiphanies. The book, that is, eases us into a superhuman life by showing us each incremental point on the hero’s journey. The movie, by contrast, skips all that — “jumps” past it — giving us a protagonist who seems petulant rather than plaintive, arrogant rather than awesome. (The fact that he is played by the same piece of plastic who sank the Star Wars prequels doesn’t help.)

Liman’s comments to the contrary, Jumper is very much a visual-effects film; the teleportation effect is as much the movie star as Christensen. According to the DVD extras (and here’s a tip: if the FX get their own doc, it’s a safe bet the picture was bankrolled on the basis of them), considerable R&D went into getting the jumps just right. Wikipedia records over 100 jumps in the film, each subtly adjusted to reflect the distance and emotional state of the jumper. The effect itself is really a package of techniques. Characters appear and disappear in a swirl of particles, as though they’ve turned into ash and blown away; the local environment is stirred as though in a strong wind, papers fluttering, doors slamming; and hazy, prismatic “jump scars” remain in their wake, marking the point at which spacetime has conveniently ruptured. Often all of this is accompanied by offhand flicks of the camera, as though following the body’s transit through hyperspace; a shot might begin with a quick pan downward to street level, an instant before the jumper appears.

For the most part, we witness jumps from the “outside,” that is, with a body popping out of and back into presence without traveling the intervening distance. (Some of the most pleasing instances are long, motion-controlled takes in which a body — or car! — might appear five or six times, dancing from spot to spot in the frame.) But every once in a while, the camera goes virtual and follows David through a jump, as in the example below, in which he shifts himself from an icy lake to a local library:

All this jumping about is undeniably fun, a kind of cat-and-mouse in which each leap arrives slightly before or after the audience predicts. It looks stylish, as though the jumpers are being sucked away by concealed vacuums. But ultimately, it doesn’t add up to anything more than itself: in an act of accidental self-referentiality, Jumper the movie is a movie that Jumps, and that’s all.

Indiana Jones and the Unattainable FX Past

This isn’t a review, as I haven’t yet made it to the theater to see Indiana Jones and the Kingdom of the Crystal Skull (portal to the transmedia world of Dr. Jones here; typically focused and informative Wiki entry here). What I have been doing — breaking my normal rule about keeping spoiler-free — is poring over fan commentaries on the new movie, swimming within the cometary aura of its street-level paratexts, working my way into the core theatrical experience from the outside in. This wasn’t anything intentional, more the crumbling of an internet wall that sprang one informational leak after another, until finally the wave of words washed over me like, well, one of the death traps in an Indiana Jones movie.

Usually I’m loath to take this approach, finding the twists and turns of, say, Battlestar Galactica and Lost far more compelling when they clobber me unexpectedly (and let me add, both shows have been rocking out hard with their last couple of episodes). But it seemed like the right approach here. Over the years, the whole concept of Indiana Jones has become a diffuse map, gas rather than solid, ocean rather than island. Indy 4 is a media object whose very essence — its cultural significance as well as its literal signification, the decoding of its concatenated signage — depends on impacted, recursive, almost inbred layers of cinematic history.

On one level, the codes and conventions of pulp adventure genres, 1930s serials and their ilk, have been structured into the series film by film, much like the rampant borrowings of the Star Wars texts (also masterminded by George Lucas, whose magpie appropriations of predecessor art are cannily and shamelessly redressed, in his techno-auteur house style, as timelessly mythic resonance). But by now, 27 years after the release of Raiders of the Lost Ark, the Indy series must contend with a second level of history: its own. The logic of pop-culture migration has given way to the logic of the sequel chain, the franchise network, the transmedia system; we assess each new installment by comparing it not to “outside” films and novels but to other extensions of the Indiana Jones trademark. Indy 4, in other words, cannot be read intertextually; it must be read intratextually, within the established terms of its brand. And here the franchise’s history becomes indistinguishable from our own, since it is only through the activity of audiences — our collective memory, our layered conversations, the ongoing do-si-do of celebration, critique, and comparison — that the Indy texts sustain any sense of meaning above and beyond their cold commodity form.

All of this is to say that there’s no way Indiana Jones and the Kingdom of the Crystal Skull could really succeed, facing as it does the impossible task of simultaneously returning to and building upon a shared and cherished moment in film history. While professional critics have received the new film with varying degrees of delight and disappointment, the talkbacks at Ain’t It Cool News (still my go-to site for rude and raucous fan discourse) are far more scornful, even outraged, in their assessment. Their chorused rejection of Indy 4 hits the predictable points: weak plotting, flimsy attempts at comic relief, and in the movie’s blunt infusion of science-fiction iconography, a generic splicing so misjudged / misplayed that the film seems to be at war with its own identity, a body rejecting a transplanted organ.

But running throughout the talkback is another, more symptomatic complaint, centering on the new film’s overuse of CG visual effects. The first three movies — Raiders, Temple of Doom, and Last Crusade — covered a span from 1981 to 1989, an era which can now be retroactively characterized as the last hurrah of pre-digital effects work. All three feature lots of practical effects — stuntwork, pyrotechnics, and the on-set “wrangling” of everything from cobras to cockroaches. But more subtly, all make use of postproduction optical effects based on non-digital methods: matte paintings, bluescreen compositing, a touch of cel animation here, a cloud tank there. Both practical and optical effects have since been augmented if not colonized outright by CG, a shift apparently unmissable in Indy 4. And that has longtime fans in an uproar, their antidigital invective aimed variously at Lucas’s influence, the loss of verisimilitude, and the growing family resemblance of one medium (film) to another (videogames):

The Alien shit didnt bother me at all, it was just soulless and empty as someone earlier said.. And the CGI made it not feel like an Indy flick in some parts.. I walked out of the theater thinking the old PC game Fate of Atlantis gave me more Indiana joy than this piece of big budget shit.

My biggest gripe? Too much FUCKING CGI. The action lacked tension in crucial places. And there were too many parts (more than from the past films) where Looney Tunes physics kept coming into play. By the end, when the characters endure 3 certain deaths, you begin to think “Okay, the filmmakers are just fucking around, lean back in your seat and take in the silliness.” No thanks. That’s not what makes Indiana Jones movies fun.

This film was AVP, The Mummy Returns and Pirates of the Fucking Carribean put together, a CGI shitfest. A long time ago in a galaxy far far away, Lucas said “A special effect is a tool, a means to telling astory, a special effect without a story is a pretty boring thing.” Take your own advice Lucas, you suck!!!

The entire movie is shot on a stage. What happened to the locations of the past? The entire movie is CG. What a disappointment. I really, REALLY wanted to enjoy it.

Interestingly, this tension seems to have been anticipated by the filmmakers, who loudly claimed that the new film would feature traditional stuntwork, with CGI used only for subtleties such as wire removal. But the slope toward new technologies of image production proves to be slippery: according to Wikipedia, CG matte paintings dominate the film, and while Steven Spielberg allegedly wanted the digital paintings to include visible brushstrokes — as a kind of retro shout-out to the FX artists of the past — the result was neither nostalgically justifiable nor convincingly indexical.

Of course, I’m basing all this on a flimsy foundation: Wiki entries, the grousing of a vocal subcommunity of fans, and a movie I haven’t even watched yet. I’m sure I will get out to see Indy 4 soon, but this expedition into the jungle of paratexts has definitely diluted my enthusiasm somewhat. I’ll encounter the new movie all too conscious of how “new” and “old” — those basic, seemingly obvious temporal coordinates — exceed our ability to construct and control them, no matter how hard the filmmakers may try, no matter how hard we audiences may hope.

A Marvel of Engineering

The opening act of the summer movie season, Iron Man, is much like the machine armor worn by Tony Stark (Robert Downey Jr.): a potent blend of advanced technology, sleek style, and glowing energy. The fetishism of the super-suit has rarely been quite so explicitly rendered, or embraced with such pornographic shamelessness, in comic-book cinema. Sure, movies and television have given us plenty of heroes whose iconic power resides in the costume (whether caped or capeless): Christopher Reeve’s Superman, Tobey Maguire’s Spider-Man, the leather-overcoat-and-sunglasses combo of Wesley Snipes in Blade, the patriotic bustier worn by Lynda Carter as Wonder Woman. Often these sartorial choices become flashpoints of controversy with fans: think of Bryan Singer’s X-Men adaptation, which did away with the classic yellow costumes of the comic series, or the many nippled and sculpted variations of the Batsuit worn by the Batactors playing a series of Batmen in the Batfranchise.

In Iron Man, the situation is different, for Iron Man is his suit, the “secret identity” within being a figure compromised both morally and physically. Over the course of the story, Stark’s transformation from a hard-partying weapons magnate to a passionately peace-committed and (mostly) teetotaling cyborg is made concrete — made metal, really — through the metaphor of the successively more sophisticated armor shells in which he encases himself. The first is a kitbashed monstrosity the color of corroded tin cans, crisscrossed with scars of solder. Like a rustbucket car, it survives just long enough to convey Stark to version two, a more compact silver exoskeleton reminiscent of Ginsu knives and Brookstone gadgets. It’s quite satisfying when Stark arrives at the canonical configuration, a red-and-gold chassis of interlocking plates, purring hydraulics, and HUD graphics that answers the question “What would it be like to wear a Lamborghini?”

What’s clever is how these upgrades express Stark’s ethical evolution while recapitulating decades of shifting design in the Marvel comic series from which this movie sprang. (It’s kind of like an I’m Not There version of James Bond in which the lead is played over the course of the film by Sean Connery, then Roger Moore, followed by Timothy Dalton, Pierce Brosnan, and Daniel Craig — with a quick dream interlude, of course, starring George Lazenby.) Iron Man, in other words, manages to honor superhero history rather than pillaging it, and this — along with the film’s smart screenplay and glossy digital mise-en-scene — wins it a sure place in future best-of lists when it comes to the spotty genre of comic-book adaptations.

Another weapon in the film’s arsenal, riding within its narrative delivery system as surely as Stark pilots his mechanical costume, is Robert Downey Jr., who seems to have arrived at a point of perfect intertextual harmony with this turn in his career. His performance as Stark is properly lived-in and mischievous (indeed, the actor’s persona is yet another kind of suit, this one built of bad publicity) yet alive with disarmingly sincere warmth. In one of the film’s facile yet pitch-perfect tropes, Stark must wear a pulsing blue-and-white generator on his chest, a kind of electromagnetic pacemaker that doubles as an energy source for his armor and trebles as a signifier of the character’s humanity. Downey Jr. is himself a kind of power source that propels the vehicle of Iron Man forward while lending it, in stray moments, genuine moral weight. His portrayal reminds us that, while a superhero’s technological or organic essence is important, the larger ingredient is something harder to quantify and calibrate — in Stark’s case, a kludge of intellect, imagination, and compassion that easily trades one outfit for another.

That’s not all that’s going on under the hood: the fight scenes, not to mention flight scenes, are awesome, and Jeff Bridges is remarkably menacing with all his hair relocated from his head to his chin. There’s some nimble ideological shadowboxing around the military-industrial complex and the terrible allure of “shock and awe” (there’s your real pornography). And while Stan Lee makes his trademarked cameo, branding this as yet another item in Marvel’s transmedia catalog, a quirky counterpoint is sounded when a villain taunts Stark by asking “Did you really think that just because you had an idea, it belongs to you?” For comic fans, it’s hard not to hear this and think of Marvel’s infamous screwing of Jack Kirby. At the same time, we should count our blessings that concepts like Iron Man travel so fluidly through our mediascape, rephrasing themselves in the transformational grammar of convergence cinema. Too often, the result is an ugly and lifeless thing — a strangled fragment like Daredevil, a run-on sentence like Van Helsing. But once in a lucky while, you get a perfect little poem like Iron Man.