Cloverfield and the Mystery Box of Abrams’s Authorship

cloverfield.jpg

I expected a little more from J. J. Abrams’s talk at TED.com. My first disappointment was in realizing that the presentation is almost a year old: he gave it in March 2007, and waiting till now to air it smacks of a publicity push for Cloverfield, the new monster movie produced by Abrams and directed by Matt Reeves, set for release one week from now (or as teaser images like the one above would have it — striving for 9/11-like gravitas — 1.18.08).

The second disappointment came from the disconnect between the content of the talk and the mental picture I’d formed based on the blurb:

There’s a moment in J.J. Abrams’ amazing new TEDTalk, on the mysteries of life and the mysteries of storytelling, where he makes a great point: Filmmaking as an art has become much more democratic in the past 10 years. Technology is letting more and more people tell their own stories, share their own mysteries. Abrams shows some examples of high-quality films made on home computers, and shares his love of the small, emotional moments inside even the biggest blockbusters.

Somehow I took these innocuous words as a promise of some major revelation from Abrams, a writer-producer-director-showrunner on whose bandwagon I’ve been all too happy to hop. Alias was a great show for its first couple of seasons, Lost continues to be blissful mind candy, and I quite liked Mission: Impossible III (though I seem to be one of the few who did). My reservations about Abrams’s Star Trek reboot aside, I’ll follow the man anywhere at this point. But I found his talk a frustrating ramble, full of half-told jokes and half-completed insights, shifting more or less randomly from his childhood love of magic tricks to the power of special effects to “do anything.” Along the way he shows a few movie clips, makes a lot of people laugh and applaud, essentially charming his way through a loosely-organized scramble of ideas that feel pulled from his back pocket.

More fool me for projecting so helplessly my own hunger for insider knowledge. What I wanted, I now realize, was stories about Cloverfield. Like many genre fans, I’m endlessly intrigued by the film, about which little is known except that little is known about it. The basic outline is clear enough: giant monster attacks New York City. What distinguishes Cloverfield from classic kaiju eiga like Toho’s Godzilla films — and this is what’s got interested parties both excited and dismayed — is the storytelling conceit: consisting entirely of “found footage,” Cloverfield shows the attack from ground level, in jumpy snatches of handheld shots supposedly retrieved from consumer video cameras and cell phones. Like The Blair Witch Project, which attempted to breathe new life into the horror genre by stripping it of its tried-and-true (and trite) conventions of narrative and cinematography, Cloverfield, for those who accept its experimental approach, may pack an exhilarating punch.

For those who don’t, however, the film will stand as merely the latest reiteration of the Emperor’s New Clothes, another media “product” failing to live up to its hype. And that’s what is ultimately so interesting about Abrams’s talk at TED: it embodies the very effect that Abrams is so good at injecting into the stories he oversees. In the manner of M. Night Shyamalan, who struggles ever more unconvincingly with each new film to brand himself a master of the surprise twist, Abrams’s authorship has become associated with a sense of unfolding mystery, enigmatic tapestries glimpsed one tantalizing thread at a time. One doesn’t watch a series like Lost so much as decipher it; the pleasure comes from a complex interplay of continuity and surprise, the marvelous sense of teetering eternally at the brink of chaos even as new symmetries and patterns become legible.

Abrams’s stories are like magic tricks, full of misdirection and sleight of hand. It drives some people crazy — they see it as nothing more than a shell game, and they ask, with some justification, when we’ll finally get to the truth, the Big Reveal. But as his talk at TED demonstrates, Abrams has always been more about the agile foreshadowing than the final result. It’s a style built paradoxically on the deferral, really the denial, of pleasure — a curious and almost masochistic structure of feeling in our pop culture of instant gratification.

Perhaps that’s where the TED talk’s value really resides. Gabbing about the “mystery box” — a metaphor promiscuously encompassing everything from a good suspense story to bargain-basement digital visual effects to the blank page awaiting an author’s pen — Abrams delivers no substantive content. But he does provide the promise of it: the sense that a breakthrough is just around the corner. It’s an authorial style suited to the rhythms and structure of serial television, which can give closure only through opening up new mysteries. Whether it will work within the bounded length of Cloverfield, that risky mystery box that will open for our inspection next Friday, remains to be seen.

Smut in 1080p

This article on the porn industry’s response to the HD DVD / Blu-ray format wars caught my eye, reminding me that changing technological standards are an equal-opportunity disrupter. It’s not only the big movie studios (like Warner Brothers, which made headlines last week by throwing its weight behind Blu-ray) that must adapt to the sensory promise and commercial peril of HD, but porn providers, television networks, and videogame makers: up and down and all around the messy scape of contemporary media, its brightly-lit and family-friendly spaces as well as its more shadowy and stigmatized precincts.

The prospect of HD pornography is interesting, of course, because it’s such a natural evolution of this omnipresent yet disavowed form. The employment of media to stimulate, arouse, and drive to climax the apparatus of pleasure hard-wired into our brains and bodies is as old as, well, any medium you care to name. From painted scrolls to printed fiction, stag reels to feature films, comic books to KiSS dolls, porn has always been with us: the dirty little secret (dirty big secret, really, since very few are unaware of it) of a species whose unique co-evolution of optical acuity, symbolic play, and recording and playback instrumentalities has granted us the ability — the curse, some would say — to script and immerse ourselves in virtual realities on page and screen. That porn is now making the leap to a technology promising higher-fidelity imaging and increased storage capacity is hardly surprising.

The news also reminds us of the central, integral role of porn in the economic fortunes of a given medium. I remember discovering, as a teenager in the 1980s, that the mom-and-pop video stores springing up in my home town invariably contained a back room (often, for some reason, accessed through swinging wooden doors like those in an old-time saloon) of “adult” videocassettes. In the 1990s a friend of mine, manager of one of the chain video places that replaced the standalone stores, let me in on the fact that something like 60% of their revenues came from rentals of porn. The same “XXX factor” also structures the internet, providing a vastly profitable armature of explicit websites and chat rooms — to say nothing of the free and anonymous fora of newsgroups, imageboards, torrents, and file-sharing networks — atop which the allegedly dominant presence of Yahoo, Amazon, Google, etc. seem like a thin veneer of respectable masquerade, as flimsy a gateway as those swinging saloon doors.

The inevitable and ironic question facing HD porn is whether it will show too much, a worry deliciously summarized in the article’s mention of “concern about how much the camera would capture in high-definition.” The piece goes on to quote Albert Lazarito, vice president of Silver Sinema, as saying that “Imperfections are modified” by the format. (I suspect that Lazarito actually said, or meant, that imperfections are magnified.) The fact is that porn is frequently a grim, almost grisly, affair in its clinical precision. Unlike the soft-core content of what’s often labeled “erotica,” the blunt capture of sexual congress in porn tends to unfold in ghoulishly long takes, more akin to footage from a surveillance camera or weather satellite than the suturing, storytelling grammar of classical Hollywood. Traditional continuity editing is reserved for the talky interludes between sexual “numbers,” resulting in a binary structure something like the alternation of cut-scenes and interactive play in many videogames. (And here let’s not forget the words attributed to id Software’s John Carmack, Edison of the 3D graphics engine, that “Story in a game is like a story in a porn movie. It’s expected to be there, but it’s not that important.”)

The product of an industry that sometimes thrives on the paired physical and economic exploitation of its onscreen workers, porn imagery contains its share of bruises, needle marks, botched plastic surgeries, and poorly-concealed grimaces of boredom (at best) or pain (at worst). How will viewers respond to the pathos and suffering at the industry’s core — of capitalism’s antihumanism writ large across the bodies offered up for consumers’ pleasure-at-a-distance — when those excesses are rendered in resolutions of 1920×1080?

Dumbledore: Don’t Ask, Don’t Tell

dumbledore.jpg

I was all set to write about J. K. Rowling’s announcement that Albus Dumbledore, headmaster of Hogwarts, was gay, but Jason Mittell over at JustTV beat me to it. Rather than reiterating his excellent post, I’ll just point you to it with this link.

Here’s a segment of the comment I left on Jason’s blog, highlighting what I see as a particularly odd aspect of the whole event:

On a structural level, it’s interesting to note that Rowling is commenting on and characterizing an absence in her text, a profound lacuna. It’s not just that Dumbledore’s queerness is there between the lines if you know to read for it (though with one stroke, JKR has ensured that future readers will do so, and probably quite convincingly!). No, his being gay is so completely offstage that it’s tantamount to not existing at all, and hence, within the terms of the text, is completely irrelevant. It’s as though she said, “By the way, during the final battle with Voldemort, Harry was wearing socks that didn’t match” or “I didn’t mention it at the time, but one of the Hogwarts restrooms has a faucet that leaked continuously throughout the events of the seven books.” Of course, the omission is far more troubling than that, because it involves the (in)visibility of a marginalized identity: it’s more as though she chose to reveal that a certain character had black skin, though she never thought to mention it before. While the move seems on the surface to validate color-blindness, or queer-blindness, with its blithe carelessness, the ultimate message is a form of “stay hidden”; “sweep it under the rug”; and of course, “Don’t ask, don’t tell.”

We’ve got two more movies coming out, so of course it will be interesting to see how the screenwriters, directors, production designers, etc. — not to mention Michael Gambon — choose to incorporate the news about Dumbledore into the ongoing mega-experiment in cinematic visualization. My strong sense is that it will change things not at all: the filmmakers will become, if anything, scrupulously, rabidly conscientious about adapting the written material “as is.”

But I disagree, Jason, with your contention that Rowling’s statement is not canonical. Come on, she’s the only voice on earth with the power to make and unmake the Potter reality! She could tell us that the whole story happened in the head of an autistic child, à la St. Elsewhere, and we’d have to believe it, whether we liked it or not — unless of course it could be demonstrated that JKR was herself suffering from some mental impairment, a case of one law (medical) canceling out another (literary).

For better or worse, she’s the Author — and if that concept might be unraveling in the current mediascape, all the more reason that people will cling to it, a lifejacket keeping us afloat amid a stormy sea of interpretation.

One Nation Under Stephen

colbert.jpg

I felt a delicious chill as I read the news that Stephen Colbert is running for President. (He made his announcement on Tuesday’s edition of The Colbert Report, the half-hour news and interview program he hosts on Comedy Central.) Why a chill? For all that I enjoy and respect Colbert, he has always prompted in me a faint feeling of vertigo. Watching his comedy is like staring into a deep well or over the side of a tall building: you get the itchy feeling in your legs of wanting to jump, to give yourself up to gravity and the abyss, obliterating yourself and all that you hold dear. Colbert’s impersonation of a rabidly right-wing, plummily egotistical media pundit is so polished and impenetrable that it stops being a joke and moves into more uncannily undecidable territory: simulation, automaton, a doll that has come to life. Unlike Jon Stewart’s satire on The Daily Show, Colbert’s doesn’t have a target, but becomes the target, triggering a collapse of categories, an implosion, a joke that eats itself and leaves its audience less thrilled than simply unsure (cf. Colbert’s performance at the 2006 White House Correspondents Dinner, at which he mapped uneasy smiles and half-frowns across a roomful of Republican faces).

Judging from Colbert’s offstage discussion of his work, like his recent interview with Terry Gross of Fresh Air, he’s a modest, sensible, reflective guy, able to view his Report persona with wit and detachment even as he delights in using it to generate ever more extreme, Dada-like interventions in popular and political culture — his Wikipedia mischief being only one instance. My half-serious worry is that with his latest move, he’s unleashed something far bigger than he knows or can control. The decision to place himself on the 2008 Presidential ballot, even if only in South Carolina, has been received by the mainstream media primarily as another ironic turn of the comedy-imitates-reality-imitates-art cycle, with commentators noting the echo of Robin Williams’s Man of the Year (2006) and comedian Pat Paulsen’s bid for the White House in 1968. But I think the more accurate and alarming comparison might be Larry “Lonesome” Rhodes, the character played by Andy Griffith in Elia Kazan’s A Face in the Crowd (1957). In that film, Rhodes goes from being a bumpkinish caricature on a television variety show to a populist demagogue, drunk on his own power and finally revealed as a hollow shell, a moral vacuum. The unsubtle message of Kazan’s film is that TV’s pervasive influence makes it a tool for our most destructive collective tendencies — a nation of viewers whose appetite for entertainment leads them to eagerly embrace fascism.

griffith.jpg

I’d be lying — or at least being flippant — if I claimed to believe that Colbert could be another “Lonesome” Rhodes. I’m neither that cynical about our culture nor that paranoid about the power of media. But given that we live in an era when the opportunities for self-organizing social movements have multiplied profoundly through the agency of the internet, who is to say whether Colbert’s campaign comedy might mutate smoothly into something more genuine? Maybe he is, at this moment in history, the perfect protest candidate, smoother and more telegenic than Nader and Perot by orders of magnitude. He just might win South Carolina. And if that happens … what next?

Beep … Beep … Beep …

sputnik-image-1.gif

The Soviet satellite Sputnik, launched fifty years ago today, is stitched into my family history in an odd way. A faded Polaroid photograph from that year, 1957, shows my older siblings gathered in the living room in my family’s old house. The brothers and sisters I would come to know as noisily lumbering teenage creatures who alternately vied for my attention and pounded me into the ground are, in the image, blond toddlers messing around with toys. There also happens to be a newspaper in the frame. On its front page is the announcement of Sputnik’s launch.

Whatever the oblique and contingent quality of this captured moment — one time-stopping medium (newsprint) preserved within another (the photograph) — I’ve always been struck by how it layers together so many kinds of lost realities, realities whose nature and content I dwell upon even though, or because, I never knew them personally. Sputnik’s rhythmically beeping trajectory through orbital space echoes another, more idiomatic “outer space,” the house where my family lived in Ann Arbor before I was born (in the early 1960s, my parents moved across town to a new location, the one that I would eventually come to know as home). These spaces are not simply lost to me, but denied to me, because they existed before I was born.

Which is OK. Several billion years fall into that category, and I don’t resent them for predating me, any more than I pity the billions to come that will never have the pleasure of hosting my existence. (I will admit that the only time I’ve really felt the oceanic impact of my own inevitable death was when I realized how many movies [namely all of them] I won’t get to see after I die.) If I’m envious of anything about that family in the picture from fall 1957, it’s that they got to be part of all the conversations and headlines and newspaper commentaries and jokes and TV references and whatnot — the ceaseless susurration of humanity’s collective processing — that accompanied the little beeping Russian ball as it sliced across the sky.

As a fan of the U.S. space program, I didn’t think I really cared that much about Sputnik until I caught a story from NPR on today’s Morning Edition, which profiled the satellite’s designer, Sergei Korolev. One of Korolev’s contemporaries, Boris Chertok, relates how Sputnik’s shape “was meant to capture people’s imagination by symbolizing a celestial body.” It was the first time, to be honest, I’d thought about satellites being designed as opposed to engineered — shaped by forces of fashion and signification rather than the exigencies of physics, chemistry, and ballistics. One of the reasons I’ve always liked probes and satellites such as the Surveyor moon probe, the Viking Martian explorer, the classic Voyager, and my personal favorite, the Lunar Orbiter 1 (pictured here), is that their look seemed entirely dictated by function.

lunar_orbiter.jpg

Free of extras like tailfins and raccoon tails, flashing lights and corporate logos, our loyal emissaries to remoteness like the Mariner or Galileo satellites possessed their own oblivious style, made up of solar panels and jutting antennae, battery packs and circuit boxes, the mouths of reaction-control thrusters and the rotating faces of telemetry dishes. Even the vehicles built for human occupancy — Mercury, Gemini, and Apollo capsules — I found beautiful, or in the case of Skylab or the Apollo missions’ Lunar Module, beautifully ugly, with their crinkled reflective gold foil, insectoid angles, and crustacean asymmetries. My reverence for these spacefaring robots wasn’t limited to NASA’s work, either: when the Apollo-Soyuz docking took place in 1975 (I was nine years old then; the docking and Sputnik’s launch bracketed my 1966 birth at equal distances), it was like two creatures from the deep sea getting it on — literally bumping uglies.

apollo-soyuz.jpg

So the notion that Sputnik’s shape was supposed to suggest something, “symbolizing a celestial body,” took me at first by surprise. But I quickly came to embrace the idea. After all, the many fictional space- and starships that have obsessed me from childhood — the Valley Forge in Silent Running, the rebel X-Wings and Millennium Falcon from Star Wars, the mothership from Close Encounters of the Third Kind, the Eagles from Space: 1999, and of course the U.S.S. Enterprise from Star Trek — are, to a one, the product of artistic over technical sensibilities, no matter how the modelmakers might have kit-bashed them into verisimilitude. And if Sputnik’s silver globe and backswept antennae carried something of the 50s zeitgeist about them, it’s but a minuscule reflection of the satellite’s much larger, indeed global, signification: the galvanizing first move in a game of orbital chess, the pistol shot that started the space race, the announcement — through the unbearably lovely, essentially passive gesture of free fall, a metal ball dropping endlessly toward an earth that swept itself smoothly out of the way — that the skies were now open for warfare, or business, or play, as humankind chooses.

Happy birthday, Sputnik!

On Fanification

A recent conversation on gender and fandom hosted at Henry Jenkins’s blog prompted me to hold forth on the “fanification” of current media — that is, my perception that mainstream television and movies are displaying ever more cultlike and niche-y tendencies even as they remain giant corporate juggernauts. Nothing particularly earthshaking in this claim; after all, the bifurcation and multiplication of TV channels in search of ever more specialized audiences is something that’s been with us since the hydra of cable lifted its many heads from the sea to do battle with the Big Three networks.

My point is that, after thirty-odd years of this endless subdivision and ramification, texts themselves are evolving to meet and fulfill the kinds of investments and proficiencies that — once upon a time — only the obsessive devotees of Star Trek and soap operas possessed. The robustness and internal density of serialized texts, whether in small-screen installments of Lost or big-screen chapters of Pirates of the Caribbean, anticipate the kind of scrutiny, dissection, and alternate-path-exploring appropriate to what Lizbeth Goodman has called “replay culture.” More troublingly, these textual attributes hide the mass-produced and -marketed commodity behind the artificially-generated underdog status of the cult object: in a kind of adaptive mimicry, the center pretends that it is the fringe. And audiences, without knowing they are doing so, complete the ideological circuit by acting as fans, even though the very notion of “fan” becomes insupportable once it achieves mainstream status. (In other words, to quote The Incredibles, if everyone’s a fan, then no one is.)

As evidence of the fanification of mainstream media, one need look no further than Alessandra Stanley’s piece in this Sunday’s New York Times. In her lead essay for a special section previewing the upcoming fall TV season, Stanley writes of numerous ways in which today’s TV viewer behaves, for all intents and purposes, like the renegade fans of yore — mapping, again, a minority onto a majority. Here are a few quotes:

… Viewers have become proprietary about their choices. Alliances are formed, and so are antipathies. Snobbery takes root. Preferences turn totemic. The mass audience splintered long ago; now viewers are divided into tribes with their own rituals and rites of passage.

A favorite show is a tip-off to personality, taste and sophistication the way music was before it became virtually free and consumed as much by individual song as artist. Dramas have become more complicated; many of the best are serialized and require time and sequential viewing. If anything, television has become closer to literature, inspiring something similar to those fellowships that form over which authors people say they would take to the proverbial desert island.

In this Balkanized media landscape, viewers seek and jealously guard their discoveries wherever they can find them.

Before the Internet, iPhones and flash drives, people jousted over who was into the Pixies when they were still a garage band or who could most lengthily argue the merits of Oasis versus Blur. Now, for all but hardcore rock aficionados, one-upmanship is more likely to center around a television series.

Stanley concludes her essay by suggesting that to not be a fan is to risk social censure — a striking inversion of the cultural coordinates by which geekiness was once measured (and, according to the values of the time, censured). “People who ignore [TV’s] pools and eddies of excellence do so at their own peril,” Stanley writes. “They are missing out on the main topic of conversation at their own table.” Her points are valid. I just wish they came with a little more sense of irony and even alarm. For me, fandom has always been about finding something authentic and wonderful amid the dross. Fandom is, among other things, a kind of reducing valve, a filter for what’s personal and exciting and offbeat. If mass media succeeds in de-massing itself, what alternative — what outside — is left?

Britney and Bush: The Comeback Kids

bush.jpg
britney.jpg


Last week saw an astonishing play of parallels across our media screens – twinned spectacles of attempted resurrection which, while occupying two very different sets of cultural coordinates, were perhaps not so distinct when examined closely. If it’s true that the personal is the political, then the popular must be political too; and it’s no great stretch to say that, in contemporary media culture as well as contemporary politics, the rituals of rejuvenation are more alike than dissimilar.

Britney Spears’s opening number at the MTV Video Music Awards on September 9 has by now been thoroughly masticated and absorbed by the fast-moving digestive system of blogosphere critics, with TMZ and Perez Hilton leading the way. I won’t belabor Britney’s performance here, except to note that when I, like much of the nation, succeeded in finding online video documentation the next day despite the best efforts of MTV and Viacom, it was just as fascinatingly surreal (or surreally fascinating) to watch as promised: a case where the hype, insofar as it was a product of derision rather than promotion, accurately described the event.

The other performance last week was, of course, George W. Bush’s September 13 address to the nation discussing the testimony of General David Petraeus before Congress. Again, there is little point in rehearsing here what Petraeus had to say about Iraq, or what Bush took away from it. For weeks, the mainstream media had been cynically predicting that nothing in the President’s position would change, and when nothing did, the outraged responses were as strangely, soporifically comforting as anything on A Prairie Home Companion. The jagged edges of our disillusion have long since been worn to the gentle contour of rosary beads, the dissonance of our angry voices merged into the sweet anonymous harmony of a mass chorus (or requiem).

What stands out to me is the perfect structural symmetry between Britney’s and Bush’s public offerings, not of themselves – I’m too much a fan of Jean Baudrillard to suppose there is any longer a “there” there, or that the individual corporeal truths of Britney and Bush did not long ago transubstantiate into the empty cartoon of the hyperreal – but of their acts. Both were lip-synching (one literally, one figuratively), and if Bush’s mouth matched the prerecorded lyrics more closely than did Britney’s, I can only ascribe it to his many more years of practice. I’ve always considered him not so much a man as a mechanism, a vibrating baffle like you’d find in a stereo speaker, emitting whatever modulated frequencies of conservative power and petroleum capital he was wired to. Unlike his father or the avatars of evil (Rove, Cheney, Rumsfeld) that surround him, Bush has never struck me as sentient or even, really, alive – he gives off the eerie sense of a ventriloquist’s dummy or a chess-playing robot. (Admittedly, he did at the height of his post-9/11 potency manifest another mode, the petulant child with unlimited power, like Billy Mumy’s creepy kid in the Twilight Zone episode “It’s a Good Life.”)

Britney, by contrast, seems much less in synch with the soundtrack of her life, which is what makes her so hypnotically sad and wonderfully watchable. I flinch away when words emanate from Bush the way I flinch when the storm of bats flies out of the cave early in Raiders of the Lost Ark; Britney’s clumsy clomping and uncertain smile at the VMAs are more like a series of crime-scene photos, slow-motion film of car wrecks, or the collapsing towers of the World Trade Center. Like any genuine trauma footage, you can’t take your eyes off it.

Where Britney and Bush came together last week was in their touching allegiance to strategies that worked for them in the past: hitting their marks before the cameras and microphones, they struck the poses and mouthed the words that once charmed and convinced, terrified and titillated. The magic has long since fled – you can’t catch lightning in a bottle twice, whether in the form of a terrorist attack or a python around the shoulders – but you can give it the old college try, and even if we’re repulsed, we’re impressed. But is it stamina or something more coldly automatic? Do we praise the gear for turning, the guillotine blade for dropping, the car bomb for exploding?

The “Clumsy Sublime” of Videogames

To anyone interested in how old image technologies date, I highly recommend Laura Mulvey’s short essay in the Spring 2007 Film Quarterly (Vol. 60, No. 3, p. 3) entitled “A Clumsy Sublime.” (The link is here, but you may need special privileges to access it; I’m writing this post in my office, from whose campus-tied network a vast infospace of journals and databases is transparently visitable.) Mulvey writes about rear-projection cinematography, that trick of placing actors in front of false backgrounds — think of scenes in films from the 1940s and 1950s in which characters drive a car, their cranking of the steering wheel bearing no relationship to the projected movie-in-a-movie of the road unspooling behind them. Nowadays these shots are for the most part instantly spottable for the in-studio machinations they are: nothing makes an undergraduate audience snicker more quickly than the sudden jump to a stilted closeup of an actor standing before an all-too-grainy slideshow. Mulvey deftly dissects the economic reasons that made rear-projection shots a necessity, but the real jackpot comes in the essay’s concluding paragraph, where she writes:

This paradoxical, impossible space, detached from either an approximation to reality or the verisimilitude of fiction, allows the audience to see the dream space of cinema. But rear projection renders the dream uncertain: the image of a cinematic sublime depends on a mechanism that is fascinating because of, not in spite of, its clumsy visibility.

With newly fine-grained methods of compositing images filmed in different places and at different times — the traveling matte was only the first step on a path to digital cut-and-paste cinema — rear projection is rarely seen anymore. But as Mulvey observes, “As so often happens with passing time, [rear projection’s] disappearance has given this once-despised technology new interest and poignancy.”

Her point is equally applicable to a range of filmmaking techniques, especially those of special effects and visual effects, which invariably pay for their cutting-edge-ness by going quickly stale. (Ray Harryhausen’s stop-motion animation, for example, “fools” no one now, but is prized, indeed cherished, by aficionados.) But I’d like to extend the category of the clumsy sublime to another medium, the videogame, which has evolved much more rapidly and visibly than cinema: under the speeding metronome of Moore’s Law, gaming’s technological substrate is essentially reinvented every couple of years. Games from 2000 can’t help but announce their relative primitiveness, which in turn looks cutting-edge next to games of 1995, and so on and so on back to the circular screen of the PDP-1 and the vector-drawn vessels of 1962’s Spacewar! Yet we don’t disdain old games as hopelessly unsophisticated; instead, a lively retrogame movement preserves and celebrates the pleasures of 8-bit displays and games that fit into 4K of RAM.

Two examples of videogames’ clumsy sublimity can be found on these marvelous websites. Rosemarie Fiore is an artist whose work includes beautiful time-lapse photographs of classic videogames of the early 1980s like Tempest, Gyruss, and Qix. Diving beneath the surface of the screen, Ben Fry’s distellamap traces the calls and returns of code, the dancing of data, in Atari 2600 games. What I like about these projects is that they don’t just fetishize the old arcade cabinets, home computers and consoles, cartridges and software: rather, they build on and interpret the pleasures of gaming’s past while staying true to its graphic appeal and crudely elegant architecture — the fast-emerging clumsy sublime of new interactive media.

To Capture A Giant

In reading up on Harry Potter and the Order of the Phoenix (David Yates, 2007), I was intrigued to note the coining of a new term for the creation of CG (computer-generated) performance in movies: soul capture. It’s the brainchild of Double Negative, a London-based visual effects company that was one of several involved in the most recent Harry Potter film. Double Negative worked on approximately 950 of Phoenix’s 1400 effects shots, categorized by “geographical area”:

Hogwarts, both inside and out; the Forbidden Forest, where Harry and his friends encounter Grawp, Hagrid’s teenage half brother; the Hall of Prophecy, a mysterious and cavernous storage space in the Ministry of Magic and the Veil Room, which lies at the very heart of the Wizarding world.

(Interesting in itself is this mapping of labor, with its strong suggestion that a movie’s diegesis or storyworld is increasingly conceptualized in spatial terms, more akin to videogame environments and theme parks than to linear narrative – but then again, the linearity of cinematic storytelling has always been an ex post facto phenomenon, with films shot out of order for economic reasons and arranged into apparent sequentiality only in editing.)

Anyway, soul capture refers to the particular breed of motion capture with which Double Negative took elements of the performance provided by Tony Maudsley and used them to “drive” the performance of the giant CG character Grawp, the 16-foot-tall half-brother of Hagrid (Robbie Coltrane).

Now I have to admit that the Grawp sequences were easily my least favorite parts of Phoenix in both its print and film incarnations. In the book, Grawp struck me as an unnecessary and overly cute bit of comic relief, an uninspired attempt to open up and explore Hagrid’s already perfectly realized world. In the movie, Grawp’s essential flimsiness was amplified by visual effects that simply didn’t work. In a movie filled with otherwise bold and imaginative FX, Grawp doesn’t look or move right: with his stretchy, misshapen features, he comes across as a regular-sized man who’s been massively inflated with air. Which, in a certain sense, he is: Maudsley’s “soul-captured” performance has been mapped onto the surface of a CG balloon, then integrated with surprising clumsiness into settings with other actors. Of course, this clumsiness could be explained as a function of Grawp’s half-wittedness; like Gollum (Andy Serkis) in The Lord of the Rings (Peter Jackson, 2001-2003), an FX-driven character’s eccentricity/monstrosity seems to justify a lack of grace and subtlety in its realization. (This is not to knock Gollum, whom I truly dug, but to highlight a particular faultline in state-of-the-art CG between the cartoonish and the mimetic, and the psychological mechanisms by which that faultline is ideologically negotiated and naturalized.)

Grawp’s failings aside, I’m struck by the rapid evolution of nomenclature in what we might call “acting effects.” In the late 1990s, around the time that Final Fantasy: The Spirits Within (Hironobu Sakaguchi & Moto Sakakibara, 2001) was gearing up, the keyword was motion capture. In behind-the-scenes materials, we were treated to the sight of actors pantomiming on blank soundstages, wearing leotards crisscrossed with grids of tiny spheres – markers providing reference points for the computer, and later animators, to use in reconstructing the performance digitally. In the early 2000s, John Gaeta and the effects crew at ESC were using universal capture to scan Keanu Reeves and Hugo Weaving in ultra-high-resolution for The Matrix Reloaded and Revolutions (The Wachowski Brothers, 2003). By the time of The Polar Express (Robert Zemeckis, 2004), the keyword was performance capture, and markers studded Tom Hanks’s face and eyelids. Now we have soul capture, presumably expressing some further evolution of capture modalities, with a concomitantly finer degree of resolution.

The point, I think, is not whether any of these tools is “better” than any other, or even especially revolutionary as a shift in filmmaking methods. As André Bazin so valuably observed, the camera itself is a primary “capture technology,” one that marks an irrevocable break with previous methods of representing reality. If anything, the swooning (and market-driven) discourse around new forms of capture signals a movement sideways into painting and illustration, for what’s really being taken from Maudsley or Serkis is components for future blending, ingredients in a recipe of illusion.

Hence David S. Cohen gets it wrong when he writes that “Actors give films their humanity and heart. Visual effects let the audience see things that no camera could capture. So the battle lines [are] drawn: soul vs. spectacle.” Movie performances have always been based on the “spectacle” of the actor’s preserved essence, whether in a rubber Godzilla suit, in Jack Pierce’s makeup for Boris Karloff in Frankenstein (James Whale, 1931), or for that matter in the screen personae of Clark Gable and Jimmy Stewart. “Humanity and heart” are reduced to a two-dimensional skin of photons in the final instance, an insurmountable layer of dead screen severing us from living performance – even as it brings that performance magically to life for us.

The One True Enterprise

Thanks to an incredibly generous gift certificate from some friends, my wife and I spent last weekend at a ritzy hotel in Washington, DC – where the 100-degree temperatures made us quite happy to stay inside, work out in the fitness center, order room service, and watch TV.

But the one time we had to venture outside was to visit the National Air and Space Museum, my favorite spot in Washington and, perhaps, the greatest place in the known universe. Ever since I first visited DC, at nine or ten years old, I have loved the NASM: the satellites suspended from the ceiling, the Imax theater, the giant Robert McCall mural, the silver packets of freeze-dried ice cream, and of course the full-size Skylab sitting on its end like a small cylindrical skyscraper, a constant line of people threading through its begadgeted, submarinelike innards.

I’ve been to the museum several times, so it was a shock to come face-to-face with one of its most famous artifacts, and realize that – somehow – I’d forgotten it was there. It used to hang gloriously over the entrance to some special exhibit (Spaceflight in Science Fiction, maybe?), which has now been replaced by a room devoted to the role of computers in aeronautical design and engineering. As for the marvelous object, it has moved to the lower floor of the gift shop, where it sits toward the back in its own plexiglass box, big enough to hold a Hummer…

Enterprise at NASM - side view

This is the original miniature of the U.S.S. Enterprise, used for optical effects shots in the first series of Star Trek (1966-1969). It’s been part of the Smithsonian collection since 1974, and has undergone several modifications in that time, including a new “mosaic” paint job to simulate square hull plating. (This concept, introduced with the starship’s redesign for Star Trek: The Motion Picture [Robert Wise, 1979], has since become standard for Trek’s vessels, reflected in the numerous sequels and series that constitute the franchise.)

After gazing reverently at the Enterprise for a while, I dragged my wife over to see it. I explained to her – feeling somewhat like a goofball – that this was not just a replica or facsimile, but the actual shooting model that went before the cameras of the Howard Anderson effects company, to which Desilu Studios farmed out its optical work. (Actually, the miniature made the rounds of several FX houses, including Film Effects of Hollywood, the Westheimer Company, and Van Der Veer Photo Effects – Trek demanding a particularly high number of expensive and time-consuming optical effects.) Eleven feet long and weighing 200 pounds, the miniature is made of poplar wood, vacuformed plastic, and sheet metal. It was one of three Enterprises used in shooting (the others were a small balsa-wood version that appeared in the “swish” flybys of the title sequence, and a three-foot version used to show the ship in the far distance). It was designed by Walter “Matt” Jefferies in consultation with series creator Gene Roddenberry, and built by Richard C. Datin, Jr.

Enterprise lofted

Enterprise in studio

The miniature undeniably has a sad aspect to it now. Consigned to what is essentially the museum basement, it sits by a shelf of books about Star Trek and Star Wars like an aging carny hawking its wares. Once lit from within by a complex electrical system of lights and relays, it is now shadowed and gloomy, its sepulchral air made more poignant by the racks of day-glo flags, posters, and coffee mugs that surround it. (In this, the back corner of the gift shop, the air-and-space motif gives way to a randomly-themed grab bag of DC memorabilia: Washington Monument t-shirts, Abraham Lincoln yo-yos.)

Yet despite or perhaps because of the diorama of motley neglect in which I encountered it, the Enterprise miniature possesses an historical solidity, a gravity classifying it as the best kind of museum exhibit: one that belongs simultaneously to past and present, functioning as a material bridge between one moment in time and another. For as I circled the plexiglass case, snapping pictures with my digital camera, I realized that the real magic was not in seeing the Enterprise with my own eyes. It was, instead, in the act of capturing its image — of being physically present at one node of a visual apparatus, framing the model in my viewfinder and recording the light rays reflecting off its surface. In doing so, I fleetingly occupied the position of the original camera operators at Howard Anderson and Van Der Veer in Hollywood in the late 1960s, whose daily job it was to line up and shoot this structure of wood and plastic.

Enterprise at NASM

Enterprise TV capture

This, I suggest, is the real experience of the Enterprise. As a viewer growing up, watching the show on TV, I saw the starship only in its final composited form – as an “actual” vessel in space – experiencing a play-along immediacy that is the basic perceptual displacement necessary to the operation of television, movies, and videogames (we can only believe what we are seeing if we disbelieve in the fact of its having-been-made). Photographing the model at the National Air and Space Museum on Saturday, I experienced a flash of disbelief’s opposite, what I can only call mediacy, bringing layers of technology and labor – of historical material practice – back into the picture. It was like going to work and going to church at the same time, like punching a timeclock that is also a reliquary holding the bones of a saint. It was great; I’ll never forget it.

Bob at the NASM