
The fund drives that biannually interrupt the flow of intelligent goodness from my local NPR station like to trumpet the power of “driveway moments” — stories so called because when they come on the radio, you stay in your car, unable to tear yourself away until they’re finished. The term has always interested me because it so bluntly merges the experience of listening with the act of driving: treating the radio as synecdoche for the car, or maybe the other way around (I can never keep my metonymies straight).

Anyway, I had my own driveway moment tonight, when All Things Considered broadcast a story on the vidding movement. Of course, fans have been remixing and editing cult TV content into new, idiosyncratically pleasurable/perverse configurations for decades, and the fact that mainstream media are only now picking up on these wonderful grassroots creations and the subcultural communities through which they circulate is sad proof of a dictum I learned from my long-ago undergraduate journalism professor: by the time a cultural phenomenon ends up on the cover of Newsweek, it’s already six months out of date.

Credit to ATC, though, for doing the story, and for avoiding the trap of talking about vidding as though it were, in fact, something new. I did tense up when the reporter Neda Ulaby used male pronouns to refer to one CSI vidder — “the vidder wants to say something about the dangers faced by cops on the show, and he’s saying it by cutting existing scenes together” — thinking it surely incorrect, since the vidding community is predominantly female. Oh, great, I thought: yet another rewriting of history in which a pointedly masculine narrative of innovation and authorship retroactively simplifies a longer and more complex tradition developed by women. (Yes, I do occasionally think in long sentences like that.)

But then the piece brought in Francesca Coppa, and everything was OK again. Coppa, an associate professor of English and the director of film studies at Muhlenberg College, is herself a vidder as well as an accomplished scholar of fandom; I had the pleasure of hearing her work at MIT’s Media in Transition conference in 2007. With her input, the NPR story manages to compress a smart and fairly accurate picture of vidding and fandom into a little under six minutes — an impressive feat.

The funny thing is that the little flash of anxiety and defensiveness I felt when it seemed like NPR would “get it wrong” was like a guilty echo of the way I’ve “gotten it wrong” over the years. My own work on Star Trek fandom focuses on a variety of fan creativity based on strict allegiance to canon, in particular the designed objects and invented technologies that constitute the series’ setting and chronology. I call it, variously, hardware fandom or blueprint culture, and I’ve always conceptualized it as a specifically male mode of fandom. It’s the kind of fan I once was — hell, still am — and in my initial exuberance to explore the subject years ago, I remember thinking and writing as though feminine modes of fandom were mere stepping stones toward, really a pale adjunct to, some more substantive, engaged, and commercially complicit fandom practiced by men. I’ve learned better since, largely through interactions with female friends and colleagues in dialogues like the gender-in-fandom debates staged by Henry Jenkins in summer 2007.

For fear of caricaturing my own and others’ positions, I’ll spare you further mea culpas. Suffice to say that my thinking on fandom has evolved (let’s hope it continues to do so!). I am learning to prize voices like Coppa’s for prompting me to revisit and reassess my own too-easy understandings of fan practices as something I can map and interpret based solely on my own experience: valid enough as individual evidence, I suppose, but curdling into something more insidious when generalized — a male subject’s unthinking colonization of territory already capably inhabited.

Quick Thoughts on the Oscars

Last night’s Academy Awards ceremony was more enjoyable than I expected, though it’s always this way: each year I watch with a kind of low-level awe, impressed not only by the show’s general lack of suckiness but by how the pageantry, with its weird mix of cheesy spectacle and syrupy sentimentality, manages to distill that specific tincture of pleasure that is the movies’ signature structure of feeling. Mind you, I’m not talking about the cinematic medium’s larger essence (its extremes of Godardian play and Tarkovskian tedium) but its particular, peculiar manifestation via the churning industrial engine of Hollywood, so helplessly eager to please and entertain that it puts us in the position of that poor guy in Brainstorm (Douglas Trumbull, 1983) — the one hooked up to an orgasm machine that nearly lands him in a coma.

Next day, the script is always the same: I listen to NPR and read the Times and discover that, in fact, the show was a big, boring disappointment, full of the same old lame old. (Here too the ceremony replicates the experience of filmgoing, the light of day too often revealing the seams and strings that ruin the previous night’s magic.) So before I proceed to my necessary disillusionment, that temporary discharge of cynicism by which I prepare the poles of my pleasure centers for their next jolt, a few points of interest I noticed in the 81st Awards:

1. Popularity and its discontents. At several points during the broadcast — Hugh Jackman’s great opening number, Will Smith’s comments following the action-movie montage — we were reminded that the movies everyone went to see (Iron Man, The Dark Knight) were not necessarily the ones that received critical accolades and, by definition, Academy attention. In fact, an allergically inverse relation seems to apply: the more popular a film is, the less likely it is to receive any kind of respect, save for its technical components. That’s why crowdpleasers are so often relegated to categories like Editing, Sound Mixing, and, of course, Visual Effects. (The one place where popular, profitable movies are granted the privilege of existing as feature-length artworks, rather than Frankensteinian assemblages of FX proficiency, is in the category of Animated Feature, where Bolt, WALL-E, and Kung Fu Panda are rightly celebrated as marvels.) Last night, the Academy got called on its strange standards, with Jackman asking why Dark Knight hadn’t been nominated and Smith dryly remarking that action movies actually have audiences. Only he called them “fans” — and this year, it seems, fans are realizing that they are audiences, perhaps the only real audiences, and they’re rising up to demand equal representation at the spotlit top of the cultural hierarchy.

2. Mashups and remixes. Others, I’m sure, will have much to say about Slumdog Millionaire’s sweep of the major awards and what this indicates about the globalization of Hollywood. The mingling of film traditions and industries seems to me an epiphenomenal manifestation of a more atomistic and pervasive transformation, over the last few years, into remix culture: we live in the era of mashup and migration, a mixing and matching that produced the wonderful, boundary-crossing hybrid of Slumdog (a media artifact, let us note, that is as much about television and film’s mutual remediation as it is about Bollywood and Hollywood). This was apparent at a formal level in the composition of the Awards’ Best Original Song performances, which garishly and excitingly wrapped the melodies of “Down to Earth,” “O Saya,” and “Jai Ho” around each other.

3. The Curious Case of Brad Pitt. Although he didn’t win, Pitt’s nomination for Best Actor in a Leading Role marks the continuing erosion of prejudices against — and concomitant trend toward full acceptance of — what I have elsewhere termed blended performance: the creation of dramatic characters through special and visual effects. More precisely, blended performance involves acting that depends vitally on visual machination: think Christopher Reeve in Superman, Jeremy Irons in Dead Ringers, Andy Serkis in The Lord of the Rings (and for that matter, Serkis as the big ape in King Kong). Each of these characters came alive not simply through their anchoring in particular bodies, faces, voices, and dramatic chops, but through the augmentation of those traits with VFX methods, from bluescreen traveling mattes to motion capture and animation. I’m not saying there’s a strict dividing line here between “real” and “altered” performance; every star turn is a spectacular technology in its own right. But good acting is also a category that prizes authenticity; we do not want to be reminded that we are being tricked. Blended performances don’t often get Oscar attention, but when they do, there’s a historical bias toward special (on-camera) effects versus postproduction contributions: John Hurt received a Best Actor nomination for The Elephant Man (1980), a film he spent buried under pounds of prosthetic appliances. In Benjamin Button, of course, Pitt wore plenty of makeup; but a large portion of his performance came about through an intricate choreography of matchmoving and CG modeling. As digital phenomenologies become the inescapable norm, we’ll see more and more legitimacy accorded to blended performance, a trend that will dovetail, I expect, with the threshold moment at which a CG animated film gets nominated for Best Picture alongside live-action features. Don’t laugh: many thought it would happen with WALL-E, and it’s in the taxonomies of AMPAS that such profound distinctions about the ontologies of cinema and technology get hammered out.

Visible Evidence

From the Department of Incongruous Juxtapositions, this pair of items: on the right, the photograph of Rihanna following her beating by Chris Brown; on the left, the supposed image of Atlantis culled from Google Earth.

Let me immediately make clear that I am in no way calling into question the fact of Rihanna’s assault or equating its visual trace — its documentary and moral significance — with the seeming signs of ancient civilization read into a tracework of lines on the ocean floor. The former is evidence of a vicious crime, brutal as any forensic artifact; the latter a fanciful projection, like the crisscrossing canals that generations of hopeful astronomers imagined onto the surface of Mars.

If there is a connection between these images, it is not in their ontological status, which is as clean an opposition as one could hope for, but in their epistemological status: the way they localize larger dialectics of belief and uncertainty, demonstrating the power of freely circulating images to focus, lenslike, the structures of “knowledge” with which our culture navigates by day and sings itself to sleep at night.

In LA a legal investigation rages over who leaked the photo of Rihanna, while across the nation a million splinter investigations tease out the rights and wrongs of TMZ’s going public with it. Does one person’s privacy outweigh the galvanizing, possibly progressive impact of the crime photo’s appearance? Does the fight against domestic and gendered violence become that much more energized with each media screen to which Rihanna’s battered face travels? What happens now to Rihanna, who (as my wife points out) faces the impossible choice of disavowing what has happened and going on with her career in the doomed fashion of Whitney Houston, or speaking out and risking the simultaneous celebration and stigmatization we attach to celebrities whose real lives ground the glitter of their fame? If nothing else, Rihanna has been forcefully resignified in the public eye; whatever position she now adopts relative to the assault will bring judgment upon her — perhaps the most unfair outcome of all.

Meanwhile, the purported image of Atlantis manifests a familiar way of seeing and knowing the unseeable and unknowable. It joins, that is, a long list of media artifacts poised at the edge of the possible, tantalizing with their promise of rendering in understandable terms the tidal forces of our unruly cultural imaginary: snapshots of the Loch Ness Monster, plaster castings of Bigfoot’s big footprints, grainy images of UFOs glimpsed over backyard treelines, Abraham Zapruder’s flickering footage of JFK’s assassination, blown-up photos of the jets hitting the World Trade Center (bearing circles and arrows to indicate why the wrongly shaped fuselages disprove the “official story”). From creatures to conspiracies, it’s a real-world game of Where’s Waldo, based on fanatically close readings of evidence produced through scientific means that pervert science’s very tenets. Fictions like Lost provide sanitized, playful versions of the same Pynchonean vertigo: the spiraling, paranoid sense that nothing is as it seems — a point ironically proved with recourse to cameras as objective witnesses. In the case of Google’s Atlantis, that “camera” happens to be a virtual construct, an extrapolated map of the ocean floor. The submerged city itself, Google says, is an artifact of compositing and compression: the lossy posing as the lofty, a high-tech updating of lens flares, cosmic rays, and weather balloons too distant for the lens to resolve into their true natures.

In both cases, something submerged has been brought to light, touching off new rounds of old debates about what really happened, what’s really out there. With depressing speed, internet message boards filled with derisive reactions to Rihanna’s photo. “She looks better that way”; “That’s what happens when a woman won’t shut her mouth”; and perhaps most disheartening, “It’s not that bad.” A chorus of voices singing a ragged melody of sexism, racism, and simple hard-heartedness. Let’s hope we have the collective sense to respond correctly to these two images, separating tragic fact from escapist fiction.

Digital Dogsbodies

It’s funny — and perhaps, in the contagious-episteme fashion of Elisha Gray and Alexander Graham Bell filing patents for the telephone on the very same date, a bit creepy — that Dan North of Spectacular Attractions should write on the topic of dog movies while I was preparing my own post about Space Buddies. This Disney film, which pornlike skipped theaters to go straight to DVD and Blu-ray, is one of a spate of dog-centered films that have lately become a crowd-pleasing staple. Dan poses the question, “What is it about today that people need so many dog movies?” and goes on to speculate that we’re collectively drowning our sorrows at the world’s ugliness with a massive infusion of cute: puppyism as cultural anodyne.

Maybe so. It seems to me, though, that another dynamic is in operation here — and with all due respect to my fellow scholar of visual effects, Dan may be letting the grumbly echoes of the Frankfurt School distract him from a fascinating nexus of technology, economics, and codes of expressive aesthetics driving the current crop of cinematic canines. Simply put, dogs make excellent cyberstars.

Think about it. Nowadays we’re used to high-profile turns by hybrids of human and digital performance: Angelina Jolie as Grendel’s gold-plated mother in Beowulf, Brad Pitt as the wizened baby in The Curious Case of Benjamin Button. (Hmm, it only now strikes me that this intertextual madonna-and-child are married in real life; perhaps the nuclear family is giving way to the mo-capped one?) Such top-billed performances are based on elaborate rendering pipelines, to be sure, but their celebrity/notoriety is at least as much about the uniquely sexy and identifiable star personae attached to these magic mannequins: a higher order of compositing, a discursive special effect. It takes a ton of processing power to paint the sutured stars onto the screen, and an equivalent amount of marketing and promotion — those other, Foucauldian technologies — to situate them as a specific case of the more general Steady March Toward Viable Synthespianism. Which means, in terms of labor and capital, they’re bloody expensive. Mountains of data are moved in service of the smallest details of realism, and even then, nobody can get the eyes right.

But what of the humble cur and the scaled-down VFX needed to sell its blended performance? The five puppy stars of Space Buddies are real, indexically photographed dogs with digitally retouched jaw movements and eyebrow expressions; child voice actors supply the final, intangible, irreplaceable proof of character and personality. (To hell with subsurface skin scatter and other appeals to our pathetically seducible eyes; the real threshold of completely virtual performance remains believable speech synthesis.) The canine cast of Beverly Hills Chihuahua, while built on similar principles, are ontologically closer to the army of Agent Smiths in The Matrix Reloaded’s burly brawl — U-Capped fur wrapped over 3D doll armatures and arrayed in Busby-Berkeleyish mass ornament. They are, in short, digital dogsbodies, and as we wring our hands over the resurrection of Fred Astaire in vacuum-cleaner ads and debate whether Ben Burtt’s sound design in WALL-E adds up to a Best Actor Oscar, our screens are slowly filling with animals’ special-effects-driven stardom. How strange that we’re not treating them as the landmarks they are — despite their immense profitability, popularity, and paradoxical commonplaceness. It’s like Invasion of the Body Snatchers, only cuddly!

I don’t mean to sound alarmist — though writing about the digital’s supposed incursion into the real always seems to bring out the edge in my voice. In truth, the whole thing seems rather wonderful to me, not just because I really dug Space Buddies, but because the dogsbody has been around a long time, selling audiences on the dramatic “realism” of talking animals. From Pluto to Astro, Scooby-Doo to Rowlf, Lady and the Tramp to K-9, Dynomutt, and Gromit, dogs have always been animated beyond their biological station by technologies of the screen; we accept them as narrative players far more easily than we do more elaborate and singular constructions of the monstrous and exotic. The latest digital tools for imparting expression to dogs’ mouths and muzzles were developed, of all places, in pet-food ads: clumsy stepping stones that now look as dated as poor LBJ’s posthumous lip-synching in Forrest Gump.

These days it’s the rare dog (or cat, bear, or fish) onscreen whose face hasn’t been partially augmented with virtual prosthetics. Ultimately, this is less about technological capability than the legal and monetary bottom line: unlike human actors, animal actors can’t go ballistic on the lighting guy, or write cumbersome provisions into their contracts to copyright their “aura” in the age of mechanical reproduction. Our showbiz beasts exist near the bottom of the labor pool: just below that other mass of bodies slowly being fed into the meat-grinder of digitization, stuntpeople, and just above the nameless hordes of Orcs jam-packing the horizon shots of Lord of the Rings. I think it was Jean Baudrillard, in The System of Objects, who observed that pets hold a unique status, poised perfectly between people and things. It’s a quality they happen to share with FX bodies, and for this reason I expect we’ll see menageries in the multiplex for years to come.

Requiem for a Craptop

Today I said goodbye to the MacBook that served me and my wife for almost three years — served us tirelessly, loyally, without ever judging the uses to which we put it. It was part of our household and our daily routines, funneling reams of virtual paper past our eyeballs, taking our email dictation, connecting us with friends through Facebook and family through Skype. (Many was the Sunday afternoon I’d walk the MacBook around our house to show my parents the place; I faced into its camera as the bedrooms and staircases and kitchens scrolled behind me like a mutated first-person shooter or a Kubrickian steadicam.) We called it, affectionately, the Craptop; but there was nothing crappy about its animal purity.

It’s odd, I know, to speak this way about a machine, but then again it isn’t: I’m far too respectful of the lessons of science fiction (not to mention those of Foucault, Latour, and Haraway) to draw confident and watertight distinctions between our technologies and ourselves. My sadness about the Craptop’s departure is in part a sadness about my own limitations, including, of course, the ultimate limit: mortality. Even on a more mundane scale, the clock of days, I was unworthy of the Craptop’s unquestioning service, as I am unworthy of all the machines that surround and support me, starting up at the press of a button, the turn of a key.

The Craptop was not just a machine for the home, but for work: purchased by Swarthmore to assist me in teaching, it played many a movie clip and PowerPoint presentation to my students, flew many miles by airplane and rode in the back seat of many a car. It passes from my world now because the generous College has bought me a new unit, aluminum-cased and free of the little glitches and slownesses that were starting to make the Craptop unusable. It’s a mystery to me why and how machines grow old and unreliable — but no more, I suppose, than the mystery of why we do.

What happens to the Craptop now? Swarthmore’s an enlightened place, and so, the brand assures me, is Apple: I assume a recycling program exists to deconstruct the Craptop into ecologically neutral components or repurpose its parts into new devices. In his article “Out with the Trash: On the Future of New Media” (in Residual Media, ed. Charles R. Acland, University of Minnesota Press, 2007), Jonathan Sterne writes eloquently and sardonically of the phenomenon of obsolete computer junk, and curious readers are well advised to seek out his words. For my part, I’ll just note my gratitude to the humble Craptop, and try not to resent the newer model on which, ironically, I write its elegy: soon enough, for it and for all of us, the end will come, so let us celebrate the devices of here and now.

The End of the World (As We Know It)

Sometimes the metaphor is so perfect it seems the gods of discourse and simulation must have conspired to produce it. The video clip now spreading across the internet — in the Huffington Post’s words, “like wildfire” — visualizes not only the earth’s destruction by asteroid but also the global proliferation of the clip itself, a CG cartoon leaping from one link to another in a contagious collective imagining of apocalypse:

The video has apparently been in existence at least since 2005, when (according to my quick-and-dirty sleuthing) it aired as a segment on the Discovery Channel series Miracle Planet. Only recently — perhaps after being contextually unmoored by the swapping of its narration for a Pink Floyd soundtrack — has it “gone viral,” scorching the graphical territories that have grown around our planet like a second skin since the dual foundings in the 1960s of the internet (née ARPAnet) and the computer-graphics movement whose granddaddy was Ivan Sutherland. The reasons for the asteroid clip’s sudden popularity are, I suspect, both too mundane and too profound ever to explain to anyone’s satisfaction: on one level, it’s about the idle clicking of links and impulsive forwarding of attachments that has become the unconscious microlabor of millions of us who believe we are playing as we work (when, in fact, we are working as we play); on another level, it’s about 9/11, The Dark Knight, and conflict in the Middle East. Tipping points, for all their blunt undeniability, remain enigmatic things at heart. Jurassic Park’s Ian Malcolm (Jeff Goldblum) dripped water onto the back of Ellie Sattler’s hand to illustrate nonlinearity and strange attractors; I submit to you “Chocolate Rain,” Twilight, and now a video, running time just under five minutes, that renders in lush but elegant terms the immolation of our homeworld.

I’m not about to get all moralistic on you and suggest there’s something unhealthy about this spectacle, or the way we’re passing it eagerly from platform to platform like a digital hot potato. It is, in a word, supercool, especially when the continents start peeling up like the waxy bacon grease to which I applied my spatula after an indulgent Christmas breakfast last week. In its languid, extended takes it recalls the Spider-Man sequence that Dan North and I recently kicked around, and in its scalar play — a square inch or two of screen display windowing outward onto the collision of planetary bodies — it’s like a peepshow of the gods, the perverse cosmos literally getting its rocks off, caressing earth and stone together like Ben Wa balls. The clip is mercifully blind to the suffering of life on the ground (or for that matter in the air and sea); its only intimations of pain are displaced, oddly, onto architecture, with Big Ben and the Parthenon in flames.

What the clip brings to mind most powerfully, though, is a similar exercise in worldshaping now more than 25 years old: the Project Genesis sequence in Star Trek II: The Wrath of Khan (Nicholas Meyer, 1982). That brilliant, franchise-saving movie revolved around an experimental device called Genesis, a high-tech MacGuffin that motivated the piratical faceoff between Admiral James T. Kirk and Khan Noonien Singh (is my geek showing?) as well as some beautiful matte paintings, a cloud-tank nebula, and a thrilling countdown sequence scored by James Horner before his compositions became simulacra of themselves.

But all of the Genesis device’s visual and auditory puzzle-pieces would not have cohered as potently in my imagination if not for the way it is introduced early in the film, by a short CG sequence showing the effect that Genesis would have on a lifeless planet:

Several things tie the Genesis sequence to the asteroid-strike video: formally, each begins by tracking inward on a celestial body and ends with a pullback to show the world turning serenely in space; the midsection consists of a sweeping orbital arc, dipping down to the level of mountains, forests, and oceans before lifting back into the stratosphere. Most importantly, each details the utter transformation of a planet, albeit in opposite directions: Genesis brings, in the words of Carol Marcus (Bibi Besch), “life from lifelessness,” while the Discovery Channel’s asteroid inverts the dream of creation, showing its necessary, extinguishing counterpole. The difference between them reflects, perhaps, a shift in how we imagine the possibilities of technology through science fiction: Star Trek‘s utopian vision has given way to the more shadowed and conspiratorial nihilism of Battlestar Galactica (a series that begins in the fires of nuclear armageddon).

But there is also a story here of computer graphics and how they have, for all their evolution, stayed much the same in their aesthetics and predilections. The Genesis sequence was a groundbreaking piece of work from the nascent CGI department at Industrial Light and Magic — a proof-of-concept exercise in ray tracing and fractal modeling by artists and equipment that would soon spin off into Pixar. ILM founder George Lucas, obsessed with extending his authorial control through the development of digital production tools like SoundDroid and EditDroid (forerunner of Avid and nonlinear editing systems), let the future juggernaut slip through his fingers, only later realizing the degree to which CGI would revolutionize filmmaking by merging the elastic, constructive capabilities of animation with the textured realism of live-action. In Pixar’s most recent work — the acclaimed WALL-E, whose glories I’ve been revisiting on my Blu-ray player — one can see the same hunger to take worlds apart in favor of building new ones, an awareness of how closely, in the world of visual-effects engineering, creation and destruction intertwine. Like other films that have captured my attention on the blog this year (I Am Legend, Planet of the Apes), WALL-E serves up apocalypse as spectacle, a tradition that continues (proudly, perversely) with the asteroid video.

Happy new year to all, and best wishes for 2009!

Getting Granular with Setpieces

Dan North has published an excellent analysis of the Sandman birth sequence in Spider-Man 3, using this three-minute shot as a springboard for a characteristically deft dissection of visual-effects aesthetics and the relationship between CG and live-action filmmaking. His concluding point, that CGI builds on rather than supplants indexical sensibilities — logically extending the cinematographic vocabulary rather than coining utterly alien neologisms — is one that is too often lost in discussions that stress digital technology’s alleged alterity to traditional filmic practices. I’d noticed the Sandman sequence too; in fact, it was paratextually telegraphed to me long before I saw the movie itself, in reviews like this from the New York Times:

… And when [The Sandman] rises from a bed of sand after a “particle atomizer” scrambles his molecules, his newly granulated form shifts and spills apart, then lurches into human form with a heaviness that recalls Boris Karloff staggering into the world as Frankenstein’s monster. There’s poetry in this metamorphosis, not just technological bravura, a glimpse into the glory and agony of transformation.

I don’t have anything to add to Dan’s exegesis (though if I were being picky, I might take issue with his suggestion that the Sandman sequence simply could not have been realized without computer-generated effects; while it’s true that this particular rendering, with its chaotic yet structured swarms of sand-grains, would have taxed the abilities of “stop-motion or another kind of pro-filmic object animation,” the fact is that there are infinitely many ways of designing and staging dramatic events onscreen, and in the hands of a different creative imagination than Sam Raimi and his previz team, the Sandman’s birth might have been realized in much more allusive, poetic, and suggestive ways, substituting panache for pixels; indeed, for all the sequence’s correctly lauded technical artistry and narrative concision, there is something ploddingly literal at its heart, a blunt sense of investigation that smacks of pornography, surveillance-camera footage, and NASA animations — all forms, incidentally, that share the Spider-Man scene’s unflinching long take).

But my attention was caught by this line of Dan’s:

This demarcation of the set-piece is a common trope in this kind of foregrounded spectacle — it has clear entry and exit points and stands alone as an autonomous performance, even as it offers some narrative information; It possesses a limited colour scheme of browns and greys (er … it’s sand-coloured), and the lack of dialogue or peripheral characters further enforces the self-containment.

I’ve long been interested in the concept of the setpiece, that strange cinematic subunit that hovers somewhere between shot, scene, and sequence, hesitating among the registers of cinematography, editing, and narrative, partaking of all while being confinable to none. A setpiece can be a single unbroken shot, from the relatively brief (the Sandman’s birth or the opening of Welles’s Touch of Evil) to the extravagantly extended (the thirteen-minute tracking shot with which Steadicam fetishist Brian De Palma kicks off Snake Eyes). But we’re perhaps most familiar with the setpiece as constituted through the beats of action movies: hailstorms of tightly edited velocity and collision like the car chases in Bullitt or, more humorously, Foul Play; the fight scenes and song-and-dance numbers that act as structuring agents and generic determinants of martial-arts movies and musicals respectively; certain “procedural” stretches of heist, caper, and espionage films, like the silent CIA break-in of Mission: Impossible (smartly profiled in a recent Aspect Ratio post). Setpieces often occur at the start of movies or near the end as a climactic sequence, but just as frequently erupt throughout the film’s running time like beads on a string; Raiders of the Lost Ark is a gaudy yet elegant necklace of such baubles, including one of my favorites, the “basket chase” set in Cairo. Usually wordless, setpieces tend to feature their own distinctive editing rhythms, musical tracks, and can-you-top-this series of gags and physical (now digital) stunts.

Setpieces are, in this sense, like mini-movies embedded within bigger movies, and a biological metaphor might be the best way to describe their temporal and reproductive scalability. Like atavistic structures within the human body, setpieces seem to preserve long-ago aesthetics of early cinema: their logic of action and escalation recalls Edison kinetoscopes and Keystone Cops chases, while more hushed and contemplative setpieces (like the Sandman birth) have about them something of the arresting stillness and visual splendor of the actualité. Or to get all DNAish on you, setpieces are not unlike the selfish genes of which Richard Dawkins writes: traveling within the hosts of larger filmic bodies, vying for advantage in the cultural marketplace, it is actually the self-interested proliferation of setpieces that drives the replication — and evolution — of certain genres. The aforementioned martial-arts movies and musicals, certainly; but also the spy movie, the war and horror film, racing movies, and the many vivid flavors of gross-out comedy. The latest innovation in setpiece genetics may be the blockbuster transmedia franchise, which effectively “brands” certain sequences and delivers them in reliable (and proprietary) form to audiences: think of the lightsaber duels in any given phenotypic expression of Star Wars, from film to comic to videogame.

On an industrial level, of course, setpieces also signal constellations of labor that we can recognize as distinct from (while inescapably articulated to) the films’ ostensible authors. One historical instance of this is the work of Slavko Vorkapich, renowned for the montages he contributed to other people’s movies — so distinctive in his talents that to “Vorkapich” something became a term of art in Hollywood. Walt Disney was a master when it came to channeling and rebranding the work of individual artists under his own overweening “vision”; more recently we have the magpie-like appropriations of George Lucas, who was only in a managerial sense the creator of the Death Star battle that ends the 1977 Star Wars: A New Hope. This complexly composited and edited sequence (itself largely responsible for bringing setpieces into being as an element of fannish discourse) was far more genuinely the accomplishment of John Dykstra and his crew at Industrial Light and Magic, not to mention editors Richard Chew and Marcia Lucas. Further down the line — to really ramify the author function out of existence — the battle’s parentage can be traced to the cinematographers and editors who assembled the World War II movies — Tora! Tora! Tora!, The Bridges at Toko-Ri, The Dam Busters, etc. — from which Lucas culled his reference footage, a 16mm reel that Dykstra and ILM used as a template for the transcription of Mustangs and Messerschmitts into X-Wings and TIE Fighters.

Thirty years after the first Star Wars, sequences in blockbuster films are routinely farmed out to visual effects houses, increasing the likelihood that subunits of the movie will manifest their own individuating marks of style, dependent on the particular aesthetic tendencies and technological proficiencies of the company in question. (Storyboards and animatics, as well as on-the-fly oversight of FX shots in pipeline, help to minimize the levels of difference here, smoothing over mismatches in order to fit the outsourced chunks of content together into a singularly authored text — hinting at new ways in which the hoary concepts of “compositing” and “continuity” might be redeployed as a form of meta-industrial critique.) In the case of Spider-Man 3, no fewer than eight FX houses were involved (BUF, Evil Eye Pictures, Furious FX, Gentle Giant Studios, Giant Killer Robots, Halon Entertainment, Tweak Films, and X1fx) in addition to Sony Pictures Imageworks, which produced the Sandman shot.

When we look at a particular setpiece, then, we also pinpoint a curious node in the network of production: a juncture at which the multiplicity of labor required to generate blockbuster-scale entertainment must negotiate with our sense of a unified, unique product / author / vision. Perhaps this is simply an accidental echo of the collective-yet-singular aura that has always attended the serial existence of superheroes; what is Spider-Man but a single artwork scripted, drawn, acted, and realized onscreen by decades of named and nameless creators? But before all of this, we confront a basic heterogeneity that textures film experience: our understanding, at once obvious and profound, that some parts of movies stand out as better or worse or more in need of exploration than others. Spider-Man 3, as Dan acknowledges, is not a great film; but that does not mean it cannot contain great moments. In sifting for and scrutinizing such gems, I wonder if academics like us aren’t performing a strategic if unconscious role — one shared by the increasingly contiguous subcultures of journalists, critics, and fans — our dissective blogging facilitating a trees-over-forest approach to film analysis, a “setpiece-ification” that reflects the odd granularity of contemporary blockbuster media.

A-Z

As I throttle down for Thanksgiving week and a much-anticipated break from this busy semester (which I regret has allowed so little time for blogging), viruses are much on my mind: I await with some nervousness the onset of one of those academic-calendar colds that conveniently hold off until I’m done teaching. But other kinds of replicative infection are creeping into my life, today in the form of the Alphabet Meme, passed on to me by Chris Cagle of Category D, who caught it from Thom at Film of the Year. (I never realized how similar blogging and sex are: evidently when you link to someone, you link to everyone he or she has linked to.) Anyway, the goal of the exercise is a 26-item list of “Best Films,” corresponding to the letters of the alphabet. I’ll be forthright in acknowledging that my list has nothing to do with “bestness” and everything to do with love — simply put, the movies that mean the most to me. I’m a little too conscious of and skeptical about canonicity to nominate best-ofs; what is canon, anyway, but a kind of ubervirus, replicating within our taste hierarchies and the IPOs of cultural capital? The primary difference between irrational, irreducible favoritism and the stolid edifice of “the best that has been thought and said” (or in this case, filmed) is, it seems to me, one of authorship: the former is idiosyncratic, individual, owned, while the latter circulates unmoored in a kind of terrible immanence, its promiscuous power deriving precisely from its anonymity.

Or maybe I’m just feeling defensive. The list below, larded with science fiction and pop pleasures, nakedly exposes me as a cinematic philistine, a clear case of arrested development. How I reconcile this with my own day job of reproducing the canon (teaching Citizen Kane and Il Conformista semester after semester), I don’t know. But with turkey and stuffing on the horizon, I choose to leave the soul-searching to another day.

Before sharing my list, I believe I’m supposed to spread the meme to five other victims, er, friends. Let’s see: how about valued contributors Michael Duffy and MDR; Dan North at Spectacular Attractions; Nina Busse of Ephemeral Traces; and Tim Burke of Easily Distracted?

MY LIST

Alien
Battle Beyond the Stars
Close Encounters of the Third Kind
Die Hard
The Exorcist
Forbidden Planet
Groundhog Day
Harold and Maude
Invasion of the Body Snatchers (1978)
Jacob’s Ladder
King Kong (1933)
Logan’s Run
Miracle Mile
Night of the Living Dead (1968)
Outland
The Parallax View
The Quiet Earth
Run Lola Run
Superman (1978)
Terminator 2: Judgment Day
Ugetsu Monogatari
The Vanishing
The Wizard of Oz
X: The Man with the X-Ray Eyes
Young Frankenstein
Zardoz

Holograms

It’s still Jessica Yellin and you look like Jessica Yellin and we know you are Jessica Yellin. I think a lot of people are nervous out there. All right, Jessica. You were a terrific hologram.

— Wolf Blitzer, CNN

I woke this morning feeling distinctly unreal — a result of staying up late to catch every second of election coverage (though the champagne and cocktails with which my wife and I celebrated Obama’s amazing win undoubtedly played a part). But even after I checked the web to assure myself that, indeed, the outcome was not a nighttime dream but a daylight reality, I couldn’t shake the odd sense of being a projection of light myself, much like the “holograms” employed by CNN as part of their news coverage (here’s the YouTube video, for as long as it might last):

I’ve written before on the spectacular plenitude of high-definition TV cross-saturated with intensive political commentary, an almost subjectivity-annihilating information flow on the visual, auditory, and ideological registers. In the case of the new trick in CNN’s toolbox, my first reaction was to giggle; the projection of reporter Jessica Yellin into the same conversational space as Wolf Blitzer was like a weird halftime show put on by engineering students as a graduation goof. But the cable news channel seemed to mean it, by God, and I have little doubt that we’ll see more such holographic play in coverage to come, as the technology becomes cheaper and its functionality streamlined into a single switch thrown on some hidden mixing board — shades of Walter Benjamin’s observation in “On Some Motifs in Baudelaire” that striking a match lets a single abrupt movement of the hand trigger a process of many steps.

Leaving aside the joking references to Star Wars (whose luminously be-scanlined projection of Princess Leia served, in 1977, to fold my preadolescent crush on Carrie Fisher into parallel fetishes for science-fiction technology and the visual-effects methods used to create them), last night’s “breakthrough” transmission of Yellin from Chicago to New York contains a subtle and disturbing undertone that should not be lost on feminist critics or theorists of simulation. This 2008 version of Alexander Graham Bell’s “Mr. Watson — Come here — I want to see you” employed as its audiovisual payload a woman’s body. It was, in this sense, just the latest retelling of the sad old story in which the female form is always-already rendered a simulacrum in the visual circuits of male desire. Yellin’s hologram, positioned in compliant stasis at the twinned focus of Blitzer’s crinkly, interrogative gaze and a floating camera that constantly reframed her phantasmic form, echoed the bodies of many a CG doll before it: those poor gynoids, from SIGGRAPH’s early Marilyn Monrobot to Shrek’s Princess Fiona and Aki Ross in Final Fantasy: The Spirits Within, whose high-rez objectification marks the triumphal convergence of representational technology and phallic hegemony.

But beyond the obvious (and necessary) Mulveyan critique exists another interesting point. The news hologram, achieved by cybernetically tying together the behavior of two sets of cameras separated by hundreds of miles, is a remarkable example of real-time visual effects: the instantaneous compositing of spaces and bodies that once would have taken weeks or months to percolate through the production pipeline of even the best FX house. That in this case we don’t call it a visual effect, but a “news graphic” or the like, speaks more to the discursive baffles that generate such distinctions than to any genuine ontological difference. (A similar principle applies to the term “hologram”; what we’re really seeing is a sophisticated variant of chroma key, that venerable greenscreen technology by which TV forecasters are pasted onto weather maps. In this case, it’s been augmented by hyperfast, on-the-fly match-moving.) Special and visual effects are only recognized as such in narrative film and television — never in news and commercials, though that is where visual-effects R&D burns most brightly.

As to my own hologrammatic status, I assume it will fade as the magic of this political moment sinks in. An ambiguous tradeoff: one kind of reality becoming wonderfully solid, while another — the continuing complicity between gendered power and communication / imaging technology — recedes from consciousness.

Replicants

I look at Blade Runner as the last analog science-fiction movie made, because we didn’t have all the advantages that people have now. And I’m glad we didn’t, because there’s nothing artificial about it. There’s no computer-generated images in the film.

— David L. Snyder, Art Director

Any movie that gets a “Five-Disc Ultimate Collector’s Edition” deserves serious attention, even in the midst of a busy semester, and there are few films more integral to the genre of science fiction or the craft of visual effects than Blade Runner. (Ordinarily I’d follow the stylistic rules about which I browbeat my Intro to Film students and follow this title with the year of release, 1982. But one of the many confounding and wonderful things about Blade Runner is the way in which it resists confinement to any one historical moment. By this I refer not only to its carefully designed and brilliantly realized vision of Los Angeles in 2019 [now a mere 11 years away!] but also to the many-versioned indeterminacy of its status as an industrial artifact, one that has been revamped, recut, and released many times throughout the two and a half decades of its cultural existence. Blade Runner in its revisions has almost dissolved the boundaries separating preproduction, production, and postproduction — the three stages of the traditional cinematic lifecycle — to become that rarest of filmic objects, the always-being-made. The only thing, in fact, that keeps Blade Runner from sliding into the same sad abyss as the first Star Wars [an object so scribbled-over with tweaks and touch-ups that it has almost unraveled the alchemy by which it initially transmuted an archive of tin-plated pop-culture precursors into a golden original] is the auteur-god at the center of its cosmology of texts: unlike George Lucas, Ridley Scott seems willing to use words like “final” and “definitive” — charged terms in their implicit contract to stop futzing around with a collectively cherished memory.)

I grabbed the DVDs from Swarthmore’s library last week to prep a guest lecture for a seminar a friend of mine is teaching in the English Department, and in the course of plowing through the three-and-a-half-hour production documentary “Dangerous Days” came across the quote from David L. Snyder that opens this post. What a remarkable statement — all the more amazing for how quickly and easily it goes by. If there is a conceptual digestive system for ideas as they circulate through time and our ideological networks, surely this is evidence of a successfully broken-down and assimilated “truth,” one which we’ve masticated and incorporated into our perception of film without ever realizing what an odd mouthful it makes. There’s nothing artificial about it, says David Snyder. Is he referring to the live-action performances of Harrison Ford, Rutger Hauer, and Sean Young? The “retrofitted” backlot of LA 2019, packed with costumed extras and drenched in practical environmental effects from smoke machines and water sprinklers? The cars futurized according to the extrapolative artwork of Syd Mead?

No: Snyder is talking about visual effects — the virtuoso work of a small army headed by Douglas Trumbull and Richard Yuricich — a suite of shots peppered throughout the film that map the hellish, vertiginous altitudes above the drippy neon streets of Lawrence G. Paull’s production design. Snyder refers, in other words, to shots produced exclusively through falsification: miniature vehicles, kitbashed cityscapes, and painted mattes, each piece captured in multiple “passes” and composited into frames that present themselves to the eye as unified gestalts but are in fact flattened collages, mosaics of elements captured in radically different scales, spaces, and times but made to coexist through the layerings of the optical printer: an elaborate decoupage deceptively passing itself off as immediate, indexical reality.

I get what Snyder is saying. There is something natural and real about the visual effects in Blade Runner; watching them, you feel the weight and substance of the models and lighting rigs, can almost smell the smoky haze being pumped around the light sources to create those gorgeous haloes, a signature of Trumbull’s FX work matched only by his extravagant ballet of ice-cream-cone UFOs amid boiling cloudscapes and miniature mountains in Close Encounters of the Third Kind. But what no one points out is that all of these visual effects — predigital visual effects — were once considered artificial. We used to think of them as tricks, hoodwinks, illusions. Only now that the digital revolution has come and gone, turning everything into weightless, effortless CG, do we retroactively assign the fakery of the past a glorious authenticity.

Or so the story goes. As I suggest above, and have argued elsewhere, the difference between “artificial” and “actual” in filmmaking is as much a matter of ideology as industrial method; perceptions of the medium are slippery and always open to contestation. Special and visual effects have always functioned as a kind of reality pump, investing the “nonspecial” scenes and sequences around them with an air of indexical reliability which is, itself, perhaps the most profound “effect.” With vanishingly few exceptions, actors speak lines written for them; stories are stitched into seamless continuity from fragments of film shot out of order; and, inescapably, a camera is there to record what’s happening, yet never reveals its own existence. Cinema is, prior to everything else, an artifact, and special effects function discursively to misdirect our attention onto more obvious classes of manipulation.

Now the computer has arrived as the new trick in town, enabling us to rebrand everything that came before as “real.” It’s an understandable turn of mind, but one that scholars and critics ought to navigate carefully. (Case in point: Snyder speaks as though computers didn’t exist at the time of Blade Runner. Yet it is only through the airtight registration made possible by motion-control cinematography, dependent on microprocessors for precision and memory storage for repeatability, that the film’s beautiful miniatures blend so smoothly with their surroundings.) It is possible, and worthwhile, to immerse ourselves in the virtual facade of ideology’s trompe-l’oeil — a higher order of special effect — while occasionally stepping back to acknowledge the brush strokes, the slightly imperfect matte lines that seam the composited elements of our thought.