Replicants

I look at Blade Runner as the last analog science-fiction movie made, because we didn’t have all the advantages that people have now. And I’m glad we didn’t, because there’s nothing artificial about it. There’s no computer-generated images in the film.

— David L. Snyder, Art Director

Any movie that gets a “Five-Disc Ultimate Collector’s Edition” deserves serious attention, even in the midst of a busy semester, and there are few films more integral to the genre of science fiction or the craft of visual effects than Blade Runner. (Ordinarily I’d follow the stylistic rules about which I browbeat my Intro to Film students and follow this title with the year of release, 1982. But one of the many confounding and wonderful things about Blade Runner is the way in which it resists confinement to any one historical moment. By this I refer not only to its carefully designed and brilliantly realized vision of Los Angeles in 2019 [now a mere 11 years away!] but also to the many-versioned indeterminacy of its status as an industrial artifact, one that has been revamped, recut, and released many times throughout the two and a half decades of its cultural existence. Blade Runner in its revisions has almost dissolved the boundaries separating preproduction, production, and postproduction — the three stages of the traditional cinematic lifecycle — to become that rarest of filmic objects, the always-being-made. The only thing, in fact, that keeps Blade Runner from sliding into the same sad abyss as the first Star Wars [an object so scribbled-over with tweaks and touch-ups that it has almost unraveled the alchemy by which it initially transmuted an archive of tin-plated pop-culture precursors into a golden original] is the auteur-god at the center of its cosmology of texts: unlike George Lucas, Ridley Scott seems willing to use words like “final” and “definitive” — charged terms in their implicit contract to stop futzing around with a collectively cherished memory.)

I grabbed the DVDs from Swarthmore’s library last week to prep a guest lecture for a seminar a friend of mine is teaching in the English Department, and in the course of plowing through the three-and-a-half-hour production documentary “Dangerous Days” came across the quote from David L. Snyder that opens this post. What a remarkable statement — all the more amazing for how quickly and easily it goes by. If there is a conceptual digestive system for ideas as they circulate through time and our ideological networks, surely this is evidence of a successfully broken-down and assimilated “truth,” one which we’ve masticated and incorporated into our perception of film without ever realizing what an odd mouthful it makes. There’s nothing artificial about it, says David Snyder. Is he referring to the live-action performances of Harrison Ford, Rutger Hauer, and Sean Young? The “retrofitted” backlot of LA 2019, packed with costumed extras and drenched in practical environmental effects from smoke machines and water sprinklers? The cars futurized according to the extrapolative artwork of Syd Mead?

No: Snyder is talking about visual effects — the virtuoso work of a small army headed by Douglas Trumbull and Richard Yuricich — a suite of shots peppered throughout the film that map the hellish, vertiginous altitudes above the drippy neon streets of Lawrence G. Paull’s production design. Snyder refers, in other words, to shots produced exclusively through falsification: miniature vehicles, kitbashed cityscapes, and painted mattes, each piece captured in multiple “passes” and composited into frames that present themselves to the eye as unified gestalts but are in fact flattened collages, mosaics of elements captured in radically different scales, spaces, and times but made to coexist through the layerings of the optical printer: an elaborate decoupage deceptively passing itself off as immediate, indexical reality.

I get what Snyder is saying. There is something natural and real about the visual effects in Blade Runner; watching them, you feel the weight and substance of the models and lighting rigs, can almost smell the smoky haze being pumped around the light sources to create those gorgeous haloes, a signature of Trumbull’s FX work matched only by his extravagant ballet of ice-cream-cone UFOs amid boiling cloudscapes and miniature mountains in Close Encounters of the Third Kind. But what no one points out is that all of these visual effects — predigital visual effects — were once considered artificial. We used to think of them as tricks, hoodwinks, illusions. Only now that the digital revolution has come and gone, turning everything into weightless, effortless CG, do we retroactively assign the fakery of the past a glorious authenticity.

Or so the story goes. As I suggest above, and have argued elsewhere, the difference between “artificial” and “actual” in filmmaking is as much a matter of ideology as industrial method; perceptions of the medium are slippery and always open to contestation. Special and visual effects have always functioned as a kind of reality pump, investing the “nonspecial” scenes and sequences around them with an air of indexical reliability which is, itself, perhaps the most profound “effect.” With vanishingly few exceptions, actors speak lines written for them; stories are stitched into seamless continuity from fragments of film shot out of order; and, inescapably, a camera is there to record what’s happening, yet never reveals its own existence. Cinema is, prior to everything else, an artifact, and special effects function discursively to misdirect our attention onto more obvious classes of manipulation.

Now the computer has arrived as the new trick in town, enabling us to rebrand everything that came before as “real.” It’s an understandable turn of mind, but one that scholars and critics ought to navigate carefully. (Case in point: Snyder speaks as though computers didn’t exist at the time of Blade Runner. Yet it is only through the airtight registration made possible by motion-control cinematography, dependent on microprocessors for precision and memory storage for repeatability, that the film’s beautiful miniatures blend so smoothly with their surroundings.) It is possible, and worthwhile, to immerse ourselves in the virtual facade of ideology’s trompe-l’oeil — a higher order of special effect — while occasionally stepping back to acknowledge the brush strokes, the slightly imperfect matte lines that seam the composited elements of our thought.

TWC 1 Arrives (with Gaming CFP)

The first issue of Transformative Works and Cultures is now available. Table of contents below, along with a call for submissions for Issue 2, on Games.

Editorial

TWC Editor: Transforming academic and fan cultures

Theory

Abigail De Kosnik: Participatory democracy and Hillary Clinton’s marginalized fandom

Louisa Ellen Stein: “Emotions-Only” versus “Special People”: Genre in fan discourse

Anne Kustritz: Painful pleasures: Sacrifice, consent, and the resignification of BDSM symbolism in “The Story of O” and “The Story of Obi”

Francesca Coppa: Women, “Star Trek,” and the early development of fannish vidding

Praxis

Catherine Tosenberger: “The epic love story of Sam and Dean”: “Supernatural,” queer readings, and the romance of incestuous fan fiction

Madeline Ashby: Ownership, authority, and the body: Does antifanfic sentiment reflect posthuman anxiety?

Michael A. Arnzen: The unlearning: Horror and transformative theory

Sam Ford: Soap operas and the history of fan discussion

Symposium

Dana L. Bode: And now, a word from the amateurs

Rebecca Lucy Busker: On symposia: LiveJournal and the shape of fannish discourse

Cathy Cupitt: Nothing but Net: When cultures collide

Bob Rehak: Fan labor audio feature introduction

Interview

TWC Editor: Interview with Henry Jenkins

Veruska Sabucco: Interview with Wu Ming

TWC Editor: Interview with the Audre Lorde of the Rings

Review

Mary Dalton: “Teen television: Essays on programming and fandom,” edited by Sharon Marie Ross and Louisa Ellen Stein

Eva Marie Taggart: “Fans: The mirror of consumption,” by Cornel Sandvoss

Katarina Maria Hjarpe: “Cyberspaces of their own,” by Rhiannon Bury

Barna William Donovan: “The new influencers,” by Paul Gillin

And here is the CFP for Issue 2:

Special Issue: Games as Transformative Works

Transformative Works and Cultures, Vol. 2 (Spring 2009)
Deadline: November 15, 2008
Guest Editor: Rebecca Carlson

Transformative Works and Cultures (TWC) invites essays on gaming and gaming culture as transformative work. We are interested in game studies in all its theoretical and practical breadth, but even more so in the way fan culture shapes itself around and through gaming interfaces. Potential topics include but are not limited to game audiences as fan cultures; anthropological approaches to game design and game engagement; on- and off-line game experiences; textual and cultural analysis of games; fan appropriations and manipulations of games; and intersections between games and other fan artifacts.
TWC is a new Open Access, international peer-reviewed online journal published by the Organization for Transformative Works. TWC aims to provide a publishing outlet that welcomes fan-related topics and to promote dialogue between the academic community and the fan community. The first issue of TWC (September 2008) is available at http://journal.transformativeworks.org/. TWC accepts rolling electronic submissions of full essays through its Web site, where full guidelines are provided. The final deadline for inclusion in the special games issue is November 15, 2008.
TWC encourages innovative works that situate popular media, fan communities, and transformative works within contemporary culture via a variety of critical approaches, including but not limited to feminism, queer theory, critical race studies, political economy, ethnography, reception theory, literary criticism, film studies, and media studies. Submissions should fit into one of three categories of varying scope:

Theory: These often interdisciplinary essays with a conceptual focus and a theoretical frame offer expansive interventions in the field of fan studies. Peer review. Length, 5,000-8,000 words plus a 100-250-word abstract.
Praxis: These essays may apply a specific theory to a formation or artifact; explicate fan practice; perform a detailed reading of a specific text; or otherwise relate transformative phenomena to social, literary, technological, and/or historical frameworks. Peer review. Length, 4,000-7,000 words plus a 100-250-word abstract.
Symposium: Symposium is a section of concise, thematically contained essays. These short pieces provide insight into current developments and debates surrounding any topic related to fandom or transformative media and cultures. Editorial review. Length, 1,500-2,500 words.
Submission information: http://journal.transformativeworks.org/index.php/twc/about/submissions

Titles on the Fringe

Once again I find myself without much to add to the positive consensus surrounding a new media release; in this case, it’s the FOX series Fringe, which had its premiere on Tuesday. My friends and fellow bloggers Jon Gray and Geoff Long both give the show props, which by itself would have convinced me to donate the time and DVR space to watch the fledgling serial spread its wings. The fact that the series is a sleek update of The X-Files is just icing on the cake.

In this case, it’s a cake whose monster-of-the-week decorations seem likely to rest on a creamy backdrop of conspiracy; let’s hope Fringe (if it takes off) does a better job of upkeep on its conspiracy than did X-Files. That landmark series — another spawn of the FOX network, though from long ago when it was a brassy little David throwing stones at the Goliaths of ABC, NBC, and CBS — became nearly axiomatic for me back in 1993 when I stumbled across it one Friday night. I watched it obsessively, first by myself, then with a circle of friends; it was, for a time, a perfect example not just of “appointment television” but of “subcultural TV,” accumulating local fanbaselets who would crowd the couch, eat take-out pizza, and stay up late discussing the series’ marvelously spooky adumbrations and witty gross-outs. But after about three seasons, the show began to falter, and I watched in sadness as The X-Files succumbed to the fate of so many serial properties that lose their way and become craven copies of themselves: National Lampoon, American Flagg, Star Wars.

The problem with X-Files was that it couldn’t, over its unforgivably extended run of nine seasons, sustain the weavework necessary for a good, gripping conspiracy: a counterpoint of deferral and revelation, unbelievable questions flowing naturally from believable answers with the formal intricacy of a tango. After about season six, I couldn’t even bring myself to watch anymore; to do so would have been like visiting an aged and senile relative in a nursing home, a loved one who could no longer recognize me, or me her.

I have no idea whether Fringe will ever be as good as the best or as bad as the worst of The X-Files, but I’m already looking forward to finding out. I’ve written previously about J. J. Abrams and his gift for creating haloes of speculation around the media properties with which his name is associated, such as Alias, Lost, and Cloverfield. He’s good at the open-ended promise, and while he’s proven himself a decent director of standalone films (I’m pretty sure the new Star Trek will rock), his natural environment is clearly the serial structure of dramatic television narrative, which even in its sunniest incarnation is like a friendly conspiracy to satisfy week-by-week while keeping us coming back for more.

As I stated at the beginning, other commentators are doing a fine job of assessing Fringe’s premise and cast of characters. The only point I’ll add is that the show’s signature visual — as much a part of its texture as the timejumps on Lost or the fades-to-white on Six Feet Under — turns me on immensely. I’m speaking, of course, about the floating 3D titles that identify locale, as in this shot:

Jon points out that the conceit of embedding titles within three-dimensional space has been done previously in Grand Theft Auto IV. Though that videogame’s grim repetitiveness was too much (or not enough) for this gamer, I appreciated the title trick, and recognized it as having an even longer lineage. The truth is, embedded titles have been “floating” around the mediascape for several years. The first time I noticed them was in David Fincher’s magnificent, underrated Panic Room. There, the opening credits unfold in architectural space, suspended against the buildings of Manhattan in sunlit Copperplate:

My fascination with Panic Room, a high-tech homage to Alfred Hitchcock in which form mercilessly follows function (the whole film is a trap, a cinematic homology of the brownstone in which Jodie Foster defends herself against murderous intruders), began with that title sequence and only grew. Notice, for example, how Foster’s name lurks in the right-hand corner of one shot, as though waiting for its closeup in the next:

The work of visual-effects houses Picture Mill and Computer Cafe, Panic Room’s embedded titles make us acutely uneasy by conflating two spaces of film spectatorship that ordinarily remain reassuringly separate: the “in-there” of the movie’s action and the “out-here” of credits, subtitles, musical score, and other elements that are of the movie but not perceivable by the characters in the storyworld. It’s precisely the difference between diegetic and nondiegetic, one of the basic distinctions I teach students in my introductory film course.

But embedded titles such as the ones in Panic Room and Fringe confound easy categorical compartmentalization, rupturing the hygienic membrane that keeps the double registers of filmic phenomenology apart. The titles hang in an undecidable place, with uncertain epistemological and ontological status, like ghosts. They are perfect for a show that concerns itself with the threads of unreality that run through the texture of the everyday.

Ironically, the titles on Fringe are receiving criticism from fans like those on this Ain’t It Cool talkback, who see them as a cliched attempt to capitalize on an overworked idea:

The pilot was okay, but the leads were dull and the dialogue not much better. And the establishing subtitles looked like double ripoff of the opening credits of Panic Room and the “chapter 1” titles on Heroes. They’re “cool”, but they’ll likely become distracting in the long run.

I hated the 3D text … This sort of things has to stop. it’s not cool, David Fincher’s title sequence in Panic Room was stupid, stop it. It completly takes me out of the scene when this stuff shows up on screen. It reminds you you’re watching TV. It takes a few seconds to realize it’s not a “real” object and other characters, cars, plans, are not seeing that object, even though it’s perfectly 3D shaded to fit in the scene. And it serves NO PURPOSE other than to take you out of the scene and distract you. it’s a dumb, childish, show-off-y amateurish “let’s copy Fincher” thing, and I want it out of this and Heroes.

…I DVR’d the show while I was working, came in about 40 minutes into it before flipping over to my recording. They were outside the building at Harvard and I thought, “Hey cool, Harvard built huge letters spelling out their name outside one of their buildings.”… then I realized they were just ripping off the Panic Room title sequence. Weak.

The visual trick of embedded titles is, like any fusion of style and technology, a packaged idea with its own itinerary and lifespan; it will travel from text to text and medium to medium, picked up here in a movie, there in a videogame, and again in a TV series. In an article I published last year in Film Criticism, I labeled such entities “microgenres,” basing the term on my observation of the strange cultural circulation of the bullet time visual effect:

If the sprawling experiment of the Matrix trilogy left us with any definite conclusion, it is this: special effects have taken on a life of their own. By saying this, I do not mean simply to reiterate the familiar (and debatable) claim that movies are increasingly driven by spectacle over story, or that, in this age of computer-generated imagery (CGI), special effects are “better than ever.” Instead, bullet time’s storied trajectory draws attention to the fact that certain privileged special effects behave in ways that confound traditional understandings of cinematic narrative, meaning, and genre — quite literally traveling from one place to another like mini-movies unto themselves. As The Matrix’s most emblematic signifier and most quoted element, bullet time spread seemingly overnight to other movies, cloaking itself in the vestments of Shakespearean tragedy (Titus, 1999), high-concept television remake (Charlie’s Angels, 2000), caper film (Swordfish, 2001), teen adventure (Clockstoppers, 2002), and cop/buddy film (Bad Boys 2, 2003). Furthermore, its migration crossed formal boundaries into animation, TV ads, music videos, and computer games, suggesting that bullet time’s look — not its underlying technologies or associated authors and owners — played the determining role in its proliferation. Almost as suddenly as it sprang on the public scene, however, bullet time burned out. Advertisements for everything from Apple Jacks and Taco Bell to BMW and Citibank Visa made use of its signature coupling of slowed time and freely roaming cameras. The martial-arts parody Kung Pow: Enter the Fist (2002) recapped The Matrix’s key moments during an extended duel between the Chosen One (Steve Oedekerk) and a computer-animated cow. Put to scullery work as a sportscasting aid in the CBS Superbowl, parodied in Scary Movie (2000), Shrek (2001), and The Simpsons, the once-special effect died from overexposure, becoming first a cliche, then a joke.
The rise and fall of bullet time — less a singular special effect than a named and stylistically branded package of photographic and digital techniques — echoes the fleeting celebrity of the morph ten years earlier. Both played out their fifteen minutes of fame across a Best-Buy’s-worth of media screens. And both hint at the recent emergence of an unusual, scaled-down class of generic objects: aggregates of imagery and meaning that circulate with startling rapidity, and startlingly frank public acknowledgement, through our media networks.

Clearly, embedded titles are undergoing a similar process, arising first as an innovation, then reproducing virally across a host of texts. Soon enough, I’m sure, we’ll see the parodies: imagine a film of the Scary Movie ilk in which someone clonks his head on a floating title. Ah, well: such is media evolution. In the meantime, I’ll keep enjoying the effect in its more sober incarnation on Fringe, where this particular package of signifiers has found a respectful — and generically appropriate — home.

Convention in a Bubble

A quick followup to my post from two weeks ago (a seeming eternity) on my gleeful, gluttonous anticipation of the Democratic and Republican National Conventions as high-def smorgasbords for my optic nerve. I watched and listened dutifully, and now — literally, the morning after — I feel stuffed, sated, a little sick. But that’s part of the point: pain follows pleasure, hangover follows bender. Soon enough, I’ll be hungry for more: who’s with me for the debates?

Anyway, grazing through the morning’s leftovers in the form of news sites and blogs, I was startled by the beauty of this interactive feature from the New York Times, a 360-degree panorama of the RNC’s wrapup. It’s been fourteen years since QuickTime technology pervily cross-pollinated Star Trek: The Next Generation’s central chronotope, the U.S.S. Enterprise 1701-D, in a wondrous piece of reference software called the Interactive Technical Manual. I remember being glued to the 640×480 display of my Macintosh whatever-it-was (the Quadra? the LC?), exploring the innards of the Enterprise from stem to stern through little QuickTime VR windows within which, by clicking and dragging, you could turn in a full circle, look up and down, zoom in and out. Now a more potent and less pixelated descendant of that trick has been used to capture and preserve for contemplation a bubble of spacetime from St. Paul, Minnesota, at the orgiastic instant of the balloons’ release which signaled the conclusion of the Republicans’ gathering.

Quite apart from the political aftertaste (and let’s just say that this week was like the sour medicine I had to swallow after the Democrats’ spoonful of sugar), there’s something sublime about clicking around inside the englobed map. Hard to pinpoint the precise location of my delight: is it that confetti suspended in midair, like ammo casings in The Matrix’s bullet-time shots? The delegates’ faces, receding into the distance until they become as abstractedly innocent as a galactic starfield or a sprinkle-encrusted doughnut? Or is it the fact of navigation itself, the weirdly pleasurable contradiction between my fixed immobility at the center of this reconstructed universe and the fluid way I crane my virtual neck to peer up, down, and all around? Optical cryptids such as this confound the classical Barthesian punctum. So like and yet unlike the photographic, cinematographic, and ludic regimes that are its parents (parents probably as startled and dismayed by their own fecundity as the rapidly multiplying Palin clan), the image-machine of the Flash bubble has already anticipated the swooping search paths of my fascinated gaze and embedded them algorithmically within itself.

If I did have to choose the place I most love looking, it would be at the faces captured nearest the “camera” (here in virtualizing quotes because the bubble actually comprises several stitched-together images, undercutting any simple notion of a singular device and instant of capture). Peering down at them from what seems just a few feet away, the reporters seem poignant — again, innocent — as they stare toward center stage with an intensity that matches my own, yet remain oblivious to the panoptic monster hanging over their heads, unaware that they have been frozen in time. How this differs from the metaphysics of older photography, I can’t say; I just know that it does. Perhaps it’s the ontology of the bubble itself, at once genesis and apocalypse: an expanding shock wave, the sudden blistered outpouring of plasma that launched the universe, a grenade going off. The faces of those closest to “me” (for what am I in this system? time-traveler? avatar? ghost? god?) are reminiscent of those stopped watches recovered from Hiroshima and Nagasaki, infinitely recording the split-second at which one reality ended while another, harsher and hotter, exploded into existence.

It remains to be seen what will come of this particular Flashpoint. For the moment — a moment which will last forever — you can explore the bubble to your heart’s content.

The Voice of God

The news of Don LaFontaine’s death triggers the same phenomenon that marked his strange career: the bulk of it, spent intoning over movie trailers, bypassed our conscious notice, and it is only with his passing that the man and his peculiar, familiar, theatrical magic swim into full presence. We had to lose him to appreciate him.

LaFontaine’s voiceovers typically began with the phrase “In a world …”

In a world … where criminals run the streets and cops have lost all hope —

In a world … of wealth, power, and seduction —

In a world … where major corporations are run by fluffy kittens —

(OK, I made that last one up.) The very definition of portentous, In a world’s ritual followup was always some variant of But then … or But now … or Until one day …, signaling a disequilibrium that launches the plot. But with LaFontaine’s work, whatever movie the trailer pointed to was not really the point. The pleasure of previews is of a qualitatively different order; a brilliant shorthand or hieroglyph, an elegantly spun abstraction, the planting of anticipation without the need for actual payoff (how many movies do we actually watch, compared to the number of previews we’re exposed to?). LaFontaine’s gift was precisely that of jouissance, endlessly deferred desire. And his particular way of narrating that unfulfillable promise — almost comical in its gravitas and urgency, utterly sincere yet somehow tongue-in-cheek — was what made the man a genre unto himself.

This might be true of other voice artists — Mel Blanc, Maurice LaMarche, and Harry Shearer come to mind. What Roland Barthes called the grain of the voice acts, for these talents, as a unifying field of identity stretched across a dozen or a hundred bodies and faces, or in LaFontaine’s case, a thousand acts of promotional montage. If, as so many psychoanalytic theorizations of the medium suppose, movies are indeed like dreams, then LaFontaine was the voice that read us our bedtime story, easing us through those liminal minutes of prefatory errata before we surrendered completely to a collective hallucinatory slumber. He was there and not-there in much the same way as the invisible stylings of Classical Hollywood: continuity editing, “inaudible” sound. Yet periodically, we were jolted back into awareness of his existence, through parodies on Family Guy and cameos in Geico ads. These moments of delicious rupture also functioned as a kind of righteous suture, reuniting a voice and body that had been sundered, bringing the man out from behind the curtain to take a much-deserved bow. Self-mocking he might have been, but I always greeted LaFontaine’s appearance with the affectionate recognition of a long-lost uncle.

The IMDb page devoted to LaFontaine reflects his weirdly inside-out career, listing his gag appearances and TV gigs while omitting the massive archive of VO work that led to those other jobs. The much-consulted database’s taxonomic structure does not track movie trailers, and so an entire field of cinematic labor is allowed to vanish from our cybernetically-arrayed knowledge of the medium. The people who cut together and score previews, who write the boiled-down narration and engineer the vertiginous genre-pivots that mark the best of the bunch (I thought it was a Civil War melodrama — but it turns out to be a screwball comedy about vampires!), toil away at a level once removed from the credited army at the end of the major motion pictures they advertise, and twice removed from the big-name stars, producers, directors, and writers who precede the titles. A shame: what we might call the submedium of the movie preview is probably more integral to maintaining our generalized perceptions of cinema than any single instance of the feature film.

Trying to mount a project for the next Media in Transition conference, I’ve been thinking about modern ephemera: emergent categories of the forgotten in new media. It’s an environment whose database narratives, paratextual proliferation, and reliance on ever-expanding memory and storage media and constantly-evolving archive and retrieval tools promise an end to the kinds of historical losses that cripple our understanding of, say, the dawn of cinema. LaFontaine’s paradoxical legacy, both glorious and pitiable, reminds me that there are still many gaps in the network — many lacunae in our vision, and much to explore and remedy.