TWC 1 Arrives (with Gaming CFP)

The first issue of Transformative Works and Cultures is now available. Table of contents below, along with a call for submissions for Issue 2, on Games.

Editorial

TWC Editor: Transforming academic and fan cultures

Theory

Abigail De Kosnik: Participatory democracy and Hillary Clinton’s marginalized fandom

Louisa Ellen Stein: “Emotions-Only” versus “Special People”: Genre in fan discourse

Anne Kustritz: Painful pleasures: Sacrifice, consent, and the resignification of BDSM symbolism in “The Story of O” and “The Story of Obi”

Francesca Coppa: Women, “Star Trek,” and the early development of fannish vidding

Praxis

Catherine Tosenberger: “The epic love story of Sam and Dean”: “Supernatural,” queer readings, and the romance of incestuous fan fiction

Madeline Ashby: Ownership, authority, and the body: Does antifanfic sentiment reflect posthuman anxiety?

Michael A. Arnzen: The unlearning: Horror and transformative theory

Sam Ford: Soap operas and the history of fan discussion

Symposium

Dana L. Bode: And now, a word from the amateurs

Rebecca Lucy Busker: On symposia: LiveJournal and the shape of fannish discourse

Cathy Cupitt: Nothing but Net: When cultures collide

Bob Rehak: Fan labor audio feature introduction

Interview

TWC Editor: Interview with Henry Jenkins

Veruska Sabucco: Interview with Wu Ming

TWC Editor: Interview with the Audre Lorde of the Rings

Review

Mary Dalton: “Teen television: Essays on programming and fandom,” edited by Sharon Marie Ross and Louisa Ellen Stein

Eva Marie Taggart: “Fans: The mirror of consumption,” by Cornel Sandvoss

Katarina Maria Hjarpe: “Cyberspaces of their own,” by Rhiannon Bury

Barna William Donovan: “The new influencers,” by Paul Gillin

And here is the CFP for Issue 2:

Special Issue: Games as Transformative Works

Transformative Works and Cultures, Vol. 2 (Spring 2009)
Deadline: November 15, 2008
Guest Editor: Rebecca Carlson

Transformative Works and Cultures (TWC) invites essays on gaming and gaming culture as transformative work. We are interested in game studies in all its theoretical and practical breadth, but even more so in the way fan culture shapes itself around and through gaming interfaces. Potential topics include but are not limited to game audiences as fan cultures; anthropological approaches to game design and game engagement; on- and off-line game experiences; textual and cultural analysis of games; fan appropriations and manipulations of games; and intersections between games and other fan artifacts.
TWC is a new Open Access, international peer-reviewed online journal published by the Organization for Transformative Works. TWC aims to provide a publishing outlet that welcomes fan-related topics and to promote dialogue between the academic community and the fan community. The first issue of TWC (September 2008) is available at http://journal.transformativeworks.org/. TWC accepts rolling electronic submissions of full essays through its Web site, where full guidelines are provided. The final deadline for inclusion in the special games issue is November 15, 2008.
TWC encourages innovative works that situate popular media, fan communities, and transformative works within contemporary culture via a variety of critical approaches, including but not limited to feminism, queer theory, critical race studies, political economy, ethnography, reception theory, literary criticism, film studies, and media studies. Submissions should fit into one of three categories of varying scope:

Theory: These often interdisciplinary essays with a conceptual focus and a theoretical frame offer expansive interventions in the field of fan studies. Peer review. Length, 5,000-8,000 words plus a 100-250-word abstract.
Praxis: These essays may apply a specific theory to a formation or artifact; explicate fan practice; perform a detailed reading of a specific text; or otherwise relate transformative phenomena to social, literary, technological, and/or historical frameworks. Peer review. Length, 4,000-7,000 words plus a 100-250-word abstract.
Symposium: Symposium is a section of concise, thematically contained essays. These short pieces provide insight into current developments and debates surrounding any topic related to fandom or transformative media and cultures. Editorial review. Length, 1,500-2,500 words.
Submission information: http://journal.transformativeworks.org/index.php/twc/about/submissions

Titles on the Fringe

Once again I find myself without much to add to the positive consensus surrounding a new media release; in this case, it’s the FOX series Fringe, which had its premiere on Tuesday. My friends and fellow bloggers Jon Gray and Geoff Long both give the show props, which by itself would have convinced me to donate the time and DVR space to watch the fledgling serial spread its wings. The fact that the series is a sleek update of The X-Files is just icing on the cake.

In this case, it’s a cake whose monster-of-the-week decorations seem likely to rest on a creamy backdrop of conspiracy; let’s hope Fringe (if it takes off) does a better job of upkeep on its conspiracy than did X-Files. That landmark series — another spawn of the FOX network, though from long ago when it was a brassy little David throwing stones at the Goliaths of ABC, NBC, and CBS — became nearly axiomatic for me back in 1993 when I stumbled across it one Friday night. I watched it obsessively, first by myself, then with a circle of friends; it was, for a time, a perfect example not just of “appointment television” but of “subcultural TV,” accumulating local fanbaselets who would crowd the couch, eat take-out pizza, and stay up late discussing the series’ marvelously spooky adumbrations and witty gross-outs. But after about three seasons, the show began to falter, and I watched in sadness as The X-Files succumbed to the fate of so many serial properties that lose their way and become craven copies of themselves: National Lampoon, American Flagg, Star Wars.

The problem with X-Files was that it couldn’t, over its unforgivably extended run of nine seasons, sustain the weavework necessary for a good, gripping conspiracy: a counterpoint of deferral and revelation, unbelievable questions flowing naturally from believable answers with the formal intricacy of a tango. After about season six, I couldn’t even bring myself to watch anymore; to do so would have been like visiting an aged and senile relative in a nursing home, a loved one who could no longer recognize me, or me her.

I have no idea whether Fringe will ever be as good as the best or as bad as the worst of The X-Files, but I’m already looking forward to finding out. I’ve written previously about J. J. Abrams and his gift for creating haloes of speculation around the media properties with which his name is associated, such as Alias, Lost, and Cloverfield. He’s good at the open-ended promise, and while he’s proven himself a decent director of standalone films (I’m pretty sure the new Star Trek will rock), his natural environment is clearly the serial structure of dramatic television narrative, which even in its sunniest incarnation is like a friendly conspiracy to satisfy week-by-week while keeping us coming back for more.

As I stated at the beginning, other commentators are doing a fine job of assessing Fringe‘s premise and cast of characters. The only point I’ll add is that the show’s signature visual — as much a part of its texture as the timejumps on Lost or the fades-to-white on Six Feet Under — turns me on immensely. I’m speaking, of course, about the floating 3D titles that identify locale, as in this shot:

Jon points out that the conceit of embedding titles within three-dimensional space has been done previously in Grand Theft Auto 4. Though that videogame’s grim repetitiveness was too much (or not enough) for this gamer, I appreciated the title trick, and recognized it as having an even longer lineage. The truth is, embedded titles have been “floating” around the mediascape for several years. The first time I noticed them was in David Fincher’s magnificent, underrated Panic Room. There, the opening credits unfold in architectural space, suspended against the buildings of Manhattan in sunlit Copperplate:

My fascination with Panic Room, a high-tech homage to Alfred Hitchcock in which form mercilessly follows function (the whole film is a trap, a cinematic homology of the brownstone in which Jodie Foster defends herself against murderous intruders), began with that title sequence and only grew. Notice, for example, how Foster’s name lurks in the right-hand corner of one shot, as though waiting for its closeup in the next:

The work of visual-effects houses Picture Mill and Computer Cafe, Panic Room‘s embedded titles make us acutely uneasy by conflating two spaces of film spectatorship that ordinarily remain reassuringly separate: the “in-there” of the movie’s action and the “out-here” of credits, subtitles, musical score, and other elements that are of the movie but not perceivable by the characters in the storyworld. It’s precisely the difference between diegetic and nondiegetic, one of the basic distinctions I teach students in my introductory film course.

But embedded titles such as the ones in Panic Room and Fringe confound easy categorical compartmentalization, rupturing the hygienic membrane that keeps the double registers of filmic phenomenology apart. The titles hang in an undecidable place, with uncertain epistemological and ontological status, like ghosts. They are perfect for a show that concerns itself with the threads of unreality that run through the texture of the everyday.

Ironically, the titles on Fringe are receiving criticism from fans like those on this Ain’t It Cool talkback, who see them as a cliched attempt to capitalize on an overworked idea:

The pilot was okay, but the leads were dull and the dialogue not much better. And the establishing subtitles looked like double ripoff of the opening credits of Panic Room and the “chapter 1” titles on Heroes. They’re “cool”, but they’ll likely become distracting in the long run.

I hated the 3D text … This sort of things has to stop. it’s not cool, David Fincher’s title sequence in Panic Room was stupid, stop it. It completly takes me out of the scene when this stuff shows up on screen. It reminds you you’re watching TV. It takes a few seconds to realize it’s not a “real” object and other characters, cars, plans, are not seeing that object, even though it’s perfectly 3D shaded to fit in the scene. And it serves NO PURPOSE other than to take you out of the scene and distract you. it’s a dumb, childish, show-off-y amateurish “let’s copy Fincher” thing, and I want it out of this and Heroes.

…I DVR’d the show while I was working, came in about 40 minutes into it before flipping over to my recording. They were outside the building at Harvard and I thought, “Hey cool, Harvard built huge letters spelling out their name outside one of their buildings.”… then I realized they were just ripping off the Panic Room title sequence. Weak.

The visual trick of embedded titles is, like any fusion of style and technology, a packaged idea with its own itinerary and lifespan; it will travel from text to text and medium to medium, picked up here in a movie, there in a videogame, and again in a TV series. In an article I published last year in Film Criticism, I labeled such entities “microgenres,” basing the term on my observation of the strange cultural circulation of the bullet time visual effect:

If the sprawling experiment of the Matrix trilogy left us with any definite conclusion, it is this: special effects have taken on a life of their own. By saying this, I do not mean simply to reiterate the familiar (and debatable) claim that movies are increasingly driven by spectacle over story, or that, in this age of computer-generated imagery (CGI), special effects are “better than ever.” Instead, bullet time’s storied trajectory draws attention to the fact that certain privileged special effects behave in ways that confound traditional understandings of cinematic narrative, meaning, and genre — quite literally traveling from one place to another like mini-movies unto themselves. As The Matrix‘s most emblematic signifier and most quoted element, bullet time spread seemingly overnight to other movies, cloaking itself in the vestments of Shakespearean tragedy (Titus, 1999), high-concept television remake (Charlie’s Angels, 2000), caper film (Swordfish, 2001), teen adventure (Clockstoppers, 2002), and cop/buddy film (Bad Boys 2, 2003). Furthermore, its migration crossed formal boundaries into animation, TV ads, music videos, and computer games, suggesting that bullet time’s look — not its underlying technologies or associated authors and owners — played the determining role in its proliferation. Almost as suddenly as it sprang on the public scene, however, bullet time burned out. Advertisements for everything from Apple Jacks and Taco Bell to BMW and Citibank Visa made use of its signature coupling of slowed time and freely roaming cameras. The martial-arts parody Kung Pow: Enter the Fist (2002) recapped The Matrix‘s key moments during an extended duel between the Chosen One (Steve Oedekerk) and a computer-animated cow. Put to scullery work as a sportscasting aid in the CBS Superbowl, parodied in Scary Movie (2000), Shrek (2001), and The Simpsons, the once-special effect died from overexposure, becoming first a cliche, then a joke. The rise and fall of bullet time — less a singular special effect than a named and stylistically branded package of photographic and digital techniques — echoes the fleeting celebrity of the morph ten years earlier. Both played out their fifteen minutes of fame across a Best-Buy’s-worth of media screens. And both hint at the recent emergence of an unusual, scaled-down class of generic objects: aggregates of imagery and meaning that circulate with startling rapidity, and startlingly frank public acknowledgement, through our media networks.

Clearly, embedded titles are undergoing a similar process, arising first as an innovation, then reproducing virally across a host of texts. Soon enough, I’m sure, we’ll see the parodies: imagine a film of the Scary Movie ilk in which someone clonks his head on a floating title. Ah, well: such is media evolution. In the meantime, I’ll keep enjoying the effect in its more sober incarnation on Fringe, where this particular package of signifiers has found a respectful — and generically appropriate — home.

Convention in a Bubble

A quick followup to my post from two weeks ago (a seeming eternity) on my gleeful, gluttonous anticipation of the Democratic and Republican National Conventions as high-def smorgasbords for my optic nerve. I watched and listened dutifully, and now — literally, the morning after — I feel stuffed, sated, a little sick. But that’s part of the point: pain follows pleasure, hangover follows bender. Soon enough, I’ll be hungry for more: who’s with me for the debates?

Anyway, grazing through the morning’s leftovers in the form of news sites and blogs, I was startled by the beauty of this interactive feature from the New York Times, a 360-degree panorama of the RNC’s wrapup. It’s been fourteen years since QuickTime technology pervily cross-pollinated Star Trek: The Next Generation’s central chronotope, the U.S.S. Enterprise 1701-D, in a wondrous piece of reference software called the Interactive Technical Manual. I remember being glued to the 640×480 display of my Macintosh whatever-it-was (the Quadra? the LC?), exploring the innards of the Enterprise from stem to stern through little QuickTime VR windows within which, by clicking and dragging, you could turn in a full circle, look up and down, zoom in and out. Now a more potent and less pixelated descendant of that trick has been used to capture and preserve for contemplation a bubble of spacetime from St. Paul, Minnesota, at the orgiastic instant of the balloons’ release that signaled the conclusion of the Republicans’ gathering.

Quite apart from the political aftertaste (and let’s just say that this week was like the sour medicine I had to swallow after the Democrats’ spoonful of sugar), there’s something sublime about clicking around inside the englobed map. Hard to pinpoint the precise location of my delight: is it that confetti suspended in midair, like ammo casings in The Matrix‘s bullet-time shots? The delegates’ faces, receding into the distance until they become as abstractedly innocent as a galactic starfield or a sprinkle-encrusted doughnut? Or is it the fact of navigation itself, the weirdly pleasurable contradiction between my fixed immobility at the center of this reconstructed universe and the fluid way I crane my virtual neck to peer up, down, and all around? Optical cryptids such as this confound the classical Barthesian punctum. So like and yet unlike the photographic, cinematographic, and ludic regimes that are its parents (parents probably as startled and dismayed by their own fecundity as the rapidly multiplying Palin clan), the image-machine of the Flash bubble has already anticipated the swooping search paths of my fascinated gaze and embedded them algorithmically within itself.

If I did have to choose the place I most love looking, it would be at the faces captured nearest the “camera” (here in virtualizing quotes because the bubble actually comprises several stitched-together images, undercutting any simple notion of a singular device and instant of capture). Peering down at them from what seems just a few feet away, I find the reporters poignant — again, innocent — as they stare toward center stage with an intensity that matches my own, yet remain oblivious to the panoptic monster hanging over their heads, unaware that they have been frozen in time. How this differs from the metaphysics of older photography, I can’t say; I just know that it does. Perhaps it’s the ontology of the bubble itself, at once genesis and apocalypse: an expanding shock wave, the sudden blistered outpouring of plasma that launched the universe, a grenade going off. The faces of those closest to “me” (for what am I in this system? time-traveler? avatar? ghost? god?) are reminiscent of those stopped watches recovered from Hiroshima and Nagasaki, infinitely recording the split-second at which one reality ended while another, harsher and hotter, exploded into existence.

It remains to be seen what will come of this particular Flashpoint. For the moment — a moment which will last forever — you can explore the bubble to your heart’s content.

The Voice of God

The news of Don LaFontaine’s death triggers the same phenomenon that marked his strange career: the bulk of it, spent intoning over movie trailers, bypassed our conscious notice, and it is only with his passing that the man and his peculiar, familiar, theatrical magic swim into full presence. We had to lose him to appreciate him.

LaFontaine’s voiceovers typically began with the phrase “In a world …”

In a world … where criminals run the streets and cops have lost all hope —

In a world … of wealth, power, and seduction —

In a world … where major corporations are run by fluffy kittens —

(OK, I made that last one up.) The very definition of portentous, In a world’s ritual follow-up was always some variant of But then … or But now … or Until one day …, signaling a disequilibrium that launches the plot. But with LaFontaine’s work, whatever movie the trailer pointed to was not really the point. The pleasure of previews is of a qualitatively different order: a brilliant shorthand or hieroglyph, an elegantly spun abstraction, the planting of anticipation without the need for actual payoff (how many movies do we actually watch, compared to the number of previews we’re exposed to?). LaFontaine’s gift was precisely that of jouissance, endlessly deferred desire. And his particular way of narrating that unfulfillable promise — almost comical in its gravitas and urgency, utterly sincere yet somehow tongue-in-cheek — was what made the man a genre unto himself.

This might be true of other voice artists — Mel Blanc, Maurice LaMarche, and Harry Shearer come to mind. What Roland Barthes called the grain of the voice acts, for these talents, as a unifying field of identity stretched across a dozen or a hundred bodies and faces, or in LaFontaine’s case, a thousand acts of promotional montage. If, as so many psychoanalytic theorizations of the medium suppose, movies are indeed like dreams, then LaFontaine was the voice that read us our bedtime story, easing us through those liminal minutes of prefatory errata before we surrendered completely to a collective hallucinatory slumber. He was there and not-there in much the same way as the invisible stylings of Classical Hollywood: continuity editing, “inaudible” sound. Yet periodically, we were jolted back into awareness of his existence, through parodies on Family Guy and cameos in Geico ads. These moments of delicious rupture also functioned as a kind of righteous suture, reuniting a voice and body that had been sundered, bringing the man out from behind the curtain to take a much-deserved bow. Self-mocking he might have been, but I always greeted LaFontaine’s appearance with the affectionate recognition of a long-lost uncle.

The IMDb page devoted to LaFontaine reflects his weirdly inside-out career, listing his gag appearances and TV gigs while omitting the massive archive of VO work that led to those other jobs. The much-consulted database’s taxonomic structure does not track movie trailers, and so an entire field of cinematic labor is allowed to vanish from our cybernetically-arrayed knowledge of the medium. The people who cut together and score previews, who write the boiled-down narration and engineer the vertiginous genre-pivots that mark the best of the bunch (I thought it was a Civil War melodrama — but it turns out to be a screwball comedy about vampires!), toil away at a level once removed from the credited army at the end of the major motion pictures they advertise, and twice removed from the big-name stars, producers, directors, and writers who precede the titles. A shame: what we might call the submedium of the movie preview is probably more integral to maintaining our generalized perceptions of cinema than any single instance of the feature film.

Trying to mount a project for the next Media in Transition conference, I’ve been thinking about modern ephemera: emergent categories of the forgotten in new media. It’s an environment whose database narratives, paratextual proliferation, ever-expanding memory and storage media, and constantly evolving tools of archiving and retrieval promise an end to the kinds of historical losses that cripple our understanding of, say, the dawn of cinema. LaFontaine’s paradoxical legacy, both glorious and pitiable, reminds me that there are still many gaps in the network — many lacunae in our vision, and much to explore and remedy.

Conventional Wisdom

Ooooh, the next two weeks have me tingling with anticipation: it’s time again for the Democratic National Convention and its bearded-Spock alternate-universe doppelganger, the Republican National Convention. I intend to watch from my cushy couch throne, which magisterially oversees a widescreen high-def window into the mass ornament of our country’s competing electoral carnivals.

Strangely, the Olympics didn’t hold me at all (beyond the short-lived controversy of their shameless simulationism), even though they served up night after night of HD spectacle. It wasn’t until I drove into the city last week to take in a Phillies game that I realized how hungry I am to immerse myself in that weird, disembodied space of the arena, where folks to the right and left of you are real enough, but rapidly fall away into a brightly-colored pointillist ocean, a rasterized mosaic that is, simply, the crowd, banked in rows that rise to the skyline, a bowl of enthusiastic spectatorial specks training their collective gaze on each other as well as inward on a central proscenium of action. At the baseball game I was in a state of happy distraction, dividing my attention among the actual business of balls, strikes, and runs; the fireworky HUDs of jumbotrons, scoreboards, and advertising banners, some of which were static billboards and others smartly marching graphics; the giant kielbasa (or “Bull Dog”) smothered with horseradish and barbecue sauce clutched in my left hand, while in my right rested a cold bottle of beer; and people, people everywhere, filling the horizon. I leaned over to my wife and said, “This is better than HD — but just barely.”

Our warring political parties’ conventions are another matter. I don’t want to be anywhere near Denver or Minneapolis/St. Paul in any physical, embodied sense. I just want to be there as a set of eyes and ears, embedded amid the speechmakers and flagwavers through orbital crosscurrents of satellite-bounced and fiber-optics-delivered information flow. I’ll watch every second, and what I don’t watch I’ll DVR, and what I don’t DVR I’ll collect later through the discursive lint filters of commentary on NPR, CNN, MSNBC, and of course Comedy Central.

The main pleasure in my virtual presence, though, will be jumping around from place to place inside the convention centers. I remember when this joyous phenomenon first hit me. It was in 1996, when Bill Clinton was running against Bob Dole, and my TV/remote setup was several iterations of Moore’s Law more primitive than what I wield now. Still, I had the major network feeds and public broadcasting, and as I flicked among CBS, NBC, ABC, and PBS (while the radio piped All Things Considered into the background), I experienced, for the first time, teleportation. Depending on which camera I was looking through, which microphone I was listening through, my virtual position jumped from point to point, now rubbing shoulders with the audience, now up on stage with the speaker, now at the back of the hall with some talking head blocking my view of the space far in the distance where I’d been an instant previously. It was not the same as Classical Hollywood bouncing me around inside a space through careful continuity editing; nor was it like sitting in front of a bank of monitors, like a mall security guard or the Architect in The Matrix Reloaded. No, this was multilocation, teletravel, a technological hopscotch in increments of a dozen, a hundred feet. I can’t wait to find out what all this will be like in the media environment of 2008.

As for the politics of it all, I’m sure I’ll be moved around just as readily by the flow of rhetoric and analysis, working an entirely different (though no less deterministic) register of ideological positioning. Film theory teaches us that perceptual pleasure, so closely allied with perceptual power, starts with the optical and aural — in a word, the graphic — and proceeds downward and outward from there, iceberg-like, into the deepest layers of self-recognition and subjectivity. I’ll work through all of that eventually — at least by November 4! In the meantime, though, the TV is warming up. And the kielbasa’s going on the grill.

Movie-a-Day: June 2008

June has always been a grand month for me; my birthday falls within it, as does the end of the school year (and I love it that this annual academic bisection still occurs in my life, just as it did when I was in junior high). This time around, June also brought an unusual number of movies across a variety of genres. Mel Gibson’s Passion of the Christ gets a big fat star for being the most special-effects-intensive guinea-pig movie I’ve ever seen, and once I get a chance to watch the copy of Flower of Flesh and Blood that’s been lurking on my shelf, I plan to write about the spectacle of the systematically destroyed body. Though I found a few things to say about Jumper, I cared for the film no more than the rest of the planet did; not so John Huston’s Asphalt Jungle, an exemplary, unhappy caper film that almost made the cut for this fall’s Intro to Film course (instead, I’ll be kicking off the term with Sunset Boulevard, another perfect movie). After Kurt Wimmer’s brilliant sleeper Equilibrium, Ultraviolet was a real letdown, almost enough to make me stay away from the filmmaker’s work in the future — an anti-auteur effect, something like what I experienced with Darren Aronofsky and his repellent if skillful Requiem for a Dream. On the evidence of Jules et Jim, I’m starting to agree with my friend Chris Dumas that Truffaut is a limp noodle by comparison with Godard, whose films dominated my April. The month’s biggest surprise was a Spanish horror movie, Tombs of the Blind Dead — utterly seductive with its half-clothed victims, slow-moving twig-taloned zombies, and an ending sequence of shrieking, freezeframed paralysis. Murnau’s Faust, by contrast, was a glorious trippy dream: a bit of a slog through the middle romantic section, but bookended by astounding, cosmic visions.


Viridiana (Luis Bunuel, 1961)
Tombs of the Blind Dead (Amando de Ossorio, 1971)*
You Only Live Once (Fritz Lang, 1937)
Tokyo Drifter (Seijun Suzuki, 1966)
The Firemen’s Ball (Milos Forman, 1967)
Revenge of the Creature (Jack Arnold, 1955)
The Passion of the Christ (Mel Gibson, 2004)*
Peyton Place (Mark Robson, 1957)
Faust (F. W. Murnau, 1926)*
The Asphalt Jungle (John Huston, 1950)
Earth vs. the Flying Saucers (Fred F. Sears, 1956)
Futurama: The Beast with A Billion Backs (Peter Avanzino, 2008)
Jumper (Doug Liman, 2008)
Jules et Jim (Francois Truffaut, 1962)
Eyes Without a Face (Georges Franju, 1960)
Ali: Fear Eats the Soul (Rainer Werner Fassbinder, 1974)*
Steamboy (Katsuhiro Otomo, 2004)
Killer of Sheep (Charles Burnett, 1977)
Kronos (Kurt Neumann, 1957)
Lost Horizon (Frank Capra, 1937)*
The Long Goodbye (Robert Altman, 1973)
Be Kind Rewind (Michel Gondry, 2008)
Destination Moon (Irving Pichel, 1950)
The Mummy (Terence Fisher, 1959)
Ultraviolet (Kurt Wimmer, 2006)
Rome: Open City (Roberto Rossellini, 1945)
On Dangerous Ground (Nicholas Ray, 1952)
The Man with the Golden Arm (Otto Preminger, 1955)
Mysterious Island (Cy Endfield, 1961)
Porco Rosso (Hayao Miyazaki, 1992)
WALL-E (Andrew Stanton, 2008)*

Technologies of Disappearance

My title is a lift from Alan N. Shapiro’s interesting and frustrating book on Star Trek as hyperreality, but what motivates me to write today are three items bobbing around in the news: two from the world of global image culture, the third from the world of science and technology.

Like Dan North, who blogs smartly on special effects and other cool things at Spectacular Attractions, I’m not deeply into the Olympics (either as spectator or commentator), but my attention was caught by news of what took place at last week’s opening ceremonies in Beijing. In the first case, Lin Miaoke, a little girl who sang the revolutionary hymn “Ode to the Motherland,” was, it turns out, lip-synching to the voice of another child, Yang Peiyi, who was found by the Communist Party politburo to be insufficiently attractive for broadcast. And in the second case, a digitally assisted shot of fireworks exploding in the nighttime sky was used in place of the evidently less-impressive real thing.

To expound on the Baudrillardian intricacies at play here hardly seems necessary: the two incidents were tied together instantly by the world press and packaged in headlines like “Fakery at the Olympics.” As often happens, the Mass Media Mind — churning blindly away like something from John Searle’s Chinese room thought experiment — has stumbled upon a rhetorical algorithm that tidily condenses several discourses: our simultaneous awe and dread of the powers of technological simulation; the sense that the Olympics embodies an omnivorous spectacularity threatening to consume and amplify beyond recognition all that is homely and human in scale; and good ol’ fashioned Orientalism, here resurrected as suspicion of the Chinese government’s tendency toward manipulation and disguise. (Another “happy” metaphorical alignment: the visibility-cloaking smog over Beijing, so ironically photogenic as a contrast to the crisp and colorful mass ornament of the crowded, beflagged arenas.)

If anything, this image-bite of twinned acts of deception functions, itself, as another and trickier device of substitution. Judging the chicanery, we move within what Adorno called the closed circle of ideology, smugly wielding criticism while failing to escape the orbit of readymade meanings to question more fundamental issues at stake. We enjoy, that is, our own sense of scandal, thinking it premised on a sure grasp of what is true and indexical — the real singer, the unaltered skies — and thus reinscribe a belief that the world can be easily sorted into what is real and what is fake.

Of course it’s all mediated, fake and real at the same time, calibrated as cunningly as the Olympics themselves. Real bodies on bright display in extremes of exertion unimaginable by this couch potato: the images on my high-def screen have rarely been so viscerally indexical in import, every grimace and bead of sweat a profane counterpoint to sacred ballistics of muscled motion. But I fool myself if I believe that the reality of the event is being delivered to me whole. Catching glimpses of the ongoing games as I shuffle through surrounding channels of televisual flow is like seeing a city in flickers from a speeding train: the experience julienned by commercials and camera cuts, embroidered by thickets of helpful HUD graphics and advertisers’ eager logos. Submerged along another axis entirely is the vanished reality of the athletes’ training: eternities of drilling and repetition, an endless dull disciplining at profound odds with the compacted, adrenalized, all-or-nothing showstoppers of physical prowess.

Maybe the collective fascination of the lip-synching stems from our uncomfortable awareness that we’re engaged in a vicarious kind of performance theft, sitting back and dining on the visual feast of borrowed bodily labor. And maybe the sick appeal of the CG fireworks is our guilty knowledge that human beings are functioning as special effects themselves, there to elicit oohs and ahs. All I know is that the defense offered up by the guy I heard on BBC World News this morning seems to radically miss the point. Madonna and Britney lip-synch, he said: why is this any different? As for the digital fireworks, did we really expect helicopters to fly close to the airborne pyrotechnics? The cynicism of the first position, that talent is always a manufactured artifact, is matched by the blasé assumption of the second, permuting what we might call the logic of the stuntman: if an exploit is too dangerous for a lead actor to perform, sub in a body worth a little less. In the old days, filmmakers did it with people whose names appeared only in the end credits (and then not among the cast). Nowadays, filmmakers hand the risk over to technological stand-ins. In either case, visualization has trumped representation, the map preceding the territory.

But I see I’ve fallen into the trap I outlined earlier, dressing up in windy simulationist rhetoric a more basic dismay. Simply put, I’m sad to think of Yang Peiyi’s rejection as unready for global prime time, based on a chubby face and some crooked teeth (features, let me add, now unspooling freely across the world’s screens — anyone else wondering how she’ll feel at age twenty about having been enshrined as the Ugly Duckling?). Prepping my Intro to Film course for the fall, I thought about showing Singin’ in the Rain — beneath its happy musical face a parable of insult in which pretty but untalented people hijack the vocal performances of skilled but unglamorous backstage workers. Hey, I was kind of a chubby-faced, snaggle-toothed kid too, but at least I got to sing my own part (Frank Butler) in Annie Get Your Gun.

In other disappearance news: scientists are on their way to developing invisibility. Of this I have little to say, except that I’m relieved the news is getting covered at all. There’s more than one kind of disappearance, and if attention to events at Berkeley and Beijing is reassuring in any measure, it’s in the “making visible” of cosmetic technologies that, in their amnesiac emissions and omissions, would otherwise sand off the rough, unpretty edges of the world.

Singing Along with Dr. Horrible

Duration, when it comes to media, is a funny thing. Dr. Horrible’s Sing-Along Blog, the web-distributed musical (official site here; Wiki here), runs a tad over 42 minutes in all, or about the length of an average hour-long block of TV entertainment with commercials properly interspersed. But my actual experience of it was trisected into chunks of approximately 15 minutes, for like your standard block of TV programming (at least in the advertising-supported format favored in the U.S.), Dr. Horrible is subdivided into acts, an exigence which shapes the ebb and flow of its dramatic humours while doing double service as a natural place to pause and reflect on what you’ve seen — or to cut yourself another slice of ice-cream cake left over from your Dairy-Queen-loving relatives’ visit.

That last would be a blatantly frivolous digression, except in this key sense: working my way through the three acts of Dr. Horrible was much like consuming thick slices of icy sweetness: each individual slab almost sickeningly creamy and indulgent, yet laced throughout with a tantalizingly bitter layer of crisp chocolate waferage. Like the cake, each segment of the show left me a little swoony, even nauseated, aesthetic sugar cascading through my affective relays. After each gap, however, I found myself hungry for more. Now, in the wake of the total experience, I find myself contemplating (alongside the concentrated coolness of the show itself) the changing nature of TV in a digital moment in which the forces of media evolution — and more properly convergence — have begun to produce fascinating cryptids: crossbred entities in which the parent influences, harmoniously combined though they might be, remain distinct. Sweet cream, bitter fudge: before everything melts together to become the soupy unremarkable norm, a few observations.

Ultimately, it took me more than two weeks to finish Dr. Horrible. I watched the first two acts over two nights with my wife, then finished up on my own late last week. (For her part, Katie was content to have the ending spoiled by an online forum she frequents: a modern Cliffs Notes for media-surfers in a hurry to catch the next wave.) So another durative axis enters the picture — the runtime of idiosyncratic viewing schedules interlacing the runtime of actual content, further macerated by multiple pausings and rewindings of the iPod Touch that was the primary platform, the postage-stamp proscenium, for my download’s unspooling. Superstring theorists think they have things hard with their 10, 11, or 26 dimensions!

As such, Horrible‘s cup of video sherbet was the perfect palate-cleanser between rounds of my other summer viewing mission — all five seasons of The Wire. I’m racing to get that series watched before the school year (another arbitrary temporal framework) resumes in three weeks; enough of my students are Wireheads that I want to be able to join in their conversations, or at least not have to fake my knowing nods or shush the conversation before endings can be ruined. On that note, two advisories about the suspense of watching The Wire. First, be careful on IMDb. Hunting down members of the exceptionally large and splendid cast risks exposing you to their characters’ lifespans: finding out that such-and-such exits the series after 10, 11, or 26 episodes is a pretty sure clue as to when they’ll take a bullet in the head or OD on a needle. Second, and relatedly, it’s not lost on this lily-white softy of an academic that I would not last two frickin’ seconds on the streets of Baltimore — fighting on either side of the drug wars.

Back to Dr. Horrible. Though other creators hold a somewhat higher place in my Pantheon of Showrunners (Ronald D. Moore with Battlestar Galactica, Matt Weiner with Mad Men, and above them all, of course, Trek‘s Great Bird of the Galaxy, Gene Roddenberry), Joss Whedon gets mad props for everything from Buffy the Vampire Slayer to Firefly/Serenity and for fighting his way Dante Alighieri-like through the development hell of Alien Resurrection. I was only so-so about the turn toward musical comedy Whedon demonstrated in “Once More with Feeling,” the BtVS episode in which a spell forced everyone to sing their parts; I always preferred Buffy when the beating of its heart of darkness drowned out its antic, cuddly lullabies.

But Dr. Horrible, in a parallel but separate universe of its own, is free to mix its ugliness and frills in a fresh ratio, and the (re)combination of pathos and hummable tunes works just fine for me. Something of an inversion of High School Musical, Dr. Horrible is one for all the kids who didn’t grow up pretty and popular. Moreover, its rather lonesome confidence in superweaponry and cave lairs suggests a masculine sensibility: Goth Guy rather than Gossip Girl. Its characters are presented as grownups, but they’re teenagers at the core, and the genius is in the indeterminacy of their true identities; think of Superman wearing both his blue tights and Clark Kent’s blue business suit and still not winning Lois Lane’s heart. My own preteen crush on Sabrina Duncan (Kate Jackson) of Charlie’s Angels notwithstanding, I first fell truly in love in high school, and it’s gratifying to see Dr. Horrible follow the arc of unrequited love, with laser-guided precision, to its accurate, apocalyptically heartbreaking conclusion.

What of the show as a media object, which is to say, a packet-switched quantum of graphic data in which culture and technology mingle undecidably like wave and particle? NPR hailed it as the first genuine flowering of TV in a digital milieu, and perhaps they’re right; the show looks and acts like an episode of something larger, yet it’s sui generis, a serial devoid of seriality. It may remain a kind of mule, giving rise to nothing beyond the incident of itself, or it may reproduce wildly within the corporate cradle of Whedon’s Mutant Enemy and in the slower, rhizomatic breeding beds of fanfic and fanvids. It’s exciting to envision a coming world in which garage- and basement-based production studios generate in plenty their own Dr. Horribles for grassroots dissemination; among the villains who make up the Evil League of Evil, foot-to-hoof with Bad Horse, there must surely stand an Auteur of Doom or two.

In the mise-en-abyme of digital networks, long tails, and the endlessly generative matrix of comic books and musical comedy, perhaps we will all one day turn out to be mad scientists, conquering the world only to find we have sacrificed the happy dreams that started it all.

Crudeness, Complexity, and Venom’s Bite

Back in the 70s, like most kids who grew up middle-class and media-saturated in the U.S., I lived for the blocks of cartoons that aired after school and on Saturday mornings. From Warner Brothers and Popeye shorts to affable junk like Hong Kong Phooey, I devoured just about everything, with the notable exception of Scooby Doo, which I endured with resigned numbness as a bridge between more interesting shows. (Prefiguring my later interest in special effects both cheesy and classy, I was also nutty for the live-action Filmation series the networks would occasionally try out on us: cardboard superhero morality plays like Shazam! and Isis, as well as SF-lite series Ark II, Space Academy, and Jason of Star Command, which was the Han Solo to Space Academy‘s Luke Skywalker.)

Nowadays, as a fancypants professor of media studies who teaches courses on animation and fandom, I have, I suppose, moved on to a more mature appreciation of the medium’s possibilities, just as animation itself has found a new cultural location in primetime fare like Family Guy, South Park, and CG features from Pixar and DreamWorks that speak simultaneously to adult and child audiences. But the unreformed ten-year-old in me is still drawn to kids’ cartoons – SpongeBob is sublime, and I rarely missed an episode of Bruce Timm’s resurrection of Superman from the 1990s. This week I had a look at the new CW series, The Spectacular Spider-Man (Wiki rundown here; Sony’s official site here), and was startled both by my own negative response to the show’s visual execution and my realization that the transmedia franchise has passed me by while I was busy with other things … like going to graduate school, getting married, and buying a house. Maybe the photographic evidence of a youthful encounter that recently turned up has made me sensitive to the passage of time; whatever the cause, the new series came as a shock.

First, the visual issue. It’s jolting how crude the animation of the new Spider-Man looks to my eye, especially given my belief that criticisms of this type are inescapably tied to generational position: the graphics of one era seem trite beside the graphics of another, a grass-is-always-greener perceptual mismatch we all too readily misrecognize as transhistorical, inherent, beyond debate. In this case, time’s arrow runs both ways: “The garbage kids watch today doesn’t hold a candle to the art we had when I was young” from one direction, “Today’s shows [or movies, or music, or baseball teams, etc.] are light-years beyond that laughable crap my parents watched” from the other. Our sense of a media object’s datedness is based not on some teleological evolution (as fervently as we might believe it to be so) but on stylistic shifts and shared understandings of the norm — literally, states of the art. This technological and aesthetic flux means that very little cultural material from one decade to another escapes untouched by some degree of ideological Doppler shift, whether enshrined as classic or denigrated as obsolete, retrograde, stunted.

Nevertheless, I have a hard time debating the evidence of my eyes – eyes here understood as a distillation of multiple, ephemeral layers of taste, training, and cultural comfort zoning. The character designs, backgrounds, framing and motion of The Spectacular Spider-Man seem horribly low-res at first glance: inverting the too-many-notes complaint leveled at W. A. Mozart, this Spider-Man simply doesn’t have enough going on inside it. Of course, bound into this assessment of the cartoon’s graphic surface is an indictment of more systemic deficits: the dialogue, characterization, and storytelling seem thin, undercooked, dashed off. Around my visceral response to the show’s pared-down quality there is a whiff of that general curmudgeonly rot (again, one tied to aging — there are no young curmudgeons): The Spectacular Spider-Man seems slangy and abrupt, rendered in a rude optical and narrative shorthand that irritates me because it baffles me. I see the same pattern in my elderly parents’ reactions to certain contemporary films, whose rhythms seem to them both stroboscopically intense and conceptually vapid.

The irony in all this is that animation historically has been about doing more with less — maximizing affective impact, narrative density, and thematic heft with a relative minimum of brush strokes, keyframes, cel layers, blobs of clay, or pixels. Above all else, animation is a reducing valve between the spheres of industrial activity that generate it and the reception contexts in which the resulting texts are encountered. While the mechanism of the live-action camera captures reality in roughly a one-to-one ratio, leaving only the stages of editing and postproduction to expand the labor-time involved in its production, animation is labor- and time-intensive to its very core: it takes far longer to produce one frame than it takes to run that frame through the projector. (This is nowhere clearer than in contemporary CG filmmaking; in the more crowded shots of Pixar’s movie Cars, for example, some frames took entire weeks to render.)

As a result, animation over the decades has refined a set of representational strategies for the precise allocation of screen activity: metering change and stasis according to an elaborate calculus in which the variables of technology, economics, and artistic expression compete — often to the detriment of one register or another. Most animation textbooks introduce the idea of limited animation in reference to anime, whose characteristic mode of economization is emblematized by frozen or near-frozen images imparted dynamism by a subtle camera movement. But in truth, all animation is limited to one degree or another. And the critical license we grant those limitations speaks volumes about collective cultural assumptions. In Akira, limitation is art; in Super Friends (a fragment of which I caught while channel-surfing the other day and found unwatchably bad), it’s a commercial cutting-of-corners so base and clumsy as to make your eyeballs burst.

It’s probably clear that with all these caveats and second-guessings, I don’t trust my own response to The Spectacular Spider-Man‘s visual sophistication (or lack of it). My confidence in my own take is further undermined by the realization that the cartoon, as the nth iteration of a Spider-Man universe approaching its fiftieth year, pairs its apparent crudeness with vast complexity: for it is part of one of our few genuine transmedia franchises. I’ve written on transmedia before, each time, I hope, getting a little closer to understanding what these mysterious, emergent entities are and aren’t. At times I see them as nothing more than a snazzy rebranding of corporate serialized media, an enterprise almost as old as that other oldest profession, in which texts-as-products reproduce themselves in the marketplace, jiggering just enough variation and repetition into each spinoff that it hits home with an audience eager for fresh installments of familiar pleasures. At other times, though, I’m less cynical. And for all its sketchiness, The Spectacular Spider-Man offers a sobering reminder that transmedia superheroes have walked the earth for decades: huge, organic archives of storytelling, design networks, and continuously mutating continuity.

Geoff Long, who has thought about the miracles and machinations of transmedia more extensively and cogently than just about anyone I know, recently pointed out that we live amid a glut of new transmedia lines, most of which — like those clouds of eggs released by sea creatures, with only a lottery-winning few lucky enough to survive and reproduce — are doomed to failure. Geoff differentiates between these “hard” transmedia launches and more “soft” and “crunchy” transmedia that grow slowly from a single, largely unanticipated success. In Spider-Man, Batman, Superman and the like, we have serial empires of apparent inexhaustibility: always more comic books, movies, videogames, action figures to be minted from the template.

But the very scale of a long-lived transmedia system means that, at some point, it passes you by, which is what happened to me with Spider-Man, around the time that Venom appeared. This symbiotic critter (I could never quite figure out if it’s a sentient villain, an alter-ego of Spidey, or just a very aggressive wardrobe malfunction) made its appearance around 1986, approximately the same time that I was getting back into comic books through Love and Rockets, Cerebus, and the one-two punch of Frank Miller’s The Dark Knight Returns and Moore and Gibbons’s Watchmen. Venom represented a whole new direction for Spider-Man, and, busy with other titles, I never bothered to do the homework necessary to bind him into my personal experience of Spider-Man’s diegetic history. Thus, Sam Raimi’s botched Spider-Man 3 left me cold (though it did restage some of the Gwen Stacy storyline that broke my little heart in the 70s), and when Venom happened to show up on the episode of Spectacular Spider-Man that I watched, I realized just how out of touch I’d become. Venom is everywhere, and any self-respecting eight-year-old could probably lecture me on his lifespan and dietary habits.

Call this lengthy discourse a meditation on my own aging — a bittersweet lament on the fact that you can’t stay young forever, can’t keep up with everything the world of pop entertainment has to offer. Long after I’ve stopped breathing, the networked narratives of my favorite superheroes and science-fiction worlds will continue to proliferate. My mom and dad can enjoy this summer’s Iron Man without bothering over the lengthy history of that hero; perhaps I’ll get to the same point when, as an old man one day, I confront some costumed visual effect whose name I’ve never heard of. In the meantime, Venom oozes virally through the sidechannels and back-alleys of Spider-Man’s mediaverse, popping up in the occasional cartoon to tease me — much as he does the eternally-teenaged, ever-tormented Peter Parker — with a dark glimpse of my own mortality, as doled out in the traumas of transmedia.

Science of Special Effects: Deadline Extended

Back in June I posted a CFP for The ‘Science’ of Special Effects: Aesthetic Approaches to Industry, a set of panels I’m helping organize for the upcoming conference Film & Science: Fictions, Documentaries, and Beyond in Chicago (October 30-November 2). My co-chair Michael Duffy and I have already received and accepted a number of great proposals covering topics from posthumous digital performances to visual effects in war films and the “obstructed spectacle” of Cloverfield — to name just a few. But there’s always room for more, and now that the deadline for proposals has been extended to September 1, we hope to see even more papers come in through the electronic transom. The area outline is below; submissions can be sent to me at brehak1@swarthmore.edu or Michael at michael.s.duffy@googlemail.com.

The ‘Science’ of Special Effects: Aesthetic Approaches to Industry

This area examines the industrial, technological, theoretical, and aesthetic questions surrounding special-effects technologies. Presenters may investigate historical changes in special and visual effects, as in the gradual switch from physical to digital applications; they may focus on the use of visual effects in film or television texts that do not fit into typically spectacle-driven genres (i.e., effects in drama, comedy, and musical narratives instead of in action-adventure, science fiction, or fantasy); they may consider the theoretical implications of special/visual effects and technology on texts; or they may concentrate on neglected historical and aesthetic values of effects development.

Possible papers or panels might include the following:

  • An investigation of the terms “Special Effect” and “Visual Effect,” what they constitute, and how their definitions have been delineated and complicated by changing technologies.
  • Special/visual effects “stars” such as Stan Winston, Douglas Trumbull, or Richard Edlund, and their impact on the construction and application of visual effects images for mainstream/non-mainstream cinema.
  • The changing relationship between visual effects technologies and pre-production, i.e. looking at “previz,” at the development of films “around” their effects sequences, or at the use of physical materials such as maquettes as templates for eventual CG elements.
  • How contemporary visual-effects practitioners negotiate and incorporate real-world “physics” into their design of digital characters (“synthespians”) and environments.
  • How visual effects contribute to the formation of complete “environments” on screen, how they are incorporated into narratives, and how meaning is affected when a physical environment is entirely fabricated.
  • The implementation of special/visual effects by costume and motion-capture “artists” and actors, and how studies of these practices can offer insight into classic and contemporary working relationships between effects practitioners, actors and crew.
  • The Visual Effects Society and its impact on the industry and filmmaking throughout the organization’s history.
  • How directors or other creative personalities use physical and digital effects in their projects (e.g., Robert Zemeckis’s application of digital technologies or Guillermo Del Toro’s proclaimed interest in keeping a 50/50 balance between physical and digital effects).