Conferencing

Big academic conferences have a strange energy — which is to say, they have an energy that is palpable and powerful but exceeds my ability to understand or, more importantly, locate myself within it. It is, in part, a concentration of brain power, the collected expertise of a scholarly discipline (in this case, cinema and media studies) brought together for five days and four nights in a Boston hotel. I experience this first aspect as a kind of floating cerebral x-ray of whatever room I’m in, the heads around me overlaid with imagined networks of knowledge like 3D pop-up maps of signal strength in competing cell-phone ads.

But there is another, related dimension, and that is the sheer social density of such gatherings. The skills we develop as students and scholars are honed for the most part in isolation: regardless of the population of the campuses where we work, the bulk of our scholarly labor transpires in the relative silence of the office, the quiet arena of the desktop, the soft skritch of pencil against paper or gentle clicking of computer keyboards still a million times louder than the galaxies of thought whirling through our minds. (Libraries are a good metaphor for what I’m talking about here: quiet spaces jammed with unvocalized cacophonies of text, physical volumes side by side but never communicating with each other save for their entangled intimacies of footnotes and citations.)

Bring us all together for a conference and instantly the silence of our long internal apprenticeships, our walkabouts of research, converts to a thousand overlapping conversations, like a thunderstorm pouring from supersaturated clouds. We’re hungry for company, most of us, and the sudden toggle from solitary to social can be daunting.

When we arrived, the hotel’s computers were down, and the lobby was jammed with people waiting to check in, dragging their suitcases like travelers waiting to board an airplane. A set of clocks over the reception desk read out times from across the world — San Francisco, London, Tokyo — in cruel chronological contrast to the state of stasis that gripped us. Amid the digital work stoppage, I met a colleague’s ten-year-old son, who proudly showed me a bow and arrow he had fashioned from a twig and a taut willow branch found outside in the city’s public gardens. Plucking the bowstring like a musical instrument, he modestly estimated the range of his makeshift weapon (“about six feet”), but all I could do was marvel at his ingenuity in putting wood to work while electronic technologies ground to a halt, stranding all of us brainy adults in long and weary lines. Maybe the whole conference would run better if we swapped our iPads and phones and laptops for more primitive but reliable hand-fashioned instruments; but then, just as our scholarship can’t proceed in a social vacuum, maybe we need the network.

The new iPad

I’m neither an automatic Apple acolyte nor a naysayer, but the company and its technologies do go deep with me: my first computer, purchased back in 1980, was an Apple II+ with 48K of RAM, and between my wife and me, the household currently holds six or seven Apple devices, including multiple MacBooks and iPods. That I integrate these machines with a powerful PC that serves as my primary workstation and gaming platform does not diminish the role Apple has played in my life.

All that said, today’s announcement of the latest iPad strikes me as a letdown, and I’ve been trying to figure out why. A Retina display with four times the resolution of the current device is nothing to sneeze at, and I’m glad to see a better rear-facing camera. But the quantum leap in capability and, more importantly, a certain escalation of the brand are missing. I am the happy owner of an iPad 2, bought a year ago during a difficult time for Katie and me; March 2011 was a profoundly unhappy month, and I am not embarrassed to say that my iPad was one of the small comforts that got me through long nights at the hospital. Perhaps if I were going through something equivalently tragic now, I might again turn to a technological balm, but I doubt that the new iPad would do the trick. It’s a cautious, almost timid refinement of existing hardware, and I daresay that were Steve Jobs still around, Apple might have taken a bolder leap forward.

I was struck by one statement in the promotional video I watched: the assertion that in an ideal Apple-based technological experience, the mediating device disappears from consciousness, allowing you to concentrate on what you’re doing, rather than the thing you’re doing it with. True enough, I suppose — I’m not thinking about the keyboard on which I’m typing this blog entry, or the screen on which I’m reading my own words. But such analyses leave out the powerful effect of the brand that surrounds those moments of “flow.” The iPad, like so many Apple innovations, is a potent and almost magical object in terms of the self-identifications it provides, and in off-screen moments I am always highly conscious of being an iPad user. It’s a happy interpellation, one I accept enthusiastically, turning with eagerness toward the policeman’s call. It’s anything but a transparent experience, and the money I give Apple goes at least as much to support my own subjectification as to underwrite a particular set of technological and creative affordances. The new iPad lacks this aura, so for now, I’ll stick with what I have.

Category confusion

Picture it: three men on a plane, ranged unluckily in the same row of seats, a chance adjacency we each interpreted as punishment by the fates. To my right in the window seat, a college student slouching in sweatpants, about the same size as me (six foot three, two hundred twenty-ish pounds) but with a dense muscularity, a neutron-star version of my slack and dissipated forty-five-year-old self. To my left, a rotund gentleman my age or a little older. As I maneuvered past him to take my place in the middle seat, he said with a combination of apology and accomplishment, “I lost 100 pounds; now I’m working on the next 100.”

I rarely find myself in the Goldilocks zone, but there I was, sandwiched between a smaller and a larger version of myself, all of us wedging shoulders uncomfortably for the two-hour flight from Atlanta to Detroit.

But more interesting than the cramps I courteously self-inflicted by holding myself in a polite pretzel (a pacifying topographic adaptation to the shape envelopes of my flanking neighbors) was the way, for the first time, I was tricked by new media. An iPad picked the pocket of my imaginary.

To the left, big neighbor read his big novel, a heavy brick of paper. To the right, college student played his PSP. I, in the middle, read The Passage, by Justin Cronin, on my iPad. A bell gonged, the pilot said we were on approach to DTW, the flight attendant told us to turn off our electronic devices. The PSP got put away; the print novel didn’t. And my iPad? It stayed open throughout the landing; it wasn’t until we touched down that I realized, with a guilty start, that I had forgotten it was an electronic device at all.

Ebooks and ereading are not natural to me: they have felt unpleasantly frictionless and inherently duplicitous in their mimicry of an ontologically distinct media experience. But today something changed; the contents of my mind shifted during travel, and I accepted the iPad into that group of personal technologies I pay the high compliment of naturalizing by forgetting they are technologies in the first place.

What is … Watson?

We have always loved making our computers perform. I don’t say “machines” — brute mechanization is too broad a category, our history with industrialization too long (and full of skeletons). Too many technological agents reside below the threshold of our consciousness: the dumb yet surgically precise robots of the assembly line, the scrolling tarmac of the grocery-store checkout counter that delivers our purchases to another unnoticed workhorse, the cash register. The comfortable trance of capitalism depends on labor’s invisibility, and if social protocols command the human beings on either side of transactions to at least minimally acknowledge each other — in polite quanta of eye contact, murmured pleasantries — we face no such obligation with the machines to whom we have delegated so much of the work of maintaining this modern age.

But computers have always been stars, and we their anxious stage parents. In 1961 an IBM 704 was taught to sing “Daisy Bell” (inspiring a surreal passage during HAL’s death scene in 2001: A Space Odyssey), and in 1975 Steve Dompier made his hand-built Altair 8800 do the same, buzzing tunes through a radio speaker at a meeting of the Homebrew Computer Club, an early collective of personal-computing enthusiasts. I was neither old enough nor skilled enough to take part in that initial storm surge of the microcomputer movement, but, like many born in the late 1960s, I was perfectly poised to catch the waves that crashed through our lives in the late 70s and early 80s: the TRS-80, Apple II, and Commodore PET; video arcades; consoles and cartridges for playing at home, hooked to the TV in a primitive convergence between established and emerging technologies, conjoined by their to-be-looked-at-ness.

Arcade cabinets are meant to be clustered around, joysticks passed around an appreciative couchbound audience. Videogames of any era show off the computer’s properties and power, brightly blipping messages whose content, reversing McLuhan, is new media, presenting an irresistible call both spectacular and interactive to any nerds within sensory range. MIT’s Spacewar worked both as game and graphics demo, proof of what the state of the art in 1962 could do: fifty years later, the flatscreens of Best Buy are wired to Wiis and PlayStation 3s, beckoning consumers in endless come-on (which might be one reason why the games in so many franchises have become advertisements for themselves).

But the popular allure of computers isn’t only in their graphics and zing. We desire from them not just explorable digital worlds but minds and souls themselves: another sentient presence here on earth, observing, asking questions, offering commentary. We want, in short, company.

Watson, the IBM artifact currently competing against champions Ken Jennings and Brad Rutter on Jeopardy, is the latest digital ingenue to be prodded into the spotlight by its earnest creators (a group that in reaction shots of the audience appears diverse, but whose public face in B-roll filler sums to the predictable type: white, bespectacled, bearded, male). Positioned between Jennings and Rutter, Watson is a black slab adorned with a cheerful logo, er, avatar, conveying through chance or design an uneasy blend of 2001’s monolith and an iPad. In a nearby non-space hums the UNIVAC-recalling bulk of his actual corpus, wired to a pushbutton whose humble solenoid — his means of ringing in for answers — is both a cute nod to our own evolution-designed hardware and a sad reminder that we still need to level the playing field when fighting Frankenstein’s Monster.

There are two important things about Watson, and despite the technical clarifications provided by the informational segments that periodically and annoyingly interrupt the contest’s flow, I find it almost impossible to separate them in my mind. Watson knows a lot; and Watson talks. Yeats asked, “How can we know the dancer from the dance?” Watson makes me wonder how much of the Turing Test can be passed by a well-designed interface, like a good-looking kid in high school charming teachers into raising his grades. Certainly, it is easy to invest the AI with a basic identity and emotional range based on his voice, whose phonemes are supplied by audiobook narrator Jeff Woodman but whose particular, peculiar rhythms and mispronunciations — the foreign accent of speech synthesis, as quaint as my father’s Czech-inflected English — are the quirky epiphenomena of vast algorithmic contortions.

Another factor in the folksiness of Watson is that he sounds like a typical Jeopardy contestant — chirpy, nervous, a little full of himself — and so highlights the vaguely androidish quality of the human players. IBM has not just built a brain in a box; they’ve built a contestant on a TV game show, and it was an act of genius to embed this odd cybernetic celebrity, half quick-change artist, half data-mining savant, in the parasocial matrix of Alex Trebek and his chronotopic stage set: a reality already half-virtual. Though I doubt the marketing forces at IBM worried much about doomsday fears of runaway AIs, the most remarkable thing about Watson may be how benign he seems: an expert, and expertly unthreatening, system. (In this respect, it’s fitting that the name evokes not the brilliant and erratic Sherlock Holmes but his perpetually one-step-behind assistant, even if IBM officially named the machine after its founder, Thomas J. Watson.)

Before the competition started, I hadn’t thought much about natural-language processing and its relationship to the strange syntactic microgenre that is the Jeopardy question. But as I watched Watson do his weird thing, mixing moronic stumbles with driving sprints of unstoppable accuracy, tears welled in my eyes at the beautiful simplicity of the breakthrough. Not, of course, the engineering part — which would take me several more Ph.D.s (and a whole lotta B-roll) to understand — but the idea of turning Watson into one of TV’s limited social beings, a plausible participant in established telerituals, an interlocutor I could imagine as a guest on Letterman, a relay in the quick-moving call-and-response of the one quiz show that has come to define, for a mass audience, high-level cognition, constituted through a discourse of cocky yet self-effacing brilliance.

Our vantage point on Watson’s problem-solving process (a window of text showing his top three answers and level of confidence in each) deromanticizes his abilities somewhat: he can seem less like a thinking agent than an overgrown search engine, a better-behaved version of those braying “search overload” victims in the very obnoxious Bing ads. (Tip to Microsoft: stop selling your products by reminding us how much it is possible to hate them.) But maybe that’s all we are, in the end: social interfaces to our own stores of internal information and experience, talkative filters customized over time (by constant interaction with other filters) to mistake ourselves for ensouled humans.
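Since I brought up that confidence window: here is a minimal sketch, in Python and purely my own illustration rather than anything IBM has published, of what a top-three answer panel with a buzz threshold might boil down to once the heavy natural-language lifting is done. The candidate answers, scores, and threshold below are invented.

```python
# A toy stand-in for Watson's on-screen answer panel: rank candidate answers by
# confidence, display the top three, and ring in only if the best one clears a
# threshold. The candidates, scores, and threshold are invented for illustration;
# the real pipeline that produces such confidences is vastly more complicated.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float  # 0.0 to 1.0

def answer_panel(candidates, buzz_threshold=0.5, top_n=3):
    """Return the top-N candidates and whether to ring in for this clue."""
    ranked = sorted(candidates, key=lambda c: c.confidence, reverse=True)
    top = ranked[:top_n]
    should_buzz = bool(top) and top[0].confidence >= buzz_threshold
    return top, should_buzz

if __name__ == "__main__":
    clue_candidates = [
        Candidate("What is a chronotope?", 0.72),
        Candidate("What is a palimpsest?", 0.18),
        Candidate("What is an epistemology?", 0.04),
    ]
    top, buzz = answer_panel(clue_candidates)
    for c in top:
        print(f"{c.answer:<28} {c.confidence:.0%}")
    print("Ring in!" if buzz else "Stay silent.")
```

The interesting part, of course, is everything this sketch leaves out: how those confidence numbers get made in the first place.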

At the end of the first game on Tuesday night, Watson was ahead by a mile. We’ll see how he does in the concluding round tonight. For the life of me, I can’t say whether I want him to win or to lose.

Awaiting Avatar

Apparently Avatar, which opened on Friday at an immersive neural simulation pod near you, posits an intricate and very real connection between the natural world and its inhabitants: animus in action, the Gaia Hypothesis operationalized on a motion-capture stage. If this is so — if some oceanic metaconsciousness englobes and organizes our reality, from blood cells to weather cells — then perhaps it’s not surprising that nature has provided a perfect metaphor for the arrival of James Cameron’s new film in the form of a giant winter storm currently coloring radar maps white and pink over most of the eastern seaboard, and trapping me and my wife (quite happily) at home.

Avatar comes to mind because, like the blizzard, it’s been approaching for some time — on a scale of years and months rather than hours and minutes, admittedly — and I’ve been watching its looming build with identical avidity. I know Avatar’s going to be amazing, just as I knew this weekend’s storm was going to be a doozy (the expectation is 12-18 inches in the Philadelphia area, and out here in our modest suburb, the accumulation is already enough to make cars look as though they have fuzzy white duplicates of themselves balanced on their roofs). In both cases, of course, this foreknowledge is not as monolithic or automatic a thing as it might appear. The friendly meteorologists on the Weather Channel had to instruct me in the storm’s scale and implacability, teaching me my awe in advance; similarly, we all (and I’m referring here to the entire population of planet earth) have been well and thoroughly tutored in the pleasurable astonishment that awaits us when the lights go down and we don our 3D glasses to take in Cameron’s fable of Jake Sully’s time among the Na’vi.

If it isn’t clear yet, I haven’t seen Avatar. I’m waiting out the weekend crowds (and, it turns out, a giant blizzard) and plan to catch a matinee on Tuesday, along with a colleague and her son, through whose seven-year-old subjectivity I ruthlessly intend to focalize the experience. (I did something similar with my nephew, then nine, whom I took to see The Phantom Menace in 1999; turns out the prequels are much more watchable when you have an innocent beside you with no memory of what George Lucas and Star Wars used to be.) But I still feel I know just about everything there is to know about Avatar, and can name-drop its contents with confidence, thanks to the broth of prepublicity in which I’ve been marinating for the last several weeks.

All of that information, breathlessly assuring me that Avatar will be either complete crap (the /tv/ anons on 4chan) or something genuinely revolutionary (everyone else), partakes of a cultural practice spotlighted by my friend Jonathan Gray in his smart new book Show Sold Separately: Promos, Spoilers, and Other Media Paratexts. While we tend to speak of film and television in an always-already past tense (“Did you see it?” “What did you think?”), the truth is something very different. “Films and television programs often begin long before we actively seek them out,” Jon observes, going on to write about “the true beginnings of texts as coherent clusters of meaning, expectation, and engagement, and about the text’s first initial outposts, in particular trailers, posters, previews, and hype” (47). In this sense, we experience certain media texts a priori — or rather, we do everything but experience them, gorging on adumbration with only that tiny coup de grace, the film itself, arriving at the end to provide a point de capiton.

The last time I experienced anything as strong as Avatar’s advance shockwave of publicity was with Paranormal Activity (and, a couple of years before that, with Cloverfield), but I am not naive enough to think such occurrences rare, particularly in blockbuster culture. If anything, the infrequency with which I really rev up before a big event film suggests that the well-coordinated onslaught is as much an intersubjective phenomenon as an industrial one; marketing can only go so far in setting the merry-go-round in motion, and each of us must individually make the choice to hop on the painted horse.

And having said that, I suppose I may not be as engaged with Avatar’s prognosticatory mechanisms as I claim to be. I’ve kept my head down, refusing to engage fully with the tableaux being laid out before me. As a fan of science-fiction film generally, and visual effects in particular, this seemed only wise; in the face of Avatar hype, the only choices appear to be total embrace or outright and hostile rejection. I want neither to bless nor curse the film before I see it. But it’s hard to stay neutral, especially when a film achieves such complete (if brief) popular saturation and friends who know I study this stuff keep asking me for my opinion. (Note: I am very glad that friends who know I study this stuff keep asking me for my opinion.)

So, a few closing thoughts on Avatar, offered in advance of seeing the thing. Think of them as open-ended clauses, half-told jokes awaiting a punchline; I’ll come back with a new post later this week.

  • Language games. One aspect of the film that’s drawn a great deal of attention is the invention of a complete Na’vi vocabulary and grammar. Interesting to me as an example of Cameron’s endless depth of invention — and desire for control — as well as an aggressive counter to the Klingon linguistics that arose more organically from Star Trek. Will fan cultures accrete around Avatar as hungrily as they did around that more slowly-building franchise, their consciousness organized (to misquote Lacan) by a language?
  • Start the revolution without me. We’ve been told repeatedly and insistently that Avatar is a game-changer, a paradigm shift in science-fiction storytelling. For me, the question this raises is not Is it or isn’t it? but rather, What is the role of the revolutionary in our SF movies, and in filmmaking more generally? How and why, in other words, is the “breakthrough” marketed to us as a kind of brand — most endemically, perhaps, in movies like Avatar that wear their technologies on their sleeve?
  • Multiple meanings of “Avatar.” The film’s story, as by now everyone knows, revolves around the engineering of alien bodies in which human subjectivities can ride, a kind of biological cosplay. But on another, artifactual level, avatarial bodies and mechanisms of emotional “transfer” underpin the entire production, which employs performance capture and CG acting at an unprecedented level. In what ways is Avatar a movie about itself, and how do its various messages about nature and technology interact with that supertext?

Digital Dogsbodies

It’s funny — and perhaps, in the contagious-episteme fashion of Elisha Gray and Alexander Graham Bell filing patents for the telephone on the very same date, a bit creepy — that Dan North of Spectacular Attractions should write on the topic of dog movies while I was preparing my own post about Space Buddies. This Disney film, which pornlike skipped theaters to go straight to DVD and Blu-Ray, is one of a spate of dog-centered films that have become a crowdpleasing filmic staple of late. Dan poses the question, “What is it about today that people need so many dog movies?” and goes on to speculate that we’re collectively drowning our sorrows at the world’s ugliness with a massive infusion of cute: puppyism as cultural anodyne.

Maybe so. It seems to me, though, that another dynamic is in operation here — and with all due respect to my fellow scholar of visual effects, Dan may be letting the grumbly echoes of the Frankfurt School distract him from a fascinating nexus of technology, economics, and codes of expressive aesthetics driving the current crop of cinematic canines. Simply put, dogs make excellent cyberstars.

Think about it. Nowadays we’re used to high-profile turns by hybrids of human and digital performance: Angelina Jolie as Grendel’s goldplated mother in Beowulf, Brad Pitt as the wizened baby in The Curious Case of Benjamin Button. (Hmm, it only now strikes me that this intertextual madonna-and-child are married in real life; perhaps the nuclear family is giving way to the mo-capped one?) Such top-billed performances are based on elaborate rendering pipelines, to be sure, but their celebrity/notoriety is at least as much about the uniquely sexy and identifiable star personae attached to these magic mannequins: a higher order of compositing, a discursive special effect. It takes a ton of processing power to paint the sutured stars onto the screen, and an equivalent amount of marketing and promotion — those other, Foucauldian technologies — to situate them as a specific case of the more general Steady March Toward Viable Synthespianism. Which means, in terms of labor and capital, they’re bloody expensive. Mountains of data are moved in service of the smallest details of realism, and even then, nobody can get the eyes right.

But what of the humble cur and the scaled-down VFX needed to sell its blended performance? The five puppy stars of Space Buddies are real, indexically photographed dogs with digitally retouched jaw movements and eyebrow expressions; child voice actors supply the final, intangible, irreplaceable proof of character and personality. (To hell with subsurface skin scatter and other appeals to our pathetically seducible eyes; the real threshold of completely virtual performance remains believable speech synthesis.) The canine cast of Beverly Hills Chihuahua, while built on similar principles, is ontologically closer to the army of Agent Smiths in The Matrix Reloaded’s burly brawl — U-Capped fur wrapped over 3D doll armatures and arrayed in Busby-Berkeleyish mass ornament. They are, in short, digital dogsbodies, and as we wring our hands over the resurrection of Fred Astaire in vacuum-cleaner ads and debate whether Ben Burtt’s sound design in Wall-E adds up to a best-actor Oscar, our screens are slowly filling with animals’ special-effects-driven stardom. How strange that we’re not treating them as the landmarks they are — despite their immense profitability, popularity, and paradoxical commonplaceness. It’s like Invasion of the Body Snatchers, only cuddly!

I don’t mean to sound alarmist — though writing about the digital’s supposed incursion into the real always seems to bring out the edge in my voice. In truth, the whole thing seems rather wonderful to me, not just because I really dug Space Buddies, but because the dogsbody has been around a long time, selling audiences on the dramatic “realism” of talking animals. From Pluto to Astro, Scooby Doo to Rowlf, Lady and the Tramp to K-9, Dynomutt, and Gromit, dogs have always been animated beyond their biological station by technologies of the screen; we accept them as narrative players far more easily than we do more elaborate and singular constructions of the monstrous and exotic. The latest digital tools for imparting expression to dogs’ mouths and muzzles were developed, of all places, in pet-food ads: clumsy stepping stones that now look as dated as poor LBJ’s posthumous lipsynching in Forrest Gump.

These days it’s the rare dog (or cat, bear, or fish) onscreen whose face hasn’t been partially augmented with virtual prosthetics. Ultimately, this is less about technological capability than the legal and monetary bottom line: unlike human actors, animal actors can’t go ballistic on the lighting guy, or write cumbersome provisions into their contracts to copyright their “aura” in the age of mechanical reproduction. Our showbiz beasts exist near the bottom of the labor pool: just below that other mass of bodies slowly being fed into the meat-grinder of digitization, stuntpeople, and just above the nameless hordes of Orcs jam-packing the horizon shots of Lord of the Rings. I think it was Jean Baudrillard, in The System of Objects, who observed that pets hold a unique status, poised perfectly between people and things. It’s a quality they happen to share with FX bodies, and for this reason I expect we’ll see menageries in the multiplex for years to come.

Requiem for a Craptop

Today I said goodbye to the MacBook that served me and my wife for almost three years — served us tirelessly, loyally, without ever judging the uses to which we put it. It was part of our household and our daily routines, funneling reams of virtual paper past our eyeballs, taking our email dictation, connecting us with friends through Facebook and family through Skype. (Many was the Sunday afternoon I’d walk the MacBook around our house to show my parents the place; I faced into its camera as the bedrooms and staircases and kitchens scrolled behind me like a mutated first-person shooter or a Kubrickian steadicam.) We called it, affectionately, the Craptop; but there was nothing crappy about its animal purity.

It’s odd, I know, to speak this way about a machine, but then again it isn’t: I’m far too respectful of the lessons of science fiction (not to mention those of Foucault, Latour, and Haraway) to draw confident and watertight distinctions between our technologies and ourselves. My sadness about the Craptop’s departure is in part a sadness about my own limitations, including, of course, the ultimate limit: mortality. Even on a more mundane scale, the clock of days, I was unworthy of the Craptop’s unquestioning service, as I am unworthy of all the machines that surround and support me, starting up at the press of a button, the turn of a key.

The Craptop was not just a machine for the home, but for work: purchased by Swarthmore to assist me in teaching, it played many a movie clip and PowerPoint presentation to my students, flew many miles by airplane and rode in the back seat of many a car. It passes from my world now because the generous College has bought me a new unit, aluminum-cased and free of the little glitches and slownesses that were starting to make the Craptop unusable. It’s a mystery to me why and how machines grow old and unreliable — but no more, I suppose, than the mystery of why we do.

What happens to the Craptop now? Swarthmore’s an enlightened place, and so, the brand assures me, is Apple: I assume a recycling program exists to deconstruct the Craptop into ecologically neutral components or repurpose its parts into new devices. In his article “Out with the Trash: On the Future of New Media” (Residual Media, ed. Charles R. Acland, University of Minnesota Press, 2007), Jonathan Sterne writes eloquently and sardonically of the phenomenon of obsolete computer junk, and curious readers are well advised to seek out his words. For my part, I’ll just note my gratitude to the humble Craptop, and try not to resent the newer model on which, ironically, I write its elegy: soon enough, for it and for all of us, the end will come, so let us celebrate the devices of here and now.

Replicants

I look at Blade Runner as the last analog science-fiction movie made, because we didn’t have all the advantages that people have now. And I’m glad we didn’t, because there’s nothing artificial about it. There’s no computer-generated images in the film.

— David L. Snyder, Art Director

Any movie that gets a “Five-Disc Ultimate Collectors Edition” deserves serious attention, even in the midst of a busy semester, and there are few films more integral to the genre of science fiction or the craft of visual effects than Blade Runner. (Ordinarily I’d follow the stylistic rules about which I browbeat my Intro to Film students and follow this title with the year of release, 1982. But one of the many confounding and wonderful things about Blade Runner is the way in which it resists confinement to any one historical moment. By this I refer not only to its carefully designed and brilliantly realized vision of Los Angeles in 2019 [now a mere 11 years away!] but the many-versioned indeterminacy of its status as an industrial artifact, one that has been revamped, recut, and released many times throughout the two and a half decades of its cultural existence. Blade Runner in its revisions has almost dissolved the boundaries separating preproduction, production, and postproduction — the three stages of the traditional cinematic lifecycle — to become that rarest of filmic objects, the always-being-made. The only thing, in fact, that keeps Blade Runner from sliding into the same sad abyss as the first Star Wars [an object so scribbled-over with tweaks and touch-ups that it has almost unraveled the alchemy by which it initially transmuted an archive of tin-plated pop-culture precursors into a golden original] is the auteur-god at the center of its cosmology of texts: unlike George Lucas, Ridley Scott seems willing to use words like “final” and “definitive” — charged terms in their implicit contract to stop futzing around with a collectively cherished memory.)

I grabbed the DVDs from Swarthmore’s library last week to prep a guest lecture for a seminar a friend of mine is teaching in the English Department, and in the course of plowing through the three-and-a-half-hour production documentary “Dangerous Days” came across the quote from David L. Snyder that opens this post. What a remarkable statement — all the more amazing for how quickly and easily it goes by. If there is a conceptual digestive system for ideas as they circulate through time and our ideological networks, surely this is evidence of a successfully broken-down and assimilated “truth,” one which we’ve masticated and incorporated into our perception of film without ever realizing what an odd mouthful it makes. There’s nothing artificial about it, says David Snyder. Is he referring to the live-action performances of Harrison Ford, Rutger Hauer, and Sean Young? The “retrofitted” backlot of LA 2019, packed with costumed extras and drenched in practical environmental effects from smoke machines and water sprinklers? The cars futurized according to the extrapolative artwork of Syd Mead?

No: Snyder is talking about visual effects — the virtuoso work of a small army headed by Douglas Trumbull and Richard Yuricich — a suite of shots peppered throughout the film that map the hellish, vertiginous altitudes above the drippy neon streets of Lawrence G. Paull’s production design. Snyder refers, in other words, to shots produced exclusively through falsification: miniature vehicles, kitbashed cityscapes, and painted mattes, each piece captured in multiple “passes” and composited into frames that present themselves to the eye as unified gestalts but are in fact flattened collages, mosaics of elements captured in radically different scales, spaces, and times but made to coexist through the layerings of the optical printer: an elaborate decoupage deceptively passing itself off as immediate, indexical reality.
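To make the layering logic concrete: what the optical printer did photochemically, a digital compositor does with the “over” operation, stacking separately photographed passes into a single frame. The sketch below is my own toy illustration, with invented two-by-two arrays standing in for a background plate, a miniature pass, and its matte; it is nobody’s production pipeline, least of all Trumbull’s.

```python
# Toy illustration of compositing: stacking separately captured "passes" into one
# frame. This is a digital analogue of what an optical printer does photochemically;
# the arrays and layer names here are invented for demonstration.

import numpy as np

def over(foreground_rgb, foreground_alpha, background_rgb):
    """The classic 'over' composite: lay a matted foreground element onto a background plate."""
    a = foreground_alpha[..., None]  # broadcast the matte across the RGB channels
    return foreground_rgb * a + background_rgb * (1.0 - a)

# Three hypothetical passes, each a tiny 2x2 "frame" with RGB values in [0, 1].
background_plate = np.full((2, 2, 3), 0.1)             # dark city haze
miniature_pass   = np.full((2, 2, 3), 0.8)             # brightly lit model element
miniature_matte  = np.array([[1.0, 0.0], [0.0, 1.0]])  # where the model occludes the plate

frame = over(miniature_pass, miniature_matte, background_plate)
print(frame)  # one flattened image assembled from separately photographed elements
```

Multiply that by dozens of elements per shot, each photographed at its own scale and moment, and you have the flattened collage I am describing.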

I get what Snyder is saying. There is something natural and real about the visual effects in Blade Runner; watching them, you feel the weight and substance of the models and lighting rigs, can almost smell the smoky haze being pumped around the light sources to create those gorgeous haloes, a signature of Trumbull’s FX work matched only by his extravagant ballet of ice-cream-cone UFOs amid boiling cloudscapes and miniature mountains in Close Encounters of the Third Kind. But what no one points out is that all of these visual effects — predigital visual effects — were once considered artificial. We used to think of them as tricks, hoodwinks, illusions. Only now that the digital revolution has come and gone, turning everything into weightless, effortless CG, do we retroactively assign the fakery of the past a glorious authenticity.

Or so the story goes. As I suggest above, and have argued elsewhere, the difference between “artificial” and “actual” in filmmaking is as much a matter of ideology as industrial method; perceptions of the medium are slippery and always open to contestation. Special and visual effects have always functioned as a kind of reality pump, investing the “nonspecial” scenes and sequences around them with an air of indexical reliability which is, itself, perhaps the most profound “effect.” With vanishingly few exceptions, actors speak lines written for them; stories are stitched into seamless continuity from fragments of film shot out of order; and, inescapably, a camera is there to record what’s happening, yet never reveals its own existence. Cinema is, prior to everything else, an artifact, and special effects function discursively to misdirect our attention onto more obvious classes of manipulation.

Now the computer has arrived as the new trick in town, enabling us to rebrand everything that came before as “real.” It’s an understandable turn of mind, but one that scholars and critics ought to navigate carefully. (Case in point: Snyder speaks as though computers didn’t exist at the time of Blade Runner. Yet it is only through the airtight registration made possible by motion-control cinematography, dependent on microprocessors for precision and memory storage for repeatability, that the film’s beautiful miniatures blend so smoothly with their surroundings.) It is possible, and worthwhile, to immerse ourselves in the virtual facade of ideology’s trompe-l’oeil — a higher order of special effect — while occasionally stepping back to acknowledge the brush strokes, the slightly imperfect matte lines that seam the composited elements of our thought.

Titles on the Fringe

Once again I find myself without much to add to the positive consensus surrounding a new media release; in this case, it’s the FOX series Fringe, which had its premiere on Tuesday. My friends and fellow bloggers Jon Gray and Geoff Long both give the show props, which by itself would have convinced me to donate the time and DVR space to watch the fledgling serial spread its wings. The fact that the series is a sleek update of The X-Files is just icing on the cake.

In this case, it’s a cake whose monster-of-the-week decorations seem likely to rest on a creamy backdrop of conspiracy; let’s hope Fringe (if it takes off) does a better job of upkeep on its conspiracy than did X-Files. That landmark series — another spawn of the FOX network, though from long ago when it was a brassy little David throwing stones at the Goliaths of ABC, NBC, and CBS — became nearly axiomatic for me back in 1993 when I stumbled across it one Friday night. I watched it obsessively, first by myself, then with a circle of friends; it was, for a time, a perfect example not just of “appointment television” but of “subcultural TV,” accumulating local fanbaselets who would crowd the couch, eat take-out pizza, and stay up late discussing the series’ marvelously spooky adumbrations and witty gross-outs. But after about three seasons, the show began to falter, and I watched in sadness as The X-Files succumbed to the fate of so many serial properties that lose their way and become craven copies of themselves: National Lampoon, American Flagg, Star Wars.

The problem with X-Files was that it couldn’t, over its unforgivably extended run of nine seasons, sustain the weavework necessary for a good, gripping conspiracy: a counterpoint of deferral and revelation, unbelievable questions flowing naturally from believable answers with the formal intricacy of a tango. After about season six, I couldn’t even bring myself to watch anymore; to do so would have been like visiting an aged and senile relative in a nursing home, a loved one who could no longer recognize me, or me her.

I have no idea whether Fringe will ever be as good as the best or as bad as the worst of The X-Files, but I’m already looking forward to finding out. I’ve written previously about J. J. Abrams and his gift for creating haloes of speculation around the media properties with which his name is associated, such as Alias, Lost, and Cloverfield. He’s good at the open-ended promise, and while he’s proven himself a decent director of standalone films (I’m pretty sure the new Star Trek will rock), his natural environment is clearly the serial structure of dramatic television narrative, which even in its sunniest incarnation is like a friendly conspiracy to satisfy week-by-week while keeping us coming back for more.

As I stated at the beginning, other commentators are doing a fine job of assessing Fringe’s premise and cast of characters. The only point I’ll add is that the show’s signature visual — as much a part of its texture as the timejumps on Lost or the fades-to-white on Six Feet Under — turns me on immensely. I’m speaking, of course, about the floating 3D titles that identify locale, as in this shot:

Jon points out that the conceit of embedding titles within three-dimensional space has been done previously in Grand Theft Auto 4. Though that videogame’s grim repetitiveness was too much (or not enough) for this gamer, I appreciated the title trick, and recognized it as having an even longer lineage. The truth is, embedded titles have been “floating” around the mediascape for several years. The first time I noticed them was in David Fincher’s magnificent, underrated Panic Room. There, the opening credits unfold in architectural space, suspended against the buildings of Manhattan in sunlit Copperplate:

My fascination with Panic Room, a high-tech homage to Alfred Hitchcock in which form mercilessly follows function (the whole film is a trap, a cinematic homology of the brownstone in which Jodie Foster defends herself against murderous intruders), began with that title sequence and only grew. Notice, for example, how Foster’s name lurks in the right-hand corner of one shot, as though waiting for its closeup in the next:

The work of visual-effects houses Picture Mill and Computer Cafe, Panic Room’s embedded titles make us acutely uneasy by conflating two spaces of film spectatorship that ordinarily remain reassuringly separate: the “in-there” of the movie’s action and the “out-here” of credits, subtitles, musical score, and other elements that are of the movie but not perceivable by the characters in the storyworld. It’s precisely the difference between diegetic and nondiegetic, one of the basic distinctions I teach students in my introductory film course.

But embedded titles such as the ones in Panic Room and Fringe confound easy categorical compartmentalization, rupturing the hygienic membrane that keeps the double registers of filmic phenomenology apart. The titles hang in an undecidable place, with uncertain epistemological and ontological status, like ghosts. They are perfect for a show that concerns itself with the threads of unreality that run through the texture of the everyday.
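(A brief technical aside for the curious: what makes a title “embedded” rather than merely overlaid is that it is treated as an object anchored in the scene’s three-dimensional space and projected through a virtual camera matched to the live-action one, so its screen position shifts in perspective as the camera moves. The sketch below is a deliberately crude illustration of that idea, a bare pinhole projection with made-up numbers, and in no way the Picture Mill or Fringe pipeline.)

```python
# Illustrative sketch: why an "embedded" title moves like part of the scene.
# A title anchored at a fixed world-space point is projected through a virtual
# camera matched to the live-action camera; as the camera translates, the title's
# screen position shifts in perspective, unlike an ordinary overlay that stays put.
# All numbers here are invented for demonstration.

import numpy as np

def project(point_world, camera_position, focal_length=800, image_center=(960, 540)):
    """Pinhole projection of a 3D point into pixel coordinates (camera looking down +Z)."""
    p = np.asarray(point_world, dtype=float) - np.asarray(camera_position, dtype=float)
    x = focal_length * p[0] / p[2] + image_center[0]
    y = focal_length * p[1] / p[2] + image_center[1]
    return x, y

title_anchor = (2.0, -1.0, 20.0)  # the floating title, fixed in the scene

for cam in [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]:  # a slow sideways dolly
    print(cam, "->", project(title_anchor, cam))
# The title's pixel position drifts with the camera move, which is exactly what
# sells it as an object "in there" with the actors rather than a caption "out here."
```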

Ironically, the titles on Fringe are receiving criticism from fans like those on this Ain’t It Cool talkback, who see them as a cliched attempt to capitalize on an overworked idea:

The pilot was okay, but the leads were dull and the dialogue not much better. And the establishing subtitles looked like double ripoff of the opening credits of Panic Room and the “chapter 1” titles on Heroes. They’re “cool”, but they’ll likely become distracting in the long run.

I hated the 3D text … This sort of things has to stop. it’s not cool, David Fincher’s title sequence in Panic Room was stupid, stop it. It completly takes me out of the scene when this stuff shows up on screen. It reminds you you’re watching TV. It takes a few seconds to realize it’s not a “real” object and other characters, cars, plans, are not seeing that object, even though it’s perfectly 3D shaded to fit in the scene. And it serves NO PURPOSE other than to take you out of the scene and distract you. it’s a dumb, childish, show-off-y amateurish “let’s copy Fincher” thing, and I want it out of this and Heroes.

…I DVR’d the show while I was working, came in about 40 minutes into it before flipping over to my recording. They were outside the building at Harvard and I thought, “Hey cool, Harvard built huge letters spelling out their name outside one of their buildings.”… then I realized they were just ripping off the Panic Room title sequence. Weak.

The visual trick of embedded titles is, like any fusion of style and technology, a packaged idea with its own itinerary and lifespan; it will travel from text to text and medium to medium, picked up here in a movie, there in a videogame, and again in a TV series. In an article I published last year in Film Criticism, I labeled such entities “microgenres,” basing the term on my observation of the strange cultural circulation of the bullet time visual effect:

If the sprawling experiment of the Matrix trilogy left us with any definite conclusion, it is this: special effects have taken on a life of their own. By saying this, I do not mean simply to reiterate the familiar (and debatable) claim that movies are increasingly driven by spectacle over story, or that, in this age of computer-generated imagery (CGI), special effects are “better than ever.” Instead, bullet time’s storied trajectory draws attention to the fact that certain privileged special effects behave in ways that confound traditional understandings of cinematic narrative, meaning, and genre — quite literally traveling from one place to another like mini-movies unto themselves. As The Matrix’s most emblematic signifier and most quoted element, bullet time spread seemingly overnight to other movies, cloaking itself in the vestments of Shakespearean tragedy (Titus, 1999), high-concept television remake (Charlie’s Angels, 2000), caper film (Swordfish, 2001), teen adventure (Clockstoppers, 2002), and cop/buddy film (Bad Boys 2, 2003). Furthermore, its migration crossed formal boundaries into animation, TV ads, music videos, and computer games, suggesting that bullet time’s look — not its underlying technologies or associated authors and owners — played the determining role in its proliferation. Almost as suddenly as it sprang on the public scene, however, bullet time burned out. Advertisements for everything from Apple Jacks and Taco Bell to BMW and Citibank Visa made use of its signature coupling of slowed time and freely roaming cameras. The martial-arts parody Kung Pow: Enter the Fist (2002) recapped The Matrix’s key moments during an extended duel between the Chosen One (Steve Oedekerk) and a computer-animated cow. Put to scullery work as a sportscasting aid in the CBS Superbowl, parodied in Scary Movie (2000), Shrek (2001), and The Simpsons, the once-special effect died from overexposure, becoming first a cliche, then a joke. The rise and fall of bullet time — less a singular special effect than a named and stylistically branded package of photographic and digital techniques — echoes the fleeting celebrity of the morph ten years earlier. Both played out their fifteen minutes of fame across a Best-Buy’s-worth of media screens. And both hint at the recent emergence of an unusual, scaled-down class of generic objects: aggregates of imagery and meaning that circulate with startling rapidity, and startlingly frank public acknowledgement, through our media networks.

Clearly, embedded titles are undergoing a similar process, arising first as an innovation, then reproducing virally across a host of texts. Soon enough, I’m sure, we’ll see the parodies: imagine a film of the Scary Movie ilk in which someone clonks his head on a floating title. Ah, well: such is media evolution. In the meantime, I’ll keep enjoying the effect in its more sober incarnation on Fringe, where this particular package of signifiers has found a respectful — and generically appropriate — home.

Convention in a Bubble

A quick followup to my post from two weeks ago (a seeming eternity) on my gleeful, gluttonous anticipation of the Democratic and Republican National Conventions as high-def smorgasbords for my optic nerve. I watched and listened dutifully, and now — literally, the morning after — I feel stuffed, sated, a little sick. But that’s part of the point: pain follows pleasure, hangover follows bender. Soon enough, I’ll be hungry for more: who’s with me for the debates?

Anyway, grazing through the morning’s leftovers in the form of news sites and blogs, I was startled by the beauty of this interactive feature from the New York Times, a 360-degree panorama of the RNC’s wrapup. It’s been fourteen years since QuickTime technology pervily cross-pollinated Star Trek: The Next Generation’s central chronotope, the U.S.S. Enterprise 1701-D, in a wondrous piece of reference software called the Interactive Technical Manual. I remember being glued to the 640×480 display of my Macintosh whatever-it-was (the Quadra? the LC?), exploring the innards of the Enterprise from stem to stern through little QuickTime VR windows within which, by clicking and dragging, you could turn in a full circle, look up and down, zoom in and out. Now a more potent and less pixelated descendant of that trick has been used to capture and preserve for contemplation a bubble of spacetime from St. Paul, Minnesota, at the orgiastic instant of the balloons’ release which signaled the conclusion of the Republicans’ gathering.
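(For anyone wondering how such a bubble is steered from a fixed point, the heart of it is a mapping from where you are looking, a yaw and a pitch, into the stitched panoramic image. Here is a minimal, purely illustrative sketch of that mapping, my own toy version rather than QuickTime VR’s internals or the Times’ Flash viewer, which of course also warp an entire viewport, handle zoom, and interpolate between pixels.)

```python
# Toy sketch of panorama navigation: from a fixed viewpoint, a view direction
# (yaw, pitch) picks out a pixel in a stitched equirectangular image.
# The panorama dimensions and angles are invented for demonstration.

def direction_to_pixel(yaw_deg, pitch_deg, pano_width, pano_height):
    """Map a view direction to (column, row) in an equirectangular panorama.

    yaw: 0..360 degrees around the horizon; pitch: -90 (down) .. +90 (up).
    """
    col = (yaw_deg % 360.0) / 360.0 * pano_width
    row = (90.0 - pitch_deg) / 180.0 * pano_height  # +90 pitch maps to the top row
    return int(col) % pano_width, min(int(row), pano_height - 1)

# "Craning the virtual neck": sweep the gaze around and upward inside a
# hypothetical 4000x2000 stitched panorama.
for yaw, pitch in [(0, 0), (90, 0), (180, 30), (270, -45)]:
    print((yaw, pitch), "->", direction_to_pixel(yaw, pitch, 4000, 2000))
```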

Quite apart from the political aftertaste (and let’s just say that this week was like the sour medicine I had to swallow after the Democrats’ spoonful of sugar), there’s something sublime about clicking around inside the englobed map. Hard to pinpoint the precise location of my delight: is it that confetti suspended in midair, like ammo casings in The Matrix’s bullet-time shots? The delegates’ faces, receding into the distance until they become as abstractly innocent as a galactic starfield or a sprinkle-encrusted doughnut? Or is it the fact of navigation itself, the weirdly pleasurable contradiction between my fixed immobility at the center of this reconstructed universe and the fluid way I crane my virtual neck to peer up, down, and all around? Optical cryptids such as this confound the classical Barthesian punctum. So like and yet unlike the photographic, cinematographic, and ludic regimes that are its parents (parents probably as startled and dismayed by their own fecundity as the rapidly multiplying Palin clan), the image-machine of the Flash bubble has already anticipated the swooping search paths of my fascinated gaze and embedded them algorithmically within itself.

If I did have to choose the place I most love looking, it would be at the faces captured nearest the “camera” (here in virtualizing quotes because the bubble actually comprises several stitched-together images, undercutting any simple notion of a singular device and instant of capture). Peering down at them from what seems just a few feet away, the reporters seem poignant — again, innocent — as they stare toward center stage with an intensity that matches my own, yet remain oblivious to the panoptic monster hanging over their heads, unaware that they have been frozen in time. How this differs from the metaphysics of older photography, I can’t say; I just know that it does. Perhaps it’s the ontology of the bubble itself, at once genesis and apocalypse: an expanding shock wave, the sudden blistered outpouring of plasma that launched the universe, a grenade going off. The faces of those closest to “me” (for what am I in this system? time-traveler? avatar? ghost? god?) are reminiscent of those stopped watches recovered from Hiroshima and Nagasaki, infinitely recording the split-second at which one reality ended while another, harsher and hotter, exploded into existence.

It remains to be seen what will come of this particular Flashpoint. For the moment — a moment which will last forever — you can explore the bubble to your heart’s content.