Technologies of Disappearance

My title is a lift from Alan N. Shapiro’s interesting and frustrating book on Star Trek as hyperreality, but what motivates me to write today are three items bobbing around in the news: two from the world of global image culture, the third from the world of science and technology.

Like Dan North, who blogs smartly on special effects and other cool things at Spectacular Attractions, I’m not deeply into the Olympics (either as spectator or commentator), but my attention was caught by news of what took place at last week’s opening ceremonies in Beijing. In the first case, Lin Miaoke, a little girl who sang the revolutionary hymn “Ode to the Motherland,” was, it turns out, lip-synching to the voice of another child, Yang Peiyi, who was found by the Communist Party Politburo to be insufficiently attractive for broadcast. And in the second case, a digitally-assisted shot of fireworks exploding in the nighttime sky was used in place of the evidently less-impressive real thing.

To expound on the Baudrillardian intricacies at play here hardly seems necessary: the two incidents were tied together instantly by the world press and packaged in headlines like “Fakery at the Olympics.” As often happens, the Mass Media Mind — churning blindly away like something from John Searle’s Chinese room thought experiment — has stumbled upon a rhetorical algorithm that tidily condenses several discourses: our simultaneous awe and dread of the powers of technological simulation; the sense that the Olympics embodies an omnivorous spectacularity threatening to consume and amplify beyond recognition all that is homely and human in scale; and good ol’ fashioned Orientalism, here resurrected as suspicion of the Chinese government’s tendency toward manipulation and disguise. (Another “happy” metaphorical alignment: the visibility-cloaking smog over Beijing, so ironically photogenic as a contrast to the crisp and colorful mass ornament of the crowded, beflagged arenas.)

If anything, this image-bite of twinned acts of deception functions, itself, as another and trickier device of substitution. Judging the chicanery, we move within what Adorno called the closed circle of ideology, smugly wielding criticism while failing to escape the orbit of readymade meanings to question the more fundamental issues at stake. We enjoy, that is, our own sense of scandal, thinking it premised on a sure grasp of what is true and indexical — the real singer, the unaltered skies — and thus reinscribe a belief that the world can be easily sorted into what is real and what is fake.

Of course it’s all mediated, fake and real at the same time, calibrated as cunningly as the Olympics themselves. Real bodies on bright display in extremes of exertion unimaginable by this couch potato: the images on my high-def screen have rarely been so viscerally indexical in import, every grimace and bead of sweat a profane counterpoint to sacred ballistics of muscled motion. But I fool myself if I believe that the reality of the event is being delivered to me whole. Catching glimpses of the ongoing games as I shuffle through surrounding channels of televisual flow is like seeing a city in flickers from a speeding train: the experience julienned by commercials and camera cuts, embroidered by thickets of helpful HUD graphics and advertisers’ eager logos. Submerged along another axis entirely is the vanished reality of the athletes’ training: eternities of drilling and repetition, an endless dull disciplining at profound odds with the compacted, adrenalized, all-or-nothing showstoppers of physical prowess.

Maybe the collective fascination of the lip-synching stems from our uncomfortable awareness that we’re engaged in a vicarious kind of performance theft, sitting back and dining on the visual feast of borrowed bodily labor. And maybe the sick appeal of the CG fireworks is our guilty knowledge that human beings are functioning as special effects themselves, there to elicit oohs and ahs. All I know is that the defense offered up by the guy I heard on BBC World News this morning seems to radically miss the point. Madonna and Britney lip-synch, he said: why is this any different? As for the digital fireworks, did we really expect helicopters to fly close to the airborne pyrotechnics? The cynicism of the first position, that talent is always a manufactured artifact, is matched by the blasé assumption of the second, permuting what we might call the logic of the stuntman: if an exploit is too dangerous for a lead actor to perform, sub in a body worth a little less. In the old days, filmmakers did it with people whose names appeared only in the end credits (and then not among the cast). Nowadays, filmmakers hand the risk over to technological stand-ins. In either case, visualization has trumped representation, the map preceding the territory.

But I see I’ve fallen into the trap I outlined earlier, dressing up in windy simulationist rhetoric a more basic dismay. Simply put, I’m sad to think of Yang Peiyi’s rejection as unready for global prime time, based on a chubby face and some crooked teeth (features, let me add, now unspooling freely across the world’s screens — anyone else wondering how she’ll feel at age twenty about having been enshrined as the Ugly Duckling?). Prepping my Intro to Film course for the fall, I thought about showing Singin’ in the Rain — beneath its happy musical face a parable of insult in which pretty but untalented people hijack the vocal performances of skilled but unglamorous backstage workers. Hey, I was kind of a chubby-faced, snaggle-toothed kid too, but at least I got to sing my own part (Frank Butler) in Annie Get Your Gun.

In other disappearance news: scientists are on their way to developing invisibility. Of this I have little to say, except that I’m relieved the news is getting covered at all. There’s more than one kind of disappearance, and if attention to events at Berkeley and Beijing is reassuring in any measure, it’s in the “making visible” of cosmetic technologies that, in their amnesiac emissions and omissions, would otherwise sand off the rough, unpretty edges of the world.

Soul of a New Machine

Not much to add to the critical consensus around WALL-E; trusted voices such as Tim Burke’s, as well as the distributed hive mind of Rotten Tomatoes, agree that it’s great. Having seen the movie yesterday (a full two days after its release, which feels like an eternity by the clockspeed of media blogging), I concur — and leave as given my praise for its instantly empathetic characters, striking environments, and balletic storytelling. It’s the first time in a while that tears have welled in my eyes just at the beautiful precision of the choices being made 24 times per second up on the big screen; a happy recognition that Pixar, over and over, is somehow nailing it at both the fine level of frame generation and the macro levels of marketplace logic and movie history. We are in the midst of a classic run.

Building on my comments on Tim’s post, I’m intrigued by the trick Pixar has pulled off in positioning itself amid such turbulent crosscurrents of technological change and cinematic evolution: rapids aboil with mixed feelings about nostalgia for a golden age versus the need to stay new and fresh. The movies’ mental market share — the grip in which the cinematic medium holds our collective imaginary — is premised on an essential contradiction between the pleasures of the familiar and the equally strong draw of the unfamiliar. That dialectic is visible in every mainstream movie as a tension between the predictability of genre patterns and the discrete deformations we systematize and label as style.

But nowadays this split has taken on a new visibility, even a certain urgency, as we confront a cinema that seems suddenly digital to its roots. Hemingway wrote that going bankrupt happens two ways: gradually, then all at once. The same seems true of computer technology’s encroachment on traditional filmmaking practices. We thought it was creeping up on us, but in a seeming eyeblink, it’s everywhere. Bouncing around inside the noisy carnival of the summer movie season, careening from the waxy simulacrum of Indiana Jones into the glutinous candied nightmare of Speed Racer, it’s easy to feel we’re waking up the morning after an alien invasion, to find ourselves lying in bed with an uncanny synthetic replacement of our spouse.

Pixar’s great and subtle achievement is that it makes the digital/cinema pod-people scenario seem like a simple case of Capgras Syndrome, a fleeting patch of paranoia in which we peer suspiciously at our movies and fail to recognize them as being the same lovable old thing as always. With its unbroken track record of releases celebrated for their “heart,” Pixar is marking out a strategy for the successful future of a fully digital cinema. The irony, of course, is that the studio is doing so by shrugging off its own cutting-edge nature, making high-tech products with low-tech content.

Which is not to say that WALL-E lacks in technological sublimity. On the contrary, it’s a ringing hymn to what machines can do, both in front of and behind the camera. More so than the plastic baubles of Toy Story, the chitinous carapaces of A Bug’s Life, the scales and fins of Finding Nemo or the polished chassis of Cars, the performers in WALL-E capture the fundamental gadgety wonder of a CG character: they look like little robots, but in another, more inclusive sense they are robots — cyborged 2D sandwiches of actors’ voices, animators’ keyframes, and procedural rendering. There’s a longstanding trope in Pixar films that the coldly inorganic can be brought to life; think of the wooden effigy of a bird built by the heroes of A Bug’s Life, or the existential yearnings of Woody and Buzz Lightyear in the Toy Story films. WALL-E, however, calibrates a much narrower metaphorical gap between its subject matter and its underlying mode of production. Its sweetly comic drama of machines whose preprogrammed functionalities are indistinguishable from their life force is like a reassuring parable of cinema’s future: whether the originating matrix is silicon or celluloid, our virtual pleasures will reflect (even enshrine) an enduring humanity.

I’ll forgo commentary on the story and its rich webwork of themes, except to note a felicitous convergence of technology’s hetero gendering and competing design aesthetics that remap the Macintosh’s white curves onto the eggy life-incubator of EVE — juxtaposed with a masculine counterpart in the ugly-handsome boxiness of PC and Linux worlds. I delighted in the film’s vision of an interstellar cruise liner populated by placid chubbies, but was also impressed by the opening 30-40 minutes set amid the ruins of civilization. It says something that for the second time this year, a mainstream science-fiction film has enticed us to imagine ourselves the lone survivor of a decimated earth, portraying this situation on one level as a prison of loneliness and on another as an extended vacation: tourists of the apocalypse. I refer here of course to the better-than-expected I Am Legend, whose vistas of a plague-depopulated Manhattan unfold in loving extended takes that invite Bazinian immersion and contemplation.

Beyond these observations, what stands out to me among the many pleasures of WALL-E are the bumper materials on either side of the feature: the short “Presto,” which precedes the main film, and the credit sequence that closes the show. Such paratexts are always meaningful in a Pixar production, but tend to receive less commentary than the “meat” of the movie. Tim points out accurately that “Presto” is the first time a Pixar short has captured the antic Dionysian spirit of a Tex Avery cartoon (though I’d add that Avery’s signature eruption of the id, that curvaceous caricature of womanhood, Red, was preemptively foregrounded by Jessica Rabbit in 1988’s Who Framed Roger Rabbit; such sex-doll humor seems unlikely to be emulated any time soon in Pixar’s family-friendly universe — though the Wolf could conceivably make an appearance). What I like about “Presto” is the short’s reliance on “portal logic” — the manifold possibilities for physical comedy and agonistic drama in the phenomenon of spatial bilocation, so smartly operationalized in the Valve videogame Portal.

As for the end credits of WALL-E, they are unexpectedly daring in scope, recapitulating the history of illustration itself — compressing thousands of years of representational practices in a span of minutes. As the first names appear onscreen, cave drawings coalesce, revealing what happens as robots and humans work together to repopulate the earth and nurse its ecosystem back to health. The cave drawings give way to Egyptian-style hieroglyphs and profiled 2D portraiture, Renaissance perspective drawings, a succession of painterly styles. Daring, then subversive: from Seurat’s pointillism, Monet’s impressionism, and Van Gogh’s loony swirls, the credits leap to 8-bit computer graphics circa the early 1980s — around the time, as told in David A. Price’s enjoyable history of the studio, that Pixar itself came into existence. WALL-E and his friends cavort in the form of jagged sprites, the same as you’d find in any Atari 2600 game, or perhaps remediated on the tiny screens of cell phones or the Wii’s retrographics.

I’m not sure what WALL-E’s credits are “saying” with all this, but surely it provides a clue to the larger logic of technological succession as it is being subtextually narrated by Pixar. Note, for example, that photography as a medium appears nowhere in the credits’ graphic roll call; more scandalously, neither does cinematography — nor animation. In Pixar’s restaging of its own primal scene, the digital emerges from another tradition entirely: one more ludic, more subjective and individualistic, more of an “art.” Like all ideologies, the argument is both transparently graspable and fathoms deep. Cautionary tale, recuperative fantasy, manufactured history doubling as road map for an uncertain digital future: Pixar’s movies, none more so than WALL-E, put it all over at once.

Retrographics and Multiplayer avant la lettre

Let me start with a disclosure: although I own both a Nintendo Wii and an Xbox 360, I almost exclusively play the latter. I’ve agonized over this. Why does my peak Wii moment remain the mercenary achievement of tracking one down last summer? Why haven’t the Wii-mote and its associated embodied play style inspired me to spend a fraction as many hours in front of the television as I’ve spent working through Beautiful Katamari, Valve’s Orange Box, Halo 3, and Need for Speed Carbon on the Xbox? The answer, it seems to me, comes down to graphics: Microsoft’s console simply pushes more pixels and throws more colors on my new HD TV, and I vanish into those neon geometries without looking back. I feel guilty about this, vaguely philistine, the same way I felt when I switched from Macintosh to PC. But there it is. Like Roy Neary (Richard Dreyfuss) in Close Encounters of the Third Kind, I go where the pretty lights lead me.

But that doesn’t make the phenomenon of the Wii any less fascinating, and the recent New York Times article on the top-selling console games of 2007 is compelling in its assertion that gamers are turning away from the kind of high-end techno-sublime represented by the Xbox 360 and the PlayStation 3 and toward the simpler graphics and more accessible play style of the Wii. It makes sense that a dialectic would emerge in videogames between the superadvanced aesthetic and its primitive-by-comparison cousin; the binary of shiny-new and gnarly-old has structured everything from Quake’s blend of futuristic cyborgs and medieval demons to Robert Zemeckis’s digital adaptation of the ancient Beowulf. Anyone who’s discovered the joy of bringing old 8-bit games to life with emulators like MAME knows that the pleasure of play involves an oscillation between where we’ve been and where we’re going; between what passes for new now and what used to do so; between the sensory thrill of the state-of-the-art and the nostalgia of our first innocent encounters with the videogame medium in all its subjectivity-transforming power.

A less elaborate way of putting it: the Wii represents, through its pared-down graphics, the return of a historical repressed, the enshrining of a certain simplicity that remains active at the medium’s heart, but until now has not been packaged and sold back to us with quite such panache.

The other interesting claim in the article is that the top games (World of Warcraft, Guitar Hero) are not solitary, solipsistic shooters like BioShock and Halo, but rich social experiences — you play them with other people, whether online or ranged around you in the dorm room. Seth Schiesel writes,

Ever since video games decamped from arcades and set up shop in the nation’s living rooms in the 1980s, they have been thought of as a pastime enjoyed mostly alone. The image of the antisocial, sunlight-deprived game geek is enshrined in the popular consciousness as deeply as any stereotype of recent decades.

The thing is, I can’t think of a time when the games I played as a child and teenager in the 1970s and 1980s weren’t social. I always consumed them with a sense of community, whether because my best friend Dan was with me, watching me play (or I watching him) and offering commentary, or because I talked about games endlessly with kids at school. Call it multiplayer avant la lettre; long before LANs and the internet made it possible to blast each other in mazes or admire each other’s avatarial stand-ins, we played our games together, making sense of them as a community — granted, a subculture maligned by the mainstream measure of high school, but a community nonetheless. As graphics get better and technologies more advanced, I hope that gamers don’t rewrite their pasts, forgetting the friendships forged in and around algorithmic culture.

Smut in 1080p

This article on the porn industry’s response to the HD DVD / Blu-ray format wars caught my eye, reminding me that changing technological standards are an equal-opportunity disrupter. It’s not only the big movie studios (like Warner Brothers, which made headlines last week by throwing its weight behind Blu-ray) that must adapt to the sensory promise and commercial peril of HD, but porn providers, television networks, and videogame makers: up and down and all around the messy landscape of contemporary media, its brightly-lit and family-friendly spaces as well as its more shadowy and stigmatized precincts.

The prospect of HD pornography is interesting, of course, because it’s such a natural evolution of this omnipresent yet disavowed form. The employment of media to stimulate, arouse, and drive to climax the apparatus of pleasure hard-wired into our brains and bodies is as old as, well, any medium you care to name. From painted scrolls to printed fiction, stag reels to feature films, comic books to KiSS dolls, porn has always been with us: the dirty little secret (dirty big secret, really, since very few are unaware of it) of a species whose unique co-evolution of optical acuity, symbolic play, and recording and playback instrumentalities has granted us the ability — the curse, some would say — to script and immerse ourselves in virtual realities on page and screen. That porn is now making the leap to a technology promising higher-fidelity imaging and increased storage capacity is hardly surprising.

The news also reminds us of the central, integral role of porn in the economic fortunes of a given medium. I remember discovering, as a teenager in the 1980s, that the mom-and-pop video stores springing up in my home town invariably contained a back room (often, for some reason, accessed through swinging wooden doors like those in an old-time saloon) of “adult” videocassettes. In the 1990s a friend of mine, manager of one of the chain video places that replaced the standalone stores, let me in on the fact that something like 60% of their revenues came from rentals of porn. The same “XXX factor” also structures the internet, providing a vastly profitable armature of explicit websites and chat rooms — to say nothing of the free and anonymous fora of newsgroups, imageboards, torrents, and file-sharing networks — atop which the allegedly dominant presence of Yahoo, Amazon, Google, etc. seems like a thin veneer of respectable masquerade, as flimsy a gateway as those swinging saloon doors.

The inevitable and ironic question facing HD porn is whether it will show too much, a worry deliciously summarized in the article’s mention of “concern about how much the camera would capture in high-definition.” The piece goes on to quote Albert Lazarito, vice president of Silver Sinema, as saying that “Imperfections are modified” by the format. (I suspect that Lazarito actually said, or meant, that imperfections are magnified.) The fact is that porn is frequently a grim, almost grisly, affair in its clinical precision. Unlike the soft-core content of what’s often labeled “erotica,” the blunt capture of sexual congress in porn tends to unfold in ghoulishly long takes, more akin to footage from a surveillance camera or weather satellite than to the suturing, storytelling grammar of classical Hollywood. Traditional continuity editing is reserved for the talky interludes between sexual “numbers,” resulting in a binary structure something like the alternation of cut-scenes and interactive play in many videogames. (And here let’s not forget the words attributed to id Software’s John Carmack, Edison of the 3D graphics engine, that “Story in a game is like a story in a porn movie. It’s expected to be there, but it’s not that important.”)

Porn is an industry that sometimes thrives on the paired physical and economic exploitation of its onscreen workers, and its imagery contains its share of bruises, needle marks, botched plastic surgeries, and poorly-concealed grimaces of boredom (at best) or pain (at worst). How will viewers respond to the pathos and suffering at the industry’s core — capitalism’s antihumanism writ large across the bodies offered up for consumers’ pleasure-at-a-distance — when those excesses are rendered in resolutions of 1920×1080?