Tron: Legacy

This review is dedicated to my friends David Surman and Will Brooker.

Part One: We Have Never Been Digital

***

If Avatar was in fact the “gamechanger” its proselytizers claimed, then it’s fitting that the first film to surpass it is itself about games, gamers, and gaming. Arriving in theaters nearly a year to the day after Cameron’s florid epic, Tron: Legacy delivers on the promise of an expanded blockbuster cinema while paradoxically returning it to its origins.

Those origins, of course, date back to 1982, when the first Tron — brainchild of Steven Lisberger, who more and more appears to be the Harper Lee of pop SF, responsible for a single inspired act of creation whose continued cultural resonance probably doomed any hope of a career — showed us what CGI was really about. I refer not to the actual computer-generated content in that film, whose 96-minute running time contains only 15-20 minutes of CG animation (the majority of the footage was achieved through live-action plates shot in high contrast, heavily rotoscoped, and backlit to insert glowing circuit paths into the environment and costumes), but instead to the discursive aura of the digital frontier it emits: another sexy, if equally illusory, glow. Tron was the first narrative feature film to serve up “the digital” as a governing design aesthetic as well as a marketing gimmick. Sold on the film as a high-tech entertainment event, audiences accepted Lisberger’s folly as precisely that: a time capsule from the future, coming attraction as main event. Tron taught us, in short, to mistake a hodgepodge of experiment and tradition for a more sweeping change in cinematic ontology, a spell we remain under to this day.

But the state of the art has always been a makeshift pact between industry and audience, a happy trance of “I know, but even so …” For all that it hinges on a powerful impression of newness, the self-applied declaration of vanguard status is, ironically, old hat in filmmaking, especially when it comes to the periodic eruptions of epic spectacle that punctuate cinema’s more-of-the-same equilibrium. The mutations of style and technology that mark film’s evolutionary leaps are impossible to miss, given how insistently they are promoted: go to YouTube and look at any given Cecil B. DeMille trailer if you don’t believe me. “Like nothing you’ve ever seen!” may be an irresistible hook (at least to advertisers), but it’s rarely true, if only because trailers, commercials, and other advance paratexts ensure we’ve looked at, or at least heard about, the breakthrough long before we purchase our tickets.

In the case of past breakthroughs, the situation becomes even more vexed. What do you do with a film like Tron, which certainly was cutting-edge at the time of its release, but which, over the intervening twenty-eight years, has taken on an altogether different veneer? I was 16 when I first saw it, and have frequently shown its most famous setpiece — the lightcycle chase — in courses I teach on animation and videogames. As a teenager, I found the film dreadfully inert and obvious, and rewatching it to prepare for Tron: Legacy, I braced myself for a similarly graceless experience. What I found instead was that a magical transformation had occurred. Sure, the storytelling was as clumsy as before, with exposition that somehow managed to be both overwritten and underexplained, and performances that were probably half-decent before an editor diced them into novocained amateurism. The visuals, however, had aged into something rather beautiful.

Not the CG scenes — I’d looked at those often enough to stay in touch with their primitive retrogame charm. I’m referring to the live-action scenes, or rather, the suturing of live action and animation that stands in for computer space whenever the camera moves close enough to resolve human features. In these shots, the faces of Flynn (Jeff Bridges), Tron (Bruce Boxleitner), Sark (David Warner), and the film’s other digital denizens are ovals of flickering black-and-white grain, their moving lips and darting eyes hauntingly human amid the neon cartoonage.

Peering through their windows of backlit animation, Tron‘s closeups resemble those in Dreyer’s Passion of Joan of Arc — inspiration for early film theorist Béla Balázs’s lyrical musings on “The Face of Man” — but are closer in spirit to the winking magicians of Georges Méliès’s trick films, embedded in their phantasmagoria of painted backdrops, double exposures, and superimpositions. Like Lisberger, who would intercut shots of human-scaled action with tanks, lightcycles, and staple-shaped “Recognizers,” Méliès alternated his stagebound performers with vistas of pure artifice, such as animated artwork of trains leaving their tracks to shoot into space. Although Tom Gunning argues convincingly that the early cinema of attractions operated by a distinctive logic in which audiences sought not the closed verisimilar storyworlds of classical Hollywood but the heightened, knowing presentation of magical illusions, narrative frameworks are the sauce that makes the taste of spectacle come alive. Our most successful special effects have always been the ones that — in an act of bistable perception — do double duty as story.

In 1982, the buzzed-about newcomer in our fantasy neighborhoods was CGI, and at least one film that year — Star Trek II: The Wrath of Khan — featured a couple of minutes of computer animation that worked precisely because they were set off from the rest of the movie, as a special documentary interlude. Other genre entries in that banner year for SF, like John Carpenter’s remake of The Thing and Steven Spielberg’s one-two punch of E.T. and Poltergeist (the latter as producer and crypto-director), were content to push the limits of traditional effects methods: matte paintings, creature animatronics, gross-out makeup, even a touch of stop-motion animation. Blade Runner‘s effects were so masterfully smoggy that we didn’t know what to make of them — or of the movie, for that matter — but we seemed to agree that they too were old school, no matter how many microprocessors may have played their own crypto-role in the production.

“Old school,” however, is another deceptively relative term, and back then we still thought of special effects as dividing neatly into categories of the practical/profilmic (which really took place in front of the camera) and optical/postproduction (which were inserted later through various forms of manipulation). That all special effects — and all cinematic “truths” — are at heart manipulation was largely ignored; even further from consciousness was the notion that soon we would redefine every “predigital” effect, optical or otherwise, as possessing an indexical authenticity that digital effects, well, don’t. (When, in 1998, George Lucas replaced some of the special-effects shots in his original Star Wars trilogy with CG do-overs, the outrage of many fans suggested that even the “fakest” products of ’70s-era filmmaking had become, like the Velveteen Rabbit, cherished realities over time.)

Tron was our first real inkling that a “new school” was around the corner — a school whose presence and implications became more visible with every much-publicized advance in digital imaging. Ron Cobb’s pristine spaceships in The Last Starfighter (1984); the stained-glass knight in Young Sherlock Holmes (1985); the watery pseudopod in The Abyss (1989); each in its own way raised the bar, until one day — somewhere around the time of Independence Day (1996), according to Michele Pierson — it simply stopped mattering whether a given special effect was digital or analog. In the same way that slang catches on, everything overnight became “CGI.” That newcomer to the neighborhood, the one who had people peering nervously through their drapes at the moving truck, had moved in and changed the suburb completely. Special-effects cinema now operated under a technological form of the one-drop rule: all it took was a dab of CGI to turn the whole thing into a “digital effects movie.” (Certain film scholars regularly use this term to refer to both Titanic [1997] and The Matrix [1999], neither of which employs more than a handful of digitally-assisted shots — many of these involving intricate handoffs from practical miniatures or composited live-action elements.)

Inscribed in each frame of Tron is the idea, if not the actual presence, of the digital; it was the first full-length rehearsal of a special-effects story we’ve been telling ourselves ever since. Viewed today, what stands out about the first film is what an antique and human artifact — an analog artifact — it truly is. The arrival of Tron: Legacy, simultaneously a sequel, update, and reimagining of the original, gives us a chance to engage again with that long-ago state of the art; to appreciate the treadmill evolution of blockbuster cinema, so devoted to change yet so fixed in its aims; and to experience a fresh and vastly more potent vision of what’s around the corner. The unique lure (and trap) of our sophisticated cinematic engines is that they never quite turn that corner, never do more than freeze for an instant, in the guise of its realization, a fantasy of film’s future. In this sense — to rephrase Bruno Latour — we have never been digital.

Part Two: 2,415 Times Smarter

***

In getting a hold on what Tron: Legacy (hereafter T:L) both is and isn’t, I find myself thinking about a line from its predecessor. Ed Dillinger (David Warner), figurative and literal avatar of the evil corporation Encom, sits in his office — all silver slabs and glass surfaces overlooking the city’s nighttime gridglow, in the cleverest and most sustained of the thematic conceits that run throughout both films: the paralleling, to the point of indistinguishability, of our “real” architectural spaces and the electronic world inside the computer. (Two years ahead of Neuromancer and a full decade before Snow Crash, Tron invented cyberspace.)

Typing on a desk-sized touchscreen keyboard that neatly predates the iPad, Dillinger confers with the Master Control Program or MCP, a growling monitorial application devoted to locking down misbehavior in the electronic world as it extends its own reach ever outward. (The notion of fascist algorithm, policing internal imperfection while growing like a malignancy, is remapped in T:L onto CLU — another once-humble program omnivorously metastasized.) MCP complains that its plans to infiltrate the Pentagon and General Motors will be endangered by the presence of a new and independent security watchdog program, Tron. “This is what I get for using humans,” grumbles MCP, which in terms of human psychology we might well rename OCD with a touch of NPD. “Now wait a minute,” Dillinger counters, “I wrote you.” MCP replies coldly, “I’ve gotten 2,415 times smarter since then.”

The notion that software — synecdoche for the larger bugaboo of technology “itself” — could become smarter on its own, exceeding human intelligence and transcending the petty imperatives of organic morality, is of course the battery that powers any number of science-fiction doomsday scenarios. Over the years, fictionalizations of the emergent cybernetic predator have evolved from single mainframe computers (Colossus: The Forbin Project [1970], WarGames [1983]) to networks and metal monsters (Skynet and its time-traveling assassins in the Terminator franchise) to graphic simulations that run on our own neural wetware, seducing us through our senses (the Matrix series [1999-2003]). The electronic world established in Tron mixes elements of all three stages, adding an element of alternative storybook reality à la Oz, Neverland … or Disneyworld.

Out here in the real world, however, what runs beneath these visions of mechanical apocalypse is something closer to the Technological Singularity warned of by Ray Kurzweil and Vernor Vinge, as our movie-making machinery — in particular, the special-effects industry — approaches a point where its powers of simulation merge with its custom-designed, mass-produced dreams and nightmares. That is to say: our technologies of visualization may incubate the very futures we fear, so intimately tied to the futures we desire that it’s impossible to sort one from the other, much less to dictate which outcome we will eventually achieve.

In terms of its graphical sophistication as well as the extended forms of cultural and economic control that have come to constitute a well-engineered blockbuster, Tron: Legacy is at least 2,415 times “smarter” than its 1982 parent, and whatever else we may think of it — whatever interpretive tricks we use to reduce it to and contain it as “just a movie” — it should not escape our attention that the kind of human/machine fusion, not to mention the theme of runaway AI, at play in its narrative are surface manifestations of much more vast and far-reaching transformations: a deep structure of technological evolution whose implications only start with the idea that celluloid art has been taken over by digital spectacle.

The lightning rod for much of the anxiety over the replacement of one medium by another, the myth of film’s imminent extinction, is the synthespian or photorealistic virtual actor, which, following the logic of the preceding paragraphs, is one of Tron: Legacy‘s chief selling points. Its star, Jeff Bridges, plays two roles — the first as Flynn, onetime hotshot hacker, and the second as CLU, his creation and nemesis in the electronic world. Doppelgangers originally, the two have since diverged: Flynn has aged while CLU remains unchanged, the spitting image of Flynn/Bridges circa 1982.

Except that this image doesn’t really “spit.” It stares, simmers, and smirks; occasionally shouts; knocks things off tables; and does some mean acrobatic stunts. But CLU’s fascinating weirdness is just as evident in stillness as in motion (see the top of this post), for it’s clearly not Jeff Bridges we’re looking at, but a creepy near-miss. Let’s pause for a moment on this question: why a miss at all? Why couldn’t the filmmakers have conjured up a closer approximation, erasing the line between actor and digital double? Nearly ten years after Final Fantasy: The Spirits Within, it seems that CGI should have come further. After all, the makers of T:L weren’t bound by the aesthetic obstructions that Robert Zemeckis imposed on his recent films, a string of CG waxworks (The Polar Express [2004], Beowulf [2007], A Christmas Carol [2009], and soon — shudder — a Yellow Submarine remake) in which the inescapable wrongness of the motion-captured performances is evidently a conscious embrace of stylization rather than a failed attempt at organic verisimilitude. And if CLU were really intended to convince us, he could have been achieved through the traditional repertoire of doubling effects: split-frame mattes, body doubles in clever shot-reverse-shot arrangements, or the combination of these with motion-control cinematography as in the masterful composites of Back to the Future Part II, which, made in 1989, came only seven years after the first Tron.

The answer to the apparent conundrum is this: CLU is supposed to look that way; we are supposed to notice the difference, because the effect wouldn’t be special if we didn’t. The thesis of Dan North’s excellent book Performing Illusions is that no special effect is ever perfect — we can always spot the joins, and the excitement of effects lies in their ceaseless toying with our faculties of suspicion and detection, the interpretation of high-tech dreams. Updating the argument for synthespian performances like CLU’s, we might profitably dispose of the notion that the Uncanny Valley is something to be crossed. Instead, smart special effects set up residence smack-dab in the middle.

Consider by analogy the use of Botox. Is the point of such cosmetic procedures to absolutely disguise the signs of age? Or are they meant to remain forever fractionally detectable as multivalent signifiers — of privilege and wealth, of confident consumption, of caring enough about flaws in appearance to (pretend to) hide them? Here too is evidence of Tron: Legacy’s amplified intelligence, or at least its subtle cleverness: dangling before us a CLU that doesn’t quite pass the visual Turing Test, it simultaneously sells us the diegetically crucial idea of a computer program in the shape of a human (which, in fact, it is) and in its apparent failure lulls us into overconfident susceptibility to the film’s larger tapestry of tricks. 2,415 times smarter indeed!

Part Three: The Sea of Simulation

***

Doubles, of course, have always abounded in the works that constitute the Tron franchise. In the first film, both protagonist (Flynn/Tron) and antagonist (Sark/MCP) exist as pairs, and are duplicated yet again in the diegetic dualism of real world/electronic world. (Interestingly, only MCP seems to lack a human manifestation — though it could be argued that Encom itself fulfills that function, since corporations are legally recognized as people.) And the hall of mirrors keeps on going. Along the axis of time, Tron and Tron: Legacy are like reflections of each other in their structural symmetry. Along the axis of media, Jeff Bridges dominates the winter movie season with performances in both T:L and True Grit, a kind of intertextual cloning. (The Dude doesn’t just abide — he multiplies.)

Amid this rapture of echoes, what matters originality? The critical disdain for Tron: Legacy seems to hinge on three accusations: its incoherent storytelling; its dependence on special effects; and the fact that it’s largely a retread of Tron ’82. I’ll deal with the first two claims below, but on the third count, T:L must surely plead “not guilty by reason of nostalgia.” The Tron ur-text is a tale about entering a world that exists alongside and within our own — indeed, that subtends and structures our reality. Less a narrative of exploration than of introspection, its metaphysics spiral inward to feed off themselves. Given these ouroboros-like dynamics, the sequel inevitably repeats the pattern laid down in the first, carrying viewers back to another embedded experience — that of encountering the first Tron — and inviting us to contrast the two, just as we enjoy comparing Flynn and CLU.

But what about those who, for reasons of age or taste, never saw the first Tron? Certainly Disney made no effort to share the original with us; their decision not to put out a Blu-ray version, or even rerelease the handsome two-disc 20th anniversary DVD, has led to conspiratorial muttering in the blogosphere about the studio’s cover-up of an outdated original, whose visual effects now read as ridiculously primitive. Perhaps this is so. But then again, Disney has fine-tuned the business of selectively withholding their archive, creating rarity and hence demand for even their flimsiest products. It wouldn’t at all surprise me if the strategy of “disappearing” Tron pre-Tron: Legacy were in fact an inspired marketing move, one aimed less at monetary profit than at building discursive capital. What, after all, do fans, cineastes, academics, and other guardians of taste enjoy more than a privileged “I’ve seen it and you haven’t” relationship to a treasured text? Comic-Con has become the modern agora, where the value of geek entertainment items is set for the masses, and carefully coordinated buzz transmutes subcultural fetish into pop-culture hit.

It’s maddeningly circular, I know, to insist that it takes an appreciation of Tron to appreciate Tron: Legacy. But maybe the apparent tautology resolves if we substitute terms of evaluation that don’t have to do with blockbuster cinema. Does it take appreciation of Ozu (or Tarkovsky or Haneke or [insert name here]) to appreciate other films by the same director? Tron: Legacy is not in any classical sense an auteurist work — I couldn’t tell you who directed it without checking IMDb — but who says the brand itself can’t function as an auteur, in the sense that a sensitive reading of it depends on familiarity with tics and tropes specific to the larger body of work? Alternatively, we might think of Tron as a sub-brand of a larger industrial genre, the blockbuster, whose outward accessibility belies the increasingly bizarre contours of its experience. With its diffuse boundaries (where does a blockbuster begin and end? — surely not within the running time of a single feature-length movie) and baroque textual patterns (from the convoluted commitments of transmedia continuity to rapid-fire editing and slangy shorthands of action pacing), the contemporary blockbuster possesses its own exotic aesthetic, one requiring its own protocols of interpretation, its own kind of training, to properly engage. High concept does not necessarily mean non-complex.

Certainly, watching Tron: Legacy, I realized it must look like visual-effects salad to an eye untrained in sensory overwhelm. I don’t claim to enjoy everything made this way: Speed Racer made me queasy, and Revenge of the Fallen put me into an even deeper sleep than did the first Transformers. T:L, however, is much calmer in its way, perhaps because its governing look — blue, silver, and orange neon against black — keeps the frame-cramming to a minimum. (The post-1983 George Lucas committed no greater sin than deciding to pack every square inch of screen with nattering detail.) Here the sequel’s emulation of Tron‘s graphics is an accidental boon: limited memory and storage led in the original to a reliance on black to fill in screen space, a restriction reinvented in T:L as strikingly distinctive design. Our mad blockbusters may indeed be getting harder to watch and follow. But perhaps we shouldn’t see this as proof of commercially-driven intellectual bankruptcy and inept execution, but as the emergence of a new — and in its way, wonderfully difficult and challenging — mode of popular art.

T:L works for me as a movie not because its screenplay is particularly clever or original, but because it smoothly superimposes two different orders of technological performance. The first layer, contained within the film text, is the synthesis of live action and computer animation that in its intricate layering succeeds in creating a genuinely alternate reality: action-adventure seen through the kino-eye. Avatar attempted this as well, but compared to T:L, Cameron’s fantasia strikes me as disingenuous in its simulationist strategy. The lush green jungles of Pandora and glittering blue skin of the Na’vi are the most organic of surfaces in which CGI could cloak itself: a rendering challenge to be sure, but as deceptively sentimental in its way as a Thomas Kinkade painting. Avatar is the digital performing in “greenface,” sneakily dissembling about its technological core. Tron: Legacy, by contrast, takes as its representational mission simulation itself. Its tapestry of visual effects is thematically and ontologically coterminous with the world of its narrative; it is, for us and for its characters, a sea of simulation.

Many critics have missed this point, insisting that the electronic world the film portrays should have reflected the networked environment of the modern internet. But what T:L enshrines is not cyberspace as the shared social web it has lately become, but the solipsistic arena of first-person combat as we knew it in videogames of the late 1970s. As its plotting makes clear, T:L is at heart about the arcade: an ethos of rastered pyrotechnics and three-lives-for-a-quarter. The adrenaline of its faster scenes and the trances of its slower moments (many of them cued by the silver-haired Flynn’s zen koans) perfectly capture the affective dialectics of cabinet contests like Tempest or Missile Command: at once blazing with fever and stoned on flow.

The second technological performance superimposed on Tron: Legacy is, of course, the exhibition apparatus of IMAX and 3D, inscribed in the film’s planning and execution even for those who catch the print in lesser formats. In this sense, too, T:L advances the milestone planted by Avatar, beacon of an emerging mode of megafilm engineering. It seems that every year will see one such standout instance of expanded blockbuster cinema — an event built in equal parts from visual effects and pop-culture archetypes, impossible to predict but plain in retrospect. I like to imagine that these exemplars will tend to appear not in the summer season but at year’s end, as part of our annual rituals of rest and renewal: the passing of the old, the welcoming of the new. Tron: Legacy manages to be about both temporal polarities, the past and the future, at once. That it weaves such a sublime pattern on the loom of razzle-dazzle science fiction is a funny and remarkable thing.

***

To those who have read to the end of this essay, it’s probably clear that I dug Tron: Legacy, but it may be less clear — in the sense of “twelve words or less” — exactly why. I confess I’m not sure myself; that’s what I’ve tried to work out by writing this. I suppose in summary I would boil it down to this: watching T:L, I felt transported in a way that’s become increasingly rare as I grow older, and the list of movies I’ve seen and re-seen grows ever longer. Once upon a time, this act of transport happened automatically, without my even trying; I stumbled into the rabbit-holes of film fantasy with the ease of … well, I’ll let Laurie Anderson have the final words.

I wanted you. And I was looking for you.
But I couldn’t find you.
I wanted you. And I was looking for you all day.
But I couldn’t find you. I couldn’t find you.

You’re walking. And you don’t always realize it,
but you’re always falling.
With each step you fall forward slightly.
And then catch yourself from falling.
Over and over, you’re falling.
And then catching yourself from falling.
And this is how you can be walking and falling
at the same time.

Modeling Monsters, Part Three

This is the third in a series of posts on a new project of mine exploring movie-monster fandom and “kid culture” in the U.S. from the early 1960s onward. My focus will be less on monster movies themselves than on the objects that circulated around and constituted the films’ public — and personal — presence: model kits, toys, games, and other paraphernalia. Approaching media culture through its object practices, I argue, reveals a dynamic space of production in which texts, images, and objects translate and transform one another in flows of commodities, collectibles, and creativity. [Previous posts can be found here.]

The monster models put out by Aurora starting in 1962 were not, of course, the first figure kits; neither were they the first scale plastic models. As Thomas Graham notes in his collectors’ guide Aurora Model Kits, the company’s first foray into the world of scale model kits was in 1952, with a line of airplanes. Other kits released by Aurora in its first decade of operation included automobiles, boats, submarines, tanks, and missiles. These subjects shared a set of qualities: they were based not on fictional, licensed properties but on existing real-world referents; not on organic, living beings (with the exception of figure kits, which I will discuss below) but on mechanical vessels; and, though they spanned a range of historical periods from antique cars (the WWI-era Stutz Bearcat) to the latest in mid-century aerospace experimentation (the Ryan X-13 Vertijet), favored transport technology and armaments from the two major global military conflicts.

This focus was unsurprising, given the circumstances of plastic kits’ emergence as a popular pastime in the U.S. after the end of World War II. The enormous social and economic changes following 1945 included a radical expansion of products geared to recreation, as factories and workforces were repurposed to drive an affluent North American economy (and a culture of advertising emerged in parallel to foment the necessary appetites). This prosperity played out simultaneously on two levels, one for parents and one for children; as Graham describes it, “Veterans from World War II and Korea resumed their lives, moving to the suburban world of ranch style homes with new Chevies, Fords and Studebakers parked in the car ports. Their kids rode bicycles, shot Daisy air rifles, watched Sky King on TV, listened to 45 rpm records, and read comic books. And they made model airplanes.” (5) As goods proliferated across the spheres of youth and adulthood, the toys of the former scaled down to cheaper, playable size the luxury items of the latter: cars in the garage were mirrored by model autos inside the house. Yet the objects of childhood supplied by mass culture also distorted and amplified the world of adults, condensing primal drives and half-repressed memories in material form: science-fiction serials, air-rifle weapons, and most of all the replication of wartime air- and seacraft in miniature suggest that children of the Baby Boom were awash not just in the detritus of overproductive industry but the solidified, visualized dreams — and nightmares — of the preceding generation.

In Hobbies: Leisure and the Culture of Work in America, Steven M. Gelber situates the postwar explosion of plastic kits against a longer history of crafting and collecting that dates from the late nineteenth century, when social and economic changes in the workplace led to a colonization of domestic space and time by the recognized and hence legitimized world of handicrafts. “Before about 1880 a hobby was a dangerous obsession,” he writes. “After that date it became a productive use of free time.” (3) For Gelber, the paradox of such activities is that they reproduce the attributes of labor, such as regimented time and the creation of commodities, in a domain that should ideally be distinct from, and uncorrupted by, such labor.

Hobbies are a contradiction; they take work and turn it into leisure, and take leisure and turn it into work. Like work, hobbies require specialized knowledge and skills to produce a product that has marketplace value (even if there is no thought of selling it). … Hobbies occupy the borderland that is beyond play but not yet employment. More than any other form of recreational activity, hobbies challenge the easy bifurcation of life’s activities into work and leisure. (23)

Hobbies, in this view, have the ideological effect of industrializing the home, and reconfiguring domestic subjects as subjects of domestic labor — bringing what might otherwise be an unruly and disobedient space into line with the values and beliefs of modern capitalist society. This essentially disciplinary function did not preclude the very real pleasures that could be obtained from, for example, sewing, stamp collecting, or needlepoint. In addition, hobbies could represent complex negotiations with the economy, as when items of furniture built at home replaced those that would otherwise have been purchased at stores, or when, during the Great Depression of the 1930s, hobbies took on new ameliorative significance as an inexpensive way to fill the working hours denied to the unemployed. Finally, hobbies remixed gendered identities and skill sets in unusual ways, as women explored newly authorized forms of creative expression and men adapted to more domestic roles.

For Gelber, however, the rising popularity of kits in the postwar period was a less positive development. Citing a 1949 catalog for the hobby supplier American Handicrafts, Gelber notes the way in which kits — prepackaged sets of items for assembly into everything from pot holders and pottery to woven stools and Indian beads — “severely limited hobbyists’ creativity but greatly facilitated their productivity.” (262) Because one could only build a kit into its intended object, and because this process required nothing more than the following of instructions, kits represented a more blunt and dire industrialization of home spaces and subjects, turning hobbies — sometimes literally — into paint-by-numbers activities, “no more art than gluing together a plastic model was a craft.” (263)

The kit was the ultimate victory of the assembly line. Whereas craft amateurs had previously sought to preserve an appreciation for hand craftsmanship in the face of industrialization, kit hobbyists conceded production to the machine. They became the leisure-time equivalent of the apocryphal Ford worker who, as his last wish before retiring, requested permission to finish tightening the bolt he had been starting for the last thirty years. Kit assemblers did not dream of designing the product or forming its parts. It was enough that they could surpass the Ford worker’s wish and actually assemble the whole thing. Forty years of assembly line mentality had transformed the public’s understanding of personal agency from that of the artisan to that of a glorified factory worker. (262-263)

The worst thing about kits, in Gelber’s view, was that “the hobbyist did not have to engage the hobby at a higher level of abstraction.” DIY projects or handicrafts built from the ground up required a hobbyist to solve many problems ahead of time, exerting his or her individuality through the choice of object and materials, along with the tools and skills required. With kits, on the other hand, “There were no preliminary steps, no planning or organizing, no thinking about the process. In other words, the hobbyists did not have to engage the craft intellectually.” (262)

Gelber’s history of hobbies in America stops around 1950, at the dawn of the plastic-kit craze — a time that saw sales of plastic models grow from $44 million in 1945 to $300 million by 1953. His critical reading of the kit phenomenon brings up valid points: this transition to a new era both of industry and recreation marked a profound reconfiguration of longstanding, and much cherished, traditions. The same period saw the diminishment of certain knowledges and skills shared by a public base increasingly dependent on the prefabricated products of mass culture: manufacturing and distribution technologies that Bruno Latour, by way of Marx, has called “congealed labor.” And while, as we shall see, his derision of model-kit building as a limited and artless pastime requiring no creative input from the assembler ignores the kinds of transformation, circulation, and sharing that would come to define object practices in 1960s horror fandom, it is easy to imagine the more generous readings that contemporary hobby culture, outside the scope of his book, might engender.

In the next installment, I will turn to the heyday of Aurora, from 1962-1977, and the line of monster kits that made it a success.

Works Cited

Gelber, Steven M. Hobbies: Leisure and the Culture of Work in America. New York: Columbia University Press, 1999.

Graham, Thomas. Aurora Model Kits. 2nd Ed. Atglen, PA: Schiffer Publishing, 2006.


Modeling Monsters, Part Two

This is the second of a series of posts on a new project of mine exploring movie-monster fandom and “kid culture” in the U.S. from the early 1960s onward. My focus will be less on monster movies themselves than on the objects that circulated around and constituted the films’ public — and personal — presence: model kits, toys, games, and other paraphernalia. Approaching media culture through its object practices, I argue, reveals a dynamic space of production in which texts, images, and objects translate and transform one another in flows of commodities, collectibles, and creativity. [Previous posts can be found here.]

Although Famous Monsters of Filmland launched its first issue in 1958, the figure that grounds this study — the Aurora line of plastic model kits based on classic movie monsters — did not appear there until the middle of 1962. Up to that point, the magazine’s “Monster Mail Order” pages featured a collection of materials sharing a vaguely horrific theme: shrunken heads, monster hands and feet, talking skulls, dangling skeletons, and — usually set off on a page of its own, bordered in black — a line of rubber masks that included Screaming Skull; Witch; Vampire; Igor; and Werewolf. By Issue 3 (April 1959), “monster stationery” and 3D comics had been added; by Issue 5 (November 1959), horror movies themselves joined the lineup, with full-length and abridged versions of The Phantom of the Opera (1925), It Came from Outer Space (1953), and The Creature from the Black Lagoon (1954) available for home viewing in 8mm and 16mm versions.

I will return to the range of products featured in FM’s advertising pages later, as part of a larger consideration of the magazine’s content and evolution. But for now I want to focus on an ad that appeared in Issue 18 (July 1962) for the first of what would become a long line of monster kits from Aurora:

Selling for a dollar (plus 35 cents postage and handling) from the newly formed Captain Company, which operated out of FM’s original home base in Philadelphia, this figure differed from the other products advertised in the magazine in that it wore its DIY nature on its tattered, graveyard-smelling sleeve. A finished and painted version of the kit appears beside an exploded view of its parts, emphasizing rather than eliding the act of construction required to make it whole. Reflecting this, the ad copy trumpets:

YOU ASKED FOR IT — AND HERE IT IS: A COMPLETE KIT of molded styrene plastic to assemble the world’s most FAMOUS MONSTER — Frankenstein! A total of 25 separate pieces go into the making of this exciting, perfectly-scaled model kit by Aurora, quality manufacturer of scale model hobby sets. The FRANKENSTEIN MONSTER stands over 12-inches when assembled. You paint it yourself with quick-dry enamel, and when finished the menacing figure of the great monster appears to walk right off the GRAVESTONE base that is part of this kit.

Taken with the insistent second-person you, the conscious avowal of the kit’s “kit-ness” suggests that, from the start, the appeal of modeling monsters stemmed from its inherently involved and interactive quality: not just activity, but your activity was needed to bring this monster to life. The advertisement hailed readers of FM in a textual foreshadowing of the later “objectual” interpellation promised by the plastic kit itself. In addition, the agency of the reader-cum-builder blurs into that of the creature, which, though a static and nonarticulated figure in its final form, “appears to walk right off” its base. In all, this first appearance of an Aurora kit in FM embodies a nested series of felicitous symmetries, down to the choice of its subject: the Frankenstein Monster, which, in both the Mary Shelley novel that originated it and the 1931 film adaptation that supplied its most iconic rendering, was built from dead parts — an act of Promethean “assembly” whose fulcrum is precisely the animate/inanimate divide.

The same principle, of course, could be said to underlie any plastic model kit, whose essence is that it comes in pieces requiring assembly by its owner. Model kits thus metaphorize the object practices of 1960s monster fandom, which similarly took the “pieces” offered by mass culture — in this case, the archive of Universal Studios’ classic horror-film output of the 1930s and 1940s — and transmuted them through a variety of activities into a variety of forms. Significantly, these activities and the forms to which they gave rise often involved movement along what I will call the dimensional axis of media fictions: from printed texts and two-dimensional imagery (both still and moving) into three-dimensional shapes and artifacts. The rubber masks, motorized banks, desktop dioramas, and clay figurines that replicated in the bedrooms and basements of FM readers manifested in material form the texts and images of horror movies and TV shows, enabling fans not just to buy and build, but handle and share, horror culture in tactile form. In turn, these objects often fed back into the production of new texts and images, for example the filming of amateur 8mm monster movies. All of these aspects of horror-media culture came together in Famous Monsters‘ readers, writers, editors, and vendors, and the play of texts and objects that bound and defined them.

In the next installment, I will look more closely at the history of model-kit building as it emerged from hobby, crafting, and collecting cultures in the first half of the twentieth century and, with the introduction of injection-molded plastic kits after World War II, became big business.

Modeling Monsters, Part One

This is the first of a series of posts on a new project of mine exploring movie-monster fandom and “kid culture” in the U.S. from the early 1960s onward. My focus will be less on monster movies themselves than on the objects that circulated around and constituted the films’ public — and personal — presence: model kits, toys, games, and other paraphernalia. Approaching media culture through its object practices, I argue, reveals a dynamic space of production in which texts, images, and objects translate and transform one another in flows of commodities, collectibles, and creativity.

In July 2010, a glossy publication appeared on newsstands, its cover adorned with a colorful Basil Gogos painting of Bela Lugosi in his iconic role of Count Dracula. Under Lugosi’s leering portrait run the words The Return of the World’s First Monster Fan Magazine! Inside, Publisher Philip Kim and Editor in Chief Michael Heisler’s introduction (titled, in punning fashion, “Opening Wounds”) frames the new magazine both as tribute to and continuation of Famous Monsters of Filmland, the long-running brainchild of professional horror fan and collector Forrest J. Ackerman. Ackerman, who died in 2008 at the age of 92, published Famous Monsters with James Warren from 1958 to 1983, after which the title passed controversially among several different hands before its official relaunch by Kim and Heisler.

After asserting Famous Monsters‘ role as “a conduit for undiscovered talent and future giants” that will “again touch fandom through treasures, events, and partnerships,” the introduction goes on to promise returning readers a few surprises. “We’ve got a Captain Company section that’s not quite like anything you’ve seen in FM before,” Heisler writes. The nearly audible wink in his words evidently refers to the fact that the closing pages of the magazine are dominated by a photo spread of sexily fanged, Goth-complexioned models, like something out of True Blood, along with a list of their apparel for sale: “Night of the Living Dead Fitted Women’s Tee,” “Famous Monsters Embroidered Fleece Full Zip Hoodie,” “Nosferatu Collage Fitted Tee.” The following page adds a few more items to the mix, from reproductions of 60s-era FM issues to commemorative coins, silk prints of Ackerman, and statues of Buffy the Vampire Slayer and Angel.

These are not, of course, the only advertisements appearing in the relaunched FM: other ads sell DVDs of cult films, movie posters, lunch boxes, and license plates, promote tattoo parlors, and announce various upcoming film festivals and conventions, suggesting something of the rich commercial and subcultural networks that have always intersected in FM’s pages. But the tensions — as well as the similarities — between the “classical” and “rebooted” Famous Monsters of Filmland are particularly evident in the Captain Company display, for it was this mail-order business that launched with FM and dominated its advertising pages from its earliest days. Indeed, Captain Company’s offerings were so plentiful that they came to seem an equal partner in the magazine’s editorial content; articles celebrating the stop-motion artistry of Ray Harryhausen and the torturous makeup feats of Lon Chaney, Sr. blended osmotically with ads for model kits, buttons, posters, books, records, 8mm and 16mm films, and a variety of other commodities, so that the process of learning about and appreciating horror films, directors, actors, and special-effects stars was difficult to distinguish from the acquisition of horror-themed paraphernalia. And while the new Captain Company and its related partners in sales are perhaps “not quite like” their classical predecessors, both are predicated on the idea that, in fact, fandom of horror media has for several decades depended profoundly on the creation and circulation of objects as much as texts.

Film and television studies have tended to overlook or sideline the material life of media fictions, consigning such objects to the blighted category of the commercial tie-in: the cheap plastic toy designed to cash in on the Star Wars craze, the t-shirt emblazoned with Bella and Edward of the Twilight saga. Too often, the implication is that the owners of such objects are cultural dupes. Fan studies have made important interventions by attending to the transformative texts that writers and vidders spark from the raw material of TV shows and movies, but they pay less attention to the crafts and collectibles that frequently accompany this culture of creativity. An examination of Famous Monsters in its heyday — the early 1960s through the mid-1970s — offers an expanded picture of how that publication served as a central site for the distribution of material wares, while providing visual templates and discursive forums for the activities of construction, collection, and display that defined horror-film fandom during this period. Viewed longitudinally, the Baby Boom generation that came of age with Famous Monsters and other publications devoted to horror, science fiction, and fantasy helped to feed both the audience pool and the professional base responsible for the boom in blockbuster SF that dominated late-70s and early-80s cinema. Finally, the current market for collectible and constructible items, epitomized by statues of movie, TV, and comic-book characters, springs from generational roots in Famous Monsters‘ prime decades of operation.

This concludes my opening thoughts on the Modeling Monsters project. In the next installment, I will turn to the grounding figure of my study: the plastic model kit, in particular the Aurora line of classic movie monsters.

Works Cited

Kim, Philip and Michael Heisler. “Opening Wounds.” Famous Monsters of Filmland 251 (July 2010). 4.

Back to Back to the Future

Revisiting Back to the Future on Blu-Ray, it’s hard not to get sucked into an infinite regress the likes of which would probably have pleased screenwriters Robert Zemeckis and Bob Gale: the 1985 original, viewed 25 years later, has acquired layers of unintended convolution, discovery, and loss.

As with any smartly executed time travel story (see: La Jetée, Primer, Timecrimes, and “City on the Edge of Forever”), the plot gets you thinking in loops, attentive to major and minor patterns of similarity and difference, playing textual detective long before Christopher Nolan came along with the narrative games that are his auteurist signature. And BTTF is nothing if not a well-constructed mousetrap, full of cleverly self-referential production design, dropping hints and clues from its very first frames about the saga that’s about to unfold. (Panning across Doc Brown’s Rube-Goldberg-esque jumble of a laboratory, the camera lingers on a miniature clock from whose hands a figure hangs comically, in a shoutout both to Harold Lloyd in Safety Last! [1923] and to BTTF’s own climax.) In this sense, it’s a film designed for multiple viewings, though the valence of that repetition has changed over the quarter-century since the film’s first release: in the mid-eighties, BTTF was a quintessential summer blockbuster, its commercial life predicated on repeat business and the just-emerging market of home rentals. Nowadays, the fuguelike structure of BTTF lends itself perfectly to the digitalized echo chamber of what Barbara Klinger terms replay culture and the encrusted supplementation of making-of materials sparked in the random-access format of DVDs and fanned into a blaze by the enormously expanded storage of Blu-Rays. (Learning to navigate the interlocked and labyrinthine documentaries on the new Alien set is like being lost on the Nostromo.)

But in 1985, all this was yet to come, and BTTF’s bubbly yet machine-lathed feat of comic SF is all the more remarkable for maintaining its innocence across the gulf of the changing technological and economic contexts that define modern blockbuster culture. It still feels fresh, crisp, alive with possibilities. If anything, it’s somehow gotten younger and more graceful: expository setups that seemed leaden now trip off the actors’ tongues like inspired wordplay, while action setpieces that seemed unnecessarily prolonged — in particular, the elaborate climax in which a time-traveling DeLorean must intersect a lightning-bolt-fueled surge of energy while traveling at exactly 88 miles per hour — now unspool with the precision of a Fred Astaire dance routine. Perhaps the inspired lightness of BTTF is simply a matter of contrast with our current blockbusters, which have grown so dense and heavy with production design and ubiquitous visual effects, and whose chapterized storyworlds have become so entangled with encyclopedic continuity, that engaging with them feels like subscribing to a magazine — or a gym membership.

BTTF, of course, is a guilty player in this evolution. Its two sequels were filmed back-to-back, an early instance of the “threequelization” that would later result in such elephantine letdowns as the Matrix and Pirates of the Caribbean followups. But just as the story of BTTF involves traveling to an unblemished past, when sin was a tantalizing temptation to be toyed with rather than a buried regret, the reappearance of the film in 2010 allows us to revisit a lost moment of cinema, newly pure-looking in relation to the rote and tired wonders of today. For a special-effects comedy, there are surprisingly few visual tricks in evidence: ILM’s work is confined to some animated electrical discharges, a handful of tidy and unshowy matte shots, and a motion-controlled flying-car takeoff at the end that pretty much sums up the predigital state of the art. Far in the future is Robert Zemeckis’s unfortunate obsession with CGI waxworks a la The Polar Express, Beowulf, A Christmas Carol, and soon — shudder — Yellow Submarine.

As for the Blu-Ray presentation, the prosthetic wattles of old-age makeup stand out as sharply as the heartbreakingly unlined and untremored features of Michael J. Fox, then in his early 20s and a paragon of comic timing. It makes me think of how I’ve changed in the twenty-five years since I first saw the movie, at the age of 19. And somewhere in this circuit of private recollection and public viewing, I get lost again, with both sadness and joy, in the temporal play of popular culture that defines so much of my imagination and experience. The originating cleverness of BTTF’s high concept has been mirrored and folded in on itself as much by the passage of time as by Universal’s marketing strategies, so that in 2010 — once an inconceivably far-flung punchline destination for Doc Brown in the tag that closes the film and sets up its continuation in BTTF 2 — we encounter our own future-as-past, past-as-future: time travel of an equally profound, if more reversible, kind.

Watching Avatar


Apologies for taking a while to get around to writing about Avatar — befitting the film’s almost absurd graphical heft, the sheer surfeit of its spectacle, I decided to watch it a second time before putting my thoughts into words. In one way, this strategy was useful as a check on my initial enthusiasm; the blissful swoon of first viewing gave way, in the second, to a state resembling boredom during the movie’s more languorous stretches. (Banshee flight training, let’s just say, is not a lightning-fast process.) But in another way, waiting to write might not have been all that smart, since by now the movie has been discussed to death. Yet for all the hot air and cold type that’s been spent dissecting Avatar, the map of the dialogue still divides neatly into two camps: one insisting that Cameron’s movie is an instant classic of cinematic science fiction, a technological breakthrough and a grand adventure of visual imagination; the other grudgingly admitting that the film is pretty, but beyond that, a trite and obvious story lifted from Pocahontas and Dances With Wolves and populated, moreover, by a bland and predictable set of character-types.

I tend to be forgiving toward experiments as grand as Avatar, especially when they’ve done such a good job laying the groundwork of hopeful expectation. Indeed, as I walked into the theater last week, ripping open the plastic bag containing my 3D glasses, I remember thinking I’d already gotten my money’s worth simply by looking forward so intensely to the experience. There’s also the matter of auteurist precedent: James Cameron has built up an enormous amount of goodwill — and, dare I say it, faith — with his contributions of Terminator, Terminator 2: Judgment Day, and Aliens to the pantheon of SF greatness. (I’m also a closet fan of Battle Beyond the Stars, the derivative but fun 1980 Roger Corman production on which Cameron served as art director and contributed innovative visual effects.)

So I’m not fussed about whether Avatar‘s story is particularly deep or original. This is, to me, a case of the dancer over the dance; the important thing is not the tale, but Avatar‘s telling of it. And I’m sympathetic to the argument that in such a technically intricate production, a relatively simple narrative gearing is required to anchor audiences and lead them, as in a rail game, along a precise path through the jungle. (That said, Cameron’s first “scriptment” was apparently a much more complex and nuanced saga, and one wonders to what degree his narrative ambitions were stripped away as the humongous physical nature of the undertaking became clear.) Cameron is correctly understood as a techno-auteur of the highest order, a man who doesn’t make films so much as build them, and if he has, post-Titanic, become complicit in fanning the flames of his own worshipful publicity, we ought to take that as simply another feat of engineering — in this instance discursive rather than digital. It would hardly be the first time (I’m looking at you, Alfred Hitchcock) and is certainly better-deserved than some (I’m looking at you, George Lucas).

Did I like Avatar? Very much so — but as I indicated above, this is practically a foregone conclusion; to disavow the thing now would be tantamount to aesthetic seppuku. Of course, in the strange numismatics of fandom, hatred is just the other side of the coin from veneration, and the raging “avatolds” (as in, You just got avatold!) of 4chan may or may not realize that, love it or hate it, we’re all playing in Cameron’s world now. And what a world it is, literally! Avatar the film is something of a delivery system for Pandora the planet (OK, moon), an act of subcreation so extensive it has generated its own wiki. The detailed landscapes we see in the movie are merely the topmost layer of a topography and ecosystem fathoms deep, an enormous bank of 3D assets and encyclopedic autotextuality that, now established as a profitable pop-culture phenomenon, stands ready for extrapolation and exploration in transmedia to come. (Ironic, then, that a launching narrative so opposed to stripmining is itself destined to be mined, or in Jason Mittell’s evocative term, drilled.)

And in this sense, I suspect, we can locate a double meaning to the idea of the avatar, or tank-grown alien body driven by human operators via direct neural link. A biological vessel designed to allow visitors to explore an alien world, the story’s avatars are but metaphors for Avatar the movie, itself a technological prosthesis for viewers hungry to experience new landscapes (and for whom the exotics of Jersey Shore don’t cut it). 3D, IMAX, and great sound systems are merely sensory upgrades for our cinematic avatarialism, and as I watched the audience around me check the little glowing squares of their cell phones, my usual dismay was mitigated by the notion that, like the human characters in the movie, they were merely augmenting their immersion with floating GUIs and HUDs.

My liking for the film isn’t entirely unalloyed, and deep down I’m still wondering by what promotional magic we have collectively agreed to see Avatar as a live-action movie with substantial CG components rather than a CG animated film (a la Up, or more analogously Final Fantasy: The Spirits Within) into which human performances have cunningly been threaded. Much has been made of the motion-capture technology by which actors Sam Worthington, Zoe Saldana, Sigourney Weaver et al. performed their roles into one end of a real-time rendering apparatus while Cameron peered into a computer display — essentially his own avatarial envoy to Pandora — directing his troupe through their videogame doubles. But this is merely the latest sexing-up of an “apparatus” as old as cinema, by which virtual bodies are brought to life on an animation stand, their features and vocals synched to a dialogue track (and sometimes reference footage of the original performances).

Cameron’s nifty trick, though, has always been to frame his visual and practical effects in ways that lend them a crucial layer of believability. I’m not talking about photorealism, that unreachable horizon (unreachable precisely because it’s a moving target, a fantasized attribute we hallucinate within the imaginary body of cinema: as Lacan would put it, in you more than you). I’m talking about the way he cast Arnold Schwarzenegger as the human skin around a robotic core in the Terminator films, craftily selling an actor of limited expressiveness through the conceit of a cyborg trying to pass as human; Arnold’s stilted performance, rather than a disbelief-puncturing liability, became proof of his (diegetically) mechanoid nature, and when the cutaways to stop-motion stand-ins and Stan Winston’s animatronics took over, we accepted the endoskeleton as though it had been there all along, the real star, just waiting to be discovered. An identical if hugely more expensive logic underlies the human-inhabited Na’vi of Avatar: if Jake Sully’s alien body doesn’t register as absolutely realistic and plausible, it’s OK — for as the editing constantly reminds us, we are watching a performance within a performance, Sully playing his avatar as Worthington plays Sully, Cameron and his cronies at WETA and ILM playing us in a game of high-tech Russian nesting dolls. The biggest “special effect” in Cameron’s films is the way in which diegesis and production reality collapse into each other.

I’m not saying that Avatar isn’t revolutionary, just that amid the more colorful flora and fauna of its technological garden we should be careful to note that other layer of “movie magic,” the impression of reality that is as much a discursive and ideological production as any clump of pixels pushed through a pipeline. We submit, in other words, to Avatar‘s description of itself as a step forward, an excursion into a future cinema as alien and exhilarating as anything to be found on Pandora, and that too is part of the spell the movie casts. Yet the animating spirit behind that future cinema — the ghost in the machine — remains the familiar package of hopes and beliefs we always bring to the darkened theater: the desire to escape into another body, and when the adventure is over, to wake up and go home.

Awaiting Avatar

Apparently Avatar, which opened on Friday at an immersive neural simulation pod near you, posits an intricate and very real connection between the natural world and its inhabitants: animus in action, the Gaia Hypothesis operationalized on a motion-capture stage. If this is so — if some oceanic metaconsciousness englobes and organizes our reality, from blood cells to weather cells — then perhaps it’s not surprising that nature has provided a perfect metaphor for the arrival of James Cameron’s new film in the form of a giant winter storm currently coloring radar maps white and pink over most of the eastern seaboard, and trapping me and my wife (quite happily) at home.

Avatar comes to mind because, like the blizzard, it’s been approaching for some time — on a scale of years and months rather than hours and minutes, admittedly — and I’ve been watching its looming build with identical avidity. I know Avatar’s going to be amazing, just as I knew this weekend’s storm was going to be a doozy (the expectation is 12-18 inches in the Philadelphia area, and out here in our modest suburb, the accumulation is already enough to make cars look as though they have fuzzy white duplicates of themselves balanced on their roofs). In both cases, of course, this foreknowledge is not as monolithic or automatic a thing as it might appear. The friendly meteorologists on the Weather Channel had to instruct me in the storm’s scale and implacability, teaching me my awe in advance; similarly, we all (and I’m referring here to the entire population of planet earth) have been well and thoroughly tutored in the pleasurable astonishment that awaits us when the lights go down and we don our 3D glasses to take in Cameron’s fable of Jake Sully’s time among the Na’vi.

If it isn’t clear yet, I haven’t seen Avatar. I’m waiting out the weekend crowds (and, it turns out, a giant blizzard) and plan to catch a matinee on Tuesday, along with a colleague and her son, through whose seven-year-old subjectivity I ruthlessly intend to focalize the experience. (I did something similar with my nephew, then nine, whom I took to see The Phantom Menace in 1999; turns out the prequels are much more watchable when you have an innocent beside you with no memory of what George Lucas and Star Wars used to be.) But I still feel I know just about everything there is to know about Avatar, and can name-drop its contents with confidence, thanks to the broth of prepublicity in which I’ve been marinating for the last several weeks.

All of that information, breathlessly assuring me that Avatar will be either complete crap (the /tv/ anons on 4chan) or something genuinely revolutionary (everyone else), partakes of a cultural practice spotlighted by my friend Jonathan Gray in his smart new book Show Sold Separately: Promos, Spoilers, and Other Media Paratexts. While we tend to speak of film and television in an always-already past tense (“Did you see it?” “What did you think?”), the truth is something very different. “Films and television programs often begin long before we actively seek them out,” Jon observes, going on to write about “the true beginnings of texts as coherent clusters of meaning, expectation, and engagement, and about the text’s first initial outposts, in particular trailers, posters, previews, and hype” (47). In this sense, we experience certain media texts a priori — or rather, we do everything but experience them, gorging on adumbration with only that tiny coup de grace, the film itself, arriving at the end to provide a point de capiton.

The last time I experienced anything as strong as Avatar‘s advance shockwave of publicity was with Paranormal Activity (and a couple of years before that with Cloverfield), but I am not naive enough to think such occurrences rare, particularly in blockbuster culture. If anything, the infrequency with which I really rev up before a big event film suggests that the well-coordinated onslaught is as much an intersubjective phenomenon as an industrial one; marketing can only go so far in setting the merry-go-round in motion, and each of us must individually make the choice to hop on the painted horse.

And having said that, I suppose I may not be as engaged with Avatar‘s prognosticatory mechanisms as I claim to be. I’ve kept my head down, refusing to engage fully with the tableaux being laid out before me. As a fan of science-fiction film generally, and visual effects in particular, this seemed only wise; in the face of Avatar hype, the only choices appear to be total embrace or outright and hostile rejection. I want neither to bless nor curse the film before I see it. But it’s hard to stay neutral, especially when a film achieves such complete (if brief) popular saturation and friends who know I study this stuff keep asking me for my opinion. (Note: I am very glad that friends who know I study this stuff keep asking me for my opinion.)

So, a few closing thoughts on Avatar, offered in advance of seeing the thing. Think of them as open-ended clauses, half-told jokes awaiting a punchline; I’ll come back with a new post later this week.

  • Language games. One aspect of the film that’s drawn a great deal of attention is the invention of a complete Na’vi vocabulary and grammar. Interesting to me as an example of Cameron’s endless depth of invention — and desire for control — as well as an aggressive counter to the Klingon linguistics that arose more organically from Star Trek. Will fan cultures accrete around Avatar as hungrily as they did around that more slowly-building franchise, their consciousness organized (to misquote Lacan) by a language?
  • Start the revolution without me. We’ve been told repeatedly and insistently that Avatar is a game-changer, a paradigm shift in science-fiction storytelling. For me, the question this raises is not Is it or isn’t it? but rather, What is the role of the revolutionary in our SF movies, and in filmmaking more generally? How and why, in other words, is the “breakthrough” marketed to us as a kind of brand — most endemically, perhaps, in movies like Avatar that wear their technologies on their sleeve?
  • Multiple meanings of “Avatar.” The film’s story, as by now everyone knows, revolves around the engineering of alien bodies in which human subjectivities can ride, a kind of biological cosplay. But on another, artifactual level, avatarial bodies and mechanisms of emotional “transfer” underpin the entire production, which employs performance capture and CG acting at an unprecedented level. In what ways is Avatar a movie about itself, and how do its various messages about nature and technology interact with that supertext?

Paranormal Activity


[Some broad spoilers below]

I’ve said it before: these days, seeing certain movies means coming to the endpoint of an experience, rather than its beginning; closing a door rather than opening it. Think of how something like Star Wars in 1977 seeded an entire universe of story (and franchise) possibilities, or how The Rocky Horror Picture Show ignited a subculture of ritual performance and camp remixes of genre chestnuts. By contrast, a new kind of movie, exemplified currently by Paranormal Activity, hits theaters with a conclusive thump, like the punchline of a joke or the ending of a whodunit. After you’ve watched it, there is little more to say.

Such movies sail toward us on a sea of buzz, phantom vessels that hang maddeningly at the horizon of visibility, of knowability. Experienced fannish spotters stand with their spyglasses, picking out details in the mist and relaying their interpretations back to the rest of us. Insiders leak morsels of information about the ship’s construction and configuration. Old salts grumble about the good old days. It’s the modern cinematic equivalent of augury: awaiting the movie’s arrival is like awaiting a predestined fate, and we gaze into the abyss of our own inevitable future with a mixture of horror and appetite.

It sounds like I didn’t care for Paranormal Activity, but in fact I did; it’s as spare and spooky as promised, with a core of unexpected sweetness (due mainly to the performance of Katie Featherston) and consequently a sense of loss, even tragedy, at the end. It occurs to me that we are seeing another phenomenon in low-budget, buzz-driven, scary filmmaking: a trend toward annihilation narratives. The Blair Witch Project, Open Water, Cloverfield, now Paranormal Activity — these are stories in which no one survives, and their biggest twist is that they disobey a fundamental rule of horror and suspense storytelling by which we understand that no matter how bad things get, at least one person, the hero, will make it through the gauntlet. With this principle guiding our expectations, we can affix our identifications to one or more figures, trusting them to safely convey us through the charnelhouse, evading the claws of monsters or razor-edged deathtraps.

No such comfort in the annihilation narrative, which blends the downbeat endings of early-70s New Hollywood with the clinical finality of the snuff film or autopsy report. Such brutal endings are encouraged by the casting of unknown or non-actors, whose public and intertextual lives presumably won’t be harmed by seeing them dispatched onscreen — though the more important factor, I suspect, is the blurring of the line between the character’s ontological existence and their own.

The usual symptom of this is identical first names: Daniel Travis plays Daniel Kintner in Open Water; Heather, Josh, and Michael are all “themselves” in Blair Witch; Paranormal‘s Katie and Micah are played by actors named Katie and Micah. There is, in other words, no supervening celebrity identity, no star persona, to yank us out of the fiction, to remind us simply by gravitational necessity that there must be a reality outside the fiction. The collapse of actor and character corresponds to the mockumentary mode that all these films share — a mode that itself depends on handheld cameras, recognizable, nonexotic settings, and an absence of standard continuity editing and background scoring.

Taken together, these factors (no-name actors, conscientiously unadorned and “unprofessional” filmmaking) would seem to recall Italian neorealism. But this being Hollywood, the goal is to tell stories that fit into familiar genres while reinventing them: horror seems to be the order of the day. A more subtle point is that, with the exception of Cloverfield‘s sophisticated matchmoving of digital monsters into shakycammed cityscapes, movies in this emerging genre cost almost nothing to make. The budget for Paranormal Activity was $11,000, a datum I didn’t even have to look up, because it’s been foregrounded so relentlessly in the film’s publicity. Oddly, these facts of the film’s manufacture don’t seem to detract from the envelope of “reality” in which its thrills are delivered; for all the textual (non)labor that goes into assuring us this really happened, we are just as entertained by the saga of scrappy Oren Peli and his sudden success as by the thumpings and scarrings inflicted on poor Micah and Katie.

And we are entertained, I think, by our own entertainment — the way in which we willingly give ourselves over to a machine whose cold operations we understand very well. I certainly felt this way as I took my seat at one of the few remaining non-multiplexed moviehouses in Ann Arbor, the tawdry but venerable State Theater. The 7 p.m. crowd was a throng of University of Michigan students, a few clusters of friends packed in with lots and lots of couples. Paranormal Activity is the kind of movie where you want to be able to clutch somebody. More to the point, it’s a genuine group experience: scares are amplified by a factor of ten when people around you are screaming.

Which brings me back to my opening point: we all knew what we were there for, even as the movie’s central mysteries — from the exact nature of its big bad to the specific escalating sequence of its scares — awaited discovery like painted eggs on an Easter-day hunt. (The film’s discretely doled-out shocks, which get us watching the screen with hypnotic attentiveness, are reminiscent of the animated GIFs one finds on the /x/ board of 4chan.) We were there for the movie, certainly, but we were also there for each other, enjoying the echo chamber of each other’s emotions and performative displays of fear. And we were there for ourselves, reverberating happily within the layers of our knowing and not-knowing, our simultaneous awareness of the film as cunning construct and as rivetingly believable bedtime story, our innocence and cynicism so expertly shaped by months of hype and misdirection, viral marketing, rumors and debunkings, word of mouth.

All of which constitutes, of course, the real paranormal activity: a mediascape that haunts and taunts us, foreshadowing our worst fears as well as our fiercest pleasures.

British Invasion


Ordinarily I’d start my post with a by-now-boilerplate apology for lagging behind the news, but in this case I will leave aside the ritual lament (“I’m just so busy this semester!”) and instead make proud boast of my lateness, boldly owning up to the fact that, although it was forty years ago last week that Monty Python’s Flying Circus had its first broadcast, I’m just getting around to remarking on it today. Seems only (il)logical to do so, given that one of Python‘s most fundamental and lasting alterations to the cultural landscape in which I grew up was to validate the non sequitur as an acceptable conversational — and often behavioral — gambit.

Let me explain. For me and my friends in grade school, the early-to-mid-seventies were a logarithmically-increasing series of social revelations, sometimes depressingly gradual, other times bruisingly abrupt, that we were “weird.” Our weirdness went by several aliases. The labels bestowed by forgiving parents and teachers were things like “smart,” “bright,” “eccentric,” “unusual,” and “creative.” Whereas the ones that arrived not from above but laterally, hurled like snowballs in the schoolyard or graffitied in ball-point across our notebooks, were more brutally and colorfully direct, and thus of course more convincing: “freak,” “spaz,” and — for me in particular, since it vaguely rhymes with Rehak — “retard.”

I see now that almost all of these phrases had their grain of truth, their icy core, their scored ink-line. In our weirdness we were smart and unusual and creative; we were also undeniably freakish, and as our emotional gyroscopes whirled wildly in search of some stable configuration, we were, by turns, spastically overenthusiastic and retardedly slow to adapt. We were book and comic readers, TV watchers, play actors, cartoon artists, model builders, rock collectors. We were boys. We liked science fiction and fantasy. Our skills and deficits were misdistributed and extreme: vastly vocabularied but garbled by braces and retainers; carefully observant but blindered by thick glasses; handsome heroes in our hearts, chubby or skinny buffoons in person. Many of us were good at science and math, others at art and theater. None of us did particularly well on the athletic field, though we did provide workouts for the kids who chased us.

Me, I made model kits of monsters like the Mummy, the Wolfman, and the Creature from the Black Lagoon — all supplied by the great company Aurora, with the last mile from hobby store to home facilitated by my indulgent parents — painted them in garish and inappropriate colors, situated them behind cardboard drum kits and guitars on yarn neckstraps, and pretended they were a rock supergroup while blasting the Monkees and the Archies from my record player. (I am not making this up.)

I was also a media addict, even back then, and when Monty Python episodes began airing over our local PBS station, I was instantly and utterly devoted to it. Which is not to say I liked everything I saw — a nascent fan, I quickly began drawing distinctions between the unquestionably great, the merely good, the tolerably adequate, and the terminally lame paroles that constituted the show’s langue, learning connections between these variations in quality and the industrial microepochs that gave rise to them: early, middle, and late Python. I had my favorite bits (Terry Gilliam’s animations, anything ever done or said by John Cleese) and my “mehs” (Terry Gilliam’s acting and the episode devoted to hot-air ballooning). Although or because I was stranded somewhere in the long latency separating my phallic and genital stages, I found every mention of sex and every glimpse of boob a fascinating magma of hypothetical desire and unearned shame. And, of course, it was all hysterically, tear-squirting, stomach-cramp-inducing funny.

The downside of Monty Python‘s funniness was the same as its upside: it gave all of us weirdos a shared social circuit. The show’s peculiar and specific argot of slapstick and transgression, dada and doo-doo, spread overnight to recess and classroom, connecting by a kind of dedicated party line any schlub who could memorize and repeat lines and skits from the show. In short, Monty Python colonized us, or more accurately it lit up like a discursive barium trace the preexisting nerd colony that theretofore had hidden underground in a nervous relay of quick glances, buried smiles, and raised eyebrows. Suddenly outed by a humor system from across the sea, we pint-sized Python fans stood revealed as a brotherhood of nudge-nudge-wink-wink, a schoolyard samizdat.

A good thing, but also a bad thing. The New York Times gets it exactly wrong when describing the “couple of guys in your dorm (usually physics majors, for some reason, and otherwise not known for their wit) who could recite every sketch”; according to Ben Brantley, “They could be pretty funny, those guys, especially if you hadn’t seen the real thing.” Nope — people who recite every Monty Python sketch are by definition not funny, or rather are funny only within an extremely bounded circle of folks who (A) already know the jokes and (B) accept said recitation as legal tender in their subcultural social capital. In my experience, there was no surer date-killer, no quicker way to get people to edge away from you at parties than by launching into such bonafide gems of genius as the Cheese Shoppe or the Argument Clinic. Yet we went on tagging each other as geek untouchables, comedy as contagion, as helpless before Pythonism’s viral spread as we would be, a few years on, before the replicating errata of Middle Earth and the United Federation of Planets.

Monty Python was merely the first infusion of obsessive-compulsive nerd scholarship into which I and my friends were forced by a series of cultural imports from Britain: grand stuff like The Fall and Rise of Reginald Perrin, The Hitchhiker’s Guide to the Galaxy, Alan Moore, and the computer game Elite. The three movies I like to name as my favorites of all time each have substantial UK components: Star Wars (1977) was filmed partly at Elstree Studios, Superman (1978) at Pinewood and Shepperton Studios, and Alien (1979), with Ridley Scott at the helm, at Shepperton and Bray Studios. And the trend continues right to present day: my favorite band is Genesis, I can’t get enough of Robbie Coltrane’s Cracker, and the science-fiction masterpiece of the summer was not District 9 (which gets high marks nevertheless) but the superb Children of Earth.

I sometimes wonder what to call this collection of British art and entertainment, this odd cultural constellation that seems to obey no organizing principle except its origins in England and its relevance to my development. How do you draw a boundary around a miscellany of so much that is good and essential about imaginary lives and their real social extrusions? Maybe I’m seeking a word like supergenre or metagenre, but those seem too big; try idiogenre, some way of systematizing a group of texts whose common element is their locus in a particular, historically-shaped subjectivity (my own) that is simultaneously a shared condition. The comic tragedy of the nerd, a figure both stranded on the social periphery yet crowded by his peers, lonely yet overfriended, renegade frontiersman and communal sheep, a silly-walking man with an entire Ministry of Silly Walks looming behind him.

I blame, and thank, England.


Predestination Paradox


It would be nice if ABC’s new series, Flashforward, didn’t stylistically model itself quite so slavishly on Lost — which is not to deny a legitimate familial relationship between the two shows. Indeed, it’s largely thanks to Lost that broadcast television now periodically risks acts of serial storytelling with genuine intricacy and depth, sizeable and interesting casts of characters, and generic inflections that flirt with science fiction and fantasy without ever quite falling into the proud but doomed ghetto of, say, Virtuality and Firefly. Nowadays we seem to prefer our fantastic extrapolations blended with a strong tincture of “reality”; while I might privately consider series such as Mad Men and Jericho to be as bizarre in setting and plot machination as Farscape ever was, the truth is it will be a long time before we see a space-set show lasting more than a season or two. (And before you ask, no, I haven’t gotten around to watching Defying Gravity, though some trusted friends have been telling me to give it a try.)

So Flashforward clearly owes a debt to Lost for tutoring audiences in the procedures and pleasures of the complex narratives so deftly dissected by Jason Mittell: in this specific case, the shuttling game of memory and induction by which viewers stitch together a tale told in pieces. Where 24 builds itself around the synchronic, crosscutting among simultaneous story-streams until the very concept of a pause, of downtime, is squeezed out of existence, Lost and Flashforward take as their structuring principle the diachronic, bouncing us backwards and forwards through time until one can no longer tell present from backstory. (I will admit that the most recent season of Lost finally threw off this faithful viewer like a bucking bronco; while I’m all for time-traveling back to the glory years of the 1970s, the show’s intertitled convolutions have become too much for me to keep up with, especially when further diced and sliced by the timeshifting mechanism of my DVR.)

No wonder, then, that David S. Goyer (late of Blade) and Brannon Braga (who in the 1990s both saved and ruined the Star Trek franchise, IMO) felt the moment was ripe to adapt Robert J. Sawyer‘s novel for TV. (Apparently there’s a history involving HBO and a tug-of-war over rights; perhaps a branching feature on the show’s eventual box-set release will, as a deconstructive extra, interweave this additional knotted plotting, an industrial Z-axis, into the general mayhem.) I remember reading Flashforward-the-book when it first came out, but it took Wikipedia to remind me how it all ended. Now that original ending has of course been jettisoned, in the process of retrofitting the story to serial form.

And a clever adaptation it looks to be. By moving up the collective “flashforward” experienced by the entire human race from twenty-odd years to six months, the TV show embeds its own climax within a different kind of foreseeable future: the conclusion of season one. That is, as the characters catch up with their own precognated fates on April 29, 2010 (in show-reality), so will we the watchers (in audience-reality), making for what I expect to be a delicious and delirious moment of suture. Like the first season of Heroes, Flashforward constructs itself around its own endpoint, arriving like clockwork twenty-odd episodes from now.

Clever, but maybe not smart. Look what happened to Heroes, which did great until collapsing into meaningless narratorhea with the start of its second season. I can think of countless TV series done in by their own cruelly relentless seriality, overstaying their welcome, swapping in cast members and increasingly baroque storytelling gimmicks until the final result is a ghoulish, cyborged facsimile of the show we once knew and loved. People speak of “jumping the shark,” but the truth of a TV show that’s lost its soul is something much more depressing: an elderly parent babbling in the grip of Alzheimer’s, a friend lost to dementia, a young and innocent heart curdled by prostitution or drug addiction. The excitement of Flashforward will consist of watching as it knowingly exploits the feints and deferrals of serial form, doling out clues and red herrings that keep us guessing even as the destination comes inexorably into greater focus — a finale that, when it finally arrives, will appear perfectly logical. Good storytelling gets us to the expected endpoint by unexpected means, and I wonder if Flashforward has it in itself to pull off the trick more than once.

In the meantime, let’s sit back and appreciate the tapestry as it emerges for the first, unrepeatable time. The characters have already begun to build a “conspiracy wall,” tacking up photos, scribbled notes, and lengths of string to make a tableau that simultaneously constructs the future as solution while decoding it as mystery. And don’t forget the wonderful opportunity for meta-reflection on the existential whys and wherefores of TV as the first episode ends with another kind of “flashforward” — this one a promotional montage enticing us with glimpses of the season to come. In this sense, of course, the show is a perfect commercial animal, advertising itself and its high concept with every beat of its crass and calculated heart. But in another, purer sense, it is a kind of koan, an invitation to meditate on the deeper patterns of the stories we tell; the time in which we experience them; the nature of narrative consciousness itself.

Flashforward may be, in short, one last chance to live in the media present (even as its central conceit destroys any sense of simple present-ness). Here’s to enjoying the experience before the show is ruined by its own need to respawn in 2010-2011, by the ongoing efforts of the spoiler community and devout Wiki priesthood, or by the aforementioned box sets, downloads, and torrents. A series like this is perfectly engineered for its time, which is to say, paced to the week-by-week parceling of information, the micro-gaps of commercial breaks and the macro-gaps between episodes.

Yet even as we put a name to the temporality of TV, it is already past. For all such gaps are dissolving in the quick waters of new media, and with them the gaps in knowledge (precisely-lathed questions with carefully-choreographed answers) on which a show like Flashforward, and by extension all serial storytelling, thrives. We who are “lucky” enough to straddle this historical juncture — at which the digital is reworking the media forms with which we grew up — face our own version of the predestination paradox: knowing full well where we’re going, yet helpless before the forces that deliver us there.