Worldbuilding avant la lettre in Robert A. Heinlein

“The only mainstream writer to whom Heinlein acknowledges a debt is Sinclair Lewis, and it is not for literary style. Lewis laid out extensive backgrounds for his work which did not directly appear in the story. That way he understood how his characters should react in a given situation, since he knew more about them than the reader did. In Heinlein, this ultimately grew beyond the bounds intended by Sinclair Lewis, whose characters performed against a setting with which the reader might be familiar. The Sinclair Lewis method couldn’t work for science fiction unless an entire history of the future was projected: then individual stories and characters in that series could at least be consistent within the framework of that imaginary never-never land.

“In following just this procedure, Robert A. Heinlein inadvertently struck upon the formula that had proved so successful for Edgar Rice Burroughs, L. Frank Baum, and, more recently, J. R. R. Tolkien. He created a reasonably consistent dream world and permitted the reader to enter it. Heinlein’s Future History has, of course, a stronger scientific base than Burroughs’s Mars, Baum’s Oz, or Tolkien’s land of the ‘Rings,’ but is fundamentally the same device.”

— Sam Moskowitz, Seekers of Tomorrow: Masters of Modern Science Fiction (New York: Ballantine, 1967), 194.

Cover Concept

Below, preliminary cover art for my forthcoming book, due out from NYU Press in Spring 2018:

[Preliminary cover art]

I really like the way the designers have integrated artwork from Georges Méliès’s A Trip to the Moon (1902)—tucking it into the corner so that it seems to glare down at the Enterprise refit from the filming of 1979’s Star Trek: The Motion Picture. The story behind the latter image is a saga in itself that will have to wait for a future post; right now I’m too busy going over the copyedited manuscript.

Review: Masters of Doom

Masters of Doom: How Two Guys Created An Empire and Transformed Pop Culture
David Kushner
New York: Random House, 2003
Review originally published in January 2004 at the Resource Center for Cyberculture Studies (RCCS)


David Kushner’s Masters of Doom, a spirited, winningly naive history of the personal-computer boom and the offbeat souls who design and play videogames, reminded me a lot of Steven Levy’s Hackers. Published in 1984, Levy’s sprawling book was an early and influential attempt to condense thirty years of computer culture into a form fit for mainstream readers. Between the 1950s and the 1980s, computers went from private to public, from mini to micro. Room-sized ENIACs housed on college campuses and guarded by “priesthoods” of elite technicians were reinvented for the home market as funky desktop boxes, built from kits or purchased from companies like Tandy, Commodore, and Apple. Though these machines possessed little in the way of storage capacity or graphic capability, their meager 8K and 16K memories would nonetheless be jammed with programs written in BASIC by home users eager to code a better mousetrap—or the next big game.

Hackers, which took its title from the term for virtuoso teenage programmers increasingly visible on cultural radar through films like Tron (1982) and WarGames (1983), appealed on multiple mythic fronts: it was a tale of ingenuity (most hackers were self-taught), of nonconformity (many were socially misaligned nerds), and of profitability (a famous few, like Bill Gates, turned their programs into products, making millions and founding empires). Hackers, in short, told a pleasingly American story, despite its high-tech trappings. And one of the book’s incidental but not insignificant functions was to cast in homely, unthreatening terms the single largest upheaval in society and commerce since the Industrial Revolution. Computers aren’t scary; they’re fun! Just look at all the nutty kids who’ve fallen in love with them!

Masters of Doom picks up where Hackers leaves off, breathlessly detailing the rise to fame (and fall from grace) of two such nutty kids in the late 80s and 90s, John Carmack and John Romero. These “superstars”—if we accept Kushner’s rather streamlined hagiography—dreamed up, produced, and marketed to a ravenous community of joystick jockeys the first-person shooter, or FPS. Offhandedly defined by Kushner as “paintball-like contests played from a first-person point of view” (x), FPSs are notoriously violent, fast-moving games in which players peer over the barrels of large weapons (grasped in their own avatarial “hands”) while racing about mazelike levels, locked in combat with opponents dreamed up by the computer or virtually present through local-area networks (LANs) or Internet connections. The latter play mode, called “deathmatch,” is the defining function of FPSs such as standard-bearers Quake III Arena and Unreal Tournament 2003. (Ironically, Carmack and Romero created deathmatch almost as an afterthought, an extra feature on their seminal 1993 Doom.) Deathmatch is also, of course, the (anti)social practice that brings FPSs in for condemnation by the likes of U.S. Senator Joseph Lieberman, who argues in high-moral-panic mode that violent videogames produce violent kids—epitomized in school shootings such as the 1999 Columbine High murders. (Dylan Klebold and Eric Harris, who orchestrated the killings, happened to be Doom fans.)

Thankfully, rather than focusing on the FPS’s controversial political status or debating the cultural effects of videogames, Kushner pitches Masters more intimately. Carmack and Romero’s collision of talents and personalities led to interpersonal firefights at least as entertaining as the digital melees engineered by their company, id Software. Tracing the duo’s evolution from colleagues to friends, then from competitors to enemies, and finally to battle-weary but apparently reconciled veterans of the game industry’s incandescent 90s, Kushner characterizes the team in language reminiscent of the Lennon-McCartney partnership: Carmack is the intense, driven genius (the “rocket scientist”), while Romero is the flamboyant, crowd-pleasing publicity hound (the “rock star”).

These traits are made clear—perhaps reductively so—in the initial chapters, which join Romero at the age of 11 in 1979, toting up high scores on all the Asteroids arcade consoles in Rocklin, California. Romero went on to write his own games for the Apple II, publishing them in game magazines and eventually landing a job in the industry. Carmack, born in 1970, followed a similar path, though with a technological emphasis. Grasping the promise inherent in the primitive games of the early 80s, a time when arcades crumbled economically and the home-computer and console market had yet to take off, Carmack wanted to push the envelope and create immersive experiences to rival the holodeck on Star Trek: The Next Generation.

This theme runs throughout Masters of Doom: the sense that early gamers such as “the two Johns” were always looking toward the next great thing, whether it be a sequel to a prizewinning game or a world of networked subjectivity. Refreshingly, the book doesn’t bother to defend the importance of videogames; instead, it takes for granted that games are forerunners of virtual reality, fantasy spaces realized within computer hardware and populated by human beings in avatarial masquerade. Describing Romero’s love affair with the role-playing game Ultima, for example, Kushner writes confidently that “gamers overlooked the crudeness for what the games implied: a novelistic and participatory experience, a world” (11). Later on, he assesses an arcade game whose side-scrolling graphics broke new ground: “Compared with the other games in the arcade, Defender felt big, as if the player was living and breathing in a more expansive virtual space” (46).

And against the backdrop of the Internet and World Wide Web, Kushner—presumably emulating Carmack and Romero’s own fascination with the medium’s possibilities—repeatedly invokes the science-fiction constructs of William Gibson’s (1984) cyberspace and Neal Stephenson’s (1992) metaverse. Such lofty allusions have the effect of elevating Romero and Carmack’s “mission” while inoculating Masters against dismissal by a nation tired of get-rich-quick stories. The equation of videogames and cyberspace implies that game designers are, in fact, engineering future society (a claim also put forth by John Gaeta, special-effects supervisor of the Matrix films [1]), and that Carmack and Romero were the visionaries who laid the groundwork for this online world.

If you buy that philosophy, you’ll enjoy the book. Even if you don’t, you will get something out of the insider’s perspective on the game industry, which Kushner portrays as an analog of videogames themselves: colorful, loud, profane, cheerfully violent. The bulk of the book centers on id’s profitable series of FPSs: Wolfenstein 3D, Doom, and Quake. Each game, along with its sequels and expansion packs, is presented as a risky undertaking, a dream of a product that might or might not find its audience. Each game, of course, turns out to be enormously popular, lending Masters the feel of a Broadway musical about underdog victories and overnight successes. Come on, kids, let’s put on a show!

Some will find this picture of software development disingenuous—hackers can’t all be scrappy outsiders—but it works well enough to fill 300 pages of fast-moving prose. And amid the exuberance of all-night programming sessions, endless pizza deliveries, and the fleet of Ferraris so fetishized by Carmack and Romero, Masters of Doom casually outlines the emergence of a new business paradigm, one keyed to the breakneck rhythms of Moore’s Law. That maxim, named for Intel co-founder Gordon Moore and dating to 1965, holds that the number of transistors on a chip doubles roughly every couple of years, a pace popularly glossed as computing power doubling every eighteen months. For id, this meant that not just the outward design of games, but their underlying architecture, had to undergo constant reinvention. While Romero hyped each upcoming release (in terms that often landed him in trouble with the avid but skeptical gaming community), Carmack labored to produce software “engines” capable of rendering 3D graphics with unprecedented speed and realism. And each new version of the FPS encouraged, or forced, gamers to upgrade their computers in order to run the new software. At the same time, id’s practice of releasing its games as shareware (downloadable files whose first levels could be played for free, with the full version requiring purchase) cut distributors out of the circuit, amplifying the profits of developers.
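To get a feel for how quickly that compounding bites, here is a minimal back-of-the-envelope sketch (my own illustration, not Kushner’s), assuming the popular eighteen-month doubling figure and measuring from each id release to the book’s 2003 publication:

    # Back-of-the-envelope illustration (not from Kushner's book): how the popular
    # "doubling every 18 months" gloss of Moore's Law compounds between an id
    # release and the book's 2003 publication. Release years are approximate.
    def doublings(start_year, end_year, period_years=1.5):
        """Number of doublings between two years, assuming one every period_years."""
        return (end_year - start_year) / period_years

    for title, year in [("Wolfenstein 3D", 1992), ("Doom", 1993), ("Quake", 1996)]:
        n = doublings(year, 2003)
        print(f"{title} ({year}): ~{n:.1f} doublings, roughly {2 ** n:.0f}x the compute by 2003")

Under that assumption, the machines of 2003 offer on the order of a hundred times the raw horsepower Wolfenstein 3D shipped against, which is exactly why Carmack’s engines could never stand still.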

The end result is that id’s games pushed the industry in specific directions. Cyberspace may not yet be here, but according to Kushner, the world of computing in 2003 would be radically different (and a lot less fun) if not for Carmack and Romero.

  1. In a Wired cover story on the Matrix sequels, Gaeta warned of the dangers posed by advanced image-manipulation techniques. “You have these paranoid films about the Matrix depicting how people are put in a mental prison by misusing this technology . . . maybe our technology will become the actual Matrix, and we have inadvertently spilled the vial of green shit out onto the planet.”

Gibson, W. (1984). Neuromancer. New York: Ace Books.
Levy, S. (1984/1994). Hackers: Heroes of the Computer Revolution. New York and London: Penguin.
Silberman, S. (2003). “Matrix2.” Wired 11.05.
Stephenson, N. (1992). Snow Crash. New York: Bantam Books.

Seaworthy

Back in January when I sent out my book manuscript, I had the weird sense of waving goodbye to a cruise ship I built myself, standing at the pier while this giant, white, overstuffed artifact bellied out to sea.

It was not the first time this particular ship had been launched. In August 2006 I printed out the whole thing, some 350 pages. This was after my dissertation defense but before I dropped off the text at the print shop in Bloomington where Indiana University dissertations are bound. Lots of other stuff was going on at the time—I was in the midst of packing for the move to Pennsylvania, my thoughts mostly focused on coming up with syllabi for the two courses I was contracted to teach at Swarthmore starting in the fall. But I took a moment, amid the mess of cardboard boxes and sorting stacks for the yard sale, to balance the fat block of pages in my hands, marveling that I had managed to produce such a thing.

About a year later I sent it out again, this time as a book proposal. I got polite notes back from two academic presses—saying, essentially, thanks but no thanks—and shelved the project until 2011 or so. It went out again at that point, and this time was met with a yes, just in time for my tenure case.

Then came the reader reports. Mostly positive, with a handful of suggestions for changes, they stopped me in my tracks; it would be almost four more years before I got around to the hole-patching, case-study updating, and ambiguity-clarifying needed to clear the final hurdle.

I should explain, if it isn’t clear from the timeline I’ve just outlined, that I am not a good writer. Process-wise, I mean. Faced with a task, I put it off; encouraged, I dig in my heels and work even more grudgingly. This goes deep with me, all the way back to childhood. Though I have, for the most part, achieved the level of wisdom that involves accepting myself as I am, procrastination is one of the traits I most want to change in myself. As soon as I get around to it.

Anyway, it turns out that publishing a book, at least a scholarly one, involves more than one goodbye; it’s less like Ilsa and Rick lingering heartlost in the fog than like dropping off a child at school, morning after morning. That’s probably the wrong metaphor here, because I adore my children, but have come to detest the book. Still, the other images that spring to mind—repeated skin biopsies, for instance—might express in a Cronenbergian way the connection between writing and excrescence, a putrefaction of words shed like skin dust, but they don’t capture the idea of an object consciously built. A model kit, seams puttied and sanded, paint sprayed and retouched, decals and weathering conscientiously applied. Doomed to show only flaws and mistakes in the eyes of its maker; to everyone else it’s probably, y’know, okay.

My book is looking more okay these days thanks to the copyeditors at NYU Press. I got the manuscript back for review and have been going through the chapters, assessing the changes. There are a few on every page, and I see the wisdom of every single one. That’s generally my response to being edited—gratitude. Harlan Ellison and a mob of similarly perpetually disgruntled writers would kick me out of the Tough Kids Club for saying so. You can find me over by the janitor’s closet, eating lunch with Strunk and White.

Fort

There’s nothing like a suddenly lost object to demonstrate the precarity of our systems for keeping order—the flimsiness of the illusion that the spaces we inhabit are at our mercy, rather than the other way around.

There are many sorts of object, of course, and many sorts of loss. I daily shed millions of dead skin cells without thinking about it, and it doesn’t trouble my world if a Lego block goes missing from the Tupperware footlocker where all our Lego pieces entropically end up. The absence I’m talking about is the shadow cast by a specific kind of item: it must be something so critical to daily function that I need it—at least need easy access to it—almost all the time; by the same token, its ubiquity as both physical item and psychic token must make it easy to take for granted. Glasses, keyring, wallet, phone, various iPods and iPads. Made almost invisible by ritualized use, these small but vital technologies don’t often vanish from the map. But when they do, they threaten to take the map with them.

This week I spent a disturbing and disorienting couple of days searching for my laptop, a silvery sylph of a MacBook Air, which did not disappear so much as slowly slip off my radar—not a jump cut but a slow dissolve. Like Pasteur’s germs, the loss became an official fact only retrospectively. First I had to shamble from spot to spot around the house to check all the places the MacBook tends to get left: the high shelf the kids can’t get at, the table beside the wall outlet, under the couch, under the bed. Meanwhile my thoughts probed an alternative theory, treating the missing computer as a theft. Hadn’t I left my car unlocked, work case in the front seat plain for all to see, when I dropped my kids at school? It was only a few minutes. But how long would it have taken, really?

I did not like the feeling of these suspicions germinating and spreading vinelike through my view of the world. Too much of the U.S. is ensnared and immobilized in such thorny psychic tendrils. And just as the presidency is in a way the mass projection of a schizoid populace—a country whose constituent blocs have lost the ability to comprehend each other, an imagined community angry-drunk on its dark and fearful imaginings—my worries about some faceless thief are just a way of externalizing anxiety and disavowing my own responsibility for losing track of something valuable.

The computer finally turned up (isn’t it I who turned up? the laptop didn’t move) in my campus office. It was on a shelf at about shoulder height, a place where books belong. I had no memory of setting it there, but set it there I must have. So now my theoretical thief has become an inferred Bob.

That word: absentminded. Quick flash of Fred MacMurray and an infinitely receding four-dimensional array of old academics wearing one sock and polishing their glasses. A little past that tesseract of cliché is one very real person, my mother, whose memory loss has in recent years become profound. Because of her I suppose I watch my own slips and failings with a diagnostic eye, sifting random problems for systematic ones, signals in the noise that point to a larger noise in the signal.

The computer vanished the instant I put it somewhere it doesn’t usually go. What does that say about where the coordinates and position of any object reside? Is it all and only relational? Are there, in fact, only negative differences, dark matter? I think it’s less important to answer those unanswerables than to note how close they are to the surface, a magma of existential worry coursing under the brightness and rationality of waking life. Note it, remember it, honor it.

Our book is out!


I couldn’t be more pleased to announce the publication of Special Effects: New Histories/Theories/Contexts, an anthology I co-edited with my good friends and colleagues Dan North and Michael S. Duffy. Inspired by a panel we presented together at the 2008 Film and History Conference in Chicago, the book features essays by the three of us, along with contributions from established and rising luminaries such as Scott Bukatman, Julie Turnock, Chuck Tryon, Lisa Bode, Drew Ayers, Aylish Wood, Angela Ndalianis and … well, read the TOC yourself:

Foreword — Scott Bukatman
Introduction — Bob Rehak, Dan North and Michael S. Duffy
 
PART 1: TECHNIQUES
1. Ectoplasm and Oil: Methocel and the Aesthetics of Special Effects — Ethan de Seife
2. Fleshing It Out: Prosthetic Makeup Effects, Motion Capture and the Reception of Performance — Lisa Bode
3. (Stop)Motion Control: Special Effects in Contemporary Puppet Animation — Andrea Comiskey
4. Magic Mirrors: The Schüfftan Process — Katharina Loew
5. Photorealism, Nostalgia and Style: Photorealism and Material Properties of Film in Digital Visual Effects — Barbara Flueckiger
 
PART 2: BODIES
6. Bleeding Synthetic Blood: Flesh and Simulated Space in 300 — Drew Ayers
7. Blackface, Happy Feet: The Politics of Race in Motion Capture and Animation — Tanine Allison
8. Being Georges Méliès — Dan North
9. The Battlefield for the Soul: Special Effects and the Possessed Body — Stacey Abbott
10. Baroque Facades, Jeff Bridges’ Face and Tron: Legacy — Angela Ndalianis
11. Organic Clockwork: Guillermo del Toro’s Practical and Digital Nature — Michael S. Duffy
 
PART 3: SCREENS
12. Digital 3D, Technological Auteurism and the Rhetoric of Cinematic Revolution — Chuck Tryon
13. Shooting Stars: Chesley Bonestell and the Special Effects of Outer Space — Bob Rehak
14. Designed for Everyone Who Looks Forward to Tomorrow!: Star Wars, Close Encounters of the Third Kind, and the 1970s Expanded Blockbuster — Julie Turnock
16. The Right Stuff?: Handmade Special Effects in Commercial and Industrial Film — Gregory Zinman
17. ‘Don’t You Mean Extinct?’: On the Circulation of Knowledge in Jurassic Park — Oliver Gaycken
18. Inception’s Timespaces: An Ecology of Technology — Aylish Wood
 
Afterword: An Interview with Lev Manovich — Dan North

The book can be ordered from Amazon or directly from the publisher. And if you should ever be in the Philly area, drop by my office and I’ll sign your copy …

Jurassic World


Driving home from last night’s screening of Jurassic World, I kept thinking back to a similarly humid summer evening in 1997, when I locked horns with a friend over the merits of an earlier film in the franchise, The Lost World: Jurassic Park. We were lingering outside the theater before heading to our cars, digesting the experience of the movie we had just watched together. He’d disliked The Lost World, which (if I remember his stance accurately) he saw as an empty commercial grab, a cynical attempt by Steven Spielberg to repeat the success of Jurassic Park (1993)—a film we agreed was a masterpiece of blockbuster storytelling and spectacle—while delivering none of the original’s spark and snap. As for my position, all I can reconstruct these 18 years later is that I appreciated The Lost World’s promotion of Ian Malcolm to primary protagonist—after The Fly (1986), any movie that put Jeff Goldblum in the driver’s seat was golden in my book—as well as its audacious final sequence, in which a Tyrannosaurus rex runs rampant through the streets and suburbs of San Diego.

Really what it came down to, though, were two psychological factors, facets of a subjectivity forged in the fantastic fictions and film frames of science-fiction media, my own version of Scott Bukatman’s “terminal identity.” The first, which I associate with my fandom, was that in those days I never backed down from a good disagreement, whether on picayune questions like Kirk versus Picard or loftier matters such as the meaning of the last twenty minutes of 2001: A Space Odyssey (1968). (Hmm, both of those links go to WhatCulture.com—maybe I should add it to my reading list.)

The second reason for my defense of The Lost World was simply my overriding wish that it not be a piece of crap: that it be, in fact, great. Because even then I could feel the tug of loyalty tying me to an emergent franchise, the sense that I’d signed on for a serial experience that might well stretch into decades, demanding fealty no matter how debased its future installments might become. I hadn’t yet read the work of Jacques Lacan—that would come a couple of years later, when I became a graduate student—but I see now that I was thinking in the futur antérieur, peering back on myself from a fantasized, later point of view: “I will have done …” It’s a twisty trick of temporality, and if I no longer stress about contradictions in my viewing practice the way I once did (following the pleasurable trauma of Abrams’s reboot, I have accepted the death of the Star Trek I grew up with), I am still haunted by a certain anxiety of the unarrived regarding my scholarly predilections and predictions (I’d hate to be the kind of academic who future historians tsk-ingly agree got things wrong).

But the tidal pull of the franchise commitment persists, which is why I’m having a hard time deciding whether Jurassic World succeeded or sucked. Objectively I’m pretty sure the film is a muddle, certainly worse in its money-grabbing motivations and listless construction than either The Lost World or its follow-up, the copy-of-a-copy Jurassic Park III (2001). Anthony Lane in the New Yorker correctly bisects Jurassic World, finding it “doggedly dull for the first hour and beefy with basic thrills for most of the second,” to which I’d add that most of that first hour is rushed, graceless, and elliptical to the point of incoherence. One of my favorite movie critics, Walter Chaw of Film Freak Central, damns that ramshackle execution with faint praise, writing: “Jurassic World is Dada. It is anti-art, anti-sense—willfully, defiantly, some would say exuberantly, meaningless. In its feckless anarchy, find mute rebellion against narrative convention. You didn’t come for the story, it says, you came for the set-ups and pay-offs.”

Perhaps Chaw is right, but seeing the preview for this fall’s Bridge of Spies reminded me what an effortless composer of the frame Spielberg can be–an elegance absent from Jurassic World save for one shot, which I’ll get to in a minute–and rewatching the opening minutes of Jurassic Park before tackling this review reminded me how gifted the man is at pacing. In particular, at putting a story’s elements in place: think of the careful build of the first twenty minutes of Close Encounters of the Third Kind (1977), laying out its globe-jumping puzzle pieces; or of Jaws (1975), as the sleepy beach town of Amity slowly wakes to the horror prowling its waters. Credit too the involvement of the late, great Michael Crichton. His early technothrillers–especially 1969’s The Andromeda Strain and 1973’s Westworld, both of which feed directly into Jurassic Park–might be remembered for their high concepts (and derided for their thin characterizations), but what made him such a perfect collaborator for Spielberg was the clear pleasure he took in building narrative mousetraps, one brief chapter at a time. (Nowadays someone like Dan Brown is probably seen as Crichton’s heir apparent, though I vastly prefer the superb half-SF novels of Daniel Suarez.)

I delve into these influences and inheritances because ancestry and lineage seem to be much on the mind of Jurassic World. DNA and its Pandora’s box of wonders/perils have always been a fascination of the Jurassic franchise. In the first film it was mosquitoes in amber; in Jurassic World it’s the 1993 movie that’s being resurrected. Something old, something new, something borrowed, something blue: in front of the camera you’ve got B. D. Wong and a crapload of production design tying the humble beginnings of Isla Nublar to its modern, Disneyesque metastasis, while “behind the scenes” (a phrase I put in scare quotes because, let’s face it, if the material were really meant to stay behind the scenes, we wouldn’t be discussing it), videos like this one work to reassure us of a meaningful connection between the original and its copy:

The “Classic Jurassic Crew” profiled here might seem a little reachy (lead greensman? boom operator?), but the key names are obviously Jack Horner and Phil Tippett, “Paleontology Consultant” and “Dinosaur Consultant” respectively: the former conferring scientific legitimacy upon the proceedings, the latter marking a tie to traditions of special effects that predate the digital. Tippett, of course, made his name in stop-motion animation–the Imperial Walkers in The Empire Strikes Back’s Hoth battle are largely his handiwork–and at first was approached by Spielberg to provide similar animation for Jurassic Park. But when Tippett’s “go motion” technique was superseded by the computer-generated dinosaurs being developed by Dennis Muren, Tippett became a crucial figure, both technologically and rhetorically, in the transition from analog to digital eras. In the words of his Wikipedia entry:

Far from being extinct, Tippett evolved as stop motion animation gave way to Computer-generated imagery or CGI, and because of Phil’s background and understanding of animal movement and behavior, Spielberg kept Tippett on to supervise the animation on 50 dinosaur shots for Jurassic Park.

Tippett’s presence in the film’s promotional field of play thus divulges World’s interest in establishing a certain “real” at the core of its operations, inoculating itself against the argument that it is merely a simulacrum of what came before. It’s a challenge faced by every “reboot event” within the ramifying textualities of a long-running blockbuster franchise, forced by marketplace demands to periodically reinvent itself while (and this is the trick) preserving the recognizable essence that made its forerunner(s) successful. In the case of Jurassic World, that pressure surfaces symptomatically in the discourse around the movie’s visual effects–albeit in a fashion that ironically inverts the test Jurassic Park met and mastered all those years ago. Park’s dinosaurs were sold as breakthroughs in CGI, notwithstanding the brevity of their actual appearance: of the movie’s 127 minutes, only six contained digital elements, with the rest of the creature performances supplied by Tippett’s animation and Stan Winston’s animatronics. Those old-school techniques were largely elided in the attention given to Park’s cutting-edge computer graphics.

Jurassic World, by contrast, arrives long after the end of what Michele Pierson has called CGI’s “wonder years”; inured to Hollywood’s production of digital spectacle by its sheer superfluity, audiences now seek the opposite lure, the promise of something solid, profilmic, touchable. This explains the final person featured in the video: John Rosengrant of the fittingly-named Legacy Effects, seen operating a dying apatosaurus in one of Jurassic World’s few languid moments. The dinosaur in that scene is a token of the analog era, offered up as emblem of a practical-effects integrity the filmmakers hope will generalize to their entire project. It’s an increasingly common move among makers of fantastic media, one that critics, like this writer for Grantland, are all too happy to reinforce:

J. J. Abrams got a big cheer at the recent Star Wars Celebration Anaheim when he said he was committed to practical effects. George Miller won plaudits for sticking real trucks in the desert in Mad Max: Fury Road. Similarly, [Jurassic World director Colin] Trevorrow gestured to the Precambrian world of special effects by filling his movie with rubber dinos, an old View-Master, and going to the mat to force Universal to pony up for an animatronic apatosaurus. Chris Pratt’s Owen Grady tenderly ministers to the old girl before her death — a symbolic death of the practical effect under the rampage of CGI.

I hope to say more about this phenomenon, in which the digital camouflages itself in a lost analog authenticity, in a future post. For now I will simply note that “Chris Pratt’s Owen Grady” might be the most real thing within Jurassic World. Pratt’s performance here is strikingly different from the jokester he played so winningly in Guardians of the Galaxy: he’s tougher, harsher, more brutish. Spielberg is rumored to want him to play Indiana Jones, and I can see how that would work: like the young Harrison Ford, Pratt can convincingly anchor a fantasy scenario without seeming like he’s playing dress-up. But the actor Pratt reminds me of even more than Harrison Ford is John Wayne: his first shot in Jurassic World, dazzlingly silhouetted against sunlight, recalls that introductory dolly in Stagecoach (1939) when the camera rushes up to Wayne’s face as though helpless to resist his primordial photogénie.

As for the rest of Jurassic World, I enjoyed some of it and endured most of it, borne along by the movie’s fitful but generally successful invocations of the 1993 original. “Assets” are one of the screenplay’s keywords, and they apply not only to the dinosaurs that are Isla Nublar’s central attractions but to the swarm of intellectual property that constitutes the franchise’s brand: the logos, the score, the velociraptors’ screeches and the T-Rex’s roar. (Sound libraries, like genetically-engineered dinosaurs, are constructed and owned things too.) Anthony Lane jeers that there is “something craven and constricting in the attitude of the new film to the old,” but I found the opposite: it’s when World is most clearly conscious of Park that it works the best, which is probably why I enjoyed its final half hour–built, as in Park, as an escalating series of action beats, culminating in a satisfying final showdown–the most.

But that might just be my franchise loyalty talking again. As with The Lost World in that 1997 argument outside the theater, I may be talking myself into liking something not because of its actual qualities, but because it’s part of my history–my identity.

It’s in my DNA.

The Long Flight of the Leif Ericson, Part Two

This is the second in a series of posts tracing the storied path of the Leif Ericson, a spaceship designed in 1968 whose afterlife has carried it through a number of incarnations in different media formats – most notably, plastic. Previous posts can be found here.

Viewing the Leif Ericson as the expression of Matt Jefferies’s singular engineering sensibility is pleasing for at least two reasons. First, in crediting the ship to the work of a “great man” of production design (who himself worked under the direction of another, that Great Bird of the Galaxy Gene Roddenberry) it scratches our auteurist itch—one specific to modes of fandom oriented toward behind-the-scenes makers such as special-effects artists. Second, it invites us to tie the Ericson to a larger fictional system, the storyworld of Star Trek: even if never directly seen or mentioned in the original series, maybe the Ericson was out there regardless, plying the spaceways alongside the Enterprise and other Starfleet vessels.

Both of these satisfactions are, in their way, ideological lures: means of extracting pleasure from the fantasy operations of capitalism. We come nearer the truth, or at least a more complete picture, if we see the Ericson as the product of an industrial relationship between two arms of mass culture: television and toys. For in 1968, the Leif Ericson made its first appearance in public not on screen but in the material form of a plastic model kit.

The Michigan-based manufacturer AMT had enjoyed a mutually beneficial symbiosis with Star Trek since before the show’s premiere, contracted by Desilu—the studio where Roddenberry developed Trek—to build technological exotica as needed for the series. Although the company’s name, AMT, stood for Aluminum Model Toys, its capabilities extended beyond the making of cheap playthings into the fabrication of large commercial items. As detailed on Memory Alpha, the need to make “finished display pieces … for marketing purposes” led AMT to start the Speed and Custom Division Shop, a subsidiary “to build both full-scale and scaled automobile mockups … to promotional ends, as well as to manufacture the templates or masters in order to construct the molds from which the parts for their model kits were extracted or cast.” A third axis, extending outward from these coordinates of showroom spectacle and mass-produced consumer item, connected the items AMT built for Trek: objects ranging from studio miniatures to full-sized sets to be inhabited by actors.

These production artifacts were components of an invented future, simultaneously split and joined by the ontological dividing lines of camera lens, celluloid splice, and cathode-ray tube. Take for example the Galileo shuttlecraft: AMT built it as a studio model to be filmed against a bluescreen and matted onto backgrounds of starry space, but also made a full-sized version of the ship’s interior. Episodes like the first season’s “The Galileo Seven”—written in part to showcase the spacecraft—married exterior and interior, constructing for audiences a screen reality through the simple yet profound magic of a televisual edit.

To be continued …

The Long Flight of the Leif Ericson, Part One

This is the first of a series of posts tracing the storied path of the Leif Ericson, a spaceship designed in 1968 whose afterlife has carried it through a number of incarnations in different media formats – most notably, plastic.

Origins

Reflecting the many odd waypoints and junctions through which its journey would eventually take it, the Leif Ericson had more than one starting point: as with a quantum particle, its emergence can be fixed in relation to multiple and not always commensurate frames of reference, and our choice of perspective changes the very nature of the object we describe. On the one hand, we can see it as the creation of a single, inspired author; on the other, as the product of a set of industrial forces.

Walter “Matt” Jefferies

In the first version, the Ericson was born in 1968 in the sketchbooks of Walter “Matt” Jefferies (1921-2003), production artist on the original Star Trek series. [1] Part of a team of designers that included propmaker Wah Ming Chang, costumer William Ware Theiss, and makeup artist Fred Phillips, Jefferies—whose background in aviation and mechanical illustration was ideally suited to visualizing futuristic technologies in blueprintable, buildable forms—supplied Trek with its most familiar and recognizable features. These included the exterior of the U.S.S. Enterprise, with its saucer-shaped command module joined to a cigar-shaped engineering section from which two narrow, cylindrical warp nacelles jutted backwards: a configuration of geometrical solids whose basic arrangement has endured throughout fifty years of resculpting and streamlining in one movie, TV series, and videogame after another. Jefferies also designed the Enterprise’s circular bridge, its crew’s quarters, and the transporter room. Built as standing sets and used repeatedly across the seventy-nine episodes of the original series, these fixtures of a future history quickly became as familiar to audiences as the other, smaller details contributed by Jefferies: Starfleet’s golden arrowhead insignia; the instrumental triumvirate of communicator, tricorder, and phaser. But for the model-building fans who play such an important role in this story, Jefferies’s most important creations were his ships: not just the Enterprise, but the submarine-shaped Botany Bay commanded by Khan in “Space Seed”; the turreted, whirligig space station in “The Trouble with Tribbles”; the Klingons’ manta-ray-like battle cruiser in “The Enterprise Incident”; the boxy, three-windowed shuttlecraft in “The Galileo Seven.”

Early concepts for the Leif Ericson and Scoutship

The Leif Ericson originated as another of Jefferies’s fictional spacecraft, but not one that ever appeared on Trek—or at least not for many years. In 1968, Jefferies sketched a pointed, rocketlike ship along with a smaller vessel whose delta wings and bulbous front section vaguely resembled a baby bird. Designed as a pair—the second craft would ride within the larger vehicle, inside a hangar covered by two hinged doors—the Galactic Cruiser Leif Ericson, together with its “mini scout ship,” was to be the first release in a series intended not for TV but for toys: a line of model kits put out by a company called AMT.

To be continued …

[1] http://www.fact-index.com/m/ma/matt_jefferies.html

PotD: 100 Days of Good Luck

I found this dollar bill outside Hicks Hall the other day. It’s special to me for reasons I won’t go into here; suffice to say that I singled it out from the endless, anonymous flow of our monetary system (or at least that system’s material tags) as meaningful and worthy of preservation — whatever preservation might mean in the digital era. Just as some business owners frame the first payment they received, public sign of their humble beginnings, I post today’s image to mark my own new start.

Or who knows: maybe I am part of the anonymous flow and this singular object, stamped as it is with its own serial number, chose me.