Tilt-Shifting Pacific Rim


Two-thirds of the way through Pacific Rim — just after an epic battle in, around, and ultimately over Hong Kong that’s one of the best-choreographed setpieces of cinematic SF mayhem I have ever witnessed — I took advantage of a lull in the storytelling to run to the restroom. In the air-conditioned chill of the multiplex, the lobby with its concession counters and videogames seemed weirdly cramped and claustrophobic, a doll’s-house version of itself I’d entered after accidentally stumbling into the path of a shrink-ray, and I realized for the first time that Guillermo del Toro’s movie had done a phenomenological number on me, retuning my senses to the scale of the very, very big and rendering the world outside the theater, by contrast, quaintly toylike.

I suspect that much of PR’s power, not to mention its puppy-dog, puppy-dumb charm, lies in just this scalar play. The cinematography has a way of making you crane your gaze upwards even in shots that don’t feature those lumbering, looming mechas and kaiju. The movie recalls the pleasures of playing with LEGO, model kits, action figures, even plain old Matchbox cars, taking pieces of the real (or made-up) world and shrinking them down to something you can hold in your hand — and, just as importantly, look up at. As the father of a two-year-old, I often find myself lying on the floor, my eyes situated inches off the carpet and so near the plastic dump trucks, excavators, and fire engines in my son’s fleet that I have to take my glasses off to properly focus on them. At this proximity, toys regain some of their large-scale referents’ visual impact without ever quite giving up their smallness: the effect is a superimposition of slightly dissonant realities, or in the words of my friend Randy (with whom I saw Pacific Rim) a “sized” version of the uncanny valley.

This scalar unheimlich is clearly on the culture’s mind lately, literalized — iconized? — in tilt-shift photography, which takes full-sized scenes and optically transforms them into images that look like dioramas or models. A subset of the larger (no pun intended) practice of miniature faking, tilt-shift updates Walter Benjamin’s concept of the optical unconscious for the networked antheap of contemporary digital and social media, in which nothing remains unconscious (or unspoken or unexplored) for long but instead swims to prominence through an endless churn of collective creation, commentary, and sharing. Within the ramifying viralities of Facebook, Twitter, Tumblr, Reddit, and 4chan, in which memes boil reality into existence like so much quantum foam, the fusion of lens-perception and human vision — what Dziga Vertov and his fellow Soviet pioneers called the kino-eye — becomes just another Instagram filter:

[Image: an example of tilt-shift photography]
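
(For the technically curious, here is a minimal sketch of how the miniature-faking effect is commonly simulated in image-editing software: keep a narrow horizontal band in sharp focus, blur progressively toward the top and bottom of the frame, and boost saturation so everything reads as brightly painted plastic. The Pillow-based code below is an illustrative approximation of the general technique, not any particular app’s pipeline; the filename and parameter values are hypothetical.)

```python
# Rough sketch of a fake tilt-shift ("miniaturization") filter using Pillow.
# Assumes a photo shot from a high vantage point; "city.jpg" and all parameter
# values are hypothetical and meant only to illustrate the general technique.
from PIL import Image, ImageFilter, ImageEnhance

def fake_tilt_shift(path, focus_center=0.55, focus_height=0.18, max_blur=8):
    img = Image.open(path).convert("RGB")
    w, h = img.size

    # Oversaturate slightly: miniatures tend to look like painted plastic.
    img = ImageEnhance.Color(img).enhance(1.4)

    # A fully blurred copy to blend in outside the focus band.
    blurred = img.filter(ImageFilter.GaussianBlur(max_blur))

    # Vertical gradient mask: white (sharp) inside the focus band, fading to
    # black (fully blurred) toward the top and bottom edges of the frame.
    band_top = (focus_center - focus_height / 2) * h
    band_bottom = (focus_center + focus_height / 2) * h
    fade = h * 0.25  # distance over which sharpness falls off
    column = []
    for y in range(h):
        if band_top <= y <= band_bottom:
            column.append(255)
        else:
            dist = min(abs(y - band_top), abs(y - band_bottom))
            column.append(max(0, int(255 * (1 - dist / fade))))
    mask = Image.new("L", (w, h))
    mask.putdata([v for v in column for _ in range(w)])

    # Sharp image where the mask is white, blurred copy where it is black.
    return Image.composite(img, blurred, mask)

miniature = fake_tilt_shift("city.jpg")
miniature.save("city_miniature.jpg")
```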

The giant robots fighting giant monsters in Pacific Rim, of course, are toyetic in a more traditional sense: where tilt-shift turns the world into a miniature, PR uses miniatures to make a world, because that is what cinematic special effects do. The story’s flimsy romance between Raleigh Becket (Charlie Hunnam) and Mako Mori (Rinko Kikuchi) makes more sense when viewed as a symptomatic expression of the national and generic tropes the movie is attempting to marry: the mind-meldy “drift” at the production level fuses traditions of Japanese rubber-monster movies like Gojira and anime like Neon Genesis Evangelion with a visual-effects infrastructure which, while a global enterprise, draws its guiding spirit (the human essence inside its mechanical body, if you will) from Industrial Light and Magic and the decades of American fantasy and SF filmmaking that led to our current era of brobdingnagian blockbusters.

Pacific Rim succeeds handsomely in channeling those historical and generic traces, paying homage to the late great Ray Harryhausen along the way, but evidently its mission of magnifying ’50s-era monster movies into 21st-century technospectacle was an indulgence of giantizing impulses unsuited, at least, to U.S. audiences; in its opening weekend, PR was trounced by Grown Ups 2 and Despicable Me 2, comedies offering membership in a franchise where PR could offer only membership in a family. The dismay of fans, who rightly recognize Pacific Rim as among the best of the summer season and likely deserving of a place in the pantheon of revered SF films with long ancillary afterlives, should remind us of other scalar disjunctions in play: for all their power and reach (see: the just-concluded San Diego Comic-Con), fans remain a subculture, their beloved visions, no matter how expansive, dwarfed by the relentless output of a mainstream-oriented culture industry.

Awaiting Avatar

Apparently Avatar, which opened on Friday at an immersive neural simulation pod near you, posits an intricate and very real connection between the natural world and its inhabitants: animism in action, the Gaia Hypothesis operationalized on a motion-capture stage. If this is so — if some oceanic metaconsciousness englobes and organizes our reality, from blood cells to weather cells — then perhaps it’s not surprising that nature has provided a perfect metaphor for the arrival of James Cameron’s new film in the form of a giant winter storm currently coloring radar maps white and pink over most of the Eastern Seaboard, and trapping me and my wife (quite happily) at home.

Avatar comes to mind because, like the blizzard, it’s been approaching for some time — on a scale of years and months rather than hours and minutes, admittedly — and I’ve been watching its looming build with identical avidity. I know Avatar’s going to be amazing, just as I knew this weekend’s storm was going to be a doozy (the expectation is 12-18 inches in the Philadelphia area, and out here in our modest suburb, the accumulation is already enough to make cars look as though they have fuzzy white duplicates of themselves balanced on their roofs). In both cases, of course, this foreknowledge is not as monolithic or automatic a thing as it might appear. The friendly meteorologists on the Weather Channel had to instruct me in the storm’s scale and implacability, teaching me my awe in advance; similarly, we all (and I’m referring here to the entire population of planet earth) have been well and thoroughly tutored in the pleasurable astonishment that awaits us when the lights go down and we don our 3D glasses to take in Cameron’s fable of Jake Sully’s time among the Na’vi.

If it isn’t clear yet, I haven’t seen Avatar. I’m waiting out the weekend crowds (and, it turns out, a giant blizzard) and plan to catch a matinee on Tuesday, along with a colleague and her son, through whose seven-year-old subjectivity I ruthlessly intend to focalize the experience. (I did something similar with my nephew, then nine, whom I took to see The Phantom Menace in 1999; turns out the prequels are much more watchable when you have an innocent beside you with no memory of what George Lucas and Star Wars used to be.) But I still feel I know just about everything there is to know about Avatar, and can name-drop its contents with confidence, thanks to the broth of prepublicity in which I’ve been marinating for the last several weeks.

All of that information, breathlessly assuring me that Avatar will be either complete crap (the /tv/ anons on 4chan) or something genuinely revolutionary (everyone else), partakes of a cultural practice spotlighted by my friend Jonathan Gray in his smart new book Show Sold Separately: Promos, Spoilers, and Other Media Paratexts. While we tend to speak of film and television in an always-already past tense (“Did you see it?” “What did you think?”), the truth is something very different. “Films and television programs often begin long before we actively seek them out,” Jon observes, going on to write about “the true beginnings of texts as coherent clusters of meaning, expectation, and engagement, and about the text’s first initial outposts, in particular trailers, posters, previews, and hype” (47). In this sense, we experience certain media texts a priori — or rather, we do everything but experience them, gorging on adumbration with only that tiny coup de grâce, the film itself, arriving at the end to provide a point de capiton.

The last time I experienced anything as strong as Avatar‘s advance shockwave of publicity was with Paranormal Activity (and, a couple of years before that, with Cloverfield), but I am not naive enough to think such occurrences rare, particularly in blockbuster culture. If anything, the infrequency with which I really rev up before a big event film suggests that the well-coordinated onslaught is as much an intersubjective phenomenon as an industrial one; marketing can only go so far in setting the merry-go-round in motion, and each of us must individually make the choice to hop on the painted horse.

And having said that, I suppose I may not be as engaged with Avatar‘s prognosticatory mechanisms as I claim to be.  I’ve kept my head down, refusing to engage fully with the tableaux being laid out before me. As a fan of science-fiction film generally, and visual effects in particular, this seemed only wise; in the face of Avatar hype, the only choices appear to be total embrace or outright and hostile rejection. I want neither to bless nor curse the film before I see it. But it’s hard to stay neutral, especially when a film achieves such complete (if brief) popular saturation and friends who know I study this stuff keep asking me for my opinion. (Note: I am very glad that friends who know I study this stuff keep asking me for my opinion.)

So, a few closing thoughts on Avatar, offered in advance of seeing the thing. Think of them as open-ended clauses, half-told jokes awaiting a punchline; I’ll come back with a new post later this week.

  • Language games. One aspect of the film that’s drawn a great deal of attention is the invention of a complete Na’vi vocabulary and grammar. Interesting to me as an example of Cameron’s endless depth of invention — and desire for control — as well as an aggressive counter to the Klingon linguistics that arose more organically from Star Trek. Will fan cultures accrete around Avatar as hungrily as they did around that more slowly-building franchise, their consciousness organized (to misquote Lacan) by a language?
  • Start the revolution without me. We’ve been told repeatedly and insistently that Avatar is a game-changer, a paradigm shift in science-fiction storytelling. For me, the question this raises is not Is it or isn’t it? but rather, What is the role of the revolutionary in our SF movies, and in filmmaking more generally? How and why, in other words, is the “breakthrough” marketed to us as a kind of brand — most endemically, perhaps, in movies like Avatar that wear their technologies on their sleeve?
  • Multiple meanings of “Avatar.” The film’s story, as by now everyone knows, revolves around the engineering of alien bodies in which human subjectivities can ride, a kind of biological cosplay. But on another, artifactual level, avatarial bodies and mechanisms of emotional “transfer” underpin the entire production, which employs performance capture and CG acting on an unprecedented scale. In what ways is Avatar a movie about itself, and how do its various messages about nature and technology interact with that supertext?

Counting Down Galactica (4 of 4)

[This is the last of four posts counting down the final episodes of Battlestar Galactica. To see the others, click here.]

I’d meant to write my final entry in the “Counting Down Galactica” series before the airing of the finale on Friday night; a power outage in my neighborhood prevented me from doing so. Hence everything I’m about to say is colored by having seen the two-hour-and-eleven-minute conclusion, and spoilers lie in wait.

On the topic of spoilers, I know of a few ambitious souls (hi, Suzanne!) who are holding the finale in reserve, planning to watch it next week. Let me note how sympathetic I am toward their plan, and how dubious I am about their — or anyone’s — ability to navigate the days ahead without having the ending spoiled. I haven’t even dared to visit Facebook yet, for fear of destabilizing my own still-coalescing thoughts on the experience; similarly, I won’t go near the various blogs I read. When I got up this morning, I turned on NPR’s Weekend Edition, only to find myself smack-dab in the middle of a postmortem with Mary McDonnell. It was like coming out of hyperspace into an asteroid field, or — a more somber echo — waking on the morning of 9/11 to a puzzled voice on the radio saying, in perhaps our last moment of innocence, that pilot error seemed to be behind a plane’s freak collision with the World Trade Center.

Comparing BSG’s wrapup to the events of 9/11 might seem the nadir of taste, except that Galactica probably did more in its four seasons than any other media artifact besides 24 — I’m discounting Oliver Stone movies and the Sarah Silverman show — to process through pop culture the terrorist attacks and their corrosive aftereffects on American psychology and policy. It became, in fact, an easy truism about the show, to the point where I’d roll my eyes when yet another commentator assured me that BSG was about serious things like torture and human rights. But then I shouldn’t let cynicism blind me to the good that stories and metaphors can do; I myself publicly opined that the season-two Pegasus arc marked a “prolapse of the national myth,” a moment at which BSG “strode right over the line of allegory to hold up a mirror in which the United States could no longer misrecognize its practices of dehumanization and torture.” And who am I to argue with the United Nations, anyway?

But maybe the more fitting connection is local rather than global, for losing power yesterday reminded me how absolutely dependent the current state of my life is on technology: the uninterrupted flow of internet, television, radio. My wife and I were able to brew coffee by plugging the pot into one remaining active outlet, and our cell phones enabled us to maintain contact with the outside world (until their batteries died). After that, it was leave the house and brave the bright outdoors and actual, face-to-face conversation with other human beings.

I bring this up because, in its final hours, BSG plainly announced itself as concerned, more than anything else, with the relationship between nature and technology — between humans and their creations. In retrospect, this dialectic is so obvious that I’m embarrassed to admit it never quite came into focus for me when the series was running. Sure, the initiating incident was a sneak attack by Cylons, a race of human-built machines who got all uppity and sentient on us. (Or maybe it’s the case that the rebellious Cylons descended from some other, ancient caste of Cylons — I’m not entirely clear on this aspect of the mythology, and I consider it a failing of the show that it never explained this more clearly. But more about that in a moment.) Even in that first, fateful moment of aggression, though, the lines between us and them were blurred; in “reimagining” the 1970s series that was its precursor, Ronald D. Moore’s smartest decision — apart from scuffing up the mise-en-scène — was to posit Cylons who look like us; who think, feel, and believe like us. As the series wore on, this relationship became ever more intimate, incestuous, and uncomfortable, so that finally it seemed neither species could imagine itself outside of the other. It was différance, supplement, and probably several other French words, operationalized in the tropes of science fiction.

A more detailed textual analysis than I have the patience to attempt here would likely find in “Daybreak” an eloquent mapping of these tense territories of interdependent meanings. One obvious starting point would be the opposition between Cavil’s Cylon colony, a spidery, Gigeresque encrustation perched in a maelstrom of toxic-looking space debris, and the plains of Africa, evoked so emphatically in the finale’s closing third hour that I began to wonder if the story’s logic could admit the existence of any sites on Earth (or pseudo-Earth, as the story cutely frames it) that aren’t sunny, hospitable, and friendly. In this blunt binary I finally saw BSG’s reactionary (one might say luddite) ethos emerge in full flower: a decision on the undecidable, a brake on the sliding of signifiers. For all the show’s interest in hybrids of every imaginable flavor, it did finally come down to a rejection of technology, signaled most starkly in Lee Adama’s call to “break the cycle” by not building more cities — and the sailing of Galactica and her fleet into the sun. Even as humans and Cylons decide to live together (and, it’s suggested in the coda, provide the seed from which contemporary civilization sprouted), it seems to me the metaphor has been settled in humanity’s favor.

That’s fine; at least the show had the courage to finally call heads or tails on its endless coin-flipping. Interesting, though, that the basic division over which the narrative obsessed was reflected formally in the series’ technical construction and audience reception. I refer here to a dialectic that emerged late in the show’s run, between visual effects and everything else — between space porn and character moments. Reading fan forums, I lost count of the number of times BSG was castigated by some for abandoning action sequences and space battles, only to be countered by another group tut-tutting along the lines of This show has never been about action; it’s about the people. For what it’s worth, I’m firmly in the first camp (as my post last week demonstrates): the best episodes of Galactica were those that featured lots of space-set action (the Hugo-winning “33”; “The Hand of God”; most of the first season, for that matter, and bright moments sprinkled throughout the rest of the series). Among the worst were those that confined themselves exclusively to character interaction, such as “Black Market,” “Unfinished Business,” and most of the latter half of season four.

It’s not that the show was ever poorly written, or the characters uninteresting. But it did seem for long stretches to develop an allergy to action, the result being a bifurcated structure that drove some fans crazy. Much like the pointless squabbles around Lost, whose flashback structure still provokes some to shout “filler episode!” where others cry “character development!”, debate on the merits of BSG too often devolved into untenable assertions about the antithetical relationship between spectacle and narrative, with space-porn fans lampooned as short-attention-span stimulus junkies and character-development fans mocked as pretentious blowhards. Speaking as a stimulus junkie and pretentious blowhard, I feel safe in pointing out the obvious: it’s hard to pull off compelling science fiction characters without some expertly integrated shiny-things-go-boom, while spaceships and ‘splosions by themselves get you nowhere. You need, in short, both — which is why BSG’s industrial dimension neatly homologized its thematic concerns.

I’m relieved that last night’s conclusion managed to reconcile the show’s many competing elements, and that it did so stirringly, dramatically, and movingly. I expected nothing less than a solid sendoff from RDM, one-half of the writing team behind perhaps the greatest series finale ever, Star Trek: The Next Generation‘s “All Good Things …” — but that’s not to say he couldn’t have screwed it up in the final instance. Indeed, if there is a worm in the apple, it’s my sneaking suspicion that the game was fixed: the four episodes leading up to “Daybreak” were a maddening mix of turgid soap opera and force-fed exposition, indulgent overacting and unearned emotion. It’s almost as though they wanted to lower our expectations, then stun us with a masterpiece.

I don’t know yet if “Daybreak” deserves that particular label, but we’ll see. In any case, there is something magical about so optimistic an ending to such a downbeat series. If the tortured soul of this generation’s Battlestar Galactica was indeed forged in the flames of 9/11 and the collective neurotic reaction spearheaded by the Bush administration, perhaps its happy ending reflects a national movement toward something better: the unexpected last-minute emergence, through parting clouds, of hope.

Timeshifting Terminator and Dollhouse

I was struck by these promising numbers on how many viewers are using DVRs to timeshift episodes of Terminator: The Sarah Connor Chronicles and Dollhouse, FOX’s ratings-challenged Friday-night block of science fiction.

I started off as a big fan of TSCC, a series which, especially as it hit its stride at the start of season two, seemed on its way to assuming the mantle of the nearly-departed Battlestar Galactica. Reflecting the new tone of SF on television, Chronicles is moody, nuanced, and — with its tangled motifs of time travel and maternal distress — introspective to the point of convolution. I have nowhere near the same appreciation for Dollhouse, which seems to me the very definition of misbegotten: a rather obvious, emptily sensational concept yoked to an unimaginatively cast lineup of unlikeable characters. As this Penny Arcade comic and accompanying commentary observe, Dollhouse is interesting primarily for what it reveals about the changing author function in serial television: we’re now to the point where we measure the quality of certain shows based on hypothetical extrapolations about how good the text would have been had network execs (those good old go-to villains) not interfered with the showrunner’s divine inspiration.

Significantly, though, I’ve only watched the first episode of Dollhouse; the second and third installments await on my DVR, along with the most recent Chronicles. Placing the shows together on Friday night seemed like a certain death sentence — cult TV fans will never forgive the sin NBC committed against the original Star Trek in 1968-1969, leaving its third season to wither on the ice-floe slot of Fridays at 10 p.m., well away from its target audience — except for one thing. Cult TV has cult viewing habits associated with it, and one of the things we fans do is relocate episodes to spaces in our schedule better suited to focused, attentive viewing. In a word, we timeshift. The sagging numbers for Dollhouse and Chronicles both received a giant boost when DVR statistics were factored in, suggesting not just that the shows might have some life in them yet, but that new technologies of viewing may make the difference.

Of course, we’ve been timeshifting TV for years, first through the VCR and now through any number of digitally-based tools for spooling, streaming, and stealing video. The new technology I refer to is monitorial: the ability to track and quantify this collective behavior. That a once private, even renegade practice is now on the broadcasters’ radar is dishearteningly panoptic in one sense; in another sense, impossible to separate from the first, it may represent a new kind of power — a fannish vox populi to which the producers of beleaguered but promising series might listen.

Quick Thoughts on the Oscars

Last night’s Academy Awards ceremony was more enjoyable than I expected, though it’s always this way: each year I watch with a kind of low-level awe, impressed not only by the show’s general lack of suckiness but by how the pageantry, with its weird mix of cheesy spectacle and syrupy sentimentality, manages to distill that specific tincture of pleasure that is the movies’ signature structure of feeling. Mind you, I’m not talking about the cinematic medium’s larger essence (its extremes of Godardian play and Tarkovskian tedium) but its particular, peculiar manifestation via the churning industrial engine of Hollywood, so helplessly eager to please and entertain that it puts us in the position of that poor guy in Brainstorm (Douglas Trumbull, 1983) — the one hooked up to an orgasm machine that nearly lands him in a coma.

Next day, the script is always the same: I listen to NPR and read the Times and discover that, in fact, the show was a big, boring disappointment, full of the same old lame old. (Here too the ceremony replicates the experience of filmgoing, the light of day too often revealing the seams and strings that ruin the previous night’s magic.) So before I proceed to my necessary disillusionment, that temporary discharge of cynicism by which I prepare the poles of my pleasure centers for their next jolt, a few points of interest I noticed in the 81st Awards:

1. Popularity and its discontents. At several points during the broadcast — Hugh Jackman’s great opening number, Will Smith’s comments following the action-movie montage — we were reminded that the movies everyone went to see (Iron Man, The Dark Knight) were not necessarily the ones that received critical accolades and, by definition, Academy attention. In fact, an allergically inverse relation seems to apply: the more popular a film is, the less likely it is to receive any kind of respect, save for its technical components. That’s why crowdpleasers are so often relegated to categories like Editing, Sound Mixing, and, of course, Visual Effects. (The one place where popular, profitable movies are granted the privilege of existing as feature-length artworks, rather than Frankensteinian assemblages of FX proficiency, is in the category of Animated Feature, where Bolt, WALL-E, and Kung Fu Panda are rightly celebrated as marvels.) Last night, the Academy got called on its strange standards, with Jackman asking why The Dark Knight hadn’t been nominated and Smith dryly remarking that action movies actually have audiences. Only he called them “fans” — and this year, it seems, fans are realizing that they are audiences, perhaps the only real audiences, and they’re rising up to demand equal representation at the spotlit top of the cultural hierarchy.

2. Mashups and remixes. Others, I’m sure, will have much to say about Slumdog Millionaire‘s sweep of the major awards and what this indicates about the globalization of Hollywood. The mingling of film traditions and industries seems to me an epiphenomenal manifestation of a more atomistic and pervasive transformation, over the last few years, into remix culture: we live in the era of mashup and migration, a mixing and matching that produced the wonderful, boundary-crossing hybrid of Slumdog (a media artifact, let us note, that is as much about television and film’s mutual remediation as it is about Bollywood and Hollywood). This was apparent at a formal level in the composition of the Awards’ Best Original Song performances, which garishly and excitingly wrapped the melodies of “Down to Earth,” “O Saya,” and “Jai Ho” around each other.

3. The Curious Case of Brad Pitt. Although he didn’t win, Pitt’s nomination for Best Actor in a Leading Role marks the continuing erosion of prejudices against — and concomitant trend toward full acceptance of — what I have elsewhere termed blended performance: the creation of dramatic characters through special and visual effects. More precisely, blended performance involves acting that depends vitally on visual machination: think Christopher Reeve in Superman, Jeremy Irons in Dead Ringers, Andy Serkis in The Lord of the Rings (and for that matter, Serkis as the big ape in King Kong). Each of these characters came alive not simply through their anchoring in particular bodies, faces, voices, and dramatic chops, but through the augmentation of those traits with VFX methods, from bluescreen traveling mattes to motion capture and animation. I’m not saying there’s a strict dividing line here between “real” and “altered” performance; every star turn is a spectacular technology in its own right. But good acting is also a category that prizes authenticity; we do not want to be reminded that we are being tricked. Blended performances don’t often get Oscar attention, but when they do, there’s a historical bias toward special (on-camera) effects versus postproduction contributions: John Hurt received a Best Actor nomination for The Elephant Man (1980), a film in which he spent most of his screen time buried under pounds of prosthetics. In Benjamin Button, of course, Pitt wore plenty of makeup; but a large portion of his performance came about through an intricate choreography of matchmoving and CG modeling. As digital phenomenologies become the inescapable norm, we’ll see more and more legitimacy accorded to blended performance, a trend that will dovetail, I expect, with the threshold moment at which a CG animated film gets nominated for Best Picture. Don’t laugh: many thought it would happen with WALL-E, and it’s in the taxonomies of AMPAS that such profound distinctions about the ontologies of cinema and technology get hammered out.

Digital Dogsbodies

It’s funny — and perhaps, in the contagious-episteme fashion of Elisha Gray and Alexander Graham Bell filing patents for the telephone on the very same date, a bit creepy — that Dan North of Spectacular Attractions should write on the topic of dog movies while I was preparing my own post about Space Buddies. This Disney film, which pornlike skipped theaters to go straight to DVD and Blu-ray, is one of a spate of dog-centered films that have become a crowdpleasing filmic staple of late. Dan poses the question, “What is it about today that people need so many dog movies?” and goes on to speculate that we’re collectively drowning our sorrows at the world’s ugliness with a massive infusion of cute: puppyism as cultural anodyne.

Maybe so. It seems to me, though, that another dynamic is in operation here — and with all due respect to my fellow scholar of visual effects, Dan may be letting the grumbly echoes of the Frankfurt School distract him from a fascinating nexus of technology, economics, and codes of expressive aesthetics driving the current crop of cinematic canines. Simply put, dogs make excellent cyberstars.

Think about it. Nowadays we’re used to high-profile turns by hybrids of human and digital performance: Angelina Jolie as Grendel’s goldplated mother in Beowulf, Brad Pitt as the wizened baby in The Curious Case of Benjamin Button. (Hmm, it only now strikes me that this intertextual madonna-and-child are married in real life; perhaps the nuclear family is giving way to the mo-capped one?) Such top-billed performances are based on elaborate rendering pipelines, to be sure, but their celebrity/notoriety is at least as much about the uniquely sexy and identifiable star personae attached to these magic mannequins: a higher order of compositing, a discursive special effect. It takes a ton of processing power to paint the sutured stars onto the screen, and an equivalent amount of marketing and promotion — those other, Foucauldian technologies — to situate them as a specific case of the more general Steady March Toward Viable Synthespianism. Which means, in terms of labor and capital, they’re bloody expensive. Mountains of data are moved in service of the smallest details of realism, and even then, nobody can get the eyes right.

But what of the humble cur and the scaled-down VFX needed to sell its blended performance? The five puppy stars of Space Buddies are real, indexically photographed dogs with digitally retouched jaw movements and eyebrow expressions; child voice actors supply the final, intangible, irreplaceable proof of character and personality. (To hell with subsurface scattering and other appeals to our pathetically seducible eyes; the real threshold of completely virtual performance remains believable speech synthesis.) The canine cast of Beverly Hills Chihuahua, while built on similar principles, are ontologically closer to the army of Agent Smiths in The Matrix Reloaded’s burly brawl — U-Capped fur wrapped over 3D doll armatures and arrayed in Busby-Berkeleyish mass ornament. They are, in short, digital dogsbodies, and as we wring our hands over the resurrection of Fred Astaire in vacuum-cleaner ads and debate whether Ben Burtt’s sound design in WALL-E adds up to a best-actor Oscar, our screens are slowly filling with animals’ special-effects-driven stardom. How strange that we’re not treating them as the landmarks they are — despite their immense profitability, popularity, and paradoxical commonplaceness. It’s like Invasion of the Body Snatchers, only cuddly!

I don’t mean to sound alarmist — though writing about the digital’s supposed incursion into the real always seems to bring out the edge in my voice. In truth, the whole thing seems rather wonderful to me, not just because I really dug Space Buddies, but because the dogsbody has been around a long time, selling audiences on the dramatic “realism” of talking animals. From Pluto to Astro, Scooby-Doo to Rowlf, Lady and the Tramp to K-9, Dynomutt, and Gromit, dogs have always been animated beyond their biological station by technologies of the screen; we accept them as narrative players far more easily than we do more elaborate and singular constructions of the monstrous and exotic. The latest digital tools for imparting expression to dogs’ mouths and muzzles were developed, of all places, in pet-food ads: clumsy stepping stones that now look as dated as poor LBJ’s posthumous lip-synching in Forrest Gump.

These days it’s the rare dog (or cat, bear, or fish) onscreen whose face hasn’t been partially augmented with virtual prosthetics. Ultimately, this is less about technological capability than the legal and monetary bottom line: unlike human actors, animal actors can’t go ballistic on the lighting guy, or write cumbersome provisions into their contracts to copyright their “aura” in the age of mechanical reproduction. Our showbiz beasts exist near the bottom of the labor pool: just below that other mass of bodies slowly being fed into the meat-grinder of digitization, stuntpeople, and just above the nameless hordes of Orcs jam-packing the horizon shots of Lord of the Rings. I think it was Jean Baudrillard, in The System of Objects, who observed that pets hold a unique status, poised perfectly between people and things. It’s a quality they happen to share with FX bodies, and for this reason I expect we’ll see menageries in the multiplex for years to come.

Getting Granular with Setpieces

Dan North has published an excellent analysis of the Sandman birth sequence in Spider-Man 3, using this three-minute shot as a springboard for a characteristically deft dissection of visual-effects aesthetics and the relationship between CG and live-action filmmaking. His concluding point, that CGI builds on rather than supplants indexical sensibilities — logically extending the cinematographic vocabulary rather than coining utterly alien neologisms — is one that is too often lost in discussions that stress digital technology’s alleged alterity to traditional filmic practices. I’d noticed the Sandman sequence too; in fact, it was paratextually telegraphed to me long before I saw the movie itself, in reviews like this from the New York Times:

… And when [The Sandman] rises from a bed of sand after a “particle atomizer” scrambles his molecules, his newly granulated form shifts and spills apart, then lurches into human form with a heaviness that recalls Boris Karloff staggering into the world as Frankenstein’s monster. There’s poetry in this metamorphosis, not just technological bravura, a glimpse into the glory and agony of transformation.

I don’t have anything to add to Dan’s exegesis (though if I were being picky, I might take issue with his suggestion that the Sandman sequence simply could not have been realized without computer-generated effects; while it’s true that this particular rendering, with its chaotic yet structured swarms of sand-grains, would have taxed the abilities of “stop-motion or another kind of pro-filmic object animation,” the fact is that there are infinitely many ways of designing and staging dramatic events onscreen, and in the hands of a different creative imagination than Sam Raimi and his previz team, the Sandman’s birth might have been realized in much more allusive, poetic, and suggestive ways, substituting panache for pixels; indeed, for all the sequence’s correctly lauded technical artistry and narrative concision, there is something ploddingly literal at its heart, a blunt sense of investigation that smacks of pornography, surveillance-camera footage, and NASA animations — all forms, incidentally, that share the Spider-Man scene’s unflinching long take).

But my attention was caught by this line of Dan’s:

This demarcation of the set-piece is a common trope in this kind of foregrounded spectacle — it has clear entry and exit points and stands alone as an autonomous performance, even as it offers some narrative information; It possesses a limited colour scheme of browns and greys (er … it’s sand-coloured), and the lack of dialogue or peripheral characters further enforces the self-containment.

I’ve long been interested in the concept of the setpiece, that strange cinematic subunit that hovers somewhere between shot, scene, and sequence, hesitating among the registers of cinematography, editing, and narrative, partaking of all while being confinable to none. A setpiece can be an unbroken single shot, whether relatively brief (the Sandman’s birth or the opening to Welles’s Touch of Evil) or extravagantly extended (the thirteen-minute tracking shot with which Steadicam fetishist Brian De Palma kicks off Snake Eyes). But we’re perhaps most familiar with the setpiece as constituted through the beats of action movies: hailstorms of tightly edited velocity and collision like the car chases in Bullitt or, more humorously, Foul Play; the fight scenes and song-and-dance numbers that act as structuring agents and generic determinants of martial-arts movies and musicals respectively; certain “procedural” stretches of heist, caper, and espionage films, like the silent CIA break-in of Mission: Impossible (smartly profiled in a recent Aspect Ratio post). Setpieces often occur at the start of movies or near the end as a climactic sequence, but just as frequently erupt throughout the film’s running time like beads on a string; Raiders of the Lost Ark is a gaudy yet elegant necklace of such baubles, including one of my favorites, the “basket chase” set in Cairo. Usually wordless, setpieces tend to feature their own distinctive editing rhythms, musical tracks, and can-you-top-this series of gags and physical (now digital) stunts.

Setpieces are, in this sense, like mini-movies embedded within bigger movies, and a biological metaphor might be the best way to describe their temporal and reproductive scalability. Like atavistic structures within the human body, setpieces seem to preserve long-ago aesthetics of early cinema: their logic of action and escalation recalls Edison kinetoscopes and Keystone Cops chases, while more hushed and contemplative setpieces (like the Sandman birth) have about them something of the arresting stillness and visual splendor of the actualité. Or to get all DNAish on you, setpieces are not unlike the selfish genes of which Richard Dawkins writes: traveling within the hosts of larger filmic bodies, vying for advantage in the cultural marketplace, it is actually the self-interested proliferation of setpieces that drives the replication — and evolution — of certain genres. The aforementioned martial-arts movies and musicals, certainly; but also the spy movie, the war and horror film, racing movies, and the many vivid flavors of gross-out comedy. The latest innovation in setpiece genetics may be the blockbuster transmedia franchise, which effectively “brands” certain sequences and delivers them in reliable (and proprietary) form to audiences: think of the lightsaber duels in any given phenotypic expression of Star Wars, from film to comic to videogame.

On an industrial level, of course, setpieces also signal constellations of labor that we can recognize as distinct from (while inescapably articulated to) the films’ ostensible authors. One historical instance of this is the work of Slavko Vorkapich, renowned for the montages he contributed to other people’s movies — so distinctive in his talents that to “Vorkapich” something became a term of art in Hollywood. Walt Disney was a master when it came to channeling and rebranding the work of individual artists under his own overweening “vision”; more recently we have the magpie-like appropriations of George Lucas, who was only in a managerial sense the creator of the Death Star battle that ends the 1977 Star Wars: A New Hope. This complexly composited and edited sequence (itself largely responsible for bringing setpieces into being as an element of fannish discourse) was far more genuinely the accomplishment of John Dykstra and his crew at Industrial Light and Magic, not to mention editors Richard Chew and Marcia Lucas. Further down the line — to really ramify the author function out of existence — the battle’s parentage can be traced to the cinematographers and editors who assembled the war movies — Tora! Tora! Tora!, The Bridges at Toko-Ri, The Dam Busters, etc. — from which Lucas culled his reference footage, a 16mm reel that Dykstra and ILM used as a template for the transcription of Mustangs and Messerschmitts into X-Wings and TIE Fighters.

Thirty years after the first Star Wars, sequences in blockbuster films are routinely farmed out to visual effects houses, increasing the likelihood that subunits of the movie will manifest their own individuating marks of style, dependent on the particular aesthetic tendencies and technological proficiencies of the company in question. (Storyboards and animatics, as well as on-the-fly oversight of FX shots in the pipeline, help to minimize the levels of difference here, smoothing over mismatches in order to fit the outsourced chunks of content together into a singularly authored text — hinting at new ways in which the hoary concepts of “compositing” and “continuity” might be redeployed as a form of meta-industrial critique.) In the case of Spider-Man 3, no fewer than eight FX houses were involved (BUF, Evil Eye Pictures, Furious FX, Gentle Giant Studios, Giant Killer Robots, Halon Entertainment, Tweak Films, and X1fx) in addition to Sony Pictures Imageworks, which produced the Sandman shot.

When we look at a particular setpiece, then, we also pinpoint a curious node in the network of production: a juncture at which the multiplicity of labor required to generate blockbuster-scale entertainment must negotiate with our sense of a unified, unique product / author / vision. Perhaps this is simply an accidental echo of the collective-yet-singular aura that has always attended the serial existence of superheroes; what is Spider-Man but a single artwork scripted, drawn, acted, and realized onscreen by decades of named and nameless creators? But before all of this, we confront a basic heterogeneity that textures film experience: our understanding, at once obvious and profound, that some parts of movies stand out as better or worse or more in need of exploration than others. Spider-Man 3, as Dan acknowledges, is not a great film; but that does not mean it cannot contain great moments. In sifting for and scrutinizing such gems, I wonder if academics like us aren’t performing a strategic if unconscious role — one shared by the increasingly contiguous subcultures of journalists, critics, and fans — our dissective blogging facilitating a trees-over-forest approach to film analysis, a “setpiece-ification” that reflects the odd granularity of contemporary blockbuster media.

Replicants

I look at Blade Runner as the last analog science-fiction movie made, because we didn’t have all the advantages that people have now. And I’m glad we didn’t, because there’s nothing artificial about it. There’s no computer-generated images in the film.

— David L. Snyder, Art Director

Any movie that gets a “Five-Disc Ultimate Collector’s Edition” deserves serious attention, even in the midst of a busy semester, and there are few films more integral to the genre of science fiction or the craft of visual effects than Blade Runner. (Ordinarily I’d follow the stylistic rules about which I browbeat my Intro to Film students and follow this title with the year of release, 1982. But one of the many confounding and wonderful things about Blade Runner is the way in which it resists confinement to any one historical moment. By this I refer not only to its carefully designed and brilliantly realized vision of Los Angeles in 2019 [now a mere 11 years away!] but also to the many-versioned indeterminacy of its status as an industrial artifact, one that has been revamped, recut, and released many times throughout the two and a half decades of its cultural existence. Blade Runner in its revisions has almost dissolved the boundaries separating preproduction, production, and postproduction — the three stages of the traditional cinematic lifecycle — to become that rarest of filmic objects, the always-being-made. The only thing, in fact, that keeps Blade Runner from sliding into the same sad abyss as the first Star Wars [an object so scribbled-over with tweaks and touch-ups that it has almost unraveled the alchemy by which it initially transmuted an archive of tin-plated pop-culture precursors into a golden original] is the auteur-god at the center of its cosmology of texts: unlike George Lucas, Ridley Scott seems willing to use words like “final” and “definitive” — charged terms in their implicit contract to stop futzing around with a collectively cherished memory.)

I grabbed the DVDs from Swarthmore’s library last week to prep a guest lecture for a seminar a friend of mine is teaching in the English Department, and in the course of plowing through the three-and-a-half-hour production documentary “Dangerous Days” came across the quote from David L. Snyder that opens this post. What a remarkable statement — all the more amazing for how quickly and easily it goes by. If there is a conceptual digestive system for ideas as they circulate through time and our ideological networks, surely this is evidence of a successfully broken-down and assimilated “truth,” one which we’ve masticated and incorporated into our perception of film without ever realizing what an odd mouthful it makes. There’s nothing artificial about it, says David Snyder. Is he referring to the live-action performances of Harrison Ford, Rutger Hauer, and Sean Young? The “retrofitted” backlot of LA 2019, packed with costumed extras and drenched in practical environmental effects from smoke machines and water sprinklers? The cars futurized according to the extrapolative artwork of Syd Mead?

No: Snyder is talking about visual effects — the virtuoso work of a small army headed by Douglas Trumbull and Richard Yuricich — a suite of shots peppered throughout the film that map the hellish, vertiginous altitudes above the drippy neon streets of Lawrence G. Paull’s production design. Snyder refers, in other words, to shots produced exclusively through falsification: miniature vehicles, kitbashed cityscapes, and painted mattes, each piece captured in multiple “passes” and composited into frames that present themselves to the eye as unified gestalts but are in fact flattened collages, mosaics of elements captured in radically different scales, spaces, and times but made to coexist through the layerings of the optical printer: an elaborate decoupage deceptively passing itself off as immediate, indexical reality.
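
(A brief technical aside for readers who want a concrete handle on what “compositing” means in its later, digital incarnation. The arithmetic at its core is the Porter-Duff “over” operator: every element, whether matte painting, miniature pass, or live-action plate, carries a transparency channel, and the layers are stacked back to front into a single frame. The NumPy sketch below is a loose digital analogue of the optical printer’s layerings, not a description of how Trumbull and Yuricich actually worked; the layer names and pixel values are hypothetical.)

```python
# Minimal sketch of digital "over" compositing (Porter-Duff), offered as a loose
# analogue of optical-printer layering. Arrays are premultiplied RGBA in [0, 1];
# the layer names and pixel values below are hypothetical.
import numpy as np

def over(foreground, background):
    """Composite a premultiplied-alpha foreground over a background."""
    fg_alpha = foreground[..., 3:4]
    return foreground + background * (1.0 - fg_alpha)

h, w = 270, 480  # a small frame, for illustration

# Back-to-front layers: a matte-painted sky, a miniature cityscape occupying
# the lower half of the frame, and a small "spinner" element with its own matte.
sky = np.zeros((h, w, 4))
sky[...] = [0.05, 0.02, 0.10, 1.0]

cityscape = np.zeros((h, w, 4))
cityscape[h // 2:, :] = [0.20, 0.10, 0.25, 1.0]

spinner = np.zeros((h, w, 4))
spinner[100:110, 200:230] = [0.90, 0.60, 0.20, 1.0]

# Stack the passes back to front into a single, seemingly unified frame.
frame = over(spinner, over(cityscape, sky))
print(frame.shape)  # (270, 480, 4)
```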

I get what Snyder is saying. There is something natural and real about the visual effects in Blade Runner; watching them, you feel the weight and substance of the models and lighting rigs, can almost smell the smoky haze being pumped around the light sources to create those gorgeous haloes, a signature of Trumbull’s FX work matched only by his extravagant ballet of ice-cream-cone UFOs amid boiling cloudscapes and miniature mountains in Close Encounters of the Third Kind. But what no one points out is that all of these visual effects — predigital visual effects — were once considered artificial. We used to think of them as tricks, hoodwinks, illusions. Only now that the digital revolution has come and gone, turning everything into weightless, effortless CG, do we retroactively assign the fakery of the past a glorious authenticity.

Or so the story goes. As I suggest above, and have argued elsewhere, the difference between “artificial” and “actual” in filmmaking is as much a matter of ideology as industrial method; perceptions of the medium are slippery and always open to contestation. Special and visual effects have always functioned as a kind of reality pump, investing the “nonspecial” scenes and sequences around them with an air of indexical reliability which is, itself, perhaps the most profound “effect.” With vanishingly few exceptions, actors speak lines written for them; stories are stitched into seamless continuity from fragments of film shot out of order; and, inescapably, a camera is there to record what’s happening, yet never reveals its own existence. Cinema is, prior to everything else, an artifact, and special effects function discursively to misdirect our attention onto more obvious classes of manipulation.

Now the computer has arrived as the new trick in town, enabling us to rebrand everything that came before as “real.” It’s an understandable turn of mind, but one that scholars and critics ought to navigate carefully. (Case in point: Snyder speaks as though computers didn’t exist at the time of Blade Runner. Yet it is only through the airtight registration made possible by motion-control cinematography, dependent on microprocessors for precision and memory storage for repeatability, that the film’s beautiful miniatures blend so smoothly with their surroundings.) It is possible, and worthwhile, to immerse ourselves in the virtual facade of ideology’s trompe-l’oeil — a higher order of special effect — while occasionally stepping back to acknowledge the brush strokes, the slightly imperfect matte lines that seam the composited elements of our thought.

Soul of a New Machine

Not much to add to the critical consensus around WALL-E; trusted voices such as Tim Burke’s, as well as the distributed hive mind of Rotten Tomatoes, agree that it’s great. Having seen the movie yesterday (a full two days after its release, which feels like an eternity by the clockspeed of media blogging), I concur — and leave as given my praise for its instantly empathetic characters, striking environments, and balletic storytelling. It’s the first time in a while that tears have welled in my eyes just at the beautiful precision of the choices being made 24 times per second up on the big screen; a happy recognition that Pixar, over and over, is somehow nailing it at both the fine level of frame generation and the macro levels of marketplace logic and movie history. We are in the midst of a classic run.

Building on my comments on Tim’s post, I’m intrigued by the trick Pixar has pulled off in positioning itself amid such turbulent crosscurrents of technological change and cinematic evolution: rapids aboil with mixed feelings about nostalgia for a golden age versus the need to stay new and fresh. The movies’ mental market share — the grip in which the cinematic medium holds our collective imaginary — is premised on an essential contradiction between the pleasures of the familiar and the equally strong draw of the unfamiliar. That dialectic is visible in every mainstream movie as a tension between the predictability of genre patterns and the discrete deformations we systematize and label as style.

But nowadays this split has taken on a new visibility, even a certain urgency, as we confront a cinema that seems suddenly digital to its roots. Hemingway (or maybe it was Fitzgerald) wrote that people go bankrupt in two ways: gradually, then all at once. The same seems true of computer technology’s encroachment on traditional filmmaking practices. We thought it was creeping up on us, but in a seeming eyeblink, it’s everywhere. Bouncing around inside the noisy carnival of the summer movie season, careening from the waxy simulacrum of Indiana Jones into the glutinous candied nightmare of Speed Racer, it’s easy to feel we’re waking up the morning after an alien invasion, to find ourselves lying in bed with an uncanny synthetic replacement of our spouse.

Pixar’s great and subtle achievement is that it makes the digital/cinema pod-people scenario seem like a simple case of Capgras Syndrome, a fleeting patch of paranoia in which we peer suspiciously at our movies and fail to recognize them as being the same lovable old thing as always. With its unbroken track record of releases celebrated for their “heart,” Pixar is marking out a strategy for the successful future of a fully digital cinema. The irony, of course, is that the studio is doing so by shrugging off its own cutting-edge nature, making high-tech products with low-tech content.

Which is not to say that WALL-E lacks technological sublimity. On the contrary, it’s a ringing hymn to what machines can do, both in front of and behind the camera. More so than the plastic bobbles of Toy Story, the chitinous carapaces of A Bug’s Life, the scales and fins of Finding Nemo or the polished chassis of Cars, the performers in WALL-E capture the fundamental gadgety wonder of a CG character: they look like little robots, but in another, more inclusive sense they are robots — cyborged 2D sandwiches of actors’ voices, animators’ keyframes, and procedural rendering. There’s a longstanding trope in Pixar films that the coldly inorganic can be brought to life; think of the wooden effigy of a bird built by the heroes of A Bug’s Life, or the existential yearnings of Woody and Buzz Lightyear in the Toy Story films. WALL-E, however, calibrates a much narrower metaphorical gap between its subject matter and its underlying mode of production. Its sweetly comic drama of machines whose preprogrammed functionalities are indistinguishable from their lifeforce is like a reassuring parable of cinema’s future: whether the originating matrix is silicon or celluloid, our virtual pleasures will reflect (even enshrine) an enduring humanity.

I’ll forgo commentary on the story and its rich webwork of themes, except to note a felicitous convergence of technology’s hetero gendering and competing design aesthetics that remap the Macintosh’s white curves onto the eggy life-incubator of EVE — juxtaposed with a masculine counterpart in the ugly-handsome boxiness of PC and LINUX worlds. I delighted in the film’s vision of an interstellar cruise liner populated by placid chubbies, but was also impressed by the opening 30-40 minutes set amid the ruins of civilization. It says something that for the second time this year, a mainstream science-fiction film has enticed us to imagine ourselves the lone survivor of a decimated earth, portraying this situation on one level as a prison of loneliness and on another as an extended vacation: tourists of the apocalypse. I refer here of course to the better-than-expected I Am Legend, whose vistas of a plague-depopulated Manhattan unfold in loving extended takes that invite Bazinian immersion and contemplation.

Beyond these observations, what stands out to me among the many pleasures of WALL-E are the bumper materials on either side of the feature: the short “Presto,” which precedes the main film, and the credit sequence that closes the show. Such paratexts are always meaningful in a Pixar production, but tend to receive less commentary than the “meat” of the movie. Tim points out accurately that “Presto” is the first time a Pixar short has captured the antic Dionysian spirit of a Tex Avery cartoon (though I’d add that Avery’s signature eruption of the id, that curvaceous caricature of womanhood Red, was preemptively foregrounded by Jessica Rabbit in 1988’s Who Framed Roger Rabbit; such sex-doll humor seems unlikely to be emulated any time soon in Pixar’s family-friendly universe — though the Wolf could conceivably make an appearance). What I like about “Presto” is the short’s reliance on “portal logic” — the manifold possibilities for physical comedy and agonistic drama in the phenomenon of spatial bilocation, so smartly operationalized in the Valve videogame Portal.

As for the end credits of WALL-E, they are unexpectedly daring in scope, recapitulating the history of illustration itself — compressing thousands of years of representational practices into a span of minutes. As the first names appear onscreen, cave drawings coalesce, revealing what happens as robots and humans work together to repopulate the earth and nurse its ecosystem back to health. The cave drawings give way to Egyptian-style hieroglyphs and profiled 2D portraiture, Renaissance perspective drawings, a succession of painterly styles. Daring, then subversive: from Seurat’s pointillism, Monet’s impressionism, and Van Gogh’s loony swirls, the credits leap to 8-bit computer graphics circa the early 1980s — around the time, as told in David A. Price’s enjoyable history of the studio, that Pixar itself came into existence. WALL-E and his friends cavort in the form of jagged sprites, the same as you’d find in any Atari 2600 game, or perhaps remediated on the tiny screens of cell phones or the Wii’s retrographics.

I’m not sure what WALL-E‘s credits are “saying” with all this, but surely it provides a clue to the larger logic of technological succession as it is being subtextually narrated by Pixar. Note, for example, that photography as a medium appears nowhere in the credits’ graphic roll call; more scandalously, neither does cinematography — nor animation. In Pixar’s restaging of its own primal scene, the digital emerges from another tradition entirely: one more ludic, more subjective and individualistic, more of an “art.” Like all ideologies, the argument is both transparently graspable and fathoms deep. Cautionary tale, recuperative fantasy, manufactured history doubling as road map for an uncertain digital future: Pixar’s movies, none more so than WALL-E, put it all over at once.

Digital Day for Night

A quick followup to my recent post on the new Indiana Jones movie: I’ve seen it, and find myself agreeing with those who call it an enjoyable if silly film. Actually, it was the best couple of hours I’ve spent in a movie theater on a Saturday afternoon in quite a while, and seemed especially well suited to that particular timeframe: an old-fashioned matinee experience, a slightly cheaper ticket to enjoy something less than classic Hollywood art. Pulp at a bargain price.

But my interest in the disproportionately angry fan response to the movie continues. And to judge by articles popping up online, Indiana Jones and the Kingdom of the Crystal Skull is providing us, alongside its various pleasures (or lack thereof), a platform for thinking about that (ironically) age-old question, “How are movies changing?” — also known as “Where has the magic gone?” Here, for example, are three articles, one from Reuters, one from The Atlantic.com, and one from an MTV blog, each addressing the film’s heavy use of CGI.

I can see what they’re talking about, and I suppose if I were less casual in my fandom of the first three Indy movies, I’d be similarly livid. (I still can’t abide what’s been done to Star Wars.) At the same time, I suspect our cultural allergy to digital visual effects is a fleeting phenomenon — our collective eyes adjusting themselves to a new form of light. Some of the sequences in Crystal Skull, particularly those in the last half of the film, simply wouldn’t be possible without digital visual FX. CG’s ability to create large populations of swarming entities onscreen (as in the ant attack) and to stitch together complex virtual environments with real performers (as in the Peru jungle chase) was clearly a factor in the very conception of the movie, with the many iterations of the troubled screenplay passing spectacular “beats” back and forth like hot potatoes on the assumption that, should all else fail, at least the movie would feature some killer action.

Call it digital day for night, the latest version of the practice by which scenes shot in daylight “pass” for nighttime cinematography. It’s a workaround, a cheat, like all visual effects, in some sense nothing more than an upgraded cousin of the rear-projected backgrounds showing characters at seaside when they’re really sitting on a blanket on a soundstage. It’s the hallmark of an emerging mode of production, one that’s swiftly becoming the new standard. And our resistance to it is precisely the moment of enshrining a passing mode of production, one that used to seem “natural” (for all its own undeniable artificiality). By such means are movies made, but it’s also the way that the past itself is manufactured, memory and nostalgia forged through an ongoing dialectic of transparency and opacity that haunts our recreational technologies.

We’ll get used to the new way of doing things. And someday, movies that really do eschew CG in favor of older FX methodologies, as Spielberg and co. initially promised to do, will seem as odd in their way as performances of classical music that insist on using authentic instruments from the time. For the moment, we’re suspended between one mode of production and another, truly at home in neither, able only to look unhappily from one bank to the other as the river of progress carries us ever onward.