Livestream Follies: Part Two

As You Like It

The following is the second installment of a four-part piece discussing contemporary IRL livestreamers, the cross-breeding of cinematic and video game aesthetics, and various related topics. The remaining installments will post at intervals through the summer of 2021. As with all of these missives, the reader is advised to read the piece at the Substack website rather than in the email format. Support and editorial advisement were offered by Aria Dean, Michael Connor, Anton Haugen, and Celine Wong Katzman at Rhizome, Andrew Chan at Criterion, and my dear friend, Justin Stewart. This piece was inspired by conversations with filmmaker and aesthete Michael M. Bilandic, who has a small role within it, and grew out of two separate commissions from Rhizome and Criterion. The final result, which appears after approximately a year of writing and research, was something too sprawling and scurrilous to taint either upstanding masthead, so it is emerging here with the generous sponsorship of Rhizome.

My first sighting of an IRL streamer in the wild came in February of 2020, in a theater space downstairs from a bar that I frequent, where on most Sundays in those prelapsarian times some friends would project movies for mingy, motley crowds. The streamer in question was Billy the Fridge, on a foray into New York. Smooth Sanchez, not yet internet famous and therefore tagging along as a borderline stream sniping sidekick, loitered outside, too young to enter the establishment. Billy had been lured over by a more-than-vigilant observer of the community in my acquaintance, who’d invited Billy to introduce the evening’s film, a 1990 Claude Berri vehicle directed by Serge Gainsbourg called Stan the Flasher, which of course Billy hadn’t seen, though this didn’t dissuade him from accepting the offer. He left the theater immediately after his introduction, barreling down the aisle with selfie stick in hand, and for one exhilarating moment I was caught in Billy’s stream. Upstairs later at the bar I scrubbed back through his feed on my phone and struggled to grab a screenshot proving that, before the stream was forever lost to posterity, I’d briefly passed through it. The image that you see above was the best I could manage. I’m the guy in the light-colored shirt with my phone in front of my face, trying to get a picture of Billy. You’ll have to take my word for it.

Through the months that followed, Billy the Fridge, like many Americans, mostly stayed home, following the orders of the CDC and Tom Nook in Animal Crossing: New Horizons. Sanchez, meanwhile, began his meteoric rise, running roughshod over NYC during a period when socially sanctioned communal gatherings to watch films, including fare far less esoteric than Stan the Flasher, faded into distant memory. The phrase “meteoric rise” is best avoided, as “rising” is the opposite of what meteors tend to do, but given the legal crater in which Sanchez has since found himself, I’m making an exception. That the pairing of speaker and Gainsbourg’s film that night in February of last year was rather inspired only occurred to me upon finding this single-sentence synopsis of Stan the Flasher, logged online by some anonymous poet: “The unhappy end of an impotent exhibitionist.”

In the absence of anything convincingly resembling contemporary cinema—chin-wags in the VR bar at streaming Sundance don’t count—I’d become interested in writing about IRL streamers, because they seemed to me to represent something really new in the world of moving images, and something really new doesn’t come along all that often. Their novelty was doubly emphasized by the novelty of 2020, a year that lasted around eighteen months, when cultural events that I might otherwise have felt some professional obligation to stand up and take notice of—the prestige pictures du jour, let’s say—felt impossibly remote from my life and the lives of the people I knew, while the free-floating malice of the streams seemed to suit the tenor of our tense times.

Part of the IRL streamers’ unwholesome allure, too, was the combination of attraction and repulsion that I felt towards them. I didn’t “like,” in the main, what the IRL streamers that I watched in action were doing, but that hardly matters. Some of their streams, like Sanchez’s “George Foreman” outing, made me laugh; some of them made me despair for humanity, but they rarely left me indifferent. They cut me, and I couldn’t stop picking the scab.

THE GLOBAL PILLAGE

The frantic pandemic-prep dynamiting of relied-upon support columns of routine and social obligations as they’d been understood before spring of last year left plenty of us with unaccustomed empty hours and, in some cases, expendable income with which to pursue insalubrious habits, IRL livestream-viewing and otherwise. As the world has uncertainly recommenced turning and long-sidelined face-to-face relationships have been restored, we now can silently make note of one another’s weight fluctuations and make our educated guesses at what variety of cope they indicate: coke, smoke, or cake. Some people went out every night and mostly kept mum about this fact and others stayed home and mostly didn’t. Both factions were united by a firm conviction that the other was responsible for destroying their lives, and neither went to the movies, because there wasn’t a showtime in sight. “Communal” viewings of any number of attendees beyond that which could comfortably fit in a living room happened online, via Twitch.tv and Google Hangouts and the like, and virtual movie nights abounded, most of which presumably will fall away like shed skin, if they haven’t already.

It was through one such group that, after some arduous hours spent riding along in RV6 one night last fall, I encountered a 1988 film called High Frequency, aka Qualcuno in ascolto, an Italian movie by one Faliero Rosati made with English-speaking actors and a partially American setting. It’s about Danny (Oliver Benny), an 11-year-old ham radio enthusiast living on an island off the coast of Maine, who unexpectedly makes contact over the airwaves with Peter (Vincent Spano), an American technician working alone at a satellite relay station in the Alps. Released in the midst of Perestroika and a year before the fall of the Berlin Wall, it’s among the last of the contemporary-set Cold War thrillers, its drama set in motion when Peter, via the parabolic dish at his station, inadvertently taps into a spy camera frequency and, through it, witnesses an assassination in a posh D.C. apartment. Working in concert with Danny from thousands of miles away, Peter will attempt to warn a fetching young woman whom he sees inside the apartment of what he believes to be the imminent danger she’s in—a little spin on the Peeping Tom thrillers that were a Cornell Woolrich specialty.

High Frequency isn’t any kind of all-timer, but after passing hours with ONLYUSEmeBLADE and Baked and company, I found it rather touching. It seemed to represent certain reassuring assumptions of a bygone era, including the idea that an average person’s impulse upon seeing someone far away who is potentially in distress or danger would be to try to find a way to help them, rather than to smash the Keemstar dono. Implicit in the film, too, is the lovely idea that communication technology might go some way towards rebuilding a shattered sense of community. Peter is what might be called the “typical neoliberal subject,” a tech worker far from home cut off from all friends and family, condemned to purgatorial isolation by the demands of his gloomy gig. Danny is lonely, too, his father having disappeared at sea a year prior, though he hopes against hope to contact dad via radio. Nothing doing, but thanks to the miracle of amateur radio, a means for interpersonal communication across vast distances, Danny and Peter not only collaborate to foil the enemies of their country but, it’s heavily implied in the movie’s closing, have a chance to fill the voids in their respective lives.

It’s a story of global telecommunication acting as a tool to forge community and generate empathy—even if, as it is here, this empathy proves misplaced, the damsel in distress revealed to be part of the conspiracy, not its victim. Global telecommunication makes the wide world smaller, and helps to make sundered families whole again. Not a great movie, High Frequency, but a nice movie, and that was enough at the moment I watched it, when for a second I related to the critic who proposed that the artistic counter to Trumpism was something called “Nicecore”—the idea, if I’m not mistaken, was that if we all watched Paddington 2 (2017), Elizabeth Warren would become president.

The Global Village, like the fall of the Iron Curtain, didn’t deliver the promised panacea of peace, love, and understanding. Per Nietzsche, “Under peaceful conditions, a warlike man sets upon himself,” and few peoples in history have been so belligerent as we aggy Americans. Robbed of a cherished chew toy nemesis to sharpen our teeth on, we went all Ouroboros, encouraged in our self-cannibalization by 24-hour news cycles and pundits of all political leanings whose business models depended on keeping their audiences at a steady simmer of outrage. The occasional orphaned moppet may have found a substitute father via amateur radio, but even back in 1988 the format was better known for fringe conspiracists broadcasting via shortwave like the legendary Bill Cooper, who begat the present day’s more mainstreamed Alex Joneses, cult creations of the internet era. Red-faced men sweated and fumed, smirking amen corner liberal consensus became the staple of prime-time comedy, and a new age of borderless connection created a new way for the like-minded to assemble and fester together, defending one another from any IRL interaction that might penetrate and disturb their shared worldview, like Roman Legionnaires in testudo formation. There was nothing you really needed to know about the “other,” who was only a lowly NPC, anyways—and who gives a fuck if you serve an NPC with some content spray?

Which brings us back to RV6, timed to correspond to the buildup and aftermath of the 2020 U.S. presidential election, initially pitched by Baked as a sort of mission of national consciousness-raising. If you wanted to assign some kind of “constructive” reading to the whole clown car pile-up, it could be that Baked and his band of brigands, the last free men in America, hoped to lead by example of their self-assured belligerence and awaken a cucked nation. Scales would fall from eyes anointed with blasts of Mace, and the meekly masked SJWs would become both based and pilled, raped into consciousness, as Michael Haneke once described the mission of his films.

The long hours I’ve spent with the IRL streaming community would be a little easier to justify if I happened to believe that the community, as I’ve come to know it, was generating earth-shaking art. As it is, I basically agree with kiwifarms poster Taco’s assessment of RV6: “A van full of boring loser addicts going on a coast to coast McDonald’s tour with Baked Alaska and screaming the n-word until their YouTube channels get banned.” The internet didn’t have to result in things like this—technology has no inherent morality—but it did, and this fact doesn’t amount to a ringing endorsement for humanity. The story of IRL streaming might be taken as the story of the Global Village writ brief: Utopian promise, demeaning outcome; High Frequency to lowlife. The disillusioned title of Haneke’s “internet movie,” Happy End (2017)—inspired in part by a newspaper item about a 14-year-old in Japan who documented her attempt to kill her mother online—nods to Kurt Weill, Elisabeth Hauptmann, and Bertolt Brecht’s 1929 musical bearing the same name, but given the eagerness with which each fresh evidence of the apocalypse circulates today, one gets the impression that many have concluded that, really, extinction is the best thing that could possibly happen to mankind. Still others, amidst an epidemic of convinced helplessness, can only hold out for a hero.

#BRINGBACKTONYSTARKTOLIFE

In trying to single out the aspects of IRL streaming that seemed truly novel, two stood out. The first was their real-time immediacy, a monstrous fulfillment of Coppola’s Live Cinema or Vertov’s Kino-Pravda. The second was their element of pay-to-play interactivity, however flawed, which relates to a broader longing for some interactional aspect to 21st century entertainments.

On April 24th of this year, with Covid numbers on the wane in the United States, a billboard appeared over Sunset Blvd. in Los Angeles reading “FOR OUR BELOVED HERO, PLEASE BRING BACK TONY STARK,” introducing below this the curiously phrased hashtag #BringBackTonyStarkToLife. The petition, presumably, was directed towards Marvel Studios head Kevin Feige, “Tony Stark” referring to the character portrayed in nine Marvel Studios productions by Robert Downey Jr., killed off in the climax of 2019’s Avengers: Endgame.

This plaintive advert was widely ridiculed online as an instance of fan entitlement gone too far, of Dorito-dusted mouth-breathers, emboldened by the recent success of the #ReleasetheSnyderCut campaign, now attempting to encroach upon the prerogative of the storytelling “creatives” who’d labored to provide them with high-stakes drama in which choices have consequences, and the damage dealt out by the reaper’s scythe cannot be undone.

Try though I might, I never managed to understand what was so risible about the billboard’s request, which is but a pale echo of the outcry after a burnt-out Sir Arthur Conan Doyle dropped Sherlock Holmes into Reichenbach Falls back in 1893, and after all falls squarely within the established parameters of what is possible in the Marvel Comics Universe. It doesn’t go so far as to make an unreasonable demand for the return of Downey Jr. to the role of Stark, whose fate was sealed with the expiration of Downey Jr.’s Marvel contract, only for the return of the character Stark, and given how many different Peter Parkers I’ve seen suit up in my adult lifetime, this hardly seems out of turn.

Stark died on-screen in Endgame, yes, but it may be remembered that he died wresting the Infinity Gauntlet away from baddie Thanos, undoing that steroidal California Raisin’s “Snap,” a split-second Holocaust which exterminated half of all life across Creation, including that of Tom Holland’s Peter Parker, who gulped out something corny like “Gee, Mr. Stark, I don’t feel so hot…” before dissolving into leaves of ash fluttering off on the digital wind. Death in the MCU, then, isn’t necessarily the end, nor is an expired contract. We know that there’s a whole multiverse of demographically diverse Spiderfolx out there, and Tom Hiddleston’s Loki—who I think died in one of these movies, but who can remember—has emerged in an “alternate version” streaming on Disney+, so what, precisely, is the problem with politely putting in a little ask with the guys upstairs? You never know what you might find next Christmas in your stocking, or if Downey Jr. might have a pricey divorce coming around the corner…

Should art be what the public wants to see, or thinks it wants to see, or what someone—let us call them artists, just for old time’s sake—thinks the audience needs to see? The push-pull between these positions is ongoing and eternal, particularly so when you’re dealing with an artform that, like cinema in its most high-profile manifestations, requires the marshalling of significant cash and labor, with the expectation that whoever had put that money down or otherwise secured the necessary resources was going to get something for their buck—either money, prestige (in the form of social capital or the favor of a state sponsor), or a buy-in to afford them proximity to in-crowd hip and glamor, a potential peek down a lusted-after actress’s décolletage at some droning awards dinner.

The necessity of such guarantors was avoidable in many creative undertakings. You could publish your obscene screeds in small batches like Peter Sotos, or put out a noise track under the name Rape Warrior on a tiny cassette release, or make non-commercial, one-man-band films of unblemished creative integrity like solo act James Benning, in full knowledge that whatever you did was likely to reach only a few hundred people; but because you kept the overhead low you could do it on your own terms, worry about pleasing no-one but yourself, and if anybody else went in for what you’d done, well, so much the better.

Most of the time, though, making a movie means putting someone else’s chips down on the table, which means strings attached and a certain amount of concession to whatever is at that particular moment perceived to be public taste. One advantage enjoyed through the years by those stubborn weirdos trying to satisfy private convictions rather than an anxious bean counter’s concept of What The People Want was that, even when the moneymen hedged their bets based on their sense of what had worked in the past, they knew there was no assurance that it was going to continue to work, because public taste was a moving target, and no-one could predict with any certitude what, exactly, those fickle, fatuous audiences would be in the mood for tomorrow—so the artist, so long as they didn’t hit a Saharan dry spell or hemorrhage unforgivable amounts of money in pursuit of their vision, could sometimes round the basepaths off of just a single hit or, if they were really “good in a room,” with no hits at all, just the ability to convince, cajole, rouse enthusiasm, and generate a spirit of jolly conspiracy around a project.

This was an inexact science, and its inexactitude encouraged a certain amount of experimentation, a freedom to tweak existing formulas or try new ones. Sometimes you had a hit and sometimes you took a hit, but show business was known to be a bad business, with no such thing as reliable return on investment. But late one sleepless night in a bedroom in Brentwood, perhaps a young studio executive tossed and turned after a sure thing had flopped ferociously, and wondered why he’d left behind the thriving, steady jukebox business in Flatbush to work for his cousin out west. And when slumber finally came, maybe our young executive dreamt of an algorithm which could calculate the cravings of audiences down to the last decimal, or the movie that would finally please everyone, because they could customize it exactly to their taste…

WHAT YOU NEED

There have always been attempts by popular cinema to cater to the customer, and a certain amount of receptivity to the public’s desires hasn’t necessarily been a bad thing for the great pop movie traditions—the Golden Ages of Hollywood and Japan and Mexico and Hong Kong, to take but a few examples. Sometimes tragedies did emerge from front office interference in the name of “Will it play in Peoria?”, as in the infamous 1942 Pomona and Pasadena test screenings which sealed the fate of Orson Welles’s The Magnificent Ambersons, but many Hong Kong filmmakers have seemed to welcome the opportunity to take notes and make down-to-the-wire trims based on week-of-release screenings, and the impulse to kowtow to public desires cannot be called an absolute evil—you could even make a cogent argument that we could use a bit more of this in our daily lives, which largely consist of piloting indistinguishably designed automobiles between equally indistinguishable eyesore slabs of corporate architecture that everyone hates.

Commercial concessions create strictures, but then any artist, even one operating in apparent perfect independence, is perforce working within a set of parameters, either self-imposed or pressed upon them by economy. Benning may not feel an overwhelming pressure to add a love interest or a car chase or a wisecracking best friend character to his Ten Skies (2004), comprised of ten ten-minute shots taken by a camera gazing towards the firmament, but he’s still limited to the 16mm stock he can muster on grant money and a CalArts professor’s salary, and he’s still only able to film the skies that he happens to find above him, not the ideal skies of his mind’s eye. Any film, even the film that glories in its artifice, is the result of a compromise or collision between the world and the will of the artists who are arranging elements within it, and though we tend to exclusively refer to filmmakers working within the constraints of The System as “playing the game,” this fails to acknowledge that The System is but one game of many. 

Cinema hasn’t often been able to convincingly pretend to be innocent of money and power, and depending on your perspective this makes it either a singularly corrupt artform or a singularly honest one, in which the buried economic subtext of other mediums is become text. The game played is usually conceived of as a game of chance, played for money, though Robert Aldrich, in a 1972 interview, offers a few variations, explaining of his own tenacious, prolific practice that “staying at the plate or staying at the table, staying in the game, is the essential. You can’t allow yourself to get passed over or pushed aside.”

As indicated by the case of Conan Doyle, that audiences should feel themselves to have a rooting interest in popular culture is nothing new and hardly unique to cinema, though, perhaps because of the public knowledge of the money at stake in movies, the movies are exceptional in the desire they’ve awakened in their fans to play along with the game at home. For some while the most obvious manifestation of this was through the scanning of box-office receipts in the United States, something like a second national pastime. While the average punter might have some idea of what was currently on the Best-Sellers list or the fact that a Picasso had recently sold to someone somewhere for a very impressive amount, this was as nothing compared to the lure of the week’s top grossers, which the newspapers printed daily like box scores. The fascination these numbers held for people always escaped me, but I missed them a little during a year-plus without tickets sold, for at least those figures represented a dollar-as-ballot democracy, however flawed, which you couldn’t say of the clandestine operations of the streaming services, cooking up whatever numbers they wanted.

An average freelancer’s understanding of the workings of the economy isn’t far in advance of that of the 19th‑century farmer taking produce to market: You put goods and services on offer alongside colleagues and competitors, haggle to fetch a price sufficient to keep the wolves from your door, then head out for a quick drunk before starting on the next bit of seeding, all while dreaming of a windfall that might break the cycle. This is just about enough to allow for a rudimentary reading of box-office numbers—a lot of dirty pool goes on and a lot of bad movies get made, but the gist of the business can still be grasped in terms of widgets. But the streaming services, like Netflix and their fellow Silicon Valley disruptors? Their obfuscations and their loss-leader price traps and their seemingly bottomless wells of investor cash which allowed them to, without ever once turning a profit, grow fat while the competition starved and withered away? This was something insubstantial, hard to get a grip on.

These are daily disorientations of the digital age, other signature sensations of which, for good or ill, include feelings of dematerialization and impermanence. The disorientation can be attributed to the fact that the workings of the world, specifically technology, have gone beyond the comprehension of the layman. If ever you should visit the Edison Laboratory Complex in West Orange, New Jersey—which, incidentally, has on its grounds a reproduction of the original tar-paper “Black Maria” film studio where W.K. Dickson made the first Kinetoscope films beginning in 1893—you can look into the restored machine room and get at least the gist of how the factory floor ran: a massive turbine supplying power to the various drills and presses and lathes by way of a network of leather belts hung down from an overhead line shaft, sure, okay, got it. Do you know how your cell phone works? Or your computer? If something went wrong with either, could you fix it yourself?

This is not to make the undoubtedly enormous disorientation experienced by those who lived through the Industrial Revolution an afterthought; after all, a mind as fine as that of Henry Adams was fairly blown by an encounter with a Daimler motor at the Hall of Electrical Machines at the Paris World’s Fair of 1900, an encounter described in a chapter, “The Dynamo and the Virgin,” in his posthumously published The Education of Henry Adams. But humanity, after having had a little time to acclimate to the jagged incursion of the Industrial Revolution into daily life and blunt some of its sharper edges, has had to start back at square one with the Digital Revolution, which has a great many of us now dividing our time between the physical and digital worlds: the former concrete and reshaped only through great exertion; the latter flexible, weightless, and sculpted with the click of a button.

This experience of being two places at once, and therefore fully in neither, is what I mean by dematerialization, though the proper word would be “bilocation.” It may be stated, not inaccurately, that this isn’t a new development, and that for generations now people in the industrialized world have spent ample time staring at glowing boxes of one sort or another, and did not generations of the devout follow Christ’s exhortation to be “in the world but not of it”? (Adams’s expressed fear was that the clamor of the Dynamo would drown out the sweeter call of the Virgin—Mary, that is—and, supplanting a spiritual religion with the material religion of science, cut off humankind from the source of their greatest artistic accomplishments.) 

There is, nevertheless, something unprecedented in the ubiquity of today’s pocket-sized glowing boxes, and in the total or near-total absence of any indexicality between many of the images viewed on them and the physical world. That this has something to do with the Digital Revolution, and with the fact that, for many, labor now consists solely of staring at glowing boxes and regarding their inbox with anxiety, is something I’ve been writing about for more than a decade, and I could prove it if not for the fact that visiting many of the URLs that once took you to the places where these things were written will now greet you with the following image:

That is one aspect of what I mean by impermanence. The Age of Faith that produced Chartres, so beloved by Adams, has passed in much of the west, as has the Industrial Age that produced the cathedrals of the late 19th and 20th centuries, the train terminals and movie palaces—much of today’s great building is now confined to the digital realm, which is to say, built on shifting sands. One hears grim estimates of the percentage of silent cinema, as turned out by Dickson and those that came after him, that has survived to the present day, and I have sometimes thought that the percentages of early internet must be similar if not grimmer, and who knows but that lost Angelfire sites might not be viewed someday as missing pieces of cultural heritage on par with Griffith’s lost 1918 The Great Love. This impermanence is one aspect of the extraordinary pliability of the digital, the very pliability that may bring us ever closer to making real the fever dream of that junior studio executive in Brentwood: a the-customer-is-always-right cinema that gives audiences exactly what they want, a cinema of receptivity and interactivity, a Dream Factory that offers custom-fit tailoring.

I’ve gotten hung up on identifying how these sensations of the digital age have made themselves felt in cinema, because movies are my bailiwick after all, and in how they’ve made themselves felt in life, because life is the whole cloth that cinema is cut from, or was, and also because I’m planning to remain alive for the foreseeable future, and I’d like to have a clue what’s going on. Pursuing the garment comparison from another angle, movies were always necessarily a weave of natural and synthetic fibers, but of late the latter have predominated, and popular cinema has approached a pure polyester period. That isn’t intrinsically a negative development—polyester is durable as all hell and holds colors remarkably well—but it is a major development. That, too, is something I’ve been trying to describe.

SHATTERED MASS

The eternal bottom line and the inevitability that filmmakers must attempt to find ways to limbo under it having been two inviolable rules of moviemaking since the artform began, it’s often suggested that our present popular cinema, cosmetic differences and marketplace shifts aside, is still recognizably a descendent of yesteryear’s, that, for example, the superhero spectacle is to contemporary American audiences what the western was to those of a previous generation. The fashions may have changed, says the wizened voice of experience, but squint your eyes a little and you’ll see that the boy of today has old granddad’s familiar gait, and that the distance between decades is not so great and gaping as it seems. If you look past technological innovations, the voice drones on, you will find that the movies have through the years been continuously engaged in a pursuit of storytelling that, as treacly tributes to the transportive powers of the cinema never tire of telling us, originated around some flickering Paleolithic campfire, as mastodons trumpeted off in the distance.

Fanboys love these appeals to history, which serve the purpose of confirmation bias, validating their taste by shellacking with a patina of historical provenance the shiny tentpoles they’re predisposed to honor and jealously protect—though one might be forgiven for wondering if many fanboys spend much time with those pokey old movies of which their beloved box-office blitzkriegs are purportedly descendants, or how anyone can take straight-faced their doofus directors’ evocations of Ozu and Antonioni. This “long view” also appeals to quite a few critics and scholars who should know better yet ache terribly to appear “with it,” and thus not doomed to the fuddy-duddy irrelevance they consider to be the rightful territory of academics. 

After all we must remain abreast of the popular cinema, mustn’t we? The debate over the potential for legitimate artistic expression in popular entertainments was long ago decisively settled, and the pro-pop partisans carried the day in an unconditional surrender rout. To make any further mention of the matter at this late date is to risk sounding like Dwight Macdonald or, worse still, “pretentious,” a burning brand, the threat of being marked for life by which is sufficient to get any red-blooded, demotic American to pipe down and watch Black Widow in respectful, chin-stroking silence.

The strongest advocacies for pop back when it really needed articulate defenders, however, didn’t pursue an ideological project of reclaiming the masscult or the middlebrow simply for the sake of doing so: it was about the work. If today we can speak of George Herriman or Agatha Christie or Josef von Sternberg or Jelly Roll Morton as artists of the first order without fear of looking like a rube from Parma in nipple-high shorts and black socks, it’s not because some wag wrote “Let people enjoy things” in the pages of Vanity Fair—it’s because they were artists of the first order, and the evidence existed in their work to support claims for their being so, and because pugnacious, impassioned sorties were made to explain how, by virtue of the immensity of their gifts and their sense of creative mission and the visceral impact or pure pleasure of what they brought forth, these popular artists were not in fact living in and creating a world removed from highbrow-approved figures like Henri Matisse, Virginia Woolf, Alexander Dovzhenko, and Charles Ives.

Such incisive arguments for present-day pop cinema have been, tellingly, rather thin on the ground, replaced by vague honorifics of uncertain significance (“superhero movies are modern-day myths…” etc.) or a priori assumptions which invert yesteryear’s snobberies: where once you were assured you didn’t need to pay attention to mass culture because it was mass culture, now you simply must, for the same reason. What goes absent, in both cases, is the work itself.

There was undoubtedly some element of upstart piss-taking in the proselytizing for pop by the smart set of the last century’s first half, young punks tousling the hair of pucker-lipped upholders of the Classics and Frankfurt School killjoys, but the most rousing defenses of commercial pop bypassed the highways of generality to instead lazily wend their way, stopping at any and every point of interest, along the country roads of specificity, which more often than not is where the stands selling the plumpest produce are to be found. I tell ya, you gotta keep an eye out for those generalists—which is itself a generalization, but if you’re against generalizations, that includes being against the idea that they’re never, ever true.

The adoption of certain generalities is, it should be said, inevitable in a world where our hourglass is turned and its contents are running out from birth, and certain guidelines that one sets up constitute something called personal taste: I like tripe dishes, I don’t like science-fiction novels, I like the post-On the Corner output of Miles Davis, I don’t particularly care about Sketches of Spain, and so on. You can try as best you can to find the pleasure in things—new things or things that have failed to give pleasure in the past but which for whatever reason suddenly seem to invite another go-around—but for practical purposes hope ought not spring eternal, and there is a well-known definition of doing the same thing over and over again with the expectation of different results.

I’ve returned to the multiplex sporadically since the multiplexes returned, out of habit more than anything, like George Romero’s walking dead shuffling towards Monroeville Mall, but I stopped expecting much fun of it a while back, and I expect that many moviegoers find themselves in the same position. Often I find I don’t even know what I’m looking at up there on the screen: for all intents and purposes Sonic the Hedgehog (2020) or Chaos Walking or whatever fulfill the definition of being “feature films,” but they feel embalmed and alienated from life in a way that even the most glacéed, overstuffed, plywood-and-tulle studio confabulations of yesterday somehow didn’t. They’re huge and lumbering and loud and seem on the brink of extinction, a bit like those mastodons.

This is not for lack of insistent innovation. A few years back—at least this is my best guess at when it began—you started hearing the word “immersivity” bandied about by exhibitors, as premium ticket RPX and 4DX “experiences” appeared on offer, the latter boasting a motion-chair and other haptic inducements to complement the movie’s already overpowering audio-visual shock-and-awe, advertised with the tagline “Don’t Just Watch the Movie; BE IN IT.” Movies would no longer only imitate life but supplant it, and the Mia Farrow character in Woody Allen’s 1985 The Purple Rose of Cairo might someday no longer need her fantasy lover to miraculously emerge from the silver screen—she could come to him! Anyways the plush new recliners they’ve put in are coaxingly comfy and the sound systems roar with authority and filmgoing is still a cheap way to knock off from life for a couple of hours, one of the last reasonably affordable outings that there is left, and since it has been stridently stated since the birth of cinema that escapism is what it’s all about, well, what’s the problem, Jack?

The association of popular cinema with escapism is most probably rooted in the medium’s early appeal to immigrant and working-class audiences, whose inner lives were presumably dominated by fantasies of flight—at least this is how they’re often represented by those higher up the social ladder who’ve deigned to depict them, including working-class escape-artist climbers like Allen. These audiences of cab drivers and stenographers and sandhogs and mill girls and bartenders who lived uninsulated from the violent jostle of the industrialized world latched early onto a medium that was based in the photographic depiction of reality, but that they might be going to the movies to increase their fill of life and their understanding of the world was little considered. Aesthetics was the domain of Newport neurasthenics, not Hoboken bricklayers, and it took ten-cent words from the smart set to win the five-cent recreation the privilege of being called art.          

The Culture Wars of the 20th century that validated pop cinema never ended, as nothing ever seems to, and sometimes, for as much as we seem to have learned from it, I have the distinct impression that the 20th century never happened at all. The Culture War’s scrimmages have usually been broken down politically along left and right lines, with several of the highbrow criticisms of pop in the earlier part of the century in the United States emanating from intellectuals who owed some debt to Marx and who might’ve agreed with Vertov’s declaration that “Film drama is the opiate of the masses.” That no work of genuine oppositional spirit, not to speak of artistic value, could flourish in the repressive tolerance of middle-class mass culture and western consumer society was an article of faith for Herbert Marcuse, the implicit exception to this rule, as ever, being the treatises produced by infallible Freudo-Marxists of the Frankfurt School who, by virtue of magical theory, floated miraculously above the fray, like Hindu Yogis.

The unending din of that front of the Culture War has never interested me as much as another clash happening alongside or within it: that between the generalists and the specifists, between the clarifying catchalls of theory and the complicating convolutions of experience. I advocate for seeking out the company of the specifists whenever possible, though one’s sense of direction may sometimes become confused in the smoke of the melee. Some partisans would have it otherwise, but there is no clear political corollary in this conflict, no side with a franchise on close-reading or close-seeing. The Frankfurters were leftists, yes, but as a body they weren’t always so damning of mass culture as was the Marcuse of 1964’s One-Dimensional Man, and they could hardly be said to have invented sweeping denunciations of pop cinema, which through the years absorbed far more damaging salvos from Puritanical politicians and pulpit-pounders on the other side.

Artists thankfully tend to be too busy tending to nuts-and-bolts and persnickety problems of process to consider their role in perpetuating American or Soviet hegemony or in the corruption of youth, and therefore less prone to absolutist dogma. Artistic affinities can and should transcend political affiliation, though this may be a source of consternation among those who confuse the latter with the former—why, for example, would a filmmaker with immaculate leftist credentials like Jean-Marie Straub perpetuate the legend of D.W. Griffith by quoting the latter’s gloomy diagnosis that “What the modern movie lacks is beauty—the beauty of moving wind in the trees”? I would suggest that this is because this is the same Straub who said, by way of explaining the interminable shots of an Alfa Romeo cruising through Rome in his and partner Danièle Huillet's History Lessons (1972), “To understand the street, you must see the street,” and that they are essentially saying the same thing, and that this thing is very… specific.

SEEING THE STREET

I have been taking the long way around the barn again, aimlessly joyriding like Straub and Huillet’s Alfa Romeo driver, but passing through these history lessons is the only route I know to take that will arrive at describing what has gone missing in popular cinema: the wind in the trees, and the street. To issue such plaints is a common pastime of the aging—Griffith’s was issued at the dissipated end of his life, in a bourbon bottle-strewn suite at the Knickerbocker Hotel on Hollywood Boulevard—and they should be taken with a grain of salt, for the mask of principled, haughty critical discernment is a convenient disguise for the gassed, graying spectator, huffing and puffing as culture, unconcerned, trucks on beyond their myopic purview.

The director of The Birth of a Nation (1915), in his disenchantment with the modern movie, has a somewhat unlikely companion at the old folks’ home in the person of Rudolf Arnheim, who, in his 1935 “The Film Critic of Tomorrow,” adjudged that “the decay of the art of expression in film” began almost immediately with the appearance of talking pictures. Arnheim sings the blues for the critics of the talkie era, seeing in them delusional cases sitting through hours upon hours of chatty dross waiting, like Didi and Gogo and with as much hope of satisfaction, for the return of the glory days, addicts struggling to find that first high again in vain, who think they are “seeing bad films instead of understanding that what [they see] is no longer film at all.”

Arnheim’s axiomatic verdict that word and image “were each unto themselves such all-encompassing means of representation that they could not supplement each other artistically when applied simultaneously” is a statement suggesting so many exceptions that it can be safely done away with as a rule—are we also throwing out the comic strip, born around the same time as cinema? The whole history of word art? And was any theatrical stage truly stripped clean of “images”?

Griffith, too, was addressing the changes wrought by sound cinema, though in a manner both more lyrical and grounded more deeply in practical experience. The medium that he had helped usher into adulthood was one in no small part made en plein air, in the elements, with open shirt collars against desert heat or mufflers against cutting flurries. The blizzard in Way Down East (1920) was a true nor’easter, and that really was star Richard Barthelmess scrambling between drifting ice floes breaking apart in spring thaw. Now sync-sound shooting had trapped the cinema on insulated soundstages, dampening the unpredictable din of the street and quieting the pesky songbirds, now mimicked by library sounds. Any wind in the trees was provided by an oscillating fan, and the foliage was artificial, in the main. Griffith made only two sound films; his last, The Struggle (1931), was shot almost entirely in the Depression-era Bronx, including some bracingly bleak and modern outdoor scenes. To understand the street, you must see the street.

Neither Griffith nor Arnheim was wrong, exactly, for nobody ever is in matters of taste, though it may be ventured that their mourning what had been lost rendered them insensate to new possibilities gained, that process sometimes presented as “progress.” The primped and primed artifice that Griffith seemed to deplore could be the further fulfillment of Arnheim’s analysis of cinema’s early, striding development, as an invention “initially purely mechanical” acquired the tools to “present reality in an artistic manner.” What I believe he means by presenting reality in an “artistic manner” is presenting something other than unvarnished reality, which it was considered is what Edison and the Lumières had captured in purely mechanical fashion, though the medium’s pioneers were composing within a frame in a manner that might not be wholly alien to Pierre-Auguste Renoir, or to Max Reinhardt blocking action under the proscenium arch, and if it isn’t this frame that distinguishes Arnheim’s “image” from life in the raw, then what is it?

The studio provided greater control of what happened inside the frame: snow fell in fat fluffy flakes whenever you wanted it to and for as long as the chrysotile held out, there was no need to ask a thespian to risk hypothermia for your thriller-diller finale, and if you really needed that added touch of verisimilitude you could still drag cast and crew out on location—in making his Way Down East of 1935, director Henry King employed a combination of footage filmed out in Watersbury, Maine with stunt doubles and rear-projected studio shots back in Hollywood with stars who were too valuable to be lost beneath the black waters of the Kennebec River. Unpredictable Creation had been made to conform to schedule, and powerful producers and directors became as Gods upon the set, commanding “Let there be light” when firing up the kliegs, and corralling shadows with a grip’s tweak of the flags.       

The difference wasn’t so stark as all that, of course: even early silent shoots weren’t innocently Arcadian larks, the studio stages had come into the picture long before the mics had, and the subservience of the image to the word was never total, and not particularly long-lasting. One man’s doldrums is another’s Golden Age, and ample evidence of some of the finest minds of bygone eras’ willfully blinkered obliviousness to the enormous pleasures that were available to anyone not immured from them should act as a caution to us, the living. Still, there was a difference in the before and after, and if Arnheim’s failure to perceive much of merit in even the talking pictures that had been made up to 1935 seems to me like a failure of discernment and taste, the discernment and taste are nevertheless his own. Technological innovation may be inescapable, but that doesn’t mean one has to like the results of it, and maybe film art really did die with the iris-in, and we’ve been unknowingly ingesting a NutraSweet substitute ever since.

Resting in the sedate knowledge that there is such a thing as precedent and continuity in cinema and the manner it is discussed can also act as an impediment to recognizing the truly unprecedented, the jaded historian drowning out the noise of tectonic rumblings outside the doors of his study with the contented yawn of “Nothing new under the sun.” The idea of culture as an unbroken continuum is comforting in the way that all absolutes are, and the churning Dynamo of Discourse must not be slowed by spanners in the works: contemporary pop cinema demands serious attention, we’ve had this discussion already, now get back on the line and get your ass to work.

To lionize today’s board-approved mass cultural products on the grounds that various items consigned to the categories of pop and pulp have in the past been treated to sniffy, cursory dismissal unworthy of their merits, and that it thereby follows that the same oversight is necessarily happening today, is to follow a rather specious logic, a perverse gambler’s fallacy: pop and art have coalesced with some consistency, therefore they must necessarily have continued to do so. This is Bayesian probability—what has happened can be expected to continue happening—and it’s often enough to carry a conviction in court: if the defendant has a long record of doing the thing they’re accused of, it’s very possible they’ve done it this time. The problem here being that in this case we’re dealing with generations of turnover, and at this point even a positive ID of the perp is difficult. Is that really cinema up there? I don’t remember it looking like that. Did it get its face lifted? Do something to its hair?

Still, a vague uneasiness that one might be missing out on unknown pop pleasures can overwhelm gut-level enmity, if you listen to the scuttlebutt telling you that, actually, you need to check out WandaVision and that, actually, K-pop is genuine volksmusik. There’s always an “actually,” as though the culturati who once warned against masscult were still around and still issuing a deafening chorus of disapproval, as though encouraging people to skip their cultural vegetables and gorge on popular culture instead was some kind of embattled, minority position.

The left intelligentsia of today, to the degree that such a thing exists, seems by and large to have made peace with pop along redrawn theoretical lines: sometimes through a morbid accelerationist relish in the crassest, loudest, ugliest productions of the culture industry, which can be said to embody the catastrophe of late capitalism; sometimes through a newly awakened faith that international corporate mega-productions, several thousand degrees further removed from working-class artisans and the authentic folk culture whose displacement by mass culture the Frankfurters mourned, were, in fact, pure expressions of the people’s spirit. (“Authenticity” is today an endangered species, rarely sighted outside a cage of scare quotes.) The right, inasmuch as they trouble themselves with such things, seem content to wade into dustups over female Ghostbusters and “W.A.P.” and the libtard propaganda machine. Such convictions are usually found on the far fringes of the main body of film journalism, dominated by a centrist liberal consensus whose key planks have been “Dump Trump” and “representation,” a cabal whose interest in films lies in extracting their single overriding “message” and determining whether said message is hostile or friendly to these causes, and therefore good or “problematic.” By all available evidence they have been well served, as every mainstream movie today costs $90M and scores 90% on Rotten Tomatoes.

This has been a grand age for messaging, but the movies, the big ones not exclusively but particularly, are unprecedentedly awful. A couple of years ago Martin Scorsese stirred up controversy by stating, in an Empire magazine interview, that the superhero movies which have dominated the box-office over the last decade were closer to theme park rides than cinema. The idea that a film might not necessarily be a film echoes Arnheim, but carries more weight coming from a figure like Scorsese, noted for his omnivorous movie consumption and passion for pop cinema—I recall seeing Joe Bob Briggs, introducing a screening of Brian De Palma’s Carrie (1976) in Columbus, Ohio in the late ‘90s, fielding an audience question about working with Scorsese on Casino (1995) by describing his director’s surprisingly extensive knowledge of women-in-prison films.

Clarifying his position in a New York Times op-ed, Scorsese enumerated the elements he found missing in “market-researched, audience-tested, vetted, modified, revetted and remodified” modern film franchises, these elements being “revelation, mystery or genuine emotional danger.” I would go further than Scorsese: what I believe is seen with increasing scarcity in large films is nothing less than the world itself, and if the absence seems to go unnoticed, it’s perhaps because we’re spending less and less time in it ourselves.

THE RISE OF THE MACHINES

There is no single culprit responsible for this disappearance, but several contributing factors. A pronouncement from Jia Zhangke points to one: “If cinema is going to show concern for everyday people, one must first have respect for everyday life. One must follow the slow rhythm of life and empathize with the light and heavy things of ordinary life.” Jia’s work belongs to a lineage of cinematic realism that stretches back to the Lumières, and though I don’t take this tradition to be the only valid use of the medium, the ability to convey weight that Jia refers to—the light and heavy things—encapsulates something that cinema has exhibited an extraordinary capacity for.

That weight could make itself felt in many ways: through Straub and Huillet’s commitment to conveying a piece of the world with total fidelity, yes, but also through something as simple as the relationship between camera and human subject. Even a total fantasy played out before constructed environments was a documentary of its performers, embodied on the screen with such immediacy that you felt you could almost reach out and touch them. Here was another kind of weight, conveyed in the term “screen presence”: the performance of incredible physical feats, the communication of an actor’s naked existential self, and a celebration of the human form unequalled by any since the Italian Renaissance.

Any film that gives the appearance of lustful ogling is out of favor today, but there are still actors—though they, too, often seem disconnected from the world, which is to say from a paying public. The studios based in Los Angeles and Tokyo and Mexico City and Hong Kong were the backbone of national industries that made many more movies than they do now, and the movies they made usually came to market faster, and one result of this was an ability to respond to what audiences were responding to with the fast turn-around suppleness and reflexes of a spry shortstop: when receipts anointed a new star, you weren’t going to be made to wait long to see more of them; when one underlit crime thriller brought in beaucoup bucks, very soon the soundstages would be drenched in inky dark. A rigged system composed of publicists and columnists and fan magazines did more than its share in pushing certain actors and certain narratives, but at the end of the day audiences held veto power through the box-office, and you couldn’t make the punters take to just anybody, and on the whole the process seemed a little less top-down and a little more organic than that of today, when various English public schoolboys and upper-middle-class white women of pristine pulchritude are handed down from on high as the latest thing. It’s hard to believe you could actually touch them, any more than you could actually join the Avengers.

I don’t want to over-romanticize these periods in the life of popular cinema or suggest that during these times artists’ instincts weren’t routinely stymied or compromised by the demands of commerce—they were, and if they weren’t being curtailed by market pressure there was always something else to get in the way of creating in absolute liberty, because cinema is a tainted art even in its purest form, a game played between nature and machine—one unreliable, the other a like-clockwork recording device. But in those days, and up until rather recently in fact, a certain amount of wiggle room was allowed to the crafty creator who knew how to play the game, because the moneymen still needed the occasional artist around, not yet having put a system into place to make filmmaking a proper industry smoothly run along Fordian lines, as Adorno and Max Horkheimer had it, rather than a messy atelier which, as specifist par excellence David Bordwell has argued convincingly, was closer to the truth of matters.

As a result of that grudging leeway allowed by the front office and of the inherent unpredictability of moviemaking, you saw a tremendous variety and number of good or very good movies, a much broader and deeper and livelier middle range than exists today, which is one reason that I return time and again to the inexhaustible bounty of Hollywood in the ‘30s, so pettishly waved off by Adorno, or Hong Kong in the ‘80s. But any proper industry prefers systems to surprises, and the front office, doing as best they could with the tools at their command—polls and audience response cards and the like—struggled after a formula to reproduce success, dreaming of turning out product as perfect as the Model T, maybe even in customizable colors, and waiting for the supercomputer that could tell them what their inconstant audiences wanted, that could build the perfect film.

If social media prompts are to be taken as any indication, the desire for “perfect” films—as if such an accomplishment were possible in the subjective realm of art—doesn’t belong to producers alone. Fans defend the Tomatometer scores of their favorites with a zeal once reserved for defending the honor of one’s bride-to-be from ruffians, and there are many instances of filmmakers who are notorious “perfectionists,” by which we mean those who struggle mightily to bring the unruly elements of a shoot under their control and produce a not-a-hair-out-of-place product that as nearly as possible corresponds to their notion of what the thing they are endeavoring to make ought to be.

Among the most notorious of these control freaks was Stanley Kubrick, of whom Jacques Rivette, an artist with an infinitely greater interest in happenstance operations, once said: “Kubrick is a machine, a mutant, a Martian. He has no human feeling whatsoever. But it’s great when the machine films other machines, as in 2001 (1968).” It may be noted, however, that Kubrick’s precision-engineered films often reflected a control freak’s fear of unavoidable human fallibility—take his 1956 The Killing, in which a racetrack robbery timed to the last second by mastermind Sterling Hayden comes up lame in the home stretch due to a combination of avarice, stupidity, lust, and a second-hand suitcase with a broken clasp.

Man or woman may aspire to the perfection of the machine, but the human being is ultimately an unreliable organism, and the dispassionate machine has several distinct advantages over its creator—which of course is part of the drama, such as it is, of Kubrick’s Space Odyssey. In addition to his enthusiasm for the game of cinema, Kubrick was a keen chess player—a seedy “Academy of Chess and Checkers” is one of The Killing’s more memorable locations—and as such doubtless observed with interest the first victory of IBM supercomputer Deep Blue over world chess champion Garry Kasparov in 1997, two short years after an opus called Industrial Society and Its Future, written by a Harvard-educated former mathematics professor named Ted Kaczynski, was published in the pages of the Washington Post, with the New York Times splitting the cost.

Kaczynski’s argument—that technological innovation shouldn’t necessarily be regarded as a net positive for humanity—fell on mostly deaf ears, arriving at a moment when techno-utopianism was riding high on the advent of the commercial internet and the promise of a Global Village, and it didn’t help that he was considered a bit of a crackpot due to his habit of blowing off people’s fingers, which has generally been considered a bit beyond the pale, in fact downright inhuman.

LIQUID METAL

In Terminator 2: Judgment Day, the top-grossing film of 1991—the year the Commercial Internet eXchange was founded—Linda Hamilton’s Sarah Connor is presented with the chance to put a bullet in the fecund brain of Joe Morton’s Cyberdyne Systems engineer Miles Dyson, whose research will pave the way for the artificial intelligence that will trigger an apocalypse and strive to annihilate what remains of rebel humanity, but mother-love makes Connor flinch at the sight of Dyson’s family, and she chokes at the free-throw line. This moment of human frailty doesn’t necessarily lose the game for the species, for as it happens the forces aligned in a lockstep march towards Skynet and Armageddon are bigger than any one man. And admittedly, I doubt time traveling back to the 1980s to drop a dime on either Tim Berners-Lee or James Cameron would have halted the eventual appearance of the WorldWideWeb or DCP theatrical projection, respectively.

With Terminator 2 Cameron offered another man vs. machine face-off after the style of the 1984 film that preceded it, but with fresh additions. The heavy of The Terminator, the hulking Model 101 played by Arnold Schwarzenegger, whose weight room-forged physique suggested a humanity rebuilt by technology, now returned reprogrammed to fight for the future of humanity, an allegiance suggesting that some détente between man and machine—and therefore a future in which we might thrive—was yet possible. To arrive at that happy day, however, the Model 101 and Connors mère et fils must contend with a new top-of-the-line nemesis straight from the Skynet showroom, Robert Patrick’s greyhound-sleek T-1000, a fleet-footed fellow still harder to kill than the pokey hyperalloy Model 101 because he didn’t provide a solid target, being forged of something called “liquid metal” which, in addition to providing this state-of-the-art product with shapeshifting abilities, could absorb and repair damage from just about any earthly ordnance that you threw at it. (Re-watching the movie, I had to think of the chameleonic code-switching that the IRL streamers use to create conflict without commitment.)

The new contest that Cameron had set up in Terminator 2, then, was analog vs. digital—the effects concerning the T-1000’s liquescent “mimetic poly-alloy” were achieved with groundbreaking CGI courtesy of Industrial Light & Magic, with the analog practicals provided by Stan Winston, who’d also worked on the first Terminator film, and thus on the Model 101. This was among the first glimpses that quite a large number of moviegoers, including this writer as a preadolescent, would receive of what you would perpetually hear referred to in the following years as “the magic of CGI,” which differed from any effects ever seen in cinema in that it allowed audiences to see things that never actually physically existed in any form, save as data.

The history of effects is nearly as old as cinema, yes, and some may point to matte paintings and celluloid animation and rear projection and the Schüfftan process and even shot-on-film photography as various forms of cinematic deceit, so many steps along the way to CGI, but I’ll insist that there was something fundamentally new about this technology, showing things that only ever occupied physical space when they were printed onto a film strip, and that for not too many years longer.

We know how the competition between analog and digital technology has played out on cinema sets and in projection booths and in many aspects of our lives—the Model 101 of 35mm has been vanquished by the T-1000 of DCP—but Cameron allows the tactile a final victory over the impalpable foe. With more than a little help from analog tech, humanity pulls off the ‘W’ in the final minutes of Terminator 2 as the 101 renders his opponent obsolete by yeeting him into a vat of molten steel whence no tech, however sophisticated, can return. The scene was shot at the 23-story-high Kaiser Steel Plant in Fontana, a former mill town in California’s Inland Empire; built in 1942 and defunct since 1983, the plant was, in 1994, painstakingly disassembled by three hundred workers from Beijing’s Shougang Steel Corporation, so that all 55,000 tons of the mill could be reassembled in southern China.

It is tempting to suggest a relationship between this piecemeal dismantling of American manufacturing, hysterically literal in the case of Kaiser Steel, and the consequent departure of American popular cinema from its last connections to photographic realism. An artform born into the industrial age fades into the ghostly half-life of the digital dusklight upon entering the “information economy”—very neat, though this proposal also presents certain problems. A little more than a quarter of China’s 1.4 billion souls toil in the industrial sector today, and it cannot be said that the moviegoers of the Middle Kingdom seem any less susceptible to the charms of whirling pixels, or that their homegrown popular cinema—in my admittedly limited experience somehow improbably even worse than that of the United States—is any more rooted in the physical world.

In films like his Terminators and 2009’s Avatar, which use the latest in effects technology to warn against the life-quashing dangers posed by the encroachments of technology, Cameron epitomizes the paradox faced by the technophobe cineaste or cinephile. The art of cinema has been, from the moment of its inception, inextricably tied up with industrial process—as has every contemporary artform, yes, but cinema arguably more so. Even when a filmmaker-artisan like Stan Brakhage eschews the camera entirely to etch into the emulsion of unprocessed, unexposed 16mm film, the film he’s using had first to roll off an assembly line in Rochester, coated with an emulsion made of abattoir waste.

Cinema may be put to work in the observation of the natural world, and very often has been, but the plein air film represents an incursion on nature by industry, the sound of a sylvan glade interrupted by the whir produced by even the most light-footed druid-filmmaker’s Bolex. (An argument can be made that digital filmmaking is cleaner and greener, but natural it is not, and I would encourage the reader who believes that microchips leave a dainty ecological footprint to read up on Agbogbloshie, an e-waste dump in the Ghanaian capital of Accra, or the still larger heap in Guiyu, China.) There can be no real Barbizon school equivalent in cinema, and if Jean Renoir thought perchance of his famous father or of Corot when shooting his first feature La Fille de l’eau (1925) in the Forest of Fontainebleau, he can’t have imagined he was following their lead, exactly, when he tromped his troupe through those same woods.

WOULD YOU LIKE TO PLAY A GAME?

The digital hum has now all but replaced the analog clatter. Arnheim’s figurative proclamation that the films the talkie viewer saw were no longer film at all has, in the great technological shift of the last fifteen years, become literal truth. The first-run audiences for Tay Garnett’s Her Man in 1930 may not have been seeing something that lived up to Arnheim’s nonverbal ideal of cinematic art, but they were still watching a ribbon of celluloid judder through a projector whose basic mechanisms had remained unchanged from the time of the Tsars to the beginning of the long Putin Era. The same cannot be said for the paying customer watching The House Next Door: Meet the Blacks 2.

The materiality of cinema, even in its analog form, is a tricky subject—introducing her Girl Head: Feminism and Film Materiality, Genevieve Yue notes the “virtual” aspect of even celluloid, writing that “the materiality of the film strip does not resemble the image that is projected onscreen,” an observation that could just as easily be applied to the DCP drive and the images that spring forth from it. This doesn’t, however, diminish the magnitude of the change that has taken place as a result of the industry’s standardization of digital exhibition over analog.

The liberation of cinema from tactile film accompanied, roughly, the end of its de facto indexical relationship with reality—CGI tampering with the image, novel in the time of T2, can now be assumed to have occurred in any multiplex release, and usually to have occurred extensively. For many observers, such as André Bazin and Frankfurter Siegfried Kracauer in his 1960 Theory of Film: The Redemption of Physical Reality, the unprecedented miracle that cinema brought to the world, that of the photorealistic moving image, was its core value—the thing it could do that had never been done before.

Though this depiction of physical reality may have been cinema’s defining value, this magpie medium borrowed freely from literature or theater or music, for no artform can exist in a vacuum. Illustrating this point in his “The Ontology of the Photographic Image,” Bazin wrote that the appearance of the daguerreotype freed painting from its last obligations to objectively represent the world, allowing the revolution of the medium undertaken by the elder Renoir and his fellow Impressionists. When moving photographs came onto the scene, this new Seventh Art would lift freely from painting—Cecil B. DeMille’s signature “Rembrandt lighting,” Renoir’s attempted “study of French gesture as reflected in the paintings of my father and the other artists of his generation” in his 1926 Nana, Murnau’s cribbing from Caspar David Friedrich and others in his Faust of the same year—but cinema’s roots in photography meant that it was only in animation that it could most nearly approach the graphic arts.

Ours, however, is an age that has given us a “live-action” The Lion King (2019) consisting entirely of photorealistic CGI, and most tentpoles are now more than half cartoon. Here, now, was a level of post-production control that Mr. Kubrick could only have hoped to wield—or perhaps dreaded the coming of, for it was the magic of CGI that dropped digitally conjured onlookers into the orgy sequence of his Eyes Wide Shut (1999), obscuring scenes of choreographed rutting deemed too racy for an ‘R’ rating. As in the studio-bound days, the movies would no longer be at the mercy of the elements, even those shot on location, and the ductility of the digital image was infinitely greater. A painted backdrop of a stormy sky, once impressed on celluloid, was a fixed element in the image, but now that sky could be infinitely reworked, finessed to perfection. Actors, often shot emoting alone in a vast field of greenscreen, would remain as remnants of the medium’s relationship to physical reality, though in many cases they, too, had acquired a new plasticine, airbrushed sheen. Always running against the grain of Kubrick’s perfectionism was his predilection for unusual and eccentric actors—the Police Gazette rogues’ gallery of pitted, pocked, middle-aged faces in The Killing, for example—and this makes him seem downright humanist today.

The overworked and cluttered images found in contemporary blockbuster films have more in common with the tight, high-finish blockbuster paintings of the late 18th and 19th centuries than they do with those of Beverly Hills Cop (1984), which might as well be a neorealist tract. The imagined heaven of oil-on-canvas images in 1998’s What Dreams May Come is today a dream fulfilled; compare the maximalist images of roiling stormfronts trotted out in discussions of MCU “cinematography” to Joseph Wright of Derby’s paintings of Mount Vesuvius or the overpowering bravura landscapes of John Martin and you will find a kissing-cousins affinity. As much of today’s digital cinematography takes an almost painterly turn, the touted “aura” of physical painting deliquesces into the digital impalpable, turnstiles spinning steadily to admit the curious tourist to a travelling “Van Gogh: The Immersive Experience” that promises entry into the mind of a troubled, gifted Dutchman who died penniless in 1890 after shooting himself in the chest with a 7mm Lefaucheux pinfire revolver.

The cross-pollination of film, painting, and digital technology doesn’t necessarily result in such unsightly hybrids as these—a few high-toned exceptions include Éric Rohmer’s L’Anglaise et le Duc (The Lady and the Duke, 2001), Andy Guérif’s Maestà (2015), the animate abstract painting films of T. Marie, and various plastic and digital “works at the intersection of art and technology” by Rachel Rossin, and I value W.S. Anderson’s Pompeii (2014) as much as any Wright of Derby Vesuvius. When I look over the names of the movies of the last fifteen years that have left a deep impression on me, however, I confess to finding that a significant majority lay their scenes in environments that were largely found, hand-made, or rented, rather than rendered.

As cinema at birth drew from media that preceded it, so in later life it would draw on new arrivals. When another moving image medium, television, joined cinema on the scene, influence ran both ways. In the case of cinema, this included jealous responses to this new, cutthroat competition for audiences’ eyes and ears—the marketing of CinemaScope and 3D innovations, which might be considered early bids at immersivity—as well as occasional adaptation of television’s working methods: it is doubtful, for example, that Robert Aldrich would have taken up the two-camera system that he used on all his film shoots from Sodom and Gomorrah (1962) onward had he not spent some time using multi-camera setups doing episodes of China Smith ten years earlier.

Some of the young rival’s tricks, however, weren’t so easily cadged. Television, through live broadcast, could boast a news-in-the-making immediacy that cinema, its stories foregone conclusions suspended in aspic on celluloid spools, could not. And TV was not without attempted innovations of its own to counter the widescreen: for example, the 1953 CBS children’s program Winky Dink and You encouraged kids to purchase a 50 cent “Magic Window,” a thin sheet of plastic that affixed to the screen of a tube television, on which children were instructed to draw various objects using erasable “magic crayons,” the objects then to be incorporated into an on-screen narrative. Interactive programming was a tempting lure to dangle in front of kids, but the craze burned out quickly, to the relief of the embattled movie men, and before long neglected Magic Windows were gathering dust in attics across the country.    

Though certain visual clichés in the digital-era blockbuster have their provenance in the graphic arts—the most grandiose excesses of 19th-century Romanticism, the covers of science-fiction paperbacks and Advanced Dungeons & Dragons modules, the closeted muscle-mag homoeroticism of superhero comics—it’s a new moving image-based digital medium, drawing heavily on the same set of images, that has posed a new threat to pop cinema’s long-dwindling market share. Effects houses aren’t scouting most of their talent from painting programs and Broadway’s backstages, and the conundrum facing most newly minted Maya wizards tells you who Hollywood’s real rival is now: are you going to Disney, or are you going into video games?

With video games, the Magic Window and the promise of audience interactivity were reborn, and during gaming’s long march from fringe nerd fetish to cultural centrality, cinema has struggled to respond—as during the rise of television, through a combination of counter-innovation and imitation. “Don’t Just Watch the Movie; BE IN IT” boast the ads shilling 4DX, a glorified Nintendo 64 Rumble Pak, but the exhibitors must suspect that hitting audience members in the face with a puff of stale air every time someone dies in Spiral: From the Book of Saw is a far cry from true immersion, and that customers promised that they could be in a movie might, once inside, yearn for autonomy within it.

SERVICING THE SERVANTS

Lamenting the fixed dimensions or limitations of films and cinema as I came to know it never seemed a particularly productive use of my time—though in the act of writing about films, however far from the experience of their creation, there is an implicit desire, however fanciful and however self-deprecatingly masked, to exert some influence over them. This may take the form of a weekly reviewer’s consumer guide advisement, or of proselytizing for an idea of cinema, in the effervescent, pugnacious Cahiers du cinéma style. To praise a film is in some sense to ask of the cinema “More, please”; to condemn one, the opposite.

The practical result of such requests is, in all but a very few cases, nearly nil, but all the same the impotent impulse remains. The effort to describe and diagnose a film can’t change the finished film; it can at best change the way that certain people view it, perhaps even those involved in its making. Criticism at its best acts not only as a record of engagement with a film but as a sort of adornment to it, as the images in a Catholic church adorn the Word, although here it’s Word in service to Image.

The critic is at bottom a spectator, and a spectator takes what he or she is given—but this doesn’t mean that the act of watching is without a measure of participation. In her recent monograph on James Benning’s Ten Skies, Erika Balsom writes, “Why does the notion of the passive spectator sadly persist in some cinephobic writing, when there is truly no such thing?” It seems to me that an alert viewer or reader or listener is still allowed quite a bit of free range to roam even without sharing Barthes’ desire to wrest the “text” away from its known author or authors, who stubbornly remain the most obvious persons of interest to put under scrutiny if you are trying to reverse engineer the thing and figure out how it came to be.

Using the word “fixed” to refer to a work that has been finally sent off to market seems, in this context, more appropriate than “finished” or “completed,” because these imply finite resources, and no artwork can ever be really exhausted so long as it continues to exist in some form, even in memory. Each revisitation will not necessarily bring fresh revelations, and we all have our individual criteria for what we believe rewards revisitation, knowing full well that the thrill of initial discovery can never be recaptured, but that it may give way to subtler shadings of pleasure no less rewarding. Much of the work of criticism—as opposed to consumer-report reviewing—involves this teasing out of qualities not immediately surrendered, be it Manny Farber’s attention to the plastic, tactile visual qualities of narrative films, or Parker Tyler’s descrying of hidden subtexts which may exist independent of conscious authorial intent.

While one is left the liberty to read between the lines, one still has to contend with the ineradicable fact of those lines. You might make the closely reasoned case, as Tyler did, that the secret story of Billy Wilder’s Double Indemnity (1944) was the repressed homosexual relationship between Fred MacMurray’s Walter Neff and Edward G. Robinson’s Barton Keyes, and your inferences couldn’t be proven definitively wrong or definitively correct, but there were still certain immovable facts: that Neff breaks the neck of the husband of his lover, Barbara Stanwyck’s Phyllis Dietrichson, in order to cash in on an insurance policy, and that after things go south between Neff and Dietrichson she shoots him once, and that he plugs her twice at close range with the same gun, presumably fatally, then heads back to his office to bleed out and record his confession. Now, maybe Phyllis was wearing a very form-fitting bulletproof vest the whole time, and maybe Keyes covers for Neff with the cops, and maybe they all wind up as a happy throuple soaking up sun in Puerto Vallarta, but at this point we have passed beyond the known world of the film and into the speculative realm of fanfiction, a D.I.Y., grassroots manifestation of a yearning for interactivity that belongs to a heritage leading through knockoff sequels to the first volume of Don Quixote and Tijuana Bibles.

When cinema was released from its physical fetters through digitization, films became newly pliable, ready for remixing. There was nothing new in the concept of the collage film, of course, but to make his Rose Hobart (1936), Joseph Cornell had first to get his mitts on a junk shop 16mm copy of George Melford’s jungle adventure East of Borneo (1931) and sit down to a painstaking job of work. Home video and video editing—first on linear, videotape-reliant mixers, then on non-linear software—made things quite a bit more efficient. As a result, one now sees things like the recent Super Mario Bros.: The Morton Jankel Cut, in which Ryan Hoss and Steven Applebaum, obsessive enthusiasts of Max Headroom creators Rocky Morton and Annabel Jankel’s 1993 adaptation of the Nintendo video game—the first such feature film ever made—sat down with editor Garrett Gilchrist to patch twenty minutes of unused footage into what they apparently regarded as a mutilated masterpiece.

The Morton Jankel Cut is a work of fan-produced interactivity rejiggering an existing film, and as such counts Rose Hobart as a kind of distant relation. Hoss and Applebaum stretch out the fetishized source material rather than compressing it, rearranging it, and interpolating outside material into it, as Cornell had done in the process of turning East of Borneo into a reverent contemplation of its lead actress, Hobart. The fealty to Morton and Jankel’s “original vision,” even if semi-ironic, places the achievement of The Morton Jankel Cut, such as it is, in a different category than that of Rose Hobart. Part of an uncoordinated, eccentric contingent of American Surrealists, like Tyler, Cornell was practicing, as the Bruce Conner of newsreel bricolage A Movie (1958) or the Ken Jacobs of Star Spangled to Death (1957-2004) would later, a version of what would later be dubbed détournement. The term, connoting the hijacking of an existing cultural object’s materials to produce meanings contrary to those intended, was coined by French thinkers associated with the Letterist International (LI) and its offshoot the Situationist International (SI), both of which drew inspiration, albeit sometimes of an antagonistic, Oedipal variety, from the Surrealist and Dada legacies. Guy Debord, a leading theoretician in both the LI and SI, employed détournement in his own filmmaking activities, which began with 1952’s Hurlements en faveur de Sade, but picked up significantly after the disbanding of the SI in 1972.

Tyler’s “queering” of Double Indemnity, Cornell’s lingering contemplation of Hobart, Conner and Debord’s détournements—all are manifestations of the same desire on the part of figures operating outside the process of industrial production to participate in some way in the writing or rewriting of cinema, to explore subterranean passages running beneath official or commercial cinema in which the traces of an Other Cinema can be found. They are among the more dramatic, anarchic forms of tussling with cinema, in each case inspired at varying distances by the Surrealists, famous for their theatre-hopping and the consequent fragmentary viewing of commercial films, disruptions that effectively avant-garde-ized the narrative—a practice taken up by Debord during his lycée years in Cannes, the holiday home of the French film industry, though apparently quite a bit of his theatre-hopping consisted of simply storming out of theaters in disgust.

The Cahiers critics, too, were marked by the Surrealists, and much writing from the Young Turks wing of the magazine posited that a certain insubordinate, unbreakable quality was inherent to cinema even in its commercial or industrial form. In rhapsodies tinged with mysticism they described filmmaking as an alchemy irreducible to formula, a collision of deliberation and chance operation, human spontaneity and the cold automation of the camera. Those who understood the medium best didn’t try to wrest it under their control; they just hopped on the mechanical bull and tried to stay loose and roll with it.

Gilbert Seldes included the cinema among “The Seven Lively Arts” in his 1924 study of that title, subtitled “A Classical Appraisal of the Popular Arts,” and it is a feeling for this liveliness, this vitality, that hooked me many years ago, and which has kept me hanging around at an age when most men have found a respectable métier. I’ve long bridled at the idea of filmgoing as a kind of escapism and of cinephilia as a means to opt out of life, for this is close to the opposite of the role I’ve felt cinema plays in my own experience, even though years of sitting in repertory theaters surrounded by pale, drawn faces and smelling the unmistakable damp musk of unlived lives suggest that there is a cogent argument to be made for this case, and I have noted that it has been in moments of isolation or emotional distress that the movies have taken on an outsized importance to me. Even accepting that cinema does operate as a substitute for experience, though, it can also operate as an augmentation, a fragmentation or abstraction of the world which, showing us some aspect of things in a sharper key, brings us back to the world. This feeling is something like that described by the character Pangzi in Edward Yang’s Yi Yi (2000), when he says “My uncle says we live three times as long since man invented movies… movies give us twice what we get from daily life.”

This sense of surfeit, of greedily bellying up to the buffet until the last bit of bone has been sucked clean of marrow, is key to everything that I discovered outside of school culture when I was young, which is everything that still makes up my life today—that is, films, music, books outside of the curriculum, aimless perambulations, and various kinds of intoxication, natural and otherwise. Where school culture is designed to improve through testable metrics, preparing the child to pursue career advancement and benefit to society, any improvement that comes of the items I have listed is wholly ancillary to the pleasure that they provide. A keen reader may note that I spend much of my time rhapsodizing about films that are variously austere, eggheaded, miserabilist, slapdash, ugly, obnoxious, slow as molasses, and downright punitive, to which I can only reply that the connoisseur with a discerning palate can discover many varietals of pleasure.

Because attaching a measure of guilt to any expenditure of time that doesn’t yield up some quantifiable profit has long been popular among castigating Puritans of all political persuasions, periodic efforts have been made to prove that these sources of what might be perceived as empty pleasure provide intangible benefits to those who partake in them. Cinema, we’ve been informed, is “an empathy machine”—the more movies that you watch, presumably, the more empathy you accrue, so by Pangzi’s uncle’s formulation the average man and woman born in the 20th century enjoys three times the empathy that their great-great-grandparents did. If cinema’s function is to act as an empathy generator, it’s a pretty wonky machine. I have watched many thousands more movies than anyone else in my immediate family, most if not all of whom are demonstrably kinder, more selfless, and altogether more considerate human beings than I am. It’s possible that I’ve acquired a little learning through moviegoing, and even picked up a few things that I could probably stand to unlearn, but this was incidental—the purpose was the pleasure of seeing the game played, and playing along myself. I don’t view this time spent as time wasted, and I don’t think anyone else should either.

All of my present interests, typically enough, gelled during my early adolescence, and this corresponded to a rejection of most of the things that had kept me amused as a kid: fantasy novels, role-playing and video games, and comic books. These comprise nearly the full set of accoutrements of what’s commonly known today as “geek culture,” which now might just as well be called “culture,” because it’s everything and it’s everywhere. So, because it was a landmark event of my nerdy preadolescence, I can tell you that the “Infinity Gauntlet” plotline which heavily inspired both Avengers: Infinity War (2018) and Avengers: Endgame (2019) first played out in a six-issue limited series written by Jim Starlin, published from July to December 1991, in which a scrotum-chinned alien supervillain named Thanos collected six Infinity Gems in a stupid glove and used their powers to disappear half of the universe’s population. And I can tell you that the Infinity Gauntlet appeared at the same time that the SEGA Genesis owner was newly able to guide SEGA mascot Sonic the Hedgehog through the collection of Chaos Emeralds, the function of which I can’t really recall, though I know that there were six of them, and that if you got all six you got to see a different ending when you won the game. Why this was considered desirable, again, has slipped my mind.

At times I’ve formulated my break with the kids’ stuff of my preadolescence as a conscious “putting aside of childish things,” though realistically most of society doesn’t view the acquisition and viewing of ever more exotic breeds of arthouse smut as the pinnacle of maturity. It is probably nearer to the truth to say that I merely exchanged one set of games for another, because the old games had ceased to amuse me, had ceased to speak in any way to the person I had become, or was in the process of becoming. Those old games all seemed to revolve around quests and missions and side missions, gems and rings and points and inventory, and this got pretty exhausting when you already had homework every night. These games, even though they were sometimes described as “wastes of time,” were as goal-oriented as school and life itself. If you really wanted to waste time, to be truly, totally truant, there was no place for you to turn but to art, a game with no rules that nobody will ever win.

I left behind the games of my youth, but they are not done with me—Pixar’s Toy Story tetralogy may be viewed as symbolic of the plight of Americans born at the end of the last century, doomed to spend a lifetime stuck with the junk culture of their childhood, as I fully anticipate expiring quietly in an assisted living community while in the midst of cavorting through city sewers with Raphael, Michelangelo, Donatello, and Leonardo by virtue of some sinister chip implant in my frontal lobe.

The most successful movies are based on comic books, and, as the liquid metal sheen of digital cinema has smothered the analog, the images in these films have grown ever nearer to those of video game cut-scenes. Gaming is positioned to become the dominant cultural force in the 21st century that cinema was in the 20th, to cast a shadow over cinema as Hollywood once did over Book-of-the-Month Club literature. The augmented Super Mario Bros.: The Morton Jankel Cut appeared within days of the announcements of a “Director’s Cut” of Hideo Kojima’s divisive Death Stranding for PlayStation 5 and of Pendulo Studios’ Alfred Hitchcock – Vertigo—not an adaptation of the 1958 film that in 2012 climbed to the dizzying peak of Sight & Sound’s venerable “Greatest Films of All Time” poll but, per the official synopsis, “a new kind of psychological thriller” that has the player “walk on a thin line between reality and fantasy,” inspired by “Hitchcockian” themes and incorporating “Hitchcockian” camera movements.

My prejudices must be clear at this point. There is not even the slimmest chance of my ever sitting down to play twenty-plus hours of Alfred Hitchcock – Vertigo. I have only the faintest idea who Hideo Kojima is, an idea that might be summarized as “The Important Video Game Guy.” I have not owned a gaming console of any kind since losing track of a used Genesis acquired for the sole purpose of playing NBA Jam in the Williamsburg, Brooklyn railroad apartment where I ran out the string on my twenties, and I do my writing on what is very nearly the cheapest commercially available laptop, an “Acer Aspire 5” which would possibly burst into spluttering green flames if asked to run the 1989 Prince of Persia. But as someone who is acutely interested in cinema—where it has come from, where it’s going—and who draws a pay envelope through publicly pontificating on the subject, I can hardly ignore evidence of the hybridization of film and video games.

It wasn’t any of the armada of video game-to-film adaptations that turned me to thinking about this ongoing cross-breeding, and where it might someday lead, but rather the hard-sell of immersivity, and the possibility that this might one day turn to interactivity. Today’s video game adaptations are still recognizably narrative films, after all, some of them interesting, most of them not—but what of tomorrow’s? Let’s recall the hue and cry over the initial, upsettingly humanoid design of Sonic the Hedgehog, replete with articulated orthodontia, as seen in the trailer for his “live-action” outing, and the subsequent frantic fit of pixel-pushing commanded by Paramount Pictures brass in order to produce something more pleasing to the concerned consumer: doesn’t this represent some kind of interactivity? The results could hardly be counted as a tragedy for cinema—there was never any risk of Sonic the Hedgehog being anything other than awful—so it was little noted at the time that the result of this invaluable consumer feedback was to produce something almost entirely toothless.