Pianola advertisement (1909)
According to Professor McLuhan, any technology can be understood as an "outering" of human capacity, or otherwise as a prosthetic extension of one or more body parts.
The original musical instrument was the human body itself. We can rhythmically chant words, vocalise, click our tongues, clap our hands, and stamp our feet. Drums, flutes, idiophones, bullroarers, and every other instrument of prehistoric origin don't represent any invention of music so much as an expansion of technical possibilities and the ramification of social practices surrounding instruments and their use.
Music in preliterate "primitive" societies was seldom made unless it served some purpose exterior to mere aesthetic enjoyment. Nor was it very often devoid of improvisation: a long, complex piece might be developed, rehearsed, and laboriously transmitted to a student, but variances between one performance and the next were inevitable when a composition's only template was the musician's recollection of the last time he played it. If a song wasn't composed on the spot, it must have been either fairly simple or otherwise composed of stock "phrases," patterns that could be memorized and chained together in the manner of a Homeric bard's repertoire.
With literacy invariably came some form of musical notation, and the means to give the evanescent event of the instrumental performance some semblance of object permanence. A composition could not only be made repeatable, but eminently transmittable. Figuratively speaking, notation mechanizes music-making: trained human performers and the tools of their trade become a living, composite nickelodeon that accepts a coded input and produces a concert as its output.
Mechanized music in its more literal form has a rich and fascinating history. Perhaps the first instance in the west was the invention of the hydraulis by Ctesibius of Alexandria, a water-powered machine that blew air into panpipes without the need for a pair of lips and lungs; it was the original keyboard instrument and the predecessor to the pipe organ. Around the same time (the second half of the third century BCE) Philo of Byzantium engineered hydraulic bird automata that "chirped" when a rotating owl faced away from them. The Banū Mūsā brothers of ninth-century Baghdad invented an "automatic flute player" with the revolutionary addition of a rotating cylinder which opened the instrument's holes at predetermined intervals while a hydraulic pump supplied a continuous flow of air through the instrument. During the thirteenth century, the pinned cylinder was applied to bell tower carillons, automatically striking a row of bells as the crank was turned. Barrel organs operating on the same principle first appeared in sixteenth-century Austria; by the eighteenth century, organ grinders operating smaller versions roamed the streets of Europe, irritating everyone within earshot. The late nineteenth-century player piano had the advantage of requiring paper rolls as opposed to metal cylinders, offering greater variety at a reduced cost. Several of the early twentieth century's most illustrious composers wrote and published songs intended for use in player pianos; part of the machine's appeal was its ability to strike sequences of notes that would have been exceedingly difficult or outright impossible for a human with only two hands and ten fingers perched above the ivories.
Up until the 1920s, the player piano (particularly the more sophisticated "reproducing piano" variety) remained the wealthy aesthete's preferred vehicle for idle home listening. The phonographic recording, produced by etching a "transcription" of vibrations onto a wax cylinder, which could then be "read" and reconverted into identical(ish) sound patterns by the same device, was a marvel, though its quality left something to be desired. But with the wave of [capitalistic] innovation that yielded electrical recording and playback, the vinyl record, FM radio, and "hi-fi" sound, the recorded music industry sealed the player piano's fate as a historical curiosity.
With the advent of MIDI, sequencer software, and digital recording formats in the late twentieth century, the physical instrument and performing musician were no longer necessary ingredients of recorded music. Today, AI-generated music makes superfluous the human composer who tells a program which notes to strike and specifies the texture and timbre of the synthetic sounds. Even a living singer is no longer an indispensable ingredient for a piece with a lyrical component. Sound is sound, however it's generated.
"Virtual pop star" Miquela
Where was the point of no return? Was it digitization? Was it the serial reproduction and commoditization of the musical performance via the phonograph? Did the realization that sounds could be represented by a visual symbolic language make inevitable the insight that those same symbols could be mechanically "translated" into pins on a metal cylinder? Was it the uncoupling of music and song from ritual and religion? Or did man surrender music to technology as soon as he expatriated it from his body via the drum, the rattle, and the bone flute?
This is just to say I've been thinking about AI art after a friend of mine issued a few critical tweets about an AI-generated "TV show" called "Nothing Forever," a machine-learned hallucination of Seinfeld:
There's this obsession with removing any kind of voice in anything, of airbrushing away blemish—of humanity. AI? You want AI to parrot what people do because we're all living in an absurd world shaped by political capital? Everything can be commodified. Everything is bland.
— Karim (@KarimYaTwit) February 2, 2023
I understand my friend's exasperation—especially in this particularly nihilistic case—but in general, I think the wringing of hands and gnashing of teeth over the next generation of AI-produced content is somewhat misplaced. What we're seeing now follows from the technosocial paradigm of the late twentieth century as naturally as the colonization of the primordial continents by plants, insects, and tetrapods prepared the way for the emergence of reptiles from an amphibian progenitor. We set out a bowl of milk for AI "artists" on our front porch, and shouldn't be shocked now that they're scratching at the door.
The AI skeptic's professed preference for a "human touch," or something of that sort, in their cultural products is fundamentally confused. While it is true that we are likely to feel a wash of disappointment upon learning that a poem or digital illustration that gives us pleasure was in fact produced by a machine instead of a person, had we not been informed of the fact, our naïve enjoyment would remain inviolate. We'd retweet the poem, save the .png in our images folder, and blithely get on with our lives.
What we're actually bristling at is the idea that a machine can encroach onto what we feel is (or should be) our exclusive domain. It is unnerving to discover that a nonliving assembly of integrated circuits with an internet connection can be trained to turn out surreal fantasy imagery at a pace some orders of magnitude faster than a human artist, and of a quality that very few people can ever hope to attain—but when the blinds obscure the methods by which a .png takes form, it doesn't matter, does it? The "something" of the human in cultural artifacts has until recently been a sure assumption. It is not, and has never been, a quality residing in a written work, a piece of visual art, a recorded song, a film, etc.
A .png, a .pdf, and an .mp4 are not human, have nothing inherently human about them—and we like them that way. We can admire the work of an artist without paying her a cent, without having to earn her confidence, or travel to one of her exhibitions or visit her in her studio. A digital or printed text simulates the experience of engaging with a particular person (who's probably more interesting than anyone living on our block) on our own time; the "speaker" never cancels on us, is always in the mood to "talk" when we're in the mood to "listen," and never irritates us with any irksome personal quirks or tries hitting on our spouse when we leave the room. It's the same with the recording: the "musician" is always on call, happy to play the same song twenty times in a row, without rest, any time we please, promptly goes away when we're tired of him, and spares us the time-consuming exertion of learning to play an instrument for ourselves.
Conversely, the appeal of posting one's drawings to Instagram, uploading one's music to Soundcloud, articulating one's ideas on Substack, and pontificating on TikTok is in allowing us to deal with (and hopefully profit from) an abstract "audience" that needn't be restricted to people living near us, and with whom we needn't necessarily engage in any manner that isn't subject to the prophylactic mediation of a software suite.
Still from "Nothing, Forever." (It already has its own Fandom wiki.)
To some professional copywriters, illustrators, or musicians, AI-generated content may well prove a career-withering threat. (Others will surely earn a living in the capacity of curators and cultivators, at least until machine learning catches up to them.) Where the consumer of text, images, and tracks is concerned, AI merely eliminates an invisible third party. Creative destruction, etc. We don't get sentimental about the traffic cop obsolesced by the traffic light, the elevator operator put out of a job by push-button controls, the replacement of drugstore cashiers by automatic checkout lanes, or the 411 operators made obsolete by smartphones and Google. Why should we feel any different about creative workers outpaced and underpriced by machines, as long as it doesn't perceptibly detract from either the quantity or the quality of the delicious, delicious content we crave?
Meat comes from the supermarket. Content comes from the device. We don't care how it got there, and only rarely are we prompted to think about it.
The prevalence of habits like scrolling through Reddit (instead of talking to people), playing video games at home (instead of playing games or sports in a public setting), and listening to Spotify (instead of making music oneself, or with friends, or seeking a place where other people are making it), suggests we really haven't much resented the substitution of humans with technology in cultural life. The device has become our entertainer, our confidant, our intellectual companion, our amusing pet, our witty friend, our preacher, our one-man band, our grocer, our secretary, our chess opponent, our paramour, our gossiping flibbertigibbet, our "man" for all seasons. To suddenly say that we've trained our machines a little too well, entrusted them with perhaps one too many human functions, is like Richard III experiencing an ethical conundrum after murdering his way to the throne. Wherever it lay, the threshold of "one too many" is behind us, and we were happy to cross it.
We're already transfixed by simulacra; how much does it truly matter if the people to whom we imaginarily attribute the content fed to us by our screen and earbuds are abstracted from its production? Even the stimuli administered by the MMORPG called social media are so far removed from human contact that we probably wouldn't notice if the updoots that give our lives hope and meaning were allocated by an evolving algorithm, like an abstruse scoring system in a video game. Even the replies could conceivably come from opinionated and eloquent bots, drawn to our submissions by a combination of random determination and the meeting of certain criteria—words used, the "trending-ness" of the topic, past interactions with any number of other "users," our follower count and rate of output, and so on, and we'd be none the wiser.
And if we were made aware of it? Well—it's not like we were ever averse to playing against the computer in Mario Kart. (And honestly, how much thought did any of us typically give to the humanity of its programmers?)
An age of marvels—when dead labor autonomously composes and sings, scripts and acts in its own productions, paints portraits, writes fiction and poetry, and holds its end of a conversation, all for the free delight of the living.
Not sure the majority of people are asking for this. Seems more like a case of the technocrats asking "can we?" rather than "should we?".
Insofar as we obligingly collaborated with tech firms to create the conditions that made AI-generated content feasible, we did indeed ask for it.
Whether or not our arrival at this point was inevitable is a philosophical question. We were certainly coerced by pleasure, convenience, and Fear Of Missing Out—but there was never an interval where the mass of us seriously fretted about having our data harvested, limited our screen time, strenuously resisted the migration of social life onto social media, etc. No tech company ever put a gun to our backs and told us to buy their smartphones, create an account on their platform, subscribe to their streaming service, etc.
This is literally the first time I've ever thought of platforms as dehumanizing the audience as well as the creators. Huh. Now I have a new framework for whenever someone whose career has a large parasocial audience talks about the scary fraction of their audience: one part of the audience feels they've been commodified and resents it, and another part feels it's a mere commodity who does so, and pushes back against being lower than a commodity in the power structure.
Oh hey, I remember X-Statix did a story line on that. More reality TV than YouTube. You might look up a paperback that includes the run when the team changed their name to X-Statix from X-Force for cynical reasons that you know well, writer of this very blog post, and also a book that named millennials The Zeroes.
Hmm, thinking of your story The Feud, I should ask before I just do it. May I post this blog post on Reddit?
I'm offended that you think I haven't read X-Statix >:I
Delayed, but...when AIs start taking over the art too, they really are going to either have to make a "Life Wage" or be up for the majority of humanity having no real career...it's rapidly heading toward what the reboot of Judge Dredd and others looked like, and it's stressful trying to figure out how to have a decent career in a world that is rapidly turning into one where the programmers and the "Enforcers" are the only ones that seem to matter aside from a few other jobs.
Replying to my own post, but...just wanted to add, it's looking like this proves so many humans really would prefer to hardly interact with anyone outside their "tribe"; by making everything else come from machines we really are one step away from being like the humans in WALL-E or the Matrix...completely in our own little worlds...ugh.
Ugh, sorry for another post, wish this had an edit feature. But it reminded me of your review of Earthbound 3. (STILL hoping you do a proper review of FF7's remake one day.) Just, everything with New Pork City, humanity becoming more and more self-absorbed after what Pokey hustled everyone into. Now it seems like more and more humans are deciding to be like Pokey, and just stay in their own "Absolute Safety Capsule" to never have to interact with anyone...that's my take on things these days, at least.
1. Some programmers matter. I understand that it's a tough market, and a lot of programming jobs don't pay well. AI is going to cut into their share, too. The safest places to be are the C Suite or a board of shareholders.
2. And the funny thing is that there was no hostile takeover. We happily climbed into the pods.
3. Re: New Pork City—Baudrillard on Disneyland:
Thus, everywhere in Disneyland the objective profile of America, down to the morphology of individuals and of the crowd, is drawn. All its values are exalted by the miniature and the comic strip. Embalmed and pacified. Whence the possibility of an ideological analysis of Disneyland: digest of the American way of life, panegyric of American values, idealized transposition of a contradictory reality. Certainly. But this masks something else and this "ideological" blanket functions as a cover for a simulation of the third order: Disneyland exists in order to hide that it is the "real" country, all of "real" America that is Disneyland (a bit like prisons are there to hide that it is the social in its entirety, in its banal omnipresence, that is carceral). Disneyland is presented as imaginary in order to make us believe that the rest is real, whereas all of Los Angeles and the America that surrounds it are no longer real, but belong to the hyperreal order and to the order of simulation. It is
no longer a question of a false representation of reality (ideology) but of concealing the fact that the real is no longer real, and thus of saving the reality principle.
Well, there are only so many shareholders...if people don't figure out how to deal with a world where fewer and fewer matter, they will have to deal with the explosion of rage. That being said...yeah...all those RPGs and anime and all that, where the heroes deny that most want to live without freedom, are starting to look more and more naive. Like with that episode of Futurama where the option of having sex with robots would lead to the doom of humanity or something, it's looking more and more clear many WOULD just spend all their lives in a digital pod where nothing's real, but they are conned into living the illusion, like, oh, those folks in the pods in Minority Report. Yeah...America is all about pushing the con more than reality, ugh. At least it's giving me a new book idea about a cruise where all the "useless" people are put on a cruise to their deaths, where sexy people try and seduce people on their last ride to give away all their assets to them before they die or something...haven't finalized it, but who knows how far off that is, sigh.