AI is arose is arose is arose
A sideways meditation on the is-ness of things, the joy of reasoning things out, and that thing everyone's talking about.
Welcome to Holocene Homesick Blues, a newsletter about the strangeness of life at the end of an epoch—something humanity has only experienced once before: the end of the last ice age and beginning of the Holocene, some 11,600 years ago. Today, I’ll be following the pattern I set out in the previous post, and exploring the strangeness of Gertrude Stein’s famous line, ‘Rose is a rose is a rose is a rose’, and its relevance to our remarkably bizarre moment with artificial intelligence, which seems to be coming to a head.
If you’re just joining in, welcome, and I hope you find these meditations useful, thought-provoking, or at the very least an entertaining diversion. Each week or two, I reflect on an artifact—some thing or things from our past or present—and its connection to our moment. It’s not an analytical process, and I’m not after any particular conclusion or argument. I just sit with the artifact, and I allow it to be strange. And see what that does to the strangeness within me. And then, things happen—sometimes surprising things I wasn’t intending. If that sounds like the kind of thing you’d like to hear more about, please consider subscribing. Once subscribed, newsletters will come directly to you through email.
All content on Holocene Homesick Blues is available for free. That said, this newsletter is one among several ways that I try to make a living, and if you find my writing to be of value to you, a small monthly donation allows me to spend more time on this. There is an option to do so on the subscription page, and if you are so inclined, I deeply appreciate it!
Let’s away then, to a little meditation on Gertrude Stein’s famous verse, which first appeared in her poem ‘Sacred Emily’ (1913). Here is a small selection from the poem, in which sits the rose verse. I recommend reading it slowly, and aloud. Try it a few times in a row, and pay attention to what happens within your mind as you do so.
Cunning saxon symbol.
Symbol of beauty.
Thimble of everything.
Cunning clover thimble.
Cunning of everything.
Cunning of thimble.
Cunning cunning.
Place in pets.
Night town.
Night town a glass.
Color mahogany.
Color mahogany center.
Rose is a rose is a rose is a rose.
Loveliness extreme.
Extra gaiters.
Loveliness extreme.
Sweetest ice-cream.
Page ages page ages page ages.
Wiped Wiped wire wire.
Sweeter than peaches and pears and cream.
Wiped wire wiped wire.
Extra extreme.
The Recursive Rose
‘Sacred Emily’ is a lengthy work, and like most of Gertrude Stein’s avant-garde writing, quite opaque to the casual reader (I include myself here). Stein tried to make language that was akin to abstract artwork—that evoked responses in the way a painting can move us, through imagery and connections beneath the level of language. And sure enough, as I read this small section aloud, images begin to connect. Things that appear unrelated develop relations with one another. Recursive, experimental wordplay that reads as gibberish on a first pass reveals a deeper sub-structure containing hidden meanings. Whether what I’ve detected is ‘correct’ is a matter for poetry critics, but I’m more interested here in the process itself—how the mind does what it does when it starts to figure something out, and how there is a kind of joy that arises as it does so.
The ‘cunning saxon symbol’, for example, only becomes a rose to me on my second read-through: the national flower of England. And there’s a rush of recognition, of sudden fellowship between reader and poet, beneath the language and across a century. And now I suspect that everything in and around this section is about that recursive spiral at its center: the rose that’s a rose that’s a rose that’s a rose.
And then I recite it again, testing that notion. The rose is ‘cunning’, deceitful. In that it’s a ‘symbol’. Not only a practical referent to a real, physical rose, but a container—a ‘thimble’—for all sorts of other abstractions that are not physical roses. Like romantic love. Or socialism, an association that would have been more familiar in Stein’s time. Or the political construct that we refer to as England.
As the semiotician¹ Umberto Eco explains of his novel The Name of the Rose (1980): ‘the rose is a symbolic figure so rich in meanings that by now it hardly has any meaning left.’ And while the rose may be an extreme example, the implication of Stein’s Rose is that all language contains the potential for such a trajectory: toward diffusion, and ultimately incomprehension. Language’s mutability makes it both functional and unstable. A rose can be a flower, an emotion, a statement, a country, a philosophy. And if it can be all of those things, what can’t it be?
In this sense, ‘rose is a rose is a rose’ reflects the outward journey of the word through time. But the section’s second half reveals an oppositional, inward journey, back to its ‘color mahogany center’, back through the succession of symbols, rose after rose, ‘page’ after ‘page’ through the ‘ages’, to a time immemorial. And there is the real rose, the original signified object, the rose first captured in the word itself: ‘loveliness extreme’. The ‘wire’—the connection between all the symbols as each new use of ‘rose’ built upon all its priors—is ‘Wiped’. And we have, for a moment, just the rose again.
If you like, read the section one more time. Do you get the same sense? Or perhaps a different one? Either way, there’s a rush to it. This making of sense, decoding symbols into meanings, to understandings of something behind them. And in this instance, the meanings are recursive: meanings about the making of meanings.
There’s something to that, isn’t there? This desire to understand difficult or mysterious things in a coherent and satisfying way. And that, perhaps, the most alluring—and vexing—object of that inquiry is the faculty of understanding itself. I have no idea whether my interpretation would pass muster with a poetry critic. But the recursive rose would crop up again and again in Stein’s work over the decades that followed, and I get the sense that even she was trying to figure out what it really meant. There’s a peripheral-ness to the rose—to consciousness and its awareness of itself: sentience. Turn to it directly, and it vanishes.
The Electronic Poetry Center at the University at Buffalo, from which I’ve also linked the full poem above, has a helpful collection of other instances where Stein referenced the rose, a glimpse into her own process of working out its meaning. A few stand out to me.
Here the rose verse forms an infinite loop:
. . . she would carve on the tree Rose is a Rose is a Rose is a Rose is a Rose until it went all the way around. (The World is Round)
And here it propagates forward in time, but as a collective, constructive act:
Civilization begins with a rose. A rose is a rose is a rose is a rose. It continues with blooming and it fastens clearly upon excellent examples. (As Fine as Melanctha)
And here she seems almost frustrated by it:
Now listen! I'm no fool. I know that in daily life we don't go around saying is a is a is a. Yes, I'm no fool; but I think that in that line the rose is red for the first time in English poetry for a hundred years. (Four in America, New Haven: Yale University Press, 1947)
Stein acknowledges, here, that the consciousness question she’s teased out of the rose is rarefied and abstract. But she seems also to suggest that it’s fundamental to our ability to grasp reality, to get at the is-ness of things. That sentience doesn’t just reflect our inner conception of self, but also transforms our experience of the outside world as a kind of Truth.
Let’s take a short intermission, with one of my favorite iterations of Stein’s Rose: Poe’s brilliant and sultry ‘A Rose is a Rose’ (2004, lyrics here):
Joys and Simulacra of Sentience
In the past month, it seems every hack on the internet has expressed an opinion about artificial intelligence (AI). AI itself is not new, but the public release of large language models (LLMs) like ChatGPT and art-generating deep learning models like DALL-E have caused a stir, in that their outputs—conversations, stories, essays, videos, images—appear as though they were created by a conscious, reasoning intelligence. This is not the case, at least not yet, but they can be quite convincing—one AI researcher at Google became convinced that the model he was working on had not only developed sentience, but had a soul.
I’m not an AI expert. I can barely work the Uber Eats app. But I did some legwork so you don't have to, and this is the gist of what I've learned as a reasonably intelligent layperson.² LLMs and deep learning models do not think or reason in the way we do, but instead combine elements from existing data in order to optimize outputs that are relevant to a set of parameters. When you ‘ask ChatGPT a question’, you are providing a set of parameters for an optimization problem, and the bot is probabilistically predicting, one word at a time, which word is most likely to come next given everything that came before it, based on patterns in the massive amounts of material the model was ‘trained’ on. Think of how Outlook or your smartphone suggests the next few words for the sentence you’re already typing, just on a much larger and more complex scale.
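The next-word mechanic can be sketched with a toy model. This is my own illustrative sketch, not how any real LLM is implemented: a tiny ‘bigram’ predictor that, like an LLM in extreme miniature, picks each next word probabilistically from counts in its training text. Here, fittingly, the training text is Stein’s rose line.

```python
import random
from collections import Counter, defaultdict

# Toy sketch (my own, for illustration only): count which word follows
# which in a training text, then 'generate' by repeatedly sampling the
# next word in proportion to those counts. Real LLMs condition on far
# more context and use learned parameters, not raw counts.

TRAINING_TEXT = "a rose is a rose is a rose is a rose"

def build_bigram_counts(text):
    """Map each word to a Counter of the words observed to follow it."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word, rng=random):
    """Sample a next word in proportion to how often it followed `word`.

    Assumes `word` was seen in training with at least one follower."""
    followers = counts[word]
    choices, weights = zip(*followers.items())
    return rng.choices(choices, weights=weights)[0]

def generate(counts, start, n_words, seed=0):
    """Chain predictions together, one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        out.append(predict_next(counts, out[-1], rng))
    return " ".join(out)

counts = build_bigram_counts(TRAINING_TEXT)
print(generate(counts, "rose", 6))  # → "rose is a rose is a rose"
```

With this one-line corpus every word has a single possible successor, so the ‘prediction’ is deterministic and the model can only ever circle the rose—which is, in its way, the point.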
There is much speculation about such models leading to the development of so-called artificial general intelligence (AGI)—a capacity for thought (or something that is indistinguishable in its outputs) that would be equivalent to or better than human thought. But as far as the technology companies involved have revealed, no such thing currently exists. I won’t speculate here on that, as I don’t know enough about AI to do so meaningfully, and I also don’t trust Google or Microsoft to tell us what they really have under wraps. Let’s just say the strangeness of AGI—and what a strangeness it would be—has yet to emerge, if it is to emerge at all.³ For readers interested in in-depth discussions around potential AGI, the always-excellent L.M. Sacasas has a fantastic ongoing discussion over at The Convivial Society.
But I am terribly curious about the strangeness of our reactions to current LLMs and art generators: the combined allure and revulsion that these simulacra of conscious thought evoke within us. On one level, I marvel at the sheer computing power being used to rapidly create things that would take a human weeks or months. On another, there is a dread of my own work becoming professionally outmoded. In time, will we be able to distinguish between words produced by rational thought and careful consideration, and a string of probability devoid of conscious intent or self-awareness?
There is also, beyond the basic job-worries, something unsettling about an ‘intelligence’ that is bereft of thought. It manipulates signifiers with no notion of what is being signified. The AI is not producing meaning as we do, when we write or interpret a poem. It is instead arranging words and phrases in a way that simulates the production of meaning. This isn’t exactly ‘nonsense’, as the strings of text it produces do typically make sense: they are coherent, and conform to our norms regarding understandable language.
A better word might be ‘nonthought’: coherent language (or other outputs) that appears to be the product of thinking, but that does not, in fact, involve rational thought. I’m not an expert on cognition. But as a sentient human, I’m familiar with the elation that comes with logically or rationally connecting one thing to another, not on the basis of producing something that sounds true, but on the basis of producing something that in itself purports to be true (even if I might be mistaken). The string of aha’s and yes’s and no’s that occurs as we figure something out, and brings us to know that thing at a deeper level. And if we come to the wrong conclusion, it’s possible to identify where we went wrong, and correct course. For that matter, it’s possible to identify something that sounds-true-but-isn’t as definitively wrong—something LLMs often fail to do. When they instead confidently produce plausible-sounding falsehoods, experts call it ‘hallucination’.
Perhaps LLMs will get better at that problem with time. In more structured environments, AI nonthought has proven very effective, particularly in games like chess or poker, where it does not produce language but makes strategic decisions on the basis of iterative model runs—scenarios played out thousands of times in simulation. These model runs allow the computer to identify strategies that are mathematically optimal, but that would never have occurred to a rational intelligence working off a coherent understanding of theory or strategy related to the game. Francisco Toro wrote an interesting piece last month in Persuasion, likening LLMs to the triumph of IBM’s chess computer Deep Blue over Garry Kasparov in the late 90s. Toro explains that the arrival of machine intelligence in the chess world did not make human chess players obsolete, only better at playing chess: subsequent chess masters trained with machine-intelligence aids, and would now handily beat the masters of the bygone human-intelligence-only era. The implication is that LLMs and art generators could do a similar thing for writers and artists.
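The ‘iterative model runs’ idea can itself be sketched in miniature. This is a hedged toy of my own, nothing like Deep Blue’s actual search: choosing a move in single-pile Nim (take 1 to 3 stones; whoever takes the last stone wins) purely by playing out many random games for each candidate move and keeping whichever move wins most often in simulation. No theory of the game is consulted anywhere.

```python
import random

# Toy sketch (mine, for illustration): strategy by brute simulation.
# The 'strategy' that emerges -- leave your opponent a multiple of 4
# stones -- is never stated or understood; it simply falls out of
# thousands of mindless playouts.

def random_playout(stones, me_to_move, rng):
    """Finish the game with uniformly random moves.

    Returns True if 'my' side takes the last stone (and so wins)."""
    while True:
        take = rng.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return me_to_move  # whoever just moved took the last stone
        me_to_move = not me_to_move

def best_move(stones, n_sims=3000, seed=0):
    """Pick how many stones to take by comparing simulated win rates."""
    rng = random.Random(seed)
    best_take, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        left = stones - take
        if left == 0:
            rate = 1.0  # taking the last stone wins outright
        else:
            # The opponent moves next in every playout from here.
            wins = sum(random_playout(left, False, rng) for _ in range(n_sims))
            rate = wins / n_sims
        if rate > best_rate:
            best_take, best_rate = take, rate
    return best_take

# From 5 stones, the simulations reliably settle on taking 1 (leaving 4,
# a losing position for the opponent under perfect play).
print(best_move(5))
```

Scale the same move up by orders of magnitude in branching and model sophistication and you have the family resemblance to game-playing AI: mathematically effective decisions, arrived at with no understanding of why they work.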
I don’t much like Toro’s argument. Not because it’s wrong, but because I think it misses the point. I am not a skilled chess player. I enjoy playing chess because I can think in a structured and systematic way about the game, and that’s what makes it enjoyable. I suspect that chess masters playing at a level that requires machine-intelligence training lose a bit of that joy as a consequence. For that matter, are they even ‘playing chess’ anymore? I don’t know enough about that level of chess to know.
But if I analogize that experience to writing, and use a bot to develop a plotline, or frame a scene, or draft an essay, I have a clearer sense. AI-produced or even AI-assisted art is not only ‘cheating’ in the plagiarism sense, but feels to me like a desecration of the creative act itself, an abdication of sentience in the one place where it is most valuable, most necessary. I’m not going to argue why, as it’s not my goal here to persuade. It’s just where I come down on it.
And maybe I’m wrong. Perhaps my hardline on this will in time consign me to a bygone era, as a new generation of writers churn out material faster and cleaner than what my muddled, ponderous mind can produce. But I will not let go my rose. A rose is a rose is a rose, from here back to the first word, from the first word to the last conscious thought. The Machine will take what it can, but it can’t have that.
Semiotics is the academic study of symbols and their usage: effectively, how we manipulate symbols in order to generate meaning. It’s the core of what I’m musing about here, but approached from a systematic perspective rather than the intuitive one I’m lazily employing.
To AI-knowledgeable readers: If you spot a mistake or oversight in this description, please let me know in the comments and I’ll make appropriate corrections!
Many AI experts themselves have expressed considerable concern about such a future development, and one in particular has flatly stated that AGI would lead to the extinction of the human race. While he’s a lone voice, no one has really stepped up to fully debunk him, or to say that he’s definitely wrong, which is more than a little unsettling. As our science fiction films have predicted for decades, those concerns have thus far not halted attempts to create AGI. AI labs are progressing with all deliberate speed, and no one seems willing to stop them.