Weekend Edition: No more blank pages
Some will see that declaration as a Rubicon crossed in the arrival of generative AI and be elated. But what might we be giving up?
“I’m sorry, Dave. I'm afraid I can't do that.” *
Our current debate about new AI technologies spans an enormous spectrum, from breathless next-generation cyberutopianism to doomsday robot-apocalypse fatalism (and everything in between). Intermingled across it are the echoes of generations of sci-fi mythology about the awesome power of artificial intelligence — and the dire consequences of misunderstanding it. Warnings from modern pop culture abound: VIKI in 2004 (I, Robot — itself named after and heavily adapted from Isaac Asimov’s 1950 short-fiction collection), Skynet in 1984 (The Terminator), and HAL in 1968 (2001: A Space Odyssey). But these myths are rooted in much older stories. R.U.R., a 1920 science-fiction play by the Czech writer Karel Čapek, first introduced us to the word “robot” and, in its first incarnation, also offered an early version of the robot-apocalypse warning — one that partially inspired the robotics philosophy Asimov designed specifically to counteract that possibility. The first use of the phrase “artificial intelligence” shows up in a 1955 computer science proposal, written partly to distinguish the new field from cybernetics, just a few years after Alan Turing’s 1950 paper speculating on “thinking machines.” These scientific conversations are rooted in still older tales: Ambrose Bierce’s short story Moxon’s Master, published in 1899, and Samuel Butler’s 1872 satire of human evolution, Erewhon. And if we decouple the conversation from computers and machines, we see an older lineage of “unnatural” intelligences: Shelley’s Frankenstein in 1818, all the way back to the Greek myth of Talos, the automaton of brass built to protect Crete’s shores from pirates. Movie, myth, or monster, neural-network software or cybernetic robot — these stories share a common thread that is essential to our interpretation of the current conversation about AI: what’s really at risk, what’s really to gain, and how and why we must meet this moment differently.
These stories reveal what happens when humans trade away their fundamental humanity, when we allow elements of our human experience to be replaced without a clear understanding of who and what that trade-off serves. Humans are good at all kinds of things and, in fact, need to take responsibility for certain tasks in the universe. Automatons, robots, computers — machines of efficiency and speed — can be powerful augmentation, can free us up to focus on our highest and best efforts, and, as generative LLM AI systems like ChatGPT are showing us, can even be incredible tools of creativity and inspiration. But they are not capable of humanity; in fact, they represent the pinnacle of how the industrialization of meaning and experience undermines natural systems. This view requires us to embrace and understand just what is essential for us, the work in the world that is only ours to take responsibility for, and just how estranged we are from it.
“My logic is undeniable.”
“Yes, but it just seems too heartless.” **
Regardless of when you think it started or how far we've strayed, very few would argue that humans are living in a good way: balanced, sustainable, in healthy equilibrium with the natural systems of which we are an essential part. In many cases we've built cultural narratives around that imbalance to justify our behaviors. But the role we are expected to play in these systems, the role only we can play, is exactly the essential part we are now openly celebrating sacrificing to basic artificial intelligences capable only of optimizing for efficiency by looking backward, utterly cutting us off from the joyous, generative, expansive exploration of a new future.
As the current dominant LLM-based AI tools get further integrated into both productivity-centered and creative workflows, humans will stop interacting with blank pages entirely. What is celebrated as a massive step forward in efficiency will complete the dehumanization we began generations ago, when we started measuring human value in terms of labor and economic input. Without the endless horizon of the blank page, we disconnect ourselves from the Source (Rick Rubin's name for creativity as a way of being), from ideas, and from the pattern of existence we experience in those generative moments when we become responsible for bringing something new into the world. We give up that last gasp of our proper role in the natural world as a custodial species and leave nature without a conduit for prayer, culture-building, or ceremony, in favor of seeing ourselves through the narrower lens of Craftsman, subject to the disembodied, ahistorical direction of something arbitrary and man-made.
Civilization has long been slowly transforming us into simpler, more specialized, less creative engines: cogs for wealth creation. The industrialization of meaning over the last couple of centuries has locked the inevitability and supposed goodness of this thinking into modern narratives that put efficiency first, efficacy second, and humanity last. These priorities openly define work as our source of dignity and make all of our most important responsibilities subservient to an economy that was meant to create the abundant space for us to fulfill our greatness. The relentless secularization of civilization broadly, and of elite, coastal cultures and institutions in particular, has pushed nearly every question beyond economic growth out of public discourse about value and purpose; even justice is most often couched in economic rather than moral terms. Economic output has become the end, the apex of human effort, leaving all of us lonely and bereft of meaning, scrambling to recreate purpose in our own personal time, in a single generation, every generation, with no link to our ancestors and no legacy cast into the future.
“Sonny, save her! Save the girl!” ***
And we are signing up for this transformation, begging to be turned into something other than fully human, to abandon even the possibility that we might again be real human beings, in the name of mere efficiency. Nothing fancy or awe-inspiring, no grand bargain: just slightly less, and slightly less complex, work; less creativity; less responsibility for our role. And for what? Fear that we might not have another idea? That we might not be able to fill that blank page? As if the source of our creativity hasn't always been infinite. Or is it that what we’re asked to fill it with is too petty, too small, unworthy of the greatness Source might promise? In the second case, at least we might be taking a principled stand about where and how we ought to spend our best time and effort. But mostly we're living in the first case, small and narrow: more efficiently doing things out of fear (like the world's largest, most “innovative” companies using a generational leap in technology to steal search traffic from one another and grab ad market share), unaware of the trade-off embedded in the narrative. Not only are these ambitions small; because these systems are trained on existing culture, they are also a descending spiral. They fit the future neatly into the probabilities predicted by previous expressions rather than offering new, unpredictable, joyous recommendations: no more error, no more whimsy or uncertainty, no novelty or even bravery required. Even the hallucinations are boring. Everything fits neatly into the containers of the past rather than expanding those containers with new content born of the conditions that enable real creativity.
And the containers get narrower and more confirmation-fitting as we go, compressing our experience into ever more efficient, predictable spaces, incapable of bringing expansion, novelty, the joy of the unexpected, or the creative power of engaging an unknown world with curiosity and faith that Source has something grand for us to explore, rather than mere sorting, bias, and extraction.
“Artificial intelligence is better understood as a belief system instead of a technology.” ****
Real human beings aren't meant for so little. When we need to be efficient with labor tasks, these tools might indeed unlock more time and opportunity for us to fully inhabit our roles as real humans. Our relationship to these technologies does not have to be an either-or choice; in fact, there’s a more generative future made possible by a good, powerful both-and orientation. But we cannot allow ourselves to be cornered into the narrow, limiting idea that those productivities are our only or even our most important functions, that productivity is the measure of our worth. We need to embrace more, believe we are worth more. We are the custodial species, meant to expand the reach and boundaries of life in endless creative generations. And because we are community beings, while each of us is essential, we embrace this role together. We are our best in community, most creative in community; so rather than expanding our capacities and experiences, we risk letting these technologies further narrow not only ourselves but also our relationships and communities, reducing the powerful diversity of our experience, leaving us less than, and more lonely. Stepping into our role fully is our way back into a generative relationship with the universe, and the antidote to our sense of futurelessness and to the loneliness endemic to our estrangement from the natural world. This trade-off is no small thing, and for such a small thing…
*HAL (AI) from 2001: A Space Odyssey
**Viki (AI) and Sonny (robot) from I, Robot
***Det. Spooner (human) from I, Robot
****Jaron Lanier, from “One Half a Manifesto”
Please consider becoming a paid subscriber to support this work. Subscribing to 7 Bridges is the best way to keep it free and open to all.