
Boston Review DECEMBER 4, 2024 - by Evgeny Morozov, Brian Eno & others

THE AI WE DESERVE

CRITIQUES OF ARTIFICIAL INTELLIGENCE ABOUND. WHERE'S THE UTOPIAN VISION FOR WHAT IT COULD BE?

With responses from Brian Eno, Audrey Tang, Terry Winograd, Bruce Schneier & Nathan Sanders, Sarah Myers West & Amba Kak, Wendy Liu, Edward Ongweso Jr., and Brian Merchant.

• • •

For a technology that seemed to materialize out of thin air, generative AI has had a remarkable two-year rise. It's hard to believe that it was only on November 30, 2022, that ChatGPT, still the public face of this revolution, became widely available. There has been a lot of hype, and more is surely to come, despite talk of a bubble now on the verge of bursting. The hawkers do have a point. Generative AI is upending many an industry, and many people find it both shockingly powerful and shockingly helpful. In health care, AI systems now help doctors summarize patient records and suggest treatments, though they remain fallible and demand careful oversight. In creative fields, AI is producing everything from personalized marketing content to entire video game environments. Meanwhile, in education, AI-powered tools are simplifying dense academic texts and customizing learning materials to meet individual student needs.

In my own life, the new AI has reshaped the way I approach both everyday and professional tasks, but nowhere is the shift more striking than in language learning. Without knowing a line of code, I recently pieced together an app that taps into three different AI-powered services, creating custom short stories with native-speaker audio. These stories are packed with tricky vocabulary and idioms tailored to the gaps in my learning. When I have trouble with words like Vergesslichkeit ("forgetfulness" in German), they pop up again and again, alongside dozens of others that I'm working to master.

In over two decades of language study, I've never used a tool this powerful. It not only boosts my productivity but redefines efficiency itself - the core promises of generative AI. The scale and speed really are impressive. How else could I get sixty personalized stories, accompanied by hours of audio across six languages, delivered in just fifteen minutes - all while casually browsing the web? And the kicker? The whole app, which sits quietly on my laptop, took me less than a single afternoon to build, since ChatGPT coded it for me. Vergesslichkeit, au revoir!
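To make the shape of such a setup concrete, here is a minimal sketch of the kind of pipeline described above - not the author's actual app, whose underlying services are never named. It assumes the OpenAI Python SDK for both the story-generation and text-to-speech steps, with placeholder model and voice names; any comparable services would do.

```python
from pathlib import Path

from openai import OpenAI  # assumes the OpenAI Python SDK (>= 1.0); any LLM/TTS vendor would do

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Example vocabulary gaps; in a real app these would come from the learner's history.
TRICKY_WORDS = ["die Vergesslichkeit", "die Ausrede", "der Feierabend"]


def generate_story(words: list[str], language: str = "German") -> str:
    """Ask a chat model for a short story that recycles the target vocabulary."""
    prompt = (
        f"Write a short {language} story (about 150 words) for an intermediate learner. "
        f"Use each of these words at least twice: {', '.join(words)}."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def synthesize_audio(text: str, out_path: Path) -> None:
    """Turn the story into native-speaker-style audio with a text-to-speech service."""
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=text)
    out_path.write_bytes(speech.content)


def build_lesson(n_stories: int = 5) -> None:
    for i in range(n_stories):
        story = generate_story(TRICKY_WORDS)
        Path(f"story_{i}.txt").write_text(story, encoding="utf-8")
        synthesize_audio(story, Path(f"story_{i}.mp3"))


if __name__ == "__main__":
    build_lesson()
```

The point is less the particular vendor than how little glue is required: a prompt built around the learner's weak vocabulary, one text-generation call, and one text-to-speech call.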

But generative AI hasn't only introduced new ecstasies of technological experience; it has also brought new agonies. The educational context is a case in point: if ChatGPT holds promise for personalized tutoring, it also holds promise for widespread cheating. Lowering the costs of mischief, as generative AI has already done, is a sure recipe for moral panic. Hence the growing list of public concerns about the likely - and in some cases already felt - effects of this technology. From automated decision-making in government and corporate institutions to its role in surveillance, criminal justice, and even warfare, AI's reach extends deeply into social and political life. It has the potential to perpetuate bias, exacerbate wealth inequality, and obscure accountability in high-stakes processes, raising urgent questions about its impact.

Many of these concerns point to a larger structural issue: power over this technology is concentrated in the hands of just a few companies. It's one thing to let Big Tech manage cloud computing, word processing, or even search; in those areas, the potential for mischief seems smaller. But generative AI raises the stakes, reigniting debates about the broader relationship between technology and democracy.

There is broad consensus that AI requires more of the latter, though what that entails remains fiercely debated. For some, democratizing AI involves greater transparency around the models and datasets driving these systems. Others advocate for open-source alternatives that would challenge corporate giants like OpenAI and Anthropic. Some call for reducing access barriers or building public-sector alternatives to privatized AI services. Most of these solutions, however, focus narrowly on fixing democratic deficits at the implementation stage of AI, prioritizing pragmatic adjustments to the AI systems already deployed or in the pipeline. Supporters of this view - call them the realists - argue that AI is here to stay, that its value depends on how we use it, and that it is, at minimum, worthy of serious political oversight.

Meanwhile, a small but growing group of scholars and activists are taking aim at the deeper, systemic issues woven into AI's foundations, particularly its origins in Cold War-era computing. For these refuseniks, AI is more than just a flawed technology; it's a colonialist, chauvinist, racist, and even eugenicist project, irreparably tainted at its core. Democratizing it would be like hoping to transform a British gentlemen's club into a proletarian library - cosmetic reforms won't suffice.

For their part, AI researchers claim they operated with considerable independence. As one of them put it in a much-discussed 1997 essay, "if the field of AI during those decades was a servant of the military then it enjoyed a wildly indulgent master." If the AI community indeed enjoyed such autonomy, why did so few subversive or radical innovations emerge? Were conservatism and entanglement with the military-industrial complex ingrained in the research agenda from the start? Could an anti-systemic AI even exist, and what would it look like? More importantly, does any of this matter today - or should we resign ourselves to the realist stance, accept AI as it stands, and focus on democratizing its development?

The contours of AI critique have evolved over time. The refuseniks, for example, once included a sizeable subset of "AI futilitarians" who took much delight in dissecting all the reasons AI would never succeed. With recent advances in generative AI - operating on principles far removed from those attacked by philosophically inclined skeptics - this position seems in crisis. Today's remaining futilitarians train their sights on the specter of killer robots and yet-to-come artificial general intelligence - long a touchstone of the tech industry's futurist dreams.

There are, of course, other positions; this sketch of the debate doesn't capture every nuance. But we must face up to the fact that both broad camps, the realists and the refuseniks, ultimately reify artificial intelligence - the former in order to accept it as more or less the only feasible form of AI, the latter to denounce it as the irredeemable offspring of the military-industrial complex or the tech industry's self-serving fantasies. There's relatively little effort to think about just what AI's missing Other might be - whether in the form of a research agenda, a political program, a set of technologies, or, better, a combination of all three.

To close this gap, I want to offer a different way of thinking about AI and democracy. Instead of aligning with either the realists or the refuseniks, I propose a radically utopian question: If we could turn back the clock and shield computer scientists from the corrosive influence of the Cold War, what kind of more democratic, public-spirited, and less militaristic technological agenda might have emerged? That alternative vision - whether we call it "artificial intelligence" or something else - supplies a meaningful horizon against which to measure the promises and dangers of today's developments.

• • •

To see what road we might have traveled, we must return to the scene of AI's birth. From its origins in the mid-1950s - just a decade after ENIAC, the first digital computer, was built at the University of Pennsylvania - the AI research community made no secret that the kind of machine intelligence it sought to create was teleological: oriented toward attaining a specific goal, or telos.

Take the General Problem Solver, a software program developed in 1957 with support from the RAND Corporation. Its creators - Herbert A. Simon, Allen Newell, and J. C. Shaw - used a technique called "means-ends analysis" to create a so-called "universal" problem solver. In reality, the problems the software could tackle had to be highly formalized. It worked best when goals were clearly defined, the problem-solving environment was stable (meaning the rules governing the process were fixed from the start), and multiple iterations allowed for trying out a variety of means to achieve the desired ends.
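To give a flavor of what means-ends analysis involves - without pretending to reproduce the General Problem Solver itself - here is a toy sketch over an invented domain (integers, with "increment" and "double" as the only operators): measure the difference between the current state and the goal, apply whichever operator most reduces it, and repeat.

```python
# A toy illustration of means-ends analysis (not the General Problem Solver itself;
# the integer domain and its two operators are invented for brevity): compare the
# current state to the goal and apply whichever operator most reduces the difference.

OPERATORS = {
    "increment": lambda x: x + 1,
    "double": lambda x: x * 2,
}


def difference(state: int, goal: int) -> int:
    return abs(goal - state)


def means_ends_solve(state: int, goal: int, max_steps: int = 50) -> list[str]:
    plan: list[str] = []
    for _ in range(max_steps):
        if state == goal:
            return plan
        # Pick the operator whose result leaves the smallest remaining difference.
        name, op = min(OPERATORS.items(), key=lambda kv: difference(kv[1](state), goal))
        if difference(op(state), goal) >= difference(state, goal):
            break  # no operator makes progress; the fixed rules offer no way forward
        state = op(state)
        plan.append(name)
    raise ValueError("no plan found within the step limit")


print(means_ends_solve(1, 10))  # ['increment', 'double', 'double', 'increment', 'increment']
```

Even this caricature shows why the approach presupposes fixed rules and a clearly stated goal: if no operator reduces the remaining difference, the search simply gives up.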

Of course, this "rules-based" paradigm of AI research eventually lost out to a rival approach based on neural networks - the basis of all modern machine learning, including the large language models (LLMs) powering systems like ChatGPT. But even then, the nascent neural network approach was framed in problem-solving terms. One of the envisioned applications of the Perceptron, an early neural network designed for pattern recognition, was military: sifting through satellite imagery to detect enemy targets. Neural networks required a clearly defined target, with models trained to achieve that task. Without a specific goal or a clear history of prior attempts at achieving it, they wouldn't work.
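For illustration, a bare-bones perceptron learning rule - a sketch of the idea rather than Rosenblatt's original implementation, with an invented toy dataset - makes the point about targets explicit: the update only has something to aim at because every training example arrives with a predefined label.

```python
# A bare-bones perceptron learning rule (illustrative; not Rosenblatt's original
# implementation, and the toy dataset below is invented). Learning happens only
# because every example carries a clearly defined target label.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature vectors; labels: the +1 / -1 target for each."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the boundary toward the target
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b


# Toy task: separate points by the sign of their first coordinate.
weights, bias = train_perceptron([[1, 2], [2, 1], [-1, -2], [-2, 0]], [1, 1, -1, -1])
print(weights, bias)
```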

I think it is not a coincidence that early AI tools closely mirrored the instrumental reason of clerical and administrative workers in the very institutions - government, corporate, and military - that spearheaded AI research. These were workers with limited time and attention, for whom mistakes carried significant costs. Automating their tasks through machines seemed both a logical next step and an efficient way to reduce errors and expenses. Some of this focus on goals can be traced to funding imperatives; early AI needed to prove its practical value, after all. But a deeper reason lies in AI's intellectual inheritance from cybernetics - a discipline that shaped much of its early agenda but was sidelined as AI sought to establish itself as a distinct field.

The pioneers of cybernetics were fascinated by how feedback-powered technologies - ranging from guided missiles to thermostats - could exhibit goal-directed behavior without conscious intention. They drew analogies between these systems and the teleological aspects of human intelligence - such as lifting a glass or turning a door handle - that allow us to achieve goals through feedback control. In appropriating this cybernetic framework, AI carried the metaphor further. If a thermostat could "pursue" a target temperature, why couldn't a digital computer "pursue" a goal?

Yet there was an important difference. Early cyberneticians had one foot in machine engineering and the other in the biological sciences. They saw their analogies as a way to understand how the brain and nervous system actually functioned, and, if necessary, to revise the underlying models - sometimes by designing new gadgets to better reflect (or, in their language, "embody") reality. In other words, they recognized that their models were just that: models of actually existing intelligence. The discipline of AI, by contrast, turned metaphor into reality. Its pioneers, largely mathematicians and logicians, had no grounding in biology or neuroscience. Instead, intelligence became defined by whatever could be replicated on a digital computer - and this has invariably meant pursuing a goal or solving a problem, even in the biologically inspired case of neural networks.

This fixation on goal-driven problem solving ironically went uncriticized by some of AI's earliest and most prominent philosophical critics - particularly Hubert Dreyfus, a Berkeley professor of philosophy and author of the influential book What Computers Can't Do (1972). Drawing on Martin Heidegger's reflections on hammering a nail in Being And Time, Dreyfus emphasized the difficulty of codifying the tacit knowledge embedded in human traditions and culture. Even the most routine tasks are deeply shaped by cultural context, Dreyfus contended; we do not follow fixed rules that can be formalized as explicit, universal guidelines.

This argument was supposed to show that we can't hope to teach machines to act as we do, but it failed to take aim at AI's teleological ethos - the focus on goal-oriented problem solving - itself. This is even more puzzling given that Heidegger himself offers one variant of such a critique. He wasn't a productivity-obsessed Stakhanovite on a mission to teach us how to hammer nails more effectively, and he certainly didn't take goal-oriented action as the essential feature of human life.

On the contrary, Heidegger noted that it's not only when the hammer breaks that we take note of how the world operates; it's also when we grow tired of hammering. In such moments of boredom, he argued, we disengage from the urgency of goals, experiencing the world in a more open-ended way that hints at a broader, fluid, contextual form of intelligence - one that involves not just the efficient achievement of tasks but a deeper interaction with our environment, guiding us toward meaning and purpose in ways that are hard to formalize. While Heidegger's world might seem lonely - it's mostly hammers and Dasein - similar reexaminations of our goals can be sparked by our interactions with each other.

Yet for the AI pioneers of the 1950s, this fact was a nonstarter. Concepts like boredom and intersubjectivity, lacking clear teleological grounding, seemed irrelevant to intelligence. Instead, early AI focused on replicating the intelligence of a fully committed, extrinsically motivated, emotionally detached office worker - a species of William Whyte's "organization man," primed for replacement by more reliable digital replicas.

It took nearly a decade for Dreyfus's Heideggerian critique to resonate within the AI community, but when it did, it led to significant realignments. One of the most notable appeared in the work of Stanford computer science professor Terry Winograd, a respected figure in natural language processing whose work had even earned Dreyfus's approval. In the 1980s Winograd made a decisive turn away from replicating human intelligence. Instead, he focused on understanding human behavior and context, aiming to design tools that would amplify human intelligence rather than mimic it.

This shift became tangible with the creation of the Coordinator, a software system developed through Winograd's collaboration with Fernando Flores, a Chilean politician-turned-philosopher and a serial entrepreneur. As its name suggests, the software aimed to facilitate better workplace coordination by allowing employees to categorize electronic interactions with a colleague - was it a request, a promise, or an order? - to reduce ambiguity about how to respond. Properly classified, messages could then be tracked and acted upon appropriately.
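A rough sketch of the underlying idea - purely illustrative, since the Coordinator's real design is not documented here - might model each message with an explicit speech-act category and track which commitments remain open:

```python
# An illustrative sketch (not the Coordinator's actual design) of its core idea:
# every workplace message carries an explicit speech-act category, so the system
# can track what response or action each one still requires.

from dataclasses import dataclass
from datetime import date
from enum import Enum


class SpeechAct(Enum):
    REQUEST = "request"  # asks the recipient to commit to doing something
    PROMISE = "promise"  # commits the sender to doing something
    ORDER = "order"      # directs the recipient to act


@dataclass
class Message:
    sender: str
    recipient: str
    text: str
    act: SpeechAct
    due: date | None = None
    fulfilled: bool = False


def open_commitments(inbox: list[Message]) -> list[Message]:
    """Messages that still await an action or reply, sorted by due date."""
    pending = [m for m in inbox if not m.fulfilled]
    return sorted(pending, key=lambda m: m.due or date.max)
```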

Grounded in principles of human-computer interaction and interaction design, this approach set a new intellectual agenda: Rather than striving to replicate human intelligence in machines, why not use machines to enhance human intelligence, allowing people to achieve their goals more efficiently? As faith in the grand promises of conventional AI began to wane, Winograd's vision gained traction, drawing attention from future tech titans like Larry Page, Reid Hoffman, and Peter Thiel, who attended his classes.

The Coordinator faced its share of criticism. Some accused it of reinforcing the hierarchical control that stifled creativity in bureaucratic organizations. Like the Perceptron, the argument went, the Coordinator ultimately served the agendas of what could be called the Efficiency Lobby within corporations and government offices. It helped streamline communication, but in ways that often aligned with managerial objectives, consolidating power rather than distributing it. This wasn't inevitable; one could just as easily imagine social movements - where ambiguity in communication is commonplace - using the software. (It would likely work better for movements with centralized structures and clear goals, such as the civil rights movement, than for decentralized ones such as Occupy Wall Street or the Zapatistas.)

The deeper issue lay in the very notion of social coordination that Winograd and Flores were trying to facilitate. While they had distanced themselves from the AI world, their approach remained embedded in a teleological mindset. It was still about solving problems, about reaching defined goals - a framework that didn't fully escape the instrumental reason of AI they had hoped to leave behind.

• • •

Winograd, to his credit, proved far more self-reflexive than most in the AI community. In a talk in 1987, he observed striking parallels between symbolic AI - then dominated by rules-based programs that sought to replicate the judgment of professionals like doctors and lawyers - and Weberian bureaucracy. "The techniques of artificial intelligence," he noted, "are to the mind what bureaucracy is to human social interaction." Both thrive in environments stripped of ambiguity, emotion, and context - the very qualities often cast as opposites of the bureaucratic mindset.

Winograd didn't examine the historical forces that produced this analogy. But recent historical accounts suggest that AI research may have, from its inception, attracted those already studying or optimizing bureaucratic systems. As historian of technology Jonnie Penn points out, Herbert A. Simon is a prime example: after aiming to build a "science of public administration" in the 1940s, by the mid-1950s he had become a key player in building a "science of intelligence." Both endeavors, despite acknowledging the limits of rationality, ultimately celebrated the same value: efficiency in achieving one's goals. In short, their project was aimed at perfecting the ideal of instrumental reason.

It's also no surprise that the bureaucracies of the Efficiency Lobby - from corporations to government funding agencies and the military - gravitated toward AI. Even before the 1956 Dartmouth Workshop, often seen as AI's ground zero, these institutions were already pursuing similar goals, not least due to the Cold War. The era's geopolitical tensions demanded rapid advancements in technology, surveillance, and defense, pressuring institutions to develop tools that could process vast amounts of information, enhance decision-making, and maintain a competitive edge against the Soviet Union. The academic push for AI seamlessly aligned with the automation agenda already driving these institutions: tightening rule adherence, streamlining production, and processing intelligence and combat data. Mechanizing decision-making and maximizing efficiency had long been central to their core ambitions.

It is here that we should step back and ask what might have been in the absence of Cold War institutional pressures. Why should the world-historical promise of computing be confined to replicating bureaucratic rationality? Why should anyone outside these institutions accept such a narrow vision of the role that a promising new technology - the digital computer - could play in human life? Is this truly the limit of what these machines can offer? Shouldn't science have been directed toward exploring how computers could serve citizens, civil society, and the public sphere writ large - not just by automating processes, but by simulating possibilities, by modeling alternate futures? And who, if anyone, was speaking up for these broader interests?

In a society with a semblance of democratic oversight in science, we might expect these questions to spark serious inquiry and research. But that was not mid-1950s America. Instead, John McCarthy - the computer scientist who coined the term "artificial intelligence" and the name most associated with the Dartmouth Workshop (he taught there at the time) - defined the field as he and his closest allies saw fit. They forged alliances with corporate giants like IBM and secured military funding, bypassing the broader scientific community altogether. Later, McCarthy openly celebrated these undemocratic beginnings, stating:

AI would have developed much more slowly in the U.S. if we had had to persuade the general run of physicists, mathematicians, biologists, psychologists, or electrical engineers on advisory committees to allow substantial NSF money to be allocated to AI research... AI was one of the computer science areas... DARPA consider[ed] relevant to Defense Department problems. The scientific establishment was only minimally, if at all, consulted.

AI retrospectives often bristle at other disciplines' ignorance of the field, yet its early practitioners had their own blind spots. Their inability to conceptualize topics such as boredom was not an isolated oversight: it reflects their fundamental failure to reckon with the non-teleological forms of intelligence - those that aren't focused on problem solving or goal attainment. By reducing all intelligence to such matters, they overlooked alternative paths - ones that explore how computer technologies might amplify, augment, or transform other forms of intelligence, or how the technology itself would need to evolve to accommodate and nurture them.

In fairness, it's unsurprising they didn't ask these questions. The Efficiency Lobby knew exactly what it wanted: streamlined operations, increased productivity, and tighter hierarchical control. The emerging AI paradigm promised all of that and more. Meanwhile, there was no organized opposition from citizens or social movements - no Humanity Lobby, so to speak - advocating for an alternative. Had there been one, what might this path have looked like?

• • •

In 1953 the Colorado Quarterly posthumously published an essay by Hans Otto Storm, an inventor and radio engineer who also made a name for himself as a novelist. He tragically died just four days after the attack on Pearl Harbor, electrocuted while installing a radio transmitter for the U.S. Army in San Francisco. Despite his notable literary career, it is this short essay - initially rejected by his publishers - that has kept his legacy alive.

Storm was a disciple and friend of the firebrand heterodox economist Thorstein Veblen. While Veblen is widely known for celebrating "workmanship" as the engineer's antidote to capitalist excess, his thinking took a fascinating, even playful turn when he encountered the scientific world. There, probably influenced by his connections to the pragmatists, Veblen discovered a different force at work: what he called "idle curiosity," a kind of purposeless purpose that drove scientific discovery. This tension between directed and undirected thought would become crucial to Storm's own theoretical innovations. Storm drew a similar distinction between two modes of what he called "craftsmanship." The more familiar of the two is "design," rooted in the mindset of Veblen's engineer. It begins with a specific goal - say, constructing a building - and proceeds by selecting the best materials to achieve that end. In essence, this is just instrumental reason. (Storm was quite familiar with Weber's oeuvre and commented on it.)

What of the second mode of "craftsmanship"? Storm gave this alternative a strange name: "eolithism." To describe it, he invites us to imagine Stone Age "eoliths," or stones "picked up and used by man, and even fashioned a little for his use." Modern archaeologists doubt that eoliths are the result of this kind of human intervention - they are more likely the product of natural processes such as weathering or random breakage - but that is no blow to the force of Storm's vision. In his own words, the key point

is that the stones were picked up... in a form already tolerably well adapted to the end in view and, more important, strongly suggestive of the end in view. We may imagine [the ancient man] strolling along in the stonefield, fed, contented, thinking preferably about nothing at all - for these are the conditions favorable to the art - when his eye lights by chance upon a stone just possibly suitable for a spearhead. That instant the project of the spear originates; the stone is picked up; the spear is, to use a modern term, in manufacture... And if... the spearhead, during the small amount of fashioning that is its lot, goes as a spearhead altogether wrong, then there remains always the quick possibility of diverting it to some other use which may suggest itself.

The contrast with the design mode of instrumental reason could not be more pronounced. Eolithism posits no predefined problems to solve, no fixed goals to pursue. Storm's Stone Age flâneur stands in stark opposition to the kind of rationality on display in Cold War-era thought experiments like the prisoner's dilemma - and is only better for it. The absence of predetermined goals broadens the flâneur's capacity to see the world more richly, as the multiplicity of potential ends expands what counts as a means to achieve them.

This is Veblen's idle curiosity at work. Separated from it, design principles are fundamentally limited because they require fixed, predetermined goals and must eliminate diversity from both methods and materials, reducing their inherent value to merely serving those predetermined ends. Storm goes on to argue that efforts to apply design to solve problems at scale, using the uniform methods of mass production, leave people yearning for vernacular, heterogeneous solutions that only eolithism can offer. Its spirit persists into modernity, embodied in unexpected figures - Storm identifies the junkman as the quintessential eolithic character.

What sets Storm apart from other thinkers who have explored similar intellectual territory - like Claude Lévi-Strauss with his notion of "bricolage" and Jean Piaget with his observations of children and their toys - is his refusal to treat the eolithic mindset as archaic or merely a phase for primitive societies or toddlers. This longing for the heterogeneous over the rigid is not something people or societies are expected to outgrow as they develop. Instead, it's a fundamental part of human experience that endures even in modernity. In fact, this striving might inform the very spirit - playful, idiosyncratic, vernacular, beyond the rigid plans and one-size-fits-all solutions - that some associate with postmodernity.

That's not to say that eolithic tendencies were not under threat in Storm's day, especially given the imperatives favored by the Efficiency Lobby. Indeed, Storm argued that much of professional education carried an inherent anti-eolithic bias, lamenting that "good, immature eolithic craftsmen" were "urged to study engineering, only to find out, late and perhaps too late, that the ingenuity and fine economy which once captivated [them] are something which has to be unlearned." Yet, even in science and engineering, effective learning - especially in its early stages - succeeds by avoiding the algorithmic rigidities of the design mode. More often, it starts with what David Hawkins, a philosopher of education and one-time collaborator with Simon, called "messing about." (A friend of Storm's and a former aide to Robert Oppenheimer - they all moved in the same leftist circles in California of the late 1930s - Hawkins ensured the posthumous publication of Storm's essay and did much to popularize it, including among technologists.)

Storm was not a philosopher, and his brief essay contains no citations, but his perspective evokes a key theme from pragmatist philosophy. Can we really talk about means and ends as separate categories, when our engagement with the means - and with one another - often leads us to revise the very ends we aim to achieve? In Storm's terms, purposive action might itself emerge as the result of a series of eolithic impulses.

• • •

What does any of this have to do with a utopian vision for AI? If we define intelligence purely as problem solving and goal achievement, perhaps not much. In Storm's prehistoric idyll, there are no errands to be run, no great projects to be accomplished. His Stone Age wanderer, for all we know, might well be experiencing deep boredom - "thinking preferably about nothing at all," as Storm suggests.

But can we really dismiss the moment when the flâneur suddenly notices the eolith - whether envisioning a use for it or simply finding it beautiful - as irrelevant to how we think about intelligence? If we do, what are we to make of the activities that we have long regarded as hallmarks of human reason: imagination, curiosity, originality? These may be of little interest to the Efficiency Lobby, but should they be dismissed by those who care about education, the arts, or a healthy democratic culture capable of exploring and debating alternative futures?

At first glance, Storm's wanderer may seem to be engaged in nothing more than a playful exercise in recategorization - lifting the stone from the realm of natural objects and depositing it into the domain of tools. Yet the process is far from mechanical, just as it is far from unintelligent. Whether something is a useful tool or a playful artifact often depends on the gaze of the beholder - just ask Marcel Duchamp (who famously proclaimed a pissoir an art object) or Brian Eno (who famously peed into Duchamp's Fountain to reclaim its status as a subversive artifact, and not a mere gallery exhibit).

Storm points to child's play as a prime example of eolithism. He also makes clear that not all social situations, actors, and institutional environments are equally conducive to it. For one, some of us may have been educated out of this mindset in school. Others may be surrounded by highly sophisticated, unalterable technical objects that resist repurposing. But Storm's list is hardly exhaustive. Many other factors are at work, from the skill, curiosity, and education of the flâneur to the rigidity of rules and norms guiding individual behavior to the ability of eolithic objects to "suggest" and "accept" their potential uses.

With this, we have arrived at a picture of human intelligence that runs far beyond instrumental reason. We might call it, in contrast, ecological reason - a view of intelligence that stresses both indeterminacy and the interactive relationship between ourselves and our environments. Our life projects are unique, and it is through these individual projects that the many potential uses of "eoliths" emerge for each of us.

Unlike instrumental reason, which, almost by definition, is context-free and lends itself to formalization, ecological reason thrives on nuance and difference, and thus resists automation. There can be no question of formalizing the entire, ever-shifting universe of meanings from which it arises. This isn't a question of infeasibility but of logical coherence: asking a machine to exercise this form of intelligence is like asking it to take a Rorschach test. It may produce responses, especially if trained on a vast corpus of human responses, but those answers will inevitably be hollow for one simple reason: the machine hasn't been socialized in a way that would make the process of having it interpret the Rorschach image meaningful.

Yet just because formalization is off the table doesn't mean ecological reason can't be technologized in other ways. Perhaps the right question echoes one posed by Winograd four decades ago: rather than asking if AI tools can embody ecological reason, we should ask whether they can enhance its exercise by humans.

Framing the question this way offers grounds for cautious optimism - if only because AI has evolved radically since Winograd's critique in the 1980s. Today's AI allows for more heterogeneous and open-ended uses; its generality and lack of a built-in telos make it conducive to experimentation. Where earlier systems might have defaulted to a rigid "computer says no," modern AI hallucinates its way to an answer. This shift stems from its underlying method: unlike the rules-based expert systems Winograd critiqued as Weberian bureaucracy, today's large language models are powered by data and statistics. Though some rules still shape them, their outputs are driven by changing data, not fixed protocols.

What's more, these models resemble the flexibility of the market more than the rigidity of bureaucracy. Just as market participants rely on past trends and can misjudge fast-changing contexts, large language models generate outputs based on statistical patterns - at the risk of occasional hallucinations and getting the context wrong. It's no coincidence, perhaps, that Friedrich Hayek, whose work in psychology influenced early neural networks, saw an equivalence between how brains and markets operate. (Frank Rosenblatt, creator of the Perceptron, cites Hayek approvingly.)

In my small project to build the language app, I started out much like the carefree Stormian flâneur - unconcerned with solving a particular problem. I wasn't counting the hours spent learning languages or searching for the most efficient strategy. Instead, as I was using one of the three AI-powered services - my equivalent of stumbling upon Storm's stone - I noticed a feature that made me wonder if I could link this tool with the other two. Were my hunches about how easily someone as code-illiterate as myself could combine these services correct? I didn't have to wonder; with ChatGPT, I could immediately test them. In this sense, ChatGPT isn't the eolith itself - it's too amorphous, too shapeless, too generic - but it functions more like the experimental workshop where the eolithic flâneur takes his discovery to see what it's really good for. In other words, it lets us test whether the found stone is better suited as a spearhead, a toy, or an art object.

There are elements of eolithism here, in short, but I think this is far from the best we can hope for. To begin with, all three services I used come with subscription or usage fees; the one that transforms text into audio charges a hefty $99 per month. It's quite possible that these fees, heavily subsidized by venture capital, don't even account for the energy costs of running such power-hungry generative AI. It's as if someone privatized the stonefield where the original eolith was discovered, and its new proprietors charged a hefty entrance fee. A way to maximize ecological intelligence it isn't.

There's also something excessively individualistic about this whole setup - a problem that Storm's asocial, prehistoric example sidesteps. Sure, I can build a personalized language learning app using a mix of private services, and it might be highly effective. But is this model scalable? Is it socially desirable? Is this the equivalent of me driving a car where a train might do just as well? Could we, for instance, trade a bit of efficiency and personalization to reuse some of the sentences or short stories I've already generated in my app, reducing the energy cost of re-running these services for each user?

This takes us to the core problem with today's generative AI. It doesn't just mirror the market's operating principles; it embodies its ethos. This isn't surprising, given that these services are dominated by tech giants that treat users as consumers above all. Why would OpenAI, or any other AI service, encourage me to send fewer queries to their servers or reuse the responses others have already received when building my app? Doing so would undermine their business model, even if it might be better from a social or political (never mind ecological) perspective. Instead, OpenAI's API charges me - and generates a nontrivial amount of carbon emissions - even to tell me that London is the capital of the UK or that there are one thousand grams in a kilogram.

For all the ways tools like ChatGPT contribute to ecological reason, then, they also undermine it at a deeper level - primarily by framing our activities around the identity of isolated, possibly alienated, postmodern consumers. When we use these tools to solve problems, we're not like Storm's carefree flâneur, open to anything; we're more like entrepreneurs seeking arbitrage opportunities within a predefined, profit-oriented grid. While eolithic bricolage can happen under these conditions, the whole setup constrains the full potential and play of ecological reason.

Here too, ChatGPT resembles the Coordinator, much like our own capitalist postmodernity still resembles the welfare-warfare modernity that came before it. While the Coordinator enhanced the exercise of instrumental reason by the Organization Man, ChatGPT lets today's neoliberal subject - part consumer, part entrepreneur - glimpse and even flirt, however briefly, with ecological reason. The apparent increase in human freedom conceals a deeper unfreedom; behind both stands the Efficiency Lobby, still in control. This is why our emancipation through such powerful technologies feels so truncated.

Despite repeated assurances from Silicon Valley, this sense of truncated liberation won't diminish even if its technologies acquire the ability to tackle even greater problems. If the main attraction of deep learning systems is their capacity to execute wildly diverse, complex, even unique tasks with a relatively simple (if not cheap or climate-friendly) approach, we should remember that we already had a technology of this sort: the market. If you wanted your shopping list turned into a Shakespearean sonnet, you didn't need to wait for ChatGPT. Someone could have done it for you - if you could find that person and were willing to pay the right price.

Neoliberals recognized this early on. At least in theory, markets promise a universal method for problem solving, one far more efficient and streamlined than democratic politics. Yet reality is sobering. Real markets all too frequently falter, often struggling to solve problems at all and occasionally making them much worse. They regularly underperform non-market systems grounded in vernacular wisdom or public oversight. Far from natural or spontaneous phenomena, they require a Herculean effort to make them function effectively. They cannot easily harness the vast reserves of both tacit and formal knowledge possessed by citizens, or at least that type of knowledge that isn't reducible to entrepreneurial thinking: markets can only mobilize it by, well, colonizing vast areas of existence. (Bureaucracies, for their part, faced similar limitations long before neoliberalism, though their disregard for citizen participation stemmed from different motives.)

These limitations are well known, which is why there's enduring resistance to commodifying essential services and a growing push to reverse the privatization of public goods. Two years into generative AI's commercial growing pains, a similar reckoning with AI looms. As long as AI remains largely under corporate control, placing our trust in this technology to solve big societal problems might as well mean placing our trust in the market.

• • •

What's the alternative? Any meaningful progress in moving away from instrumental reason requires an agenda that breaks ties with the Efficiency Lobby. These breaks must occur at a level far beyond everyday, communal, or even urban existence, necessitating national and possibly regional shifts in focus. While this has never been done in the United States - with the potential exception of certain elements of the New Deal, such as support for artists via the Federal Art Project - history abroad does offer some clues as to how it could happen.

In the early 1970s, Salvador Allende's Chile aimed to empower workers by making them not just the owners but also the managers of key industries. In a highly volatile political climate that eventually led to a coup, Allende's government sought to harness its scarce information technology to facilitate this transition. The system - known as Project Cybersyn - was meant to promote both instrumental and ecological reason, coupling the execution of usual administrative tasks with deliberation on national-, industry-, and company-wide alternatives. Workers, now in managerial roles, would use visualization and statistical tools in the famous Operations Room to make informed decisions. The person who commissioned the project was none other than Fernando Flores, Allende's minister and Winograd's future collaborator.

Around the same time, a group of Argentinian scientists began their own efforts to use computers to spark discussions about potential national - and global - alternatives. The most prominent of these initiatives came from the Bariloche Foundation, which contested many of the geopolitical assumptions found in reports like 1972's The Limits To Growth - particularly the notion that the underdeveloped Global South must make sacrifices to "save" the overdeveloped Global North.

Another pivotal figure in this intellectual milieu was Oscar Varsavsky, a talented scientist-turned-activist who championed what he called "normative planning." Unlike the proponents of modernization theory, who wielded computers to project a singular, predetermined trajectory of economic and political progress, Varsavsky and his allies envisioned technology as a means to map diverse social trajectories - through a method they called "numerical experimentation" - to chart alternative styles of socioeconomic development. Among these, Varsavsky identified a spectrum including "hippie," "authoritarian," "company-centric," "creative," and "people-centric," the latter two being his preferred models.

Computer technology would thus empower citizens to explore the possibilities, consequences, and costs associated with each path, enabling them to select options that resonated with both their values and available resources. In this sense, information technology resembled the workshop of our eolithic flâneur: a space not for mere management or efficiency seeking, but for imagination, simulation, and experimentation.

The use of statistical software in modern participatory budgeting experiments - even if most of them are still limited to the local rather than national level - mirrors this same commitment: the goal is to use statistical tools to illuminate the consequences of different spending options and let citizens choose what they prefer. In both cases, the process is as much about improving what Paulo Freire called "problem posing" - allowing contesting definitions of problems to emerge by exposing them to public scrutiny and deliberation - as it is about problem solving.

What ties the Latin American examples together is their common understanding that promoting ecological reason cannot be done without delinking their national projects from the efficiency agenda imposed - ideologically, financially, militarily - by the Global North. They recognized that the supposedly apolitical language of such presumed "modernization" often masked the political interests of various factions within the Efficiency Lobby. Their approach, in other words, was first to pose the problem politically - and only later technologically.

The path to ecological reason is littered with failures to make this move. In the late 1960s, a group of tech eccentrics - many with ties to MIT - were inspired by Storm's essay to create the privately funded Environmental Ecology Lab. Their goal was to explore how technology could enable action that wasn't driven by problem solving or specific objectives. But as hippies, rebels, and antiwar activists, they had no interest in collaborating with the Efficiency Lobby, and they failed to take practical steps toward a political alternative.

One young architecture professor connected to the lab's founders, Nicholas Negroponte, didn't share this aversion. Deeply influenced by their ideas, he went on to establish the MIT Media Lab - a space that celebrated playfulness through computers, despite its funding from corporate America and the Pentagon. In his 1970 book, The Architecture Machine: Toward A More Human Environment, Negroponte even cited Storm's essay. But over time, this ethos of playfulness morphed into something more instrumental. Repackaged as "interactivity" or "smartness," it became a selling point for the latest gadgets at the Consumer Electronics Show - far removed from the kind of craftsmanship and creativity Storm envisioned.

Similarly, as early as the 1970s, Seymour Papert - Negroponte's colleague at MIT and another AI pioneer - recognized that the obsession with efficiency and instrumental reason was detrimental to computer culture at large. Worse, it alienated many young learners, making them fear the embodiment of that very reason: the computer. Although Papert, who was Winograd's dissertation advisor, didn't completely abandon AI, he increasingly turned his focus to education, advocating for an eolithic approach. (Having worked with Piaget, he was also acquainted with the work of David Hawkins, the education philosopher who had published Storm's essay.) Yet, like the two labs, Papert's solutions ultimately leaned toward technological fixes, culminating in the ill-fated initiative to provide "one laptop per child." Stripped of politics, it's very easy for eolithism to morph into solutionism.

• • •

The Latin American examples give the lie to the "there's no alternative" ideology of technological development in the Global North. In the early 1970s, this ideology was grounded in modernization theory; today, it's rooted in neoliberalism. The result, however, is the same: a prohibition on imagining alternative institutional homes for these technologies. There's immense value in demonstrating - through real-world prototypes and institutional reforms - that untethering these tools from their market-driven development model is not only possible but beneficial for democracy, humanity, and the planet.

In practice, this would mean redirecting the eolithic potential of generative AI toward public, solidarity-based, and socialized infrastructural alternatives. As proud as I am of my little language app, I know there must be thousands of similar half-baked programs built in the same experimental spirit. While many in tech have profited from fragmenting the problem-solving capacities of individual language learners, there's no reason we can't reassemble them and push for less individualistic, more collective solutions. And this applies to many other domains.

But to stop here - enumerating ways to make LLMs less conducive to neoliberalism - would be shortsighted. It would wrongly suggest that statistical prediction tools are the only way to promote ecological reason. Surely there are far more technologies for fostering human intelligence than have been dreamt of by our prevailing philosophy. We should turn ecological reason into a full-fledged research paradigm, asking what technology can do for humans - once we stop seeing them as little more than fleshy thermostats or missiles.

While we do so, we must not forget the key insight of the Latin American experiments: technology's emancipatory potential will only be secured through a radical political project. Without one, we are unlikely to gather the resources necessary to ensure that the agendas of the Efficiency Lobby don't overpower those of the Humanity Lobby. The tragic failure of those experiments means this won't be an easy ride.

As for the original puzzle - AI and democracy - the solution is straightforward. "Democratic AI" requires actual democracy, along with respect for the dignity, creativity, and intelligence of citizens. It's not just about making today's models more transparent or lowering their costs, nor can it be resolved by policy tweaks or technological innovation. The real challenge lies in cultivating the right Weltanschauung - this app does wonders! - grounded in ecological reason. On this score, the ability of AI to run ideological interference for the prevailing order, whether bureaucracy in its early days or the market today, poses the greatest threat.

Incidentally, it's the American pragmatists who got closest to describing the operations of ecological reason. Had the early AI community paid any attention to John Dewey and his work on "embodied intelligence," many false leads might have been avoided. One can only wonder what kind of AI - and AI critique - we could have had if its critics had looked to him rather than to Heidegger. But perhaps it's not too late to still pursue that alternative path.

RESPONSES

AI'S WALKING DOG

Brian Eno: Today's tech inverts the value of the creative process.

Thoreau's adage "beware of all enterprises that require new clothes" should perhaps be updated to "beware of all enterprises that require venture capital."

Morozov argues that AI itself has much to offer, but it has not lived up to its potential to serve the public good, and the context of AI's development explains why. I agree. My own misgivings about AI have less to do with the technology itself than with the problematic nature of who owns it, and what they want to do with it. Venture capitalist Marc Andreessen's wildly hubristic visions of the future are par for the course in West Coast technology in that they downplay even the possibility of any downsides, brusquely dismissing these as "safety-ism." I for one wish there had been a few more "safety-ists" around when the algorithms for social media were being crafted.

If a company is run primarily for profit, you'll get entirely different outcomes than if it's run for the public good - despite what the true believers in the "invisible hand" of the market preach. Social media provides the best example, and the experience of what happened with social media is a bad omen for what might happen (and is happening!) with AI. Two words - "maximize engagement," code for "maximize profits" - were all that was needed to send social media into the abyss of spleen-venting hostility where it now wallows.

The drive for more profits (or increasing "market share," which is the same thing) produces many distortions. It means, for example, that a product must be brought to market as fast as possible, even if that means cutting corners in terms of understanding social impacts; it means social value and security are secondary by a long margin. The result is a Hollywood shootout fantasy, except it's a fantasy we have to live in.

AI today inverts the value of the creative process. The magic of play is seeing the commonplace transforming into the meaningful. For that transformation to take place we need to be aware of the provenance of the commonplace. We need to sense the humble beginnings before we can be awed by what they turn into - the greatest achievement of creative imagination is the self-discovery that begins in the ordinary and can connect us to the other, and to others.

Yet AI is part of the wave of technologies that are making it easier for people to live their lives in complete independence from each other, and even from their own inner lives and self-interest. The issue of provenance is critically important in the creative process, but not for AI today. Where something came from, and how and why it came into existence, are major parts of our feelings about it. We feel differently about a piece of music played by an orchestra in a concert hall than we do about exactly the same piece of music made by a kid in a bedroom with a good sample bank. The backstory matters! The event matters! The intentions matter! We have no idea of the actual origin of the text AI delivers to us. Does it matter that what we've scraped off the ether to feed our AIs is not by any means the whole of the world's knowledge, but just the part that happened to have been published in printed books by the small sliver of the English-speaking world that happened to publish them - and made them available to AI bots? What kind of sausage is that? Surely Weisswurst, made of available scraps on the butcher's floor.

AI is always stunning at first encounter: one is amazed that something nonhuman can make something that seems so similar to what humans make. But it's a little like Samuel Johnson's comment about a dog walking on its hind legs: we are impressed not by the quality of the walking but by the fact it can walk that way at all. After a short time it rapidly goes from awesome to funny to slightly ridiculous - and then to grotesque. Does it not also matter that the walking dog has no intentionality - doesn't "know" what it's doing?

In my own experience as an artist, experimenting with AI has mixed results. I've used several "songwriting" AIs and similar "picture-making" AIs. I'm intrigued and bored at the same time: I find it quickly becomes quite tedious. I have a sort of inner dissatisfaction when I play with it, a little like the feeling I get from eating a lot of confectionery when I'm hungry. I suspect this is because the joy of art isn't only the pleasure of an end result but also the experience of going through the process of having made it. When you go out for a walk it isn't just (or even primarily) for the pleasure of reaching a destination, but for the process of doing the walking. For me, using AI all too often feels like I'm engaging in a socially useless process, in which I learn almost nothing and then pass on my non-learning to others. It's like getting the postcard instead of the holiday. Of course, it is possible that people find beauty and value in the Weisswurst, but that says more about the power of the human imagination than the cleverness of AI.

All that said, I do believe that AI tools can be very useful to an artist: they make it possible to devise systems that see patterns in what you are making and draw them to your attention, nudging you into territory that is unfamiliar and yet interestingly connected. I say this having had some good experiences in my own (pre-AI) experiments with Markov chain generators and various crude randomizing procedures. Any reservations about AI get you dismissed as a Luddite - though it's worth remembering that it was the Luddites, not the mill owners, who understood more holistically what the impact of the new mill machinery would be.

To make anything surprising and beautiful using AI you need to prepare your prompts extremely carefully, studiously closing off all the yawning, magnetic chasms of Hallmark mediocrity. If you don't want to get moon rhyming with June, you have to give explicit instructions like, "Don't rhyme moon with June!" And then, at the other end of the process, you need to rigorously filter the results. Now and again, something unexpected emerges. But even with that effort, why would a system whose primary programming is telling it to take the next most probable step produce surprising results? The surprise is primarily the speed and the volume, not the content.

In an era when "cultivated" people purport to care so much about the origins of the stuff they put into their mouths, will they be as cautious with the stuff they put into their minds? Will they be able to resist the information sausage meat that AI is about to serve them?

Brian Eno is a musician, producer, visual artist, and activist. In 2019 he was inducted into the Rock and Roll Hall of Fame as a member of Roxy Music.

• • •

THE REAL LEGACY OF CYBERNETICS

Audrey Tang: Lessons from the personal computing revolution.

Morozov rightly calls for us to turn away from the seductive AI narrative of replicating human capabilities in autonomous machines, toward a rich older tradition of cybernetics and Deweyan pragmatism, which instead imagined a world where machines connect people to collaborate and self-govern more adroitly. He also draws on lost history to project this struggle onto the left-right political divide, looking, for example, to Latin American radicalism for inspiration. While my experience as Digital Minister of Taiwan lacks the romance of Salvador Allende's experience in Chile, perhaps the pragmatism I sought to apply suggests a more consensual path toward Morozov's ambitions.

My work was deeply grounded in the mainstream history of modern computing. After all, while cybernetics did inspire coup-subverted socialist experiments and hippie communes, it was also the primary influence on at least three of the most consequential and successful technological and management trends of the postwar era: personal computing, the Japanese manufacturing miracle, and the internet. Both leftist martyrs and Silicon Valley tycoons have fancied themselves the rebellious heroes of cybernetics. Yet its achievements arguably owe much more to the duller work of scholar-bureaucrats like J. C. R. Licklider (known as "Lick") and W. Edwards Deming, who moved seamlessly across business, government, and the academy - a network Morozov would surely label the "military-industrial complex."

So-called AI technologies may well come to shape all our lives, and we must do everything we can to put humanity's hands on the steering wheel. Yet the tools that have thus far driven the digital age owe much less to the logic of instrumental efficiency and human alienation that Morozov rightly critiques. In Lick's words, personal computers offer "man-computer symbiosis" rather than artificial general intelligence. Meanwhile, the miracle of the Japanese kaizen method was built on Deming's insight that empowering line workers to understand full production processes would both enable them to continuously improve quality and keep them from being replaced by, or transformed into, machines. As for the internet, its packet switching, hypertext, and Deweyan form of collaborative, standards-based governance offer a powerful substrate for a startling range of interactions without making us slaves to the premature optimization that computer science pioneer Tony Hoare identified as the "root of all evil."

Of course, the reign of these contrasting paradigms may be in decline as the internet and personal computing have increasingly become cogs in the machine of AI-powered, centralized digital platforms. Yet when Lick foretold this tragic turn at the very moment of the internet's birth, in his visionary 1979 essay "Computers and Government," he pinned the cause on precisely the sort of anti-military-industrial agitation that Morozov celebrates.

As program officer for the Information Processing Techniques Office, Lick had harnessed Department of Defense funding through the Advanced Research Projects Agency (ARPA) to jump-start some of the earliest computer science departments and research labs, including Douglas Engelbart's Augmentation Research Center at the Stanford Research Institute, and connected them through the ARPANET that grew into today's internet. While he empathized with the anti-Vietnam War sentiments that fueled the Mansfield Amendments' prohibition of defense-funded basic research, he saw clearly how the abandonment of networking by the U.S. government would allow corporate monopolies to dominate and stifle digital innovation.

Forced by these strictures to focus narrowly on the performance of weapons systems, ARPA turned, as Morozov bemoans, to a narrow logic of efficiency. This change is even symbolized in the rebranding of the organization to D(efense)ARPA. As Morozov recounts, this played into the hands of a funding-hungry AI community, which was, perhaps ironically, dominated by the work of pioneers like John McCarthy and Marvin Minsky - who, far from being neoliberals, were advocates of AI-powered utopias far beyond what Soviet planners thought possible. The turn we have seen in the West toward both AI and cryptocurrencies in the last decade and a half may be seen as the ripple effects of this change in priorities, just as the internet and personal computing revolutions were the ripple effects of Lick's foundational investments.

In a recent book, ⿻ 數位 Plurality: The Future Of Collaborative Technology And Democracy, E. Glen Weyl and I, in collaboration with dozens of leaders across industry, research, and government, insist that reports of the death of cybernetics have been greatly exaggerated. Its roots were planted much deeper in Asia than in the West. Deming brought participatory production to the core of the lives of millions of people in Japan, after all, and Dewey's extensive travels in China between 1919 and 1921 made his pragmatic and democratic theory of education a foundation of Taiwanese land reform and education.

Thus, while AI and crypto, and the critiques like Morozov's they inspire, dominate the tech discourse of the West, a more hopeful and consensual narrative is playing out half a world away. Japan's Miraikan National Museum of Emerging Science and Innovation eschews the performative and apparently useless robot dogs of Boston's Museum of Science in favor of playful and caring assistive technologies. In India, the publicly funded, openly interoperable Agri Stack - part of the broader India Stack initiative building up the country's digital public infrastructure - has brought public services and payment systems to more than one hundred million unbanked farmers untouched by the cryptocurrency craze. And Taiwan's burgeoning digital democracy has manifested Lick's injunction from half a century ago: "Decisions about the development... of computer technology must be made... in the interest of giving the public itself the means to enter into the decision-making processes that will shape their future."

This is a tradition worth learning from and building on. Yet it is not one that fits easily into the limited political narratives of our time. In Japan, India, and Taiwan - often celebrated by the U.S. political mainstream as strong allies - technological innovation is deeply integrated with a traditional, often religious, social fabric. This is foreign to Western narratives in which the secular eschatology of existential risks and social justice politics are the primary checks on corporate AI ambitions. Yet it may offer a path to more fully free us from the traps the current digital society has laid for us.

AI is not just threatening a disconcerting future; when applied to maximize engagement with polarizing content and thus advertising revenue, it is already warping our ability to see one another and the world around us. The most effective act of rebellion may thus be to transcend these incentives to reinforce existing divides by reaching out across cultures and ideologies to forge a future we want to inhabit together.

Audrey Tang is Cyber Ambassador-at-large of Taiwan, where she previously served as the first Minister of Digital Affairs. She is coauthor, with E. Glen Weyl, of ⿻ 數位 Plurality: The Future Of Collaborative Technology And Democracy.

• • •

MACHINES OF CARING GRACE

Terry Winograd: The goal should be to support humans, not to replace them.

Morozov poses a provocative question, asking how AI might have been directed to different ends than the ones that drive the runaway industry today. As with any technology, we need to question both the technical imperatives and the underlying human values and uses. In the words of the decades-old slogan of Computer Professionals for Social Responsibility, "Technology is driving the future... it is up to us to do the steering."

Morozov also accurately points out the dominant role of the "Efficiency Lobby" in steering the direction for AI so far, as well as many other modern computing technologies. The question to be asked from a socially meaningful point of view, however, is not where else we could have gone in an alternative world, but how we move forward from here.

That is not to say that learning from the past isn't useful. There were indeed alternatives of the sort Morozov seeks from the very beginning of AI and kindred technologies. A visionary example was Gordon Pask's Musicolour machine, built in 1953 in collaboration with Robin McKinnon-Wood, which translated musical input into visual output in a way that learned from the interaction with the musician operating it. As Pask put it:

Given a suitable design and a happy choice of visual vocabulary, the performer (being influenced by the visual display) could become involved in a close participant interaction with the system. He trained the machine and it played a game with him. In this sense, the system acted as an extension of the performer with which he could co-operate to achieve effects that he could not achieve on his own.

This and other explorations like it in subsequent decades did point in a direction that the world - or to be more precise, the commercial technology developers - did not choose to take. But is this the direction in which we should be looking for a broad alternative to current AI?

I am not as enamored as Morozov seems to be with the world of Storm's "flâneur." I agree that there is something attractive about the image of playfulness, imagination, originality, with no problems to solve, no goals to pursue. But there are deeper human consequences and opportunities that are at stake when we design technologies. What Morozov leaves out in his efficient-versus-playful dichotomy is the role of human care and concern. This is evident in the way he talks about intelligence, which he sees as the measure of being human. Thus he seeks alternative kinds of "non-teleological forms of intelligence - those that aren't focused on problem solving or goal attainment."

But care is not a form of intelligence. The philosopher John Haugeland famously said "the trouble with artificial intelligence is that computers don't give a damn." This is just as true of today's LLM-based systems as it was of the "good old-fashioned AI" Haugeland critiqued. Rather than a kind of intelligence, care is an underlying foundation of human meaning. We don't want to fill the world with uncaring playful machines any more than with uncaring efficiency generators.

Morozov has also missed the main underlying points of the examples he cites from my work with Fernando Flores. The Coordinator was indeed marketed with offers of increased organizational efficiency, but the underlying philosophy reflected a deeper view of human relationships. It was centered on the role of commitment as the basis for language. The Coordinator's structure was designed to encourage those who used it to be explicit in their own recognition and expressions of commitment, within their everyday practical communications. The theme of this and Flores's subsequent work is of "instilling a culture of commitment" in our working relationships, allowing us to focus on what we are creating of value together.

My analogy of AI to bureaucracy evokes not just the mechanics of bureaucratic rule-following but the hollowing out of human meaning and intention. We are all familiar with a bureaucratic interaction where our interlocutor says, "I'm sorry, I understand your concern, but the rules say that you have to..." That is, care for the lifeworld of the person being told what to do cannot be a consideration. To return to Haugeland's insight, the bureaucratic system doesn't give a damn. It's designed that way on purpose, to remove human subjectivity and judgment from matters even when they are of crucial, life-determining importance.

Morozov recognizes that as long as AI remains largely under corporate control, placing our trust in this technology to solve big societal problems might as well mean placing our trust in the market. But putting it under government control, given the current nature of governments in the world, may not be an improvement. The problem isn't how to engender AI systems that are more playful and less boring but what it would mean to create and deploy systems that are supportive of human concern and care. I agree these would be systems designed to enhance the interaction of humans, not to replace it. As outlined in Douglas Engelbart's early vision, the goal should be intelligence augmentation rather than artificial intelligence.

There have been many calls for moving toward AI "alignment" with human values and concerns, but there is no simple mechanism of alignment that we can appeal to. As Arturo Escobar argues, conventional technology design tends to support a singular, globalized worldview that prioritizes efficiency, economic growth, and technological progress, often at the expense of cultural diversity and ecological health. This is not the result of "closed world" assumptions but of the process by which data is collected, networks are trained, and models are deployed.

We return to the question we started with: not "How might things have happened differently?" but "How might things be different in the future?" Morozov ends with a tantalizing proclamation: the lesson of the Latin American experiments is that "technology's emancipatory potential will only be secured through a radical political project." What is the radical political project of our times, within existing national and international systems of governance, that has the promise to nurture AI's emancipatory potential? Unfortunately, this is a far more difficult and consequential question.

Terry Winograd is Professor Emeritus of Computer Science at Stanford, where he founded the Human-Computer Interaction Group.

• • •

TRUST ISSUES

Bruce Schneier & Nathan Sanders: The closed corporate ecosystem is the problem.

For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing - and, often, on seed funding from the U.S. Department of Defense. But today's tools are hardly the intentional product of the diverse generations of innovators that came before.

We agree with Morozov that the "refuseniks," as he calls them, are wrong to see AI as "irreparably tainted" by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn't need to stay that way.

The internet is a case in point. The fact that it originated in the military is a historical curiosity, not an indication of its essential capabilities or social significance. Yes, it was created to connect different, incompatible Department of Defense networks. Yes, it was designed to survive the sorts of physical damage expected from a nuclear war. And yes, back then it was a bureaucratically controlled space where frivolity was discouraged and commerce was forbidden.

Over the decades, the internet transformed from military project to academic tool to the corporate marketplace it is today. These forces, each in turn, shaped what the internet was and what it could do. For most of us billions online today, the only internet we have ever known has been corporate - because the internet didn't flourish until the capitalists got hold of it.

AI followed a similar path. It was originally funded by the military, with the military's goals in mind. But the Department of Defense didn't design the modern ecosystem of AI any more than it did the modern internet. Arguably, its influence on AI was even less because AI simply didn't work back then. While the internet exploded in usage, AI hit a series of dead ends. The research discipline went through multiple "winters" when funders of all kinds - military and corporate - were disillusioned and research money dried up for years at a time. Since the release of ChatGPT, AI has reached the same endpoint as the internet: it is thoroughly dominated by corporate power. Modern AI, with its deep reinforcement learning and large language models, is shaped by venture capitalists, not the military - nor even by idealistic academics anymore.

We agree with much of Morozov's critique of corporate control, but it does not follow that we must reject the value of instrumental reason. Solving problems and pursuing goals is not a bad thing, and there is real cause to be excited about the uses of current AI. Morozov illustrates this from his own experience: he uses AI to pursue the explicit goal of language learning.

AI tools promise to increase our individual power, amplifying our capabilities and endowing us with skills, knowledge, and abilities we would not otherwise have. This is a peculiar form of assistive technology, kind of like our own personal minion. It might not be that smart or competent, and occasionally it might do something wrong or unwanted, but it will attempt to follow your every command and give you more capability than you would have had without it.

Of course, for our AI minions to be valuable, they need to be good at their tasks. On this, at least, the corporate models have done pretty well. They have many flaws, but they are improving markedly on a timescale of mere months. ChatGPT's initial November 2022 model, GPT-3.5, scored about 30 percent on a multiple-choice scientific reasoning benchmark called GPQA. Five months later, GPT-4 scored 36 percent; by May this year, GPT-4o scored about 50 percent, and the most recently released o1 model reached 78 percent, surpassing the level of experts with PhDs. There is no one singular measure of AI performance, to be sure, but other metrics also show improvement.

That's not enough, though. Regardless of their smarts, we would never hire a human assistant for important tasks, or use an AI, unless we can trust them. And while we have millennia of experience dealing with potentially untrustworthy humans, we have practically none dealing with untrustworthy AI assistants. This is the area where the provenance of the AI matters most. A handful of for-profit companies - OpenAI, Google, Meta, Anthropic, among others - decide how to train the most celebrated AI models, what data to use, what sorts of values they embody, whose biases they are allowed to reflect, and even what questions they are allowed to answer. And they decide these things in secret, for their benefit.

It's worth stressing just how closed, and thus untrustworthy, the corporate AI ecosystem is. Meta has earned a lot of press for its "open-source" family of LLaMa models, but there is virtually nothing open about them. For one, the data they are trained with is undisclosed. You're not supposed to use LLaMa to infringe on someone else's copyright, but Meta does not want to answer questions about whether it violated copyrights to build it. You're not supposed to use it in Europe, because Meta has declined to meet the regulatory requirements anticipated from the EU's AI Act. And you have no say in how Meta will build its next model.

The company may be giving away the use of LLaMa, but it's still doing so because it thinks it will benefit from your using it. CEO Mark Zuckerberg has admitted that eventually, Meta will monetize its AI in all the usual ways: charging to use it at scale, fees for premium models, advertising. The problem with corporate AI is not that the companies are charging "a hefty entrance fee" to use these tools: as Morozov rightly points out, there are real costs to anyone building and operating them. It's that they are built and operated for the purpose of enriching their proprietors, rather than because they enrich our lives, our wellbeing, or our society.

But some emerging models from outside the world of corporate AI are truly open, and may be more trustworthy as a result. In 2022 the research collaboration BigScience developed an LLM called BLOOM with freely licensed data and code as well as public compute infrastructure. The collaboration BigCode has continued in this spirit, developing LLMs focused on programming. The government of Singapore has built SEA-LION, an open-source LLM focused on Southeast Asian languages. If we imagine a future where we use AI models to benefit all of us - to make our lives easier, to help each other, to improve our public services - we will need more of this. These may not be "eolithic" pursuits of the kind Morozov imagines, but they are worthwhile goals. These use cases require trustworthy AI models, and that means models built under conditions that are transparent and with incentives aligned to the public interest.

Perhaps corporate AI will never satisfy those goals; perhaps it will always be exploitative and extractive by design. But AI does not have to be solely a profit-generating industry. We should invest in these models as a public good, part of the basic infrastructure of the twenty-first century. Democratic governments and civil society organizations can develop AI to offer a counterbalance to corporate tools. And the technology they build, for all the flaws it may have, will enjoy a superpower that corporate AI never will: it will be accountable to the public interest and subject to public will in the transparency, openness, and trustworthiness of its development.

Bruce Schneier is a public interest technologist and lecturer at the Harvard Kennedy School. His latest book is A Hacker's Mind: How The Powerful Bend Society's Rules.

Nathan Sanders is a data scientist and affiliate of the Berkman Klein Center for Internet & Society at Harvard.

• • •

AI'S MISSING OTHERS

Sarah Myers West & Amba Kak: Alternative visions must come from those Silicon Valley has excluded.

In our moment of profound inequality and global crisis, now flush with chatbots and simulated images, Morozov is right that we sorely need a clearer articulation of the world we do want to live in, not just the one we want to leave behind. But the challenge of specifying that vision - much less winning it - requires more refined lessons about the challenges ahead and where political power might be built to overcome them.

The field of AI has been not just co-opted but constituted by a few dominant tech firms. It is no coincidence that the dominant "bigger is better" paradigm, which generally uses the scale of compute and data resources as a proxy for performance, lines up neatly with the incentives of a handful of companies in Silicon Valley that disproportionately control these resources. The widely lauded AlexNet paper of 2012 was an inflection point. In its wake, deep learning methods - reliant on massive amounts of data, contingent labor, and exponentially large computational resources - came to dominate the field, spurred at least in part by the growing presence of corporate labs at prestigious machine learning conferences.

This isn't a new phenomenon. The same components shaped the Reagan administration's vision for a Strategic Computing Initiative meant to ensure American technological prowess in AI. The program was ultimately discarded with the realization that its success would require endlessly scaling computing power and data.

This resurrected vision of infinite scale no matter the cost now drives AI figureheads like Sam Altman to lobby for public investment in chipmaking and the ruthless expansion of power generation for data centers. If the unregulated surveillance business model of the last decade and a half generated the data, compute, and capital assets to secure Big Tech's dominant posture, this next phase will require doubling down on these infrastructural advantages. In this view, the ChatGPT moment is not so much a clear break in the history of AI as a reinforcement of the corporate imperatives of the early aughts.

Things might have taken another direction. After all, as Morozov suggests, the term "artificial intelligence" has meant many different things over its seventy-year history. There are still other models he doesn't mention that resonate with his argument. Feminist AI scholars like Alison Adam once held up situated robotics as an alternative paradigm, interpreting intelligence as emerging not from rule-bound and bureaucratic expert models but out of experience embodied through contact with the outside world. And corporate AI labs once incubated the careers of researchers with a much more radical politics. Lucy Suchman is one of them: emerging from Xerox PARC, she helped to found the field of human-computer interaction, devoted to understanding the contingency of how humans interact with machines in a messy world. (Suchman was also one of the founders of Computer Professionals for Social Responsibility, a group that organized in opposition to the Strategic Computing Initiative and the use of AI in warfare.)

More recently, critical scholarship and worker-led organizing that sought to redefine the trajectory of AI development had its fleeting moment within the Big Tech labs too, from Google to Microsoft. This was the current that produced the research institute we lead, AI Now, and others like the Distributed AI Research Institute, founded by Timnit Gebru. But Big Tech's tolerance for internal pushback swiftly faded as tech firms pursued rapid development and deployment of AI in the name of efficiency, surveillance, and militarism. With vanishingly few exceptions, worker-led organizing and the publication of critical papers are quickly quelled the moment they become threatening to corporate profit margins, hollowing out the already limited diversity of these spaces. In place of this more critical current, AI firms have adopted a helicopter approach to development, creating AI-sized versions of entrenched problems they could offer ready solutions for: iPad apps for kindergartners to make up for teacher shortages, medical chatbots to replace nurses.

It was in this context that the mission to "democratize AI" emerged, and it has now permeated efforts around AI regulation as well as public investment proposals. These initiatives often call for communities directly impacted by AI - teachers impacted by ed tech, nurses contending with faulty clinical prediction tools, tenants denied affordable housing by rent screening systems - to have a seat at the table in discussions around harm reduction. In other cases they focus on ensuring that a more diverse range of actors have access to computing resources to build AI outside of market imperatives. These efforts are motivated by the sense that if only the right people were in the conversation, or were given some small resources, we'd have meaningful alternatives - perhaps something approaching what Morozov calls AI's "missing Other."

The idea of "involving those most affected" certainly sounds good, but in practice it is often an empty signifier. The invitation to a seat at the table is meaningless in the context of the intensely concentrated power of tech firms. The vast distance between a seat at the table and a meaningful voice in shaping whether and how AI is used is especially stark in regulatory debates on AI. Mandates for auditing AI systems, for example, have often treated impacted communities as little more than token voices whose input can be cited as evidence of public legitimacy - a phenomenon Ellen Goodman and Julia Tréhu call "AI audit washing." The effect is to allow industry to continue business as usual, doing nothing to transform the structural injustice or fix the broken incentives powering the AI-as-a-solution-in-search-of-a-problem dynamic.

This tension also plays out in U.S. debates around government-led R&D investment in AI, which lawmakers rightly lament still pales in comparison to the billions of dollars spent by the tech industry. As historians of industrial policy attest, governments have historically driven R&D spending with longer-term horizons and the potential for transformative public benefit, whereas industry is narrowly focused on commercialization. But thanks to its agenda-setting power and widely adopted benchmarks, the tech industry now defines what counts as an advance in basic research. The effect is to blur the line between scientific work and commercialization and to tilt efforts toward superintelligence and AGI in order to justify unprecedented amounts of capital investment. As a result, many current "public AI" initiatives ostensibly driven by the premise of AI innovation either lean heavily into defense-focused agendas - like visions for a "Manhattan Project for AI" - or propose programs that tinker at the edges of industry development. Such efforts only help the tech giants, propelling us into a future focused on ever-growing scale rather than expanding the horizon of possibility.

Morozov rightly rejects this path. But achieving his vision of a "public, solidarity-based, and socialized" future requires going further than he suggests. Rather than starting from the presumption of broadly shared faith in "technology's emancipatory potential," this effort must emanate from the visions of AI's missing others - the critical currents and alternative visions that Silicon Valley has firmly excluded.

Sarah Myers West is Co-Executive Director of the AI Now Institute and serves on the OECD's Expert Group on AI Futures.

Amba Kak is Co-Executive Director of the AI Now Institute and a former Senior Advisor on AI at the Federal Trade Commission.

• • •

WHOSE VALUES?

Wendy Liu: Boosters peddle the illusion of objectivity to avoid messy politics.

I confess I've become weary of reading about AI. I am tired of the self-serving mythologizing of its proponents. I am also tired of thinking about its horrific environmental impact, its potential for automating away human labor, the unpleasant working conditions involved in generating training data - and on and on. I get it, and I am tired of it. Sometimes I just want to think about something else.

But Morozov has given us an argument worth paying attention to. "'Democratic AI' requires actual democracy," he concludes. What's needed, as ever, is politics, not merely coming up with the right parameters in some AI model while the real world crumbles around us.

Which - no offense to Morozov - may seem like a fairly obvious point. But it's a point lost on the techno-optimist crowd, with their glassy-eyed, almost religious belief in the power of AI. See, for instance, venture capitalist Marc Andreessen's recent blog post, "Why AI Will Save the World," which asserts that "anything that people do with their natural intelligence today can be done much better with AI," and therefore AI could be a way to "make everything we care about better." Or, former Google CEO Eric Schmidt's conviction that we should go full speed ahead on building AI data centers because "we're not going to hit the climate goals anyway" and he'd "rather bet on AI solving the problem."

This would be all well and good if AI were actually developing along those lines. But is it? The current hype cycle is fueled by generative AI, a broad category that includes large language models, image generation, and text-to-speech. But AI boosters seem to be appealing to a more abstract meaning of the term that has a little more fairy dust sprinkled over it. According to Andreessen - whose firm was an early investor in OpenAI - we could use AI to build tutors, coaches, mentors, and therapists that are "infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful." Could we? Would this be derived from the same AI that OpenAI (valued at $157 billion at the time of writing) currently sells to its largest enterprise customer, accounting and consulting firm PwC? Do we really believe that giving OpenAI's customers the ability to train chatbots on their internal data will help build what Andreessen hails as "the machine version of infinite love"? What is the process by which amorphous traits like patience and compassion will be instilled in these large language models? Or are we supposed to believe that such traits will arise automatically once enough Reddit threads and YouTube videos have been processed?

Maybe I'm too cynical; my Luddite sympathies are showing. These days, new technology tends to provoke in me more skepticism than excitement. One can almost always predict it will be used by some segment of capital to extract profit in a novel way, at the ultimate expense of workers who already don't have much left to give. I can't hold back a certain reverence for the technical achievements inherent in something like ChatGPT, but I'm troubled by the semantic burden the term "AI" is being asked to bear. The capabilities of the present moment - which, technically impressive as they may be, are still fairly prosaic and mundane - are being conflated with the AI zealots' unsubstantiated faith in an all-knowing, beneficent intelligence that will solve climate change for us, all to prop up the valuations in this trillion-dollar bubble. Too much is being asked of AI. Too much is needed from it. Whatever AI is capable of, the current messaging is distorted by the sheer amount of financial speculation involved.

The most frustrating thing about our current moment is that it didn't have to be this way. The thought experiment Morozov describes, envisioning an alternate path for the development of AI unshackled from its Cold War past, reminds us of the importance of values in the trajectory of any technology. AI is not just a matter of an objective intelligence pursuing objectively better aims. Whatever aims will be pursued will depend on the encoded values. These values will be a product of many things - the beliefs of the builders, the bias of the ingested data, the technical limitations - but will be particularly informed by the norms and structures under which the technology is developed. An AI-based sales bot trained to upsell customers isn't doing so because it's the "intelligent" thing to do, and certainly not because it is the right thing to do, but because the company wants to make more money and has encoded this value into the bot. Poor sales bot: born to be infinitely loving, destined to be infinitely slimy.

Of course, the very idea of values is something that AI proponents like Andreessen conveniently omit in their conversion sermons on the power of AI. They'd rather labor under the illusion of objective intelligence and objective good because adjudicating between competing values is annoying and messy - the domain of politics. Better to pretend that we "all" want the same things and that AI will merely help us "all" get there faster. Never mind that the values of someone making rock-bottom wages doing data cleanup for an AI company might be pretty different from those of a tech billionaire who owns a $177 million house in Malibu as well as significant stakes in numerous AI companies.

If the real challenge lies, as Morozov argues, in cultivating the right Weltanschauung, then I think the first step is to be suspicious of the ravings of power-hungry billionaires. As a start, we should try to reclaim the idea of AI from their clutches: if we unburden it of the hefty responsibility of "saving" us, it might actually become something moderately useful. After all, as Morozov writes, to realize the emancipatory potential of technology requires a "radical political project." So let's start with the idea of AI, and then see what else we can reclaim.

Wendy Liu is a writer and former software engineer. She is author of Abolish Silicon Valley: How To Liberate Technology From Capitalism.

• • •

THE PLOT AGAINST FINANCE

Edward Ongweso Jr.: Any public agenda will have to take on Wall Street.

Morozov's paean to "ecological reason" is a breath of fresh air, demonstrating how the Cold War perverted not only AI development but our capacity to imagine alternatives - technological forms untethered to markets and the military. He lays special emphasis on the way AI today captures the ethos of neoliberalism; I'd like to expand on the way financialization helps it do so. The funds being funneled into the generative AI boom reflect a particular array of interests and externalities, and behind it all looms a long-standing asset bubble underwriting the expansion of our global computational infrastructure.

In 2023 venture capitalists invested billions in startups chasing AI but were outspent two to one by just three tech firms: Microsoft, Alphabet, and Amazon. If this year is any indicator, that trend will hold steady: venture capitalists have raised tens of billions while Microsoft, Alphabet, Amazon, and Meta notched $106 billion in capital expenditures aimed at expanding their own AI infrastructure and capabilities. Early this year, OpenAI founder Sam Altman was pitching a $7 trillion plan to investors to build global infrastructure for semiconductor production, hyperscale data centers, and energy sources for both. Meanwhile, an increasingly lucrative alliance has emerged between Big Tech and fossil fuel companies; the former have been signing cloud computing and generative AI deals that maximize productivity (and emissions), while the latter build new coal and natural gas power plants to satisfy the exploding energy demands of new AI data centers. As these various actors - Big Tech firms, venture capitalists, egomaniacal founders, and the fossil fuel sector - egg each other on, ratcheting up investment to the trillions, asset managers like BlackRock and Blackstone are angling themselves to profit handsomely off the prospective deal pipeline.

Morozov is right that many of those who warn about a technology bubble tend to think it is going to pop any day soon. There's no reason to think so: we have been waiting for the other shoe to drop for well over a decade to no avail. For a brief moment in 2022, a series of deflations and demolitions in the tech sector - along with the end of quantitative easing - suggested the era of ballooning valuations was over, but the correction has proven illusory.

During an asset bubble, it is only a matter of time until moonshot projects, zombie firms, and business models sustained by misallocated capital finally see their demise. Can we afford to wait around? Given the success of firms like Uber, Lyft, and Airbnb in leveraging misallocated capital into political power that then reshapes markets and urban governance into forms that will sustain them once the capital gluts retreat, I would say no. Things are even more dire considering that the glut of capital is being used to build out infrastructure, goods, and services that are hastening the collapse of our ecological niche. Any plan that seeks to promote ecological reason will indeed need a radical political project. It will have to tackle Silicon Valley and its financiers, Wall Street, fossil fuel companies, and now the defense industry - all at once. How exactly we break the back of this unholy alliance is unclear, but we can tease out the shape of some things we need.

Venture capitalists, their funds, and their portfolio firms derive a great deal of their power from the inflation of startup valuations. High valuations let firms use investor funds as an anticompetitive weapon to assail markets and states; they also let venture capitalists orchestrate lucrative exits. Draining this vast pool will require a tax regime that disadvantages this type of capital ownership, either through aggressive taxation or by shifting the ability to value assets (such as an equity stake) to, say, the IRS. Biden's billionaire tax, which follows these contours, spurred a hysterical response and an outpouring of support for Trump from the sector that many found surprising. As both Morozov and Ben Tarnoff have observed, however, VCs have never been liberal stalwarts - they've been primarily concerned with preserving their ability to transform speculative gains into real wealth that then confers political and economic power. Weathering ridiculous responses to a policy proposal is one thing; cobbling together support that survives a Silicon Valley eager to flex its lobbying chops will be another.

There will also need to be some sort of public alternative to technological innovation driven by venture capital. Cornell law professor Saule T. Omarova, Biden's withdrawn nominee for Comptroller of the Currency, offers a blueprint for options that might be palatable enough for capitalists. Chief among them is a National Investment Authority (NIA) that provides public equity to public projects. Omarova's vision is to build out infrastructure for financing green energy projects that pay high wages, as well as ventures that insulate the country from supply-side shocks or otherwise fight inequality. The NIA would also function as a public asset manager that can negotiate or coordinate the provision of emergency credit, take equity positions in failing or bankrupt firms, and restructure them in alignment with a public development strategy.

This public-spirited agenda could go further still. The federal government's relaxation of the so-called "prudent man rule" in 1979 - allowing pensions to invest more heavily in VC funds - spurred the industry's growth from $100-200 million during the 1970s to $4 billion by the end of the 1980s. Reversing its ability to access pension funds should be another priority. With some resistance, the Securities and Exchange Commission has proposed rules aimed at tightening regulation of private funds by VCs, hedge funds, and private equity, but what else can we do to take advantage of this underbelly (and convince others to join us in doing so)?

A public investment option combined with an asset manager could also be used to experiment with market and non-market interventions. Aspirationally, we could seize assets like computational infrastructure and either spin them down or operate them publicly to drive private firms out of business. Meanwhile, intangible assets (datasets, algorithms, and so on) could be used to furnish an alternative research agenda that promotes ecological reason. The goal should not be to replicate the utilities model - markets and states dominated by public versions of Google and BlackRock - but to clear the land of obstacles that prevent us from pursuing genuine experiments.

Is there a way to advance such a project despite the capitalists and efficiency shills who will surely be aghast at the long-term prospect of diminished power? There just might be, starting with Morozov's rousing call to arms - a reminder that in technology criticism we have fallen for the trap of documenting each sin of a seemingly impervious Leviathan, when the point is to change the world!

Edward Ongweso Jr. writes on technology, finance, and labor and cohosts the podcast This Machine Kills.

• • •

LEARNING FROM THE LUDDITES

Brian Merchant: The key to an alternative is building a movement.

I've spent so many years among the Luddites - among oral histories, archived letters, and old newspaper articles about them - that there are scenes from their history that are so seared into my brain, it's almost as if I was there myself. I can conjure the battle at Rawfolds Mill, where more than a hundred clothworkers made their ambitious and ill-fated assault on a hated factory owner who used automation not to ease their burden but to undercut their wages and employ child labor. The clothworkers, under the banner of Ned Ludd, were crushed; they left trails of blood in the mud as they fled into the forest.

It's the quieter moments that are more indelible, however, and more likely to come to mind as I'm reading news of AI startups and artist strikes. Take the young Luddite leader George Mellor, remarking to his cousin on an evening walk that he'd seen how bosses used automating machinery and found "the tendency's all one way" - to concentrate more and more wealth and power at the expense of workers. Mellor was, and continues to be, right about that, and he made this observation in 1811, before industrial capitalism was fully forged.

Or take the debate, between Mellor and an apprentice saddlemaker, John Booth, about how best to address the rise of the industrial entrepreneur, mechanization, and the factory-owning class. In an account first recorded by the historian Frank Peel, one that is surely embellished, Booth argues that the Luddites are right to resist the factory owners, but that they should embrace the technology - they should rebel for reform, not for refusal. "I quite agree with you... respecting the harm you suffer from machinery," Booth said, according to Peel. "But it might be man's chief blessing instead of his curse if society were differently constituted... To say that a machine that can do this for you is in itself an evil is manifestly absurd. Under proper conditions it would be to you an almost unmixed blessing."

"If, if, if!" Mellor interrupted. "What's the use of such sermons as thine to starving men?... If men would only do as thou says, it would be better, we all know. But they won't. It's all for themselves with the masters."

It is hard, even futile, Mellor argues, to imagine a world where an advanced technology is put to the common good when it is erasing livelihoods right now. If dismantling the machinery ends an injustice and tips the scales toward equality, that should be the project. This dispute, too, remains as relevant and pointed as ever, two centuries later. It lies, I believe, at the heart of the matter of actualizing the laudable prescriptions put forward in Morozov's essay: reimagining how we might develop and institute technologies like AI more democratically, more holistically, more attuned to amplifying humanistic and scientific pursuits - and decoupled from its current death drive to profit management at any cost.

I find Morozov's vision of an eolithic mode of technological development - one in which we are free to experiment with and develop technologies not to advance the aims of a military or administrative state, not to realize corporate efficiencies, perhaps not even toward any design at all - to be a beautiful, even moving, one. It says much about how severely Silicon Valley capitalism has narrowed our imagination that so relatively simple an idea can feel so utopian. I'm also in staunch agreement that we are in dire need of such reimaginings, and a concerted effort to make room for them.

The question remains how to get there from here, which brings us back to Mellor and Booth's argument. The path will include a radical political project, as Morozov notes, and yes, meaningfully democratizing AI will require true small-"d" democracy. But a project of what character? Of radical resistance, or of political reform? Of revolution or abolition? The stylized opposition between "realists" and "refuseniks" may prove far less rigid in practice.

Again the Luddites might offer some wisdom. Generative AI presents a host of threats to working people. It promises to increase surveillance, exacerbate discrimination, and erode wages. It threatens to concentrate power among the Silicon Valley corporations who own and operate the large language models, and their clients among what Morozov aptly termed the Efficiency Lobby. These companies are accumulating investors and market cap, while hurtling toward IPOs that stand to richly reward stakeholders, regardless of whether the enterprise AI software delivers as promised or not. The tendency's all one way indeed.

And once again, the workers most immediately - and perhaps most existentially - threatened by generative AI companies are skilled craft workers. Visual artists, writers, musicians, voice actors, and illustrators; journalists, copywriters, graphic designers, and programmers. Many of these workers have joined a campaign of refusal, of modern Luddism: class action lawsuits to try to stop the AI companies from profiting off of the wholesale, nonconsensual appropriation of creatives' labor; efforts in organized labor to stop studios and corporations from using AI to generate scripts or animated productions; consumer campaigns to declare goods and services as AI-free. A still-underappreciated truism of our technological moment is that there is great solidarity to be found in refusing a technology - AI, mostly - that is used to exploit or replace a worker.

The striking Writers Guild of America (WGA) screenwriters, whom I spoke with on picket lines and at rallies, were surprised to see their cause become so celebrated in 2023, when they drew a red line at allowing studios to use AI to generate scripts. They ultimately won the right to use AI as they saw fit in their own creative process, ensuring that, at least for the three years the contract holds, if they use AI at all, it will benefit them and not their bosses. Here we might see the seeds of a potentially radical project to move control of how a technology is used into the worker's own labor process - born of a refusal to allow management to claim that right for itself.

The WGA is a uniquely powerful union in the creative industries, of course; many other jobs, including illustration and copywriting, are often freelance and more precarious. Any movement will have to work to encompass these workers too, as well as the numerous data cleaners, quality assurance testers, and content moderators on whose labor - carried out in stressful conditions for abominable pay - the whole system depends. And I think we must recognize that democratizing technology means offering access to a kill switch - and that generative AI, in its current formation, may well be deemed too wasteful, too undermining of the creative trades, too polluting of the information ecosystem, and too toxic to stand. Booth ultimately joined Mellor's Luddites, after all.

Like him, however, I see that many of AI's ills stem from those who control and stand to profit from it. A democratic movement might equally well cut off the plagiarism and slop production and redirect this technology toward predicting new proteins and writing custom language apps. The key to achieving any alternative routes, eolithic or otherwise, will lie in the scaffolding - in locating productive ways to harness the energies and solidarities of refusal into a broader project of reclamation, of reimagination, of renewal.

Brian Merchant is a journalist, critic, and the former tech columnist for the Los Angeles Times. His latest book is Blood In The Machine: The Origins Of The Rebellion Against Big Tech.

• • •

CULTIVATING MEANING

Evgeny Morozov responds.

I'm grateful for these thoughtful responses, many of which grapple with the central question of how to bring "AI's missing Other" into existence. Before engaging with their specific proposals, however, I need to clarify what this Other actually represents.

Bruce Schneier and Nathan Sanders defend the importance of problem solving (supposedly against my downplaying of it), while Terry Winograd characterizes my position as advocating for "playfulness... with no problems to solve, no goals to pursue." But these moves fundamentally misunderstand the relationship between instrumental and ecological reason. The eolithic flâneur doesn't set out on an intentional quest to find stones but nevertheless does operate within a framework of long-term projects, ends, and problems to solve. As Storm himself notes, "the stones were picked up... in a form already tolerably well adapted to the end in view and... strongly suggestive of the end in view."

These ends emerge from culture, history, and society, but their exact form depends on how each of us interprets (and reinterprets) them. This is one place where humans differ fundamentally from computers: our different constellations of meaning lead to radically different interpretations of the same object. Hence my argument about the futility of having a computer take a Rorschach test: the exercise is meaningful only in light of human-like life projects - with all their associated anxieties, aspirations, and frustrations - which shape how we make sense of the images.

Far from ignoring questions of care and concern, as Winograd suggests, my conception of intelligence places them at its center. While I agree these aren't themselves forms of intelligence, they are inseparable from how we respond to what I would call the prompts to care - whether moral, political, or aesthetic - that the world presents to us.

This understanding helps clarify the missing Other. Contrary to Winograd's reading, I'm not advocating for more playful AI systems like Gordon Pask's Musicolour machine. Instead, I envision an alternative non-AI project that would deploy some of the technologies currently used in AI - together with other social and cultural resources - to foster ecological reason. The goal would be to make more things meaningful to more people by enabling us to cultivate the interests and skills that transform noise or boredom into meaning and care.

Cold War AI was a massive military Keynesian project to entrench instrumental reason - increasingly embedded in technological systems - in all social domains. Today's counterpart, by contrast, would leverage technology (but not only technology) to promote moral reasoning, political imagination, and aesthetic appreciation in humans.

Play can certainly help. As Brian Eno writes, "the magic of play is seeing the commonplace transforming into the meaningful." This underscores my closing remarks about developing the right Weltanschauung: the point is not about following the rituals of play (which is what we do when we play soccer or chess) but about ceasing to doubt that another world is, in fact, possible. A good place to start is by realizing that the same ingredients and starting conditions could yield very different results; a mere stone can be so much more.

It's in that spirit that I'd defend my use of historical hypotheticals. While they don't provide a roadmap for action, they serve to crack open our imagination - something especially crucial given Silicon Valley's chokehold on how we envision the future, as many responses make clear.

The alternatives we imagine needn't be limited to structural reforms of existing technologies, important as they are - whether revamping funding mechanisms (as Edward Ongweso Jr. argues), empowering workers (as Brian Merchant and Wendy Liu suggest), or building more transparent infrastructure (as Schneier and Sanders advocate). More fundamentally, we need to reimagine what we're trying to accomplish when we deploy technology to enhance intelligence in the first place. Rather than endlessly qualifying AI with adjectives - "democratic," "playful," "socialist," and so on - perhaps we should return to first principles and ask whether the relationship between technology and intelligence can be conceptualized entirely outside the framework we inherited from the Cold War's Efficiency Lobby.

I'll be the first to acknowledge the difficulty. Thus, while I share Sarah Myers West and Amba Kak's concerns about techno-optimism, they mischaracterize my argument as riding on a renewal of faith in technology's emancipatory potential. As Winograd correctly notes, I invoked the Latin American examples from the early 1970s precisely to demonstrate the opposite point: merely changing how we think about technology - having an "aha" moment about its alternative possibilities - isn't enough. Without embedding these insights - this Weltanschauung - within a radical political project, our recognition of technology's potential remains just that: potential, unrealized and unrealizable.

Winograd is right that the crucial question - what such a project might look like today - is challenging. Many respondents offer their own answers. I believe its basic contours would mirror those of the Latin American initiatives of the 1970s, which were deeply informed by dependency theory. The starting point would be recognizing that contemporary technological development - despite its problem-solving prowess - remains fundamentally capitalist in nature and thus ultimately stands in opposition to human flourishing and ecological survival. What's needed is a national - and, in some cases, regional - project to imagine and implement noncapitalist developmental paths, not just for technology but for society as a whole. Of course, such an agenda would take dramatically different forms in different contexts - what works for the United States would differ markedly from what might succeed in Guatemala, Thailand, or Kenya. And what to do about the United States, the entrenched hegemon of the global economy, is no easy question either.

Despite her valuable discussion of developments outside North America and Europe, Audrey Tang overlooks this crucial question of noncapitalist development alternatives. While one can debate the precise influence of cybernetics on figures like Edwards Deming, we shouldn't forget the extensive critiques - by both Japanese and other thinkers - of Toyotism and the lean production methods that drove Japan's economic miracle. To celebrate these systems merely because they incorporated some worker participation and used concepts like feedback is to miss their deeply political and ideological nature. After all, they strove after higher productivity in (still) highly hierarchical and mostly authoritarian capitalist workplaces. This approach exemplifies precisely the kind of technocratic thinking, divorced from considerations of alternative paths, that I mean to challenge.

Similar criticisms apply to projects like India Stack. While Tang presents the example as a triumph of local innovation, it represents just one developmental model - one that primarily serves India's domestic capitalist class in its effort to avoid paying tribute to Silicon Valley. Without carefully examining how capitalism, in both its global and national forms, co-opts elements of tradition and social fabric that facilitate accumulation, we risk celebrating surface-level diversity while missing its ultimately homogenizing effects. While time will tell whether India Stack enhances or inhibits ecological reason, I remain deeply skeptical.

The promise of technological alternatives lies not in replacing Silicon Valley's digital imperialism with local variants but in reconceptualizing technology's role outside the logic of capital accumulation. This demands more than technical innovation or local control; it requires a radical political vision that can distinguish genuine social transformation from rebranded capitalist development. Our task is not to make AI more democratic or digital infrastructure more nationally flavored, but to build technological futures that break free from the very framework that keeps preaching "there's no alternative."

Evgeny Morozov writes on technology and politics. He is author of The Net Delusion and To Save Everything, Click Here: The Folly Of Technological Solutionism. His latest podcast is A Sense of Rebellion.

