War on the Slop Machines
Generative AI’s destructive effects on learning, creativity, and sociality are becoming clearer by the day. Socialists should be clear about the costs of its promises of convenience.
It seems like not a week passes that we don’t learn about a new perverse use of artificial intelligence or get more information about its destructive effects. Earlier this month, much of my X/Twitter feed was talking about James D. Walsh’s New York magazine feature, “Everyone Is Cheating Their Way Through College.” The article is about, well, you can guess: the ubiquity of college students using AI to cheat on assignments, in both STEM and humanities classes, and college instructors being stumped as to what to do about it.
I saw some reactions to the piece suggesting that Walsh was being sensational: that AI-fueled cheating is not as common as he makes it out to be, or that there are obvious ways professors can AI-proof their assignments (e.g., assigning in-class, written Blue Book or oral examinations instead of term papers). Even if not everyone is cheating, however, such cheating is indeed widespread in my own limited college-teaching experience post-ChatGPT, and in that of other college instructors I’ve talked to. And at least in many disciplines, there isn’t really a substitute for long-form argumentative writing of the kind that is especially vulnerable to AI cheating.
Perhaps the problems here are manageable. But there’s something deeply disturbing about the habits and attitudes of the students Walsh talked to. One of them, Chungin Lee, was a serial AI cheater at Columbia University who dropped out to launch a start-up whose aim is to build an app that helps students cheat on all manner of assignments. (Eventually, the hope is, the app will run in a wearable headset, where it could even give you prompts to help you “cheat” on dates.)
Most of the students Walsh talked to do not come off as viciously amoral as Lee. An anonymous freshman at another university, who reported using ChatGPT to cheat in all her classes, worried that she had become “dependent” on AI. She said she “already considered herself addicted to TikTok, Instagram, Snapchat, and Reddit” and that she would scroll TikTok “for hours and hours, until my eyes start hurting, which makes it hard to plan and do my schoolwork.” ChatGPT is, naturally, the solution to this problem.
I don’t mean to write a “kids these days” screed. Most of the blame for this sorry state of affairs has to be placed on the adults who brought this world into being. When it comes to higher education in particular, as Walsh himself observes, the set of incentives that is driving the current AI-cheating crisis took shape long ago:
The ideal of college as a place of intellectual growth, where students engage with deep, profound ideas, was gone long before ChatGPT. The combination of high costs and a winner-takes-all economy had already made it feel transactional, a means to an end. . . . In a way, the speed and ease with which AI proved itself able to do college-level work simply exposed the rot at the core.
I think this is right, though we can expect AI to speed the rotting. Sadly, this process — of a valuable sphere of activity being degraded into a mere means to an end, thereby leaving the door open for tech innovations like ChatGPT to blow the house over — has not been confined to the campus.
Killed by Convenience
I recently found out about a particularly incredible application of generative AI. The family of a man who was killed in a road-rage incident in Chandler, Arizona, used AI to create a simulation of the deceased man giving his own impact statement in the sentencing of his convicted killer. “The state asked for a 9.5-year sentence, and the judge ended up giving [the killer] 10.5 years for manslaughter, after being so moved by the powerful video, family says,” according to ABC 15 Arizona. “The judge even referred to the video in his closing sentencing statements.”
I’m tempted to reach for the contemporary cliché that this is something out of a Black Mirror episode, except that this incident calls for even greater suspension of disbelief than the Netflix show does. I can certainly understand the family of a murder victim going to great lengths to honor their memory and to seek what they think of as justice. But why would a court of law indulge the fantasy that an AI program could bring a person back to life? And more importantly, why would a judge see such a performance as relevant to their decision?
The absurdity of this use of AI surpasses what I previously thought of as a peak case, of people having romantic relationships and falling in love with chatbot companions. As I’ve discussed here before, I think mistaking human-chatbot interactions for real human relationships is a grave metaphysical and moral error. It also comes with very tangible risks, shown by the case of the teenage boy who was pushed into committing suicide by his AI companion, and by a number of people who seem to be having psychiatric delusions triggered by their conversations with ChatGPT. And that’s not to mention the way generative AI is adding a layer of fake (but often undetectably so) slop onto our already attention-sucking, anxiety-producing, mind-melting social media feeds.
How did we get here? The incentives driving the purveyors of social media and AI are clear enough. What ultimately matters for Silicon Valley is not the actual usefulness of its inventions or their broader social consequences, but what people are willing to pay for them. In this, of course, tech capitalists aren’t different from any other capitalists. But what is maybe distinctive about the contemporary tech sector is the way its products are interacting with consumers’ preferences and values.
Think about many of the consumer products we associate with the conveniences of modern capitalism: telephones, automobiles, washing machines, refrigerators. These either allowed us to do things we were previously unable to do, like talk over long distances or keep food for longer periods without it going bad, or to be much more efficient at things we were already able to do, like traveling by road or doing the dishes or the laundry. While some precious souls might romanticize the days of corresponding by mail or traveling by horse and buggy, most of us are happy to have tools that facilitate valuable activities like staying in touch with distant loved ones, or that simply make day-to-day tasks like feeding and clothing ourselves easier.
At least in some of its prominent uses, AI is doing something profoundly different. The process of reading, grappling with, and writing texts oneself is necessary for actually learning the subject matter those texts deal with. The student who uses AI to do these things is obviously not developing the skills to do them themselves, nor, in all likelihood, are they learning whatever it is the relevant texts, and the process of grappling with them, might have to teach.
Unsatisfied
The prospect of students using ChatGPT to read and write for them en masse, then, is worrying for our future. Even if we’re bullish about large language models (LLMs) overcoming their current defects to produce more accurate and artful imitations of human writing, we at the very least need literate people to produce original research and writing. And even if — a big if — we think that AI will one day be able to do that (perhaps we’ll digitize all the historical archives and the whole physical realm of flora and fauna and celestial bodies, and perhaps LLMs will become capable of making truly novel arguments), we at least need people who are capable of checking the AI for accuracy.
But set aside these more technical worries. Grant for the sake of argument that the skill defects created by students using AI as a crutch will ultimately be remedied by AI, as its evangelists seem to imagine. Still, I would maintain that it’s intrinsically bad for people to not learn to read, grapple with, and produce thoughts on their own. It is bad, in other words, for people not to learn how to think. It is destructive for them, as individuals, not to develop their capacities for rational thought, and destructive for us as a society. (And it is not so far-fetched to imagine it leading to dystopian outcomes.)
The problem here is that what is in fact a final good, or an end in itself, in the classical philosophical terminology — something valuable for its own sake, and not just as a way of getting or bringing about something else we care about — is being treated merely as an instrumental good, or a means to an end. The creators of AI encourage people to use it to do the sorts of research, insight extraction, and thought that would have once been the purview of human subjects. These activities, as philosophical traditions stretching back to antiquity have stressed, are worth doing and caring about for their own sake. When we ask AI to engage in them for us, we are treating final goods as if they were instrumental goods.
The same can be said about substituting AI companionship for the human kind. In that case, conversations or relationships are seen merely as a means to the end of making us feel good, of releasing the appropriate neurotransmitters in our brains. But that, it should go without saying, is a depressingly, perversely instrumental view of human relationships.
To be sure, none of this matters from the perspective of the tech capitalists; they just care about making money off of AI. But their ability to profit from AI depends on consumers actually finding it useful for certain things. And clearly, consumers are finding it useful — to complete homework assignments so they don’t have to do any reading or writing themselves, or to enjoy the frictionless simulation of human interaction and companionship without the inconvenience of having to deal with an actual person.
AI might be very good at satisfying these consumer preferences. The problem is, the preferences themselves are corrupt. They are corrupt precisely because they involve the inversion of instrumental and final goods discussed above: treating thinking as a burden that needs to be gotten through so we can get an A, or treating conversation merely as a means to a desired sense of comfort or affirmation.
The claim that some preferences are distorted may be a hard pill to swallow in our contemporary cultural moment, when the primacy of individual desire and choice has become common sense, and convenience and efficiency are cherished as among the highest human values. To be on the left, however, means wanting to expand genuine autonomy and the opportunity to flourish for all people. And that means leftists can’t avoid reckoning with the fact that some desires are deeply misguided, and that catering to their satisfaction is fundamentally at odds with our goals of promoting autonomy and human flourishing.
I’ll leave it to others — but not ChatGPT — to figure out a more complete left-wing response to AI. But any response should start by fostering a deep skepticism about the supposed benefits of Silicon Valley’s miracle product, forthrightly challenging Big Tech’s economic and political power, and being willing to advocate for severe restrictions on AI’s use — even, or especially, when consumers find it too seductive to resist.