ARTIFICIAL INTELLIGENCE is ubiquitous: as I drive through the streets of Los Angeles to campus each day, driverless Waymo cars are becoming an increasingly common sight. The headlines I read before heading to UCLA are filled with stories about the promise of AI. And social media platforms seem to be increasingly dominated by AI-generated images and videos.
The work I do has also led me to have AI on my mind a lot lately. I have been deeply engaged in the field of Chinese studies for nearly 30 years, not just as a professor and an author but also as a translator of fiction. While the United States came out of the gate as the clear leader in the new global "AI race," the recent appearance of the Chinese AI start-up DeepSeek has generated much attention, along with a mix of excitement and anxiety. For literary translators, the AI revolution has been a source of much soul-searching and introspection. A former PhD student of mine, who is now a professor of translation studies at a major research university in China, recently confided that her department is "imploding" thanks to an exodus of majors in the wake of new AI programs and platforms. Who wants to hire a translation studies major to do work that can be done more quickly, more cheaply, and in some cases more accurately by an AI-aided machine? I, too, question whether I would want to continue devoting months of my time to meticulously translating texts that machines are getting better and better at rendering into English. Where, I wonder, is the tipping point in this digital version of John Henry's battle with steam-powered drills?
Over the past month, two AI nightmares have troubled me most. One has to do with the reception of a recent translation project, while the other relates to my teaching. Each provides a different boots-on-the-ground perspective on what it's like to be doing certain kinds of intellectual labor during the dawn of the AI era.
It was in 2020, at the height of the COVID-19 pandemic, that I undertook one of the most difficult translation projects of my career, tackling all three novels in the Hospital trilogy by acclaimed Chinese science fiction writer Han Song. Part of the decision to take this on was the way the series seemed to anticipate the distressing challenges of a world defined more and more by fast-spreading and dangerous diseases (the trilogy's books were originally published between 2016 and 2018, before the COVID pandemic), and the way it painted a dark portrait of an AI world gone awry. I was also, though, consciously looking for a book so strange, twisted, and wild that neither Google Translate nor any other AI translation platform could possibly handle it. I wanted to translate something only a human would be capable of reworking in a new language.
The trilogy -- composed of the novels Hospital, Exorcism, and Dead Souls -- is hard to characterize. If you imagine Davids Lynch and Cronenberg co-directing an adaptation of Kafka set in a Chinese politburo meeting, you might start to get the picture. Arguably more experimental than standard genre fiction, the trilogy describes a dark world where everyone is a patient caught in a cycle of endless suffering. Meanwhile, the entire city where these patients live -- the entire world, in fact -- is a massive hospital. Pulling the strings is an AI algorithm called "Siming, the Director of Destinies." As the trilogy unfolds, Siming goes awry, spinning out of control, driving the hospital into greater chaos, and eventually trying to kill itself.
At the start of 2025, the final English translation of the trilogy, Dead Souls, was published by Amazon Crossing. The inherent "weirdness" of the series seemed to scare away a lot of traditional reviewers, though the first two parts did get nice attention in this publication. I did discover, however, a podcast uploaded on January 17 by the channel Sam & Pam Learn About Books. (A few days later, it was renamed SAM&PAM's Book Radar.) Pleasantly surprised that the two hosts had taken the time to read this difficult novel so soon after publication, I found my curiosity piqued. I hit play.
"Buckle up, everybody, because this deep dive is gonna be a wild one" -- so began Sam and Pam's nearly 16-minute review. It took me only a minute or so to realize something was off. Are they reading from a script? I asked myself. Their delivery sounded so clean, so smooth, so professional. Although there were all kinds of imperfections in their speech, such as filler words, moments of hesitation, and examples of the two hosts cutting each other off, there was also something sterile about their cadence. There was absolutely no background noise or what some refer to as "room tone." Around two or three minutes in, my sense of an uncanny listening experience grew, as I realized they were already repeating many of the same keywords about the book, like "unsettling," "twisted," and "disturbing." Ultimately, Sam and Pam were saying a lot without really saying anything. Is this AI? I wondered. I took a closer look at the podcast description (which has since been reworded): "Discover your next literary obsession [...] where artificial intelligence meets the art of storytelling. Our AI hosts deliver objective, personalized book recommendations while cutting through the noise of traditional reviews."
The trilogy's translation was the culmination of five years of work -- during which I painstakingly labored through more than 1,300 pages of difficult text, all the while tweaking language, playing with different word choices, experimenting with different styles -- and the first podcast review discussing the final volume was a dialogue between two AI-generated robots who (that?) couldn't even get the sequence of the trilogy correct (they described Dead Souls as volume two of the trilogy).
I felt sick to my stomach. What does it mean when essentially sophisticated robots are the ones discussing issues like "the risk of AI" and "the nature of humanity"? Toward the end of their review, "Pam" says that reading Dead Souls is "an experience that will stay with you [...] It's a book that makes you think, feel, and question everything you thought you knew about the world and your place in it." Can "Pam" think? Can she and "Sam" feel? And what is their "place in the world"? What would it mean if the only reviews of a novel that set out to warn us of a maniacal AI algorithm gone haywire ended up being delivered by machines?
Meanwhile, coming off nearly a year away from teaching thanks to a Guggenheim Fellowship that bought me the release time that I needed to complete Dead Souls, I found myself back teaching my big 250-student lecture course for the first time in nearly two years. Unfortunately, the robots were also there waiting for me. While I had anticipated that the "AI threat" would force me to adjust grading rubrics and tweak assignments, I wasn't prepared for the nature of the assault.
Before the quarter began, I developed a new set of assignments aimed at getting around AI, forcing students to "think outside the box" and hone their creativity. For example, I assigned contemporary Chinese artist Xu Bing's Book from the Ground (2003-ongoing), a short novel written entirely with emojis, and asked students to submit a creative response to the book in essay form, also using exclusively emojis. I wrote the guidelines up and put them in the syllabus, but then, two days before our first class, one of my teaching assistants pulled me aside: "Professor Berry, bad news ... I put the prompt for the emoji essay into ChatGPT -- it can now do it!"
That turned out to be a portent of what was to come. As the class progressed, my TAs and I were assaulted by a flood of AI-generated reading response essays and assignments. One student who complained about her grade of "A-" -- she thought she deserved an "A" -- asked me to take a look at the essay her TA had graded. It took only 15 seconds to realize: This was written by AI. (I later verified this by running it through numerous online AI-detection platforms.) Gradually, I also realized that the appearance of AI was inverting the relationship of labor in education. Students were taking all of 15 seconds to type in a prompt and hand in a robot-generated assignment while my TAs and I were carefully reading through their work, offering comments and suggestions in good faith. And then, when we suspected foul play, we had to switch gears, put on our "detective" hats, and spend even more time trying to find evidence of academic dishonesty. Even the handful of cases we caught felt like a veritable tsunami, but I know it was only the tip of the iceberg. According to a recent report, 94 percent of AI-generated college writing goes undetected.
Then came a final slap in the face. Thanks to all those searches for AI-detection software I was running, my social media pages suddenly became filled with advertisements for various AI platforms specially designed to help college students game the system by "writing" papers that "bypass AI detectors like Turnitin and GPTZero." Again, I felt sick to my stomach.
Once upon a time, I thought the promise of AI might be to improve our lives, to harness new computing power to make us smarter and more efficient. Of course, AI is evolving more quickly than any of us can comprehend, but at least as of this moment, it feels like AI is taking us backward. Students cannot resist the temptation of AI. It is too accessible, too easy. More and more of our students are not reading, and they are not writing. Instead, they submit AI-written essays that are often predictable, repetitive, and laden with "hallucinations" and fake sources; and we, as educators, are in turn increasingly relegated to the role of AI police. Talk about "unsettling," "twisted," and "disturbing." The "convenience" and "shortcuts" that AI brings come at too hefty a price, I am becoming convinced. Students are increasingly unwilling (or unable?) to invest the time needed to read long, complex narratives and compose their own essays that reflect, question, and explore the impact new ideas are having upon them. And as we outsource our thinking to the machines, we inevitably find ourselves all the more impoverished. What if, in our barter with AI, "the pound of flesh" we offer up turns out to be our brain?
So as tech moguls, journalists, and investors rave about the promise of AI, as we are inundated by a constant stream of advertisements that champion everything from AI-powered writing tools to AI medical diagnostics, and as our social media feeds become increasingly populated with AI-generated images and videos, I cannot help but lament the irreparable damage AI is unleashing on education. It seems we now have AI "dead souls" commenting on the books we publish and submitting papers for grades.
AI may indeed be hailed by many as "the future." At least for the moment, though, it appears to be taking us backward. Next time I teach, I'm going back to blue books and oral exams.