By Odin Rasco
It takes only a look around to see that AI technology is in the midst of a meteoric rise. Its proliferation was most recently highlighted during the Super Bowl, where nearly a quarter of the game’s ads (15 out of 66) involved AI in some way, according to a report by Adweek. With ChatGPT drawing more than 800 million weekly users in 2025, and with the platform helping design everything from toothpaste to specialty Coca-Cola products, Resourcera estimates that around 1 in 10 people alive today use AI.
Part of the tech’s appeal is its user-friendliness: anyone can give an AI model instructions in plain language, no coding required. Because AIs like ChatGPT and Anthropic’s Claude are large language models, trained on vast quantities of the written word, many users turn to them to generate essays, cover letters, summaries and other kinds of writing.
But for people who approach writing not as a task to get done but as a craft to hone, including journalists, authors, screenwriters and educators, the rapid spread of AI has raised serious concerns about how the new technology is affecting their fields.
“I feel that in a short period of time I’ve become very counter-cultural without meaning to, because I have a kind of like ‘kill it with fire’ attitude towards [AI],” author and educator Lisa Locascio Nighthawk remarked. “I didn’t consent to this, you know? And I guess, you know, we don’t get to consent to the cultural changes that impact us; but I don’t appreciate how it’s all happened in what feels like about two years.”
Tech-enabled thought theft?
Nighthawk is the author of the New York Times Editors’ Choice novel “Open Me,” as well as executive director of the Mendocino Coast Writers’ Conference and chair of the Antioch MFA in Creative Writing. Her own words, used without her consent, may now be a small part of how one AI model works: “Open Me” was among the more than seven million books Anthropic downloaded or digitized to support the training of its large language models.
Anthropic was taken to court over its unlicensed use of books in Bartz v. Anthropic, one of the first major lawsuits brought by authors against an AI company. On June 23, 2025, Judge William Alsup of the U.S. District Court for the Northern District of California issued a summary judgment finding Anthropic’s copying of the books it obtained from pirating websites “inherently, irredeemably infringing.” The court maintained, however, that training an LLM on legally obtained books constituted fair use.
The Authors Guild, the oldest professional organization for published writers in the United States, strongly disagreed with the fair use portion of the judgment, publishing a response stating, “It feels as though the court rushed to issue a decision without fully understanding the copyright law and legal issues or the potential harm.”
The class action suit was later settled out of court, with Anthropic agreeing to pay $1.5 billion to the rightsholders of 500,000 titles among the roughly 7 million books it used to train its models. Each title is expected to net about $3,000, to be split between author and publisher.
For Nighthawk, the potential payout from the settlement doesn’t address the feelings that come from knowing her work was used to train an AI without her permission.
“It’s impossible to put a dollar figure on that,” Nighthawk explained. “I mean, I worked on that novel for seven years. It’s galling, especially given how hard it is to do anything as a writer. You just feel really disempowered, basically.”
After friends and colleagues urged her to look into the Anthropic settlement, Sacramento-based author Naomi J. Williams discovered that her 2015 novel “Landfalls” was another book Anthropic may have used without permission to train its AI.
“We are working on starvation wages, right?” Williams noted. “When I found my book in that database, I said, ‘Woohoo, $3,000.’ I haven’t seen a dime of that settlement yet … Certainly I’m not counting on it to pay bills or anything, but I mean, there’s something a little bit creepy about knowing that the work of your own mind has been used to help train a machine.”
Williams, who has also published multiple short stories and essays, is co-director of CapLit, a reading series and literary organization founded in Sacramento in 2025. Although she’s had experiences with AI that have raised some red flags, Williams doesn’t see the technology as an apocalyptic issue for authors so far.
“I’m 62, and so I very much came of age even before the internet, right?” Williams mused. “So, I’ve seen these big changes come and everybody gets up in arms of ‘this is the death of literature’ or whatever, and I feel like I’ve seen enough changes come and go, and the much talked of death of books and literature has yet to occur. So, although I’ve been alarmed by some of the AI related stuff I’ve seen, I’m not squarely in the doom and gloom, ‘this is just horrible’ camp.”
Hallucinogenic feedback loops for journalism?
Writers’ ethical concerns about AI don’t stop at the unauthorized use of their work to train the models. Journalist Krys Shahin, a Sac State alum, recently took a deep dive into the potential ecological impacts of AI data centers, including air pollution, water use and energy use.
Shahin’s reporting began with an attempt to trace the source of the pervasive claim that a single AI prompt uses the equivalent of a glass of water. Though that estimate is disputed, the Environmental and Energy Study Institute found that large data centers can consume up to 5 million gallons of water a day.
“That equals roughly the daily demand for a community of 50,000 people,” Shahin wrote.
Although there are real ecological impacts to consider, AI is not without its positives, according to Shahin.
“I’m not anti-AI inherently, I think AI can be used for a lot of good things,” Shahin acknowledged. “I think it’s great that Cal Fire uses AI to help detect fires in really remote areas. I think it’s good that local municipal companies and organizations use it for road maintenance or just generic city health. I think these are all great things, but once it breaches into the creative aspects of things, the things that require human connection, that require human touch, it gets a little bit more finicky.”
One of the finicky cases, according to Shahin, is when journalism and AI overlap.
“My main concern is the lack of regulation with AI, especially in such an ethical field like journalism,” Shahin explained. “We, as humans, have to make a lot of decisions based on our own past experiences; what we’ve learned going through journalism school or just doing journalism and talking to people. Those chatbots can’t make the same decisions that I can. They can’t make any decisions, actually. They just feed you back what you want [them] to.”
‘Garbage people’ with ‘scam book’ objectives
Though the massive quantities of water needed to cool data centers are concern enough, self-published authors like Sacramento-based Wayne Campbell face another struggle: grabbing a buyer’s attention amid a constant flood of AI-generated titles on ebook storefronts.
“As somebody who’s in the self-published space, you see that all the time with the crowded field in Kindle Direct Publishing, where the market has been just completely flooded by garbage written by garbage people who are only seeking a profit,” Campbell remarked. “They’ll type a prompt into an LLM and have a complete novel or novella, push it out with a garbage cover and a garbage blurb priced at $0.99 and they’ll do that at volume. KDP is just completely awash with that stuff even with the protections that they say that they have.”
A 2024 NPR article backs up Campbell’s observations, with others in the self-published space finding the marketplace filled with AI-generated “scam books.”
As companies and consumers continue to use AI, more concerns are likely to crop up. Some, like Nighthawk, believe AI may be a bubble fit to burst. Yet even if that turns out to be the case, the technology itself, and the questions that come along with it, won’t be going anywhere.
“The thing is, it’s here,” Williams admitted. “We can no longer live in a world where it does not exist.”
This story was funded by the City of Sacramento’s Arts and Creative Economy Journalism Grant to Solving Sacramento. Following our journalism code of ethics and protocols, the city had no editorial influence over this story and no city official reviewed this story before it was published. Our partners include California Groundbreakers, CapRadio, Hmong Daily News, Russian America Media, Sacramento Business Journal, Sacramento News & Review and Sacramento Observer.

