Long gone are the days when hundreds of full-time editorial cartoonists were employed by major daily newspapers across the U.S.
The Herb Block Scholarship estimates the number of cartoonists working at papers nationwide has dropped from 120 to just 30 over the past 25 years. In 2023, three Pulitzer-winning cartoonists were
laid off by the McClatchy newspaper chain in just one day.
As print continues its decline, a new challenge has emerged for this workforce: the rise of AI image generators. The first major text-to-image models were released back in 2022 and quickly ignited debates about
copyright infringement and
labor displacement among illustration communities. That year, the Association of American Editorial Cartoonists (AAEC)
banned the use of AI-generated imagery in its membership applications and awards, citing the technology’s ability to deprive artists “of opportunities to make a living.”
In recent weeks, this debate reemerged after ChatGPT rolled out a
new image generation feature on March 25. In part, the update allowed users to easily generate images of public figures for the first time, which OpenAI said would more freely permit the use of ChatGPT for “
satire and political commentary.”
Cartoon stylings are now in the hands of every ChatGPT user, though, arguably, a caricature of a president without the wit and biting commentary of a cartoonist isn’t a political cartoon at all. Rather than ask whether AI can be used to mimic cartoons, I was curious whether any actual working cartoonists see value in these technologies, and if they do, how exactly they are using them.
To get a better sense, I spoke to
Joe Dworetzky at Bay City News and Pulitzer-winner
Mark Fiore, two established cartoonists who have been experimenting with generative AI. While many cartoonists have sworn off AI entirely, Dworetzky and Fiore are grappling with how these tools could assist in, not replace, their work.
Playing in the AI sandbox
Editorial cartooning is a second act for Joe Dworetzky, who practiced law for 35 years before pursuing journalism full time. In 2020, he landed a reporter role at
Bay City News, an Oakland-based regional journalism nonprofit, and folded his off-the-clock cartooning hobby into his coverage. For the past five years he’s published political and social cartoons under the column “
Bay City Sketchbook.” Eventually, he started incorporating some cartoons into his regular legal affairs stories.
“Cartoons can be good even if they’re not beautifully drawn,” said Dworetzky, explaining that since he came to cartooning later in life, his technical ability hasn’t always matched his ambitions. “I’ve always been very frustrated by my limited drawing skills.”
After the first major AI image generators, like OpenAI’s DALL-E and Adobe’s
Firefly, were released, Dworetzky jumped at the chance to test them out. He wondered if the technology could help close the gap between his own imagination and his skill. “The images just had some verve and style to them that was quite surprising to me,” said Dworetzky of those early experiments.
In early 2024, he discovered an image generator called
Leonardo AI. Leonardo allowed artists to upload 40 samples of their own work so the model could try to mimic their distinctive style. Dworetzky uploaded a series of celebrity author portraits he’d illustrated for a book project. To his surprise, he saw his own style reflected in Leonardo’s output. “The technical skill felt like it was at the level of mine,” he said.
Watercolor and ink drawing of author Kurt Vonnegut by Joe Dworetzky. Portrait of a generic “man” generated using Leonardo AI by Joe Dworetzky. (
Courtesy of Bay City News)
At the time,
Local News Matters, the site run by Bay City News, was already encouraging experimentation with AI among its journalists. They created a “
sandbox,” a vertical on the site where reporters could publish trial runs with the technology, making it clear to readers that these were only experiments. Dworetzky wrote about his early Leonardo experiments there. “Struggling with hand-drawn images has been a big part of what has given me joy as a cartoonist. But if I can find a way to really train the algorithm on my own work — not on 40 drawings but on 1,000 or 2,000 — I might be able to silence the voice in the back of my head that says that this is sleazy,” he wrote. The feedback from readers was mostly encouraging.
Around the same time, Dworetzky began covering an
ongoing lawsuit between Elon Musk and Sam Altman, which alleges OpenAI breached the company’s original mission to be an “AI safety” nonprofit. In his
first story on the suit, he produced seven AI-generated cartoons to accompany the written article. Each image caption cited DALL-E, OpenAI’s premier image generator at the time, as the source. An author’s note appeared at the bottom of the story: “
The illustrations in this article were created using ChatGPT and DALL-E based on — but not always faithful to — prompts composed by the author.”
“Because the legal complaint was about this technology, there was something fitting about creating illustrations for this series using the technology,” said Dworetzky, adding that it’s generally hard to find good photography or visuals to illustrate legal coverage. He has since gone on to report more than
10 follow-up stories on the legal battle, with cartoons that often depict Musk and Altman as fighting roosters, in courtrooms presided over by “wise owls.”
Currently, ChatGPT is Dworetzky’s preferred image generator for these cartoons, though he usually edits the images it produces in Photoshop and always writes the caption text himself.
So far, Dworetzky has limited his use of generative AI to this one series and continues to draw his cartoons for Bay City Sketchbook entirely by hand. “Generally, you can do 90% of a generative AI project in 10% or less of the time,” said Dworetzky.
He finds the final stretch of his AI-assisted projects the most difficult. Refinement is needed to perfect a single-panel cartoon, but image generators “are often frustratingly non-iterative.” It can be difficult to make small, repeated adjustments and edits with most AI tools. “You sometimes have to take three or four different iterations, then lasso something from one and put it into another, and then add something else that you’ve hand-lettered and/or drawn on top of it.”
Algorithmic bias is another concern. Image generators
reinforce harmful stereotypes baked into their training data, including racist, sexist, and homophobic tropes and
Western cultural biases. (Cartooning as a profession is
already dominated by white men.) Dworetzky tries to be vigilant and keep these biases from seeping into his published cartoons. “Sometimes you’ll create scenes in which every single lawyer is a white man,” he said.
Dworetzky sees cartoons as primed for assistive AI, considering the form is not tied to accuracy in the same way as other kinds of visual journalism,
like courtroom sketches. But he worries about the long-term impact of these tools on the profession, and frames generative AI as an accelerant that could diminish the already contracting editorial cartooning workforce.
“It makes you feel like the value of [cartooning] is on a pathway to zero,” he said, comparing its trajectory to meme pages on social media. He sees meme creators who post their work for free to gain followers. For a select few, it’s a tenable but precarious way to make a living.
“I fear for the profession and the fun of having something that captures what a cartoon does — a beautiful, quick strobe light of a situation,” he said. Dworetzky calls back to the “
rich history” of cartooning in democratic politics. “You want cartoonists to exist.”
“Pre-bunking” misinformation
Mark Fiore is used to breaking form. Back in 2010, he became the first person to
win a Pulitzer Prize for animated editorial cartoons. He has continued to work as a leading digital cartoonist, publishing animations on SFGATE.com, NPR, and Slate, and
launching a Substack that now has over 11,000 subscribers.
Fiore began experimenting with AI tools
back in December 2023, testing image generators to see if they could produce background scenes for his animations. For those early tests, he used Leonardo AI, which allowed him to upload 40 of his illustrations, including cartoons about past Israeli incursions into Gaza. Then he prompted the image generator to produce “a Fiore cartoon of a partially destroyed city.”
The images were passable and, in broad strokes, picked up his style. He wrote about the experiments
on his Substack, but held off on incorporating these backgrounds into his work.
Images generated with Leonardo AI by Mark Fiore using the prompt: “A fiore cartoon of a partially destroyed city.” (
Courtesy of Mark Fiore)
That changed with a recent animation, “Fun Facts That Matter!” The one-minute clip debunks a lie by Donald Trump. During the
Pacific Palisades wildfire earlier this year, Trump ordered the release of billions of gallons of water from two reservoirs in Central California and said that if he’d been allowed to do it sooner “there would have been no fire.” In reality, the water from those reservoirs had no possible route to Los Angeles; the release simply diverted it away from local farmers.
One of the backgrounds in the clip of drought-stricken farmland was created using an AI image generator.
Still from the “Fun Facts That Matter!” animation, including backgrounds produced by a custom AI image generator. (
Courtesy of Mark Fiore)
The discourse around generative AI tools in journalism is often preoccupied with workplace efficiency and speed, frequently in service of lowering production and labor costs. Fiore argues there is a different need for speed —
to cut off the spread of disinformation.
“Researchers say misinformation is analogous to a virus,” said Fiore, gesturing to a body of scholarship that suggests there are ways to
inoculate people against misinformation, or to “
pre-bunk” certain lies. “It’s easier to stop the fewer people that see it. So you need a much quicker response.”
Normally, Fiore’s basic animations take two to three days to produce — and that, he says, is already a “ridiculously fast” pace for most animators. He sees assistive AI tools as a way to crank up production even further. “Rather than letting misinformation percolate and operating more on this plodding weekly schedule,” said Fiore, “what if I had the ability to jump on misinformation right away?”
Fiore has now partnered up with a Stanford computer science student and researcher,
Maty Bohacek, who is building custom image generators that are trained on Fiore’s work and can better fit into his animation workflow. Bohacek and Fiore’s prototype is powered by a
foundation model called Flux, developed by the German research firm
Black Forest Labs.
The tool can already create static background images, including the one featured in “Fun Facts That Matter!” The duo is now fine-tuning the model to produce images of secondary or background characters, including large crowd scenes. The next step will be automating the motion of these characters (eye blinks, arm movements), though Fiore thinks the technology isn’t currently capable of handling main character drawings or other foregrounded visuals.
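Broadly, such a tool follows a familiar open-source recipe: fine-tune a lightweight adapter on the artist’s own drawings, load it on top of the Flux base model, and prompt for a scene. The sketch below illustrates that general recipe using the Hugging Face diffusers library; the adapter path and prompt are hypothetical placeholders, not details of Bohacek and Fiore’s prototype.

```python
# Illustrative sketch only: generating a cartoon-style background with the
# open-weight Flux base model plus a style adapter (LoRA) fine-tuned on an
# artist's drawings. The adapter path and prompt are placeholders, not the
# actual Bohacek/Fiore setup.
import torch
from diffusers import FluxPipeline

# Load the Flux base model released by Black Forest Labs.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Apply a LoRA adapter previously fine-tuned on the cartoonist's own work,
# so outputs pick up their line work and palette.
pipe.load_lora_weights("path/to/artist-style-lora")  # hypothetical path

# Ask for a static background scene with no foreground characters.
image = pipe(
    prompt=(
        "editorial cartoon background of drought-stricken farmland, "
        "bold ink outlines, flat watercolor washes, no people"
    ),
    height=768,
    width=1344,
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("background.png")
```

Backgrounds are a natural first target for this kind of tool: they need to carry the artist’s line and palette, but not the likenesses and gags that Fiore still draws by hand.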
The only part of production Fiore says he would never use AI tools for is the actual writing and ideation. “For me, it’s much more about the animation production. I don’t want to outsource my brain,” he said.
Outside of his own work, Fiore is not very optimistic about the technology’s downstream effects. For early-career cartoonists, particularly single-panel cartoonists, Fiore worries AI will “eliminate the possibility that someone starting out gets work.” In the past, smaller news outlets might have hired upstart cartoonists. In the near future, he sees those same publishers turning to automation to cut costs. “I definitely see it as a threat,” said Fiore. “I don’t really feel guilty [about my adoption] because the only labor that I’m trying to displace is my own.”
Some days, Fiore has doubts about jumping on the generative AI bandwagon. “Am I gonna look back on this in a few years and say wow, I went way too big on NFTs?” he said with a laugh. Most of the time he’s confident generative AI isn’t a fad, but instead is a useful tool to support his work.
Even if AI-generated imagery does become core to his process, he never plans to put down the stylus entirely. “I don’t want to just be entering text prompts for the rest of my life,” he said. “I got into this because I enjoy drawing — I’m not going to give that part up.”
Andrew Deck is a staff writer covering AI at Nieman Lab. Have tips about how AI is being used in your newsroom? You can reach Andrew via
email,
Bluesky, or Signal (+1 203-841-6241).