The article has been automatically translated into English by Google Translate from Russian and has not been edited.

An American publication has published a list of books for the summer: 10 out of 15 works do not exist in reality - they were imagined by AI

One of Chicago's leading daily newspapers, the Chicago Sun-Times, has published a list of summer reading recommendations generated by artificial intelligence. The list mixes real authors with fake books, reports Meduza.

Image generated by AI

The 2025 summer reading list was released on May 18, appearing in a Chicago Sun-Times feature called the Heat Index. Only five of the 15 books on the list were real. The list attributed fake books to authors including Brit Bennett, Min Jin Lee, Rumaan Alam, Rebecca Makkai, Maggie O'Farrell, and Percival Everett. These authors never wrote the works on the list.

Each book came with a short description. For example, Isabel Allende's Tidewater Dreams is described as "a multigenerational saga set in a coastal town where magical realism meets environmental activism," and Andy Weir's The Last Algorithm as being "about a programmer who discovers that an artificial intelligence system has become sentient and has been secretly influencing global events for years."


The list also includes real works, such as Ray Bradbury's Dandelion Wine, Jess Walter's Beautiful Ruins, and Françoise Sagan's Bonjour Tristesse.

In the same issue of the newspaper, several articles by Marco Buscaglia quote people who do not exist, such as "Dr. Jennifer Campos, professor of leisure studies at the University of Colorado" and "Dr. Catherine Furst, a food anthropologist at Cornell University." The articles also cite non-existent sources.

Marco Buscaglia admitted that he "sometimes" uses artificial intelligence, but always checks the material.

"I didn't do it this time. It's 100 percent my fault and I'm very ashamed," he said.

Not the only scandal

In addition to the Chicago Sun-Times, The Philadelphia Inquirer has also run AI-generated book lists. The inserts were produced by King Features, a subsidiary of media giant Hearst, said Victor Lim, vice president of marketing and communications for Chicago Public Media. He said that "historically," the paper has not subjected King Features' inserts to editorial review.

"We are updating our policies to require internal editorial oversight of such content," Lim said.

The Chicago Sun-Times is distancing itself from the list of fictitious books because it says it is not editorial content. The Chicago Sun-Times has promised that its subscribers will not be charged for the issue that contained the inaccuracies, that the section will be removed from the online version of the newspaper, and that the editors will change their internal rules to indicate that the materials were prepared by a third party. They will strive to “never let this happen again.”

This isn't the only case of publications running AI-generated content. High-profile incidents have involved the media company Gannett and Sports Illustrated magazine; in both cases the artificial intelligence was used by a third-party marketing company.

Cheap and cheerful

I wondered what AI itself thinks of this fiasco. Grok, for example, puts the main blame on us humans and our negligence, though it immediately explains the reasons for that negligence. The Chicago Sun-Times blunder, it notes, comes down to staffing: according to the union, the publication is currently short of editors because of layoffs, with 20% of the staff cut in March 2025.

"Media outlets like the Sun-Times are increasingly using AI to save money," the AI noted. "This pushes publications to outsource content or rely on AI freelancers. Text generation is cheap."

Grok believes that media outlets, despite the cuts, should invest in fact-checkers and editors, and that freelancers should verify AI output against databases (Goodreads, the Library of Congress). And readers shouldn't relax either, just as they shouldn't trust everything written in newspapers. Especially now.
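A minimal sketch of the kind of check Grok describes: before publishing, verify each recommended title against a trusted catalog. Here the "catalog" is a hard-coded set standing in for a real database lookup (such as a Library of Congress or Goodreads query); the function names and data are illustrative, not from any real editorial tool.

```python
# Hypothetical pre-publication check: the catalog below stands in for a
# real bibliographic database query.
KNOWN_BOOKS = {
    ("Ray Bradbury", "Dandelion Wine"),
    ("Jess Walter", "Beautiful Ruins"),
    ("Françoise Sagan", "Bonjour Tristesse"),
}

def verify_list(recommendations):
    """Split AI-suggested (author, title) pairs into confirmed and suspect."""
    confirmed = [r for r in recommendations if r in KNOWN_BOOKS]
    suspect = [r for r in recommendations if r not in KNOWN_BOOKS]
    return confirmed, suspect

ok, bad = verify_list([
    ("Ray Bradbury", "Dandelion Wine"),
    ("Min Jin Lee", "Nightshade Market"),  # hallucinated title from the list
])
# ok  → [("Ray Bradbury", "Dandelion Wine")]
# bad → [("Min Jin Lee", "Nightshade Market")]
```

Even a check this crude would have flagged ten of the fifteen titles on the Sun-Times list before they reached print.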

Hallucinating Digital Brain

“AI, especially generative models like ChatGPT, Claude, or Grok, invent non-existent books due to the way they work, the lack of critical review, and human negligence. This phenomenon is known as ‘AI hallucinations,’” the digital brain explained to me.

Generative AI models based on large language models (LLMs) do not "know" facts; they predict text based on probabilities learned from huge data sets (books, articles, websites). They produce plausible combinations of words even when the result is fictitious.
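The mechanism can be illustrated with a toy model. The sketch below is not a real LLM, just a word-to-word transition table (the titles and probabilities are invented for illustration): the model samples whatever word is statistically likely to come next, with no notion of whether the resulting title names a real book.

```python
import random

# Toy "language model": only transition probabilities, no knowledge of facts.
# The words and weights are invented for illustration.
TRANSITIONS = {
    "The": [("Last", 0.5), ("Nightshade", 0.5)],
    "Last": [("Algorithm", 1.0)],
    "Nightshade": [("Market", 1.0)],
}

def generate_title(start="The", rng=None):
    """Sample a fluent-sounding title word by word from the table."""
    rng = rng or random.Random(0)
    words = [start]
    while words[-1] in TRANSITIONS:
        choices, weights = zip(*TRANSITIONS[words[-1]])
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate_title())  # a plausible title, real or not
```

Every output is grammatical and plausible; nothing in the sampling loop distinguishes a title that exists from one that doesn't. That is the whole failure mode in miniature.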

The technical reason is that models like GPT-4 or Claude are trained on texts up to a certain cutoff date (say, 2024), so for 2025 they can only predict, not confirm. Without verification, the AI produces "hallucinations" like the non-existent Nightshade Market attributed to Min Jin Lee.

"For example, my cutoff is October 2024," Grok admitted. For 2025 events, like the Sun-Times list, language models "predict" the answer. And the AI doesn't distinguish fact from fiction: it is optimized for coherence, not accuracy.

The journalist is also at fault for wording his prompt imprecisely: "If a Sun-Times freelancer asked for 'summer 2025 books' without specifying 'real,' the AI could generate fakes by following templates."

Is it possible to build automatic credibility checking into the “digital brain”?


Yes, it is technically possible to build in mechanisms that reduce hallucinations, but it is not yet possible to eliminate them completely, given the limitations of the technology and the nature of language. So publications should not economize on the "human factor" just yet. For people, this news looks comforting: it is far too early to say that artificial intelligence will take all the jobs in publishing.

But, on the other hand, I'll admit honestly: with apprentices like ChatGPT, Grok, or DeepSeek, I could churn out in a day an issue of some glossy women's magazine that once took an entire editorial team a month to produce. And as for the fictional experts generated by AI, the banal maxims they deliver are practically indistinguishable from those uttered by real "experts."
