The future of "artificial" news

Artificial intelligence is rapidly entering the newsroom, offering efficiency, personalization, and new opportunities for journalists. But with the benefits come serious risks: hallucinations, disinformation, “deepfakes,” and a loss of trust in the media. The challenge lies in finding a balance between innovation and editorial integrity, while preserving the role of journalists as guarantors of the truth. AI can be a powerful ally, but only with transparency, human oversight, and clear guidelines can it serve democracy.

ACQJ editorial office

In its early days, artificial intelligence in the newsroom seemed like a tech toy, a silent assistant that could handle some boring, repetitive tasks. But today it is becoming an integral part of the editorial machine, changing the way we gather, write, and distribute news. And the question is no longer whether we will use it, but how, and at what price.

In Germany, the EXPRESS.de portal "hired" a virtual journalist named Klara Indernach, powered by AI, who produces articles clearly labeled as machine-generated, with a profile photo created by Midjourney and mandatory review by human editors for accuracy. Klara already contributes about 11% of articles and during busy periods has helped increase traffic by 8–12%, while AI-optimized headlines have increased clicks by 50–80%. Such results seem like a dream for any newsroom struggling for clicks, but they also raise a hard question: what happens when speed of production trumps care for the truth?

The magic of efficiency and the hidden side of the coin

Supporters of the technology say AI does not replace journalism but enhances it. Transcription tools turn an hour-long interview into text in minutes. Algorithms analyze big data, financial reports, or social media trends, uncovering patterns that can be turned into impactful stories. The Times of London has developed JAMES, a digital assistant that tailors newsletters to readers' habits, while Clarín in Argentina has WalterAI, which offers the same article in several formats, from an abridged version to a list of key points.

Even in Albania, some small newsrooms are quietly experimenting with AI tools for translations, news summaries, and generating "SEO-friendly" versions of articles.

“The most common formats used by Albanian newsrooms are related to word processing and production platforms, mainly ChatGPT and Perplexity, which are used for basic functions such as summarizing and reorganizing materials. In some cases, these tools are also integrated into content management systems such as WordPress, allowing journalists to use them directly from there. Beyond that, there are also more limited uses of programs for producing images or subtitles,” says Emirjon Senja, journalist and researcher in the field of media.

But where efficiency quickly cuts the cost of production, the risk of error appears just as quickly. Generative models do not really understand content; they are masters of language, not of truth. The case of the tech portal CNET is the kind every newsroom should have nightmares about: dozens of AI-generated articles required 41 major corrections due to inaccuracies.

“Given the limited use that AI is seeing in Albanian newsrooms, the main risk is the uniformity of content. A text created by ChatGPT is published online, then taken by other journalists, reformulated again with the same tool and published again on other portals. After a few months, when someone searches search engines for the same topic, they are faced with the same material reproduced endlessly. In this way, the audience is exposed to the same content without diversity. Another risk is related to the lack of verification: journalists often do not check the information produced by AI, paving the way for prejudice and disinformation, which with the help of technology spreads even faster,” Senja warns.

But the dangers of over-reliance on AI don't stop there.

Hallucinations, misinformation and loss of trust

In editorial jargon, “hallucination” has nothing to do with art or creativity. It’s the term that describes AI’s ability to invent “facts” that sound completely plausible but are untrue. And when these are inserted into a published article, the damage is twofold: the public is misinformed, and trust in the media is eroded.

Even more dangerous is the new weapon of disinformation, the "deepfake." In early 2024, a France 24 journalist was targeted with a doctored video that mimicked his voice and changed the title of a report. The fake spread before it could be debunked, causing serious reputational damage.

Personalization, “echo chambers” and financial pressure

Another great promise of AI is content personalization: news tailored to the interests of each reader. It is a powerful weapon for increasing engagement, but it also has a dangerous side effect: the creation of "echo chambers," where readers only see news that aligns with their beliefs. This fragments public discourse and weakens the role of the media as a shared information space. In Albania, where political polarization is already extreme, careless use of such algorithms could divide the public even further, deepening the current perception of media entrenched in consolidated political camps.

At the same time, the financial crisis of traditional media is one of the main drivers toward automation. Newsrooms that lose journalists because they cannot pay them, and that see automation as a way to cut costs, may be tempted to replace part of the staff with technology. But examples like Microsoft's, which in 2020 replaced MSN curators with AI and went on to publish inaccurate and offensive news, prove that short-term savings can turn into long-term losses.

Hybrid models, where AI handles routine tasks and journalists handle work that requires judgment, seem to be the wisest path. Le Monde uses AI for translations, but every text passes through the eye of an editor. Because the "human eye" is what distinguishes good news from mere generated text.

Rules "limp" behind the pace of technology

While AI is being rapidly integrated, ethical and legal regulation is moving at a slow pace. The Paris Charter on Journalism and AI proposes clear principles: human oversight, transparency, accuracy, and copyright protection. Major agencies like AP and Reuters are striking deals with AI developers over the use of their content. The EU has adopted the Artificial Intelligence Act, which requires the labeling of AI-generated content and is discussing standards for election information. In Albania, beyond the fact that there is no legal framework regulating the media as a field, the code of ethics does not yet include clear guidelines for AI, despite recent efforts to add such provisions.

“Careful and responsible use is essential. Journalists must recognize both the potential and the risks of artificial intelligence, and verify any information that is produced. This means always requesting the source or reference along with the content, so that it is verifiable. Only in this way can AI serve as an aid to journalism, not a source of danger,” underlines Emirjon Senja.

The truth is that AI is neither a savior nor a fatal threat; it is a tool. A newsroom that treats AI as a permanent "hands-on" assistant, doing the repetitive work but never going out into the field, will gain speed and volume without sacrificing quality. But this requires three non-negotiables: transparency (the reader must know when AI has contributed to the content), human oversight (no text should be published without editorial review), and public and internal policies that define the limits of its use.