AI in Journalism – Don’t Let AI Do Your Work, But Make the Process Faster

Adopting AI in journalism has been a hot topic for the past few years. AI can help journalists speed up their work, but it is too soon to let the technology replace real people.
AI and news
Courtesy: EnvZone

Recently, Channel 1, a news platform, has been making headlines by revealing that all of its reporters and production are AI-generated. This could be the answer for an industry where falling ad revenues make it harder to support the many people an outlet needs to operate, including journalists, producers, camera operators, editors, and anchors, along with expensive equipment.

This shows how far AI has come in the journalism arena, but the technology has been impacting the industry for several years. Despite its benefits to journalists, many experts and organizations have been vocal about how immature the technology is, given the complicated nature of journalism. This is why many news organizations already have guidelines and policies on how to use AI effectively.

There are also concerns about AI taking over journalists' jobs. However, before examining these worries, let's see how AI can help journalism in general.

How Does AI in Journalism Affect the Landscape?

The integration of AI in journalism brings a range of significant benefits.

The Vast Array of Benefits

One of the primary benefits is the heightened efficiency in news production. By automating routine tasks such as data analysis and basic content creation, AI tools enable journalists to devote more time to complex and nuanced aspects of reporting.

AI also boosts user engagement through personalized content delivery. It customizes news content to match each reader’s interests, creating a more relevant and engaging news experience that captures and retains audience attention.
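To make the personalization idea concrete, here is a toy sketch of interest-based article ranking in Python. The reader interests, articles, and scoring rule are all invented for illustration; real recommendation systems are far more sophisticated.

```python
# Toy sketch of interest-based article ranking; the tags, articles, and
# scoring rule are invented for illustration, not any outlet's real system.
reader_interests = {"climate", "technology", "elections"}

articles = [
    {"title": "Chipmakers race to build AI hardware", "tags": {"technology", "business"}},
    {"title": "Coastal cities plan for rising seas", "tags": {"climate", "policy"}},
    {"title": "Weekend recipes for fall", "tags": {"food", "lifestyle"}},
]

def score(article: dict) -> int:
    """More tags shared with the reader's interests means a higher rank."""
    return len(article["tags"] & reader_interests)

for article in sorted(articles, key=score, reverse=True):
    print(score(article), article["title"])
```

Even this crude overlap score surfaces the climate and technology stories first, which is the basic mechanism behind a more relevant, engaging news feed.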

AI enhances journalism by saving time on tasks like summarizing and translating content, which helps journalists work more productively. It can also handle mundane tasks, such as drafting emails and memos, freeing up time for more critical work. AI helps identify trends in data and can assist with brainstorming ideas and suggesting headlines.

The Risks to Keep in Mind

Despite the benefits, AI can be harmful in some respects. It holds the potential for homogeneous and generic content, which can diminish originality. Overreliance on AI might erode critical thinking and human judgment, while AI-generated misinformation poses a significant threat if not properly managed.

Privacy concerns arise from the extensive data AI systems handle, and there is a risk of job displacement as AI could replace some roles within the industry. Plus, the influx of low-quality or inaccurate AI-generated content could degrade the overall quality of information available online.

How Do News Organizations Adopt AI in Their Daily Tasks?

Considering the potential and the risks that come with generative AI tools in journalism, each news outlet has its own way of harnessing the technology.

Speaking at the Collision conference, Traci Mabrey, general manager of Factiva, a business intelligence platform owned by Dow Jones, shared: “We’re both an aggregator of publications and obviously we are a peer of all of these fine folks in terms of the Wall Street Journal and the Times and a series of global publications. So, in both the way that we promote journalism and the way that we aggregate it from a Factiva perspective, we’ve been using AI, right, this word, really almost since the inception of the platforms.

Online News Association conference
Courtesy: Online News Association

“I think we’re all looking at generative AI and things like it as this brand new thing, but think about it: machine learning, applying metadata, all these very techy types of terms, we’ve been using that forever. And that’s a really important component as we look at this new horizon. It’s going to be something, and I don’t think any of us know exactly what that is yet, but we have been using the building blocks of it, I think, for quite some time.”

As for Fast Company, the publication has not really been using AI in any formal way. It’s still very much about human beings crafting words and creating imagery, as shared by Harry McCracken, the global technology editor at the newsroom.

He added, “However, in terms of my own work, in retrospect, I have been using AI for a while, and it has had an impact. I use Grammarly to review almost everything I write, accepting some suggestions that have always been generated by AI. Just lately, it’s gotten kind of scary smart in a way that it couldn’t have been a while ago. So yes, AI has had a little bit of impact on my writing for the last two or three years.”

WIRED, a monthly American magazine published in print and online that focuses on how emerging technologies affect culture, the economy, and politics, has already leveraged AI to speed up its work.

Gideon Lichfield, the former editor in chief of all editions of WIRED, said: “For us, like a lot of places, we use AI transcription tools for transcribing interviews—that’s an obvious use. Sometimes, if you’re reporting in a language you don’t understand, or writing a story with material from a country whose language you don’t speak, you use Google Translate just to get a sense of it. So, AI has obviously been in our background for a long time, but this shift to generative AI is a completely different class of thing.”

News Outlets Came Up with Guidelines to Use AI Properly

Given that AI is still in its early stages and poses potential risks to journalism, it is crucial to establish guidelines for its application in the field. In July 2022, only a handful of newsrooms worldwide had policies governing how their journalists and editors could use AI-driven digital tools. By the following year, dozens of prominent global newsrooms had developed formal documents related to AI usage.

During this period, the artificial intelligence research firm OpenAI introduced ChatGPT, a chatbot capable of generating various written materials, including code, plays, essays, jokes, and news-style stories. OpenAI was co-founded in 2015 by a group that included Elon Musk and Sam Altman, and has received significant investments from Microsoft over the years.

Major news organizations such as USA Today, The Atlantic, National Public Radio, the Canadian Broadcasting Corporation, and the Financial Times have established AI guidelines or policies. This movement acknowledges the transformative potential of AI chatbots in journalism and the impact on public perception.

In September 2023, research published on the preprint server SocArXiv delved into how newsrooms are adapting to the expanding capabilities of AI-based platforms. While preprints have not been formally peer-reviewed or published in academic journals, the paper is under review at a leading international journal, according to one of its authors, Kim Björn Becker, a lecturer at Trier University in Germany and a staff writer for Frankfurter Allgemeine Zeitung.

The study provides an overview of AI policies across 52 news organizations, covering regions such as Brazil, India, North America, Scandinavia, and Western Europe. This research marks one of the first comprehensive examinations of how the news industry is navigating the integration of AI technology.

The authors point out that AI policies from commercial news organizations are usually more detailed and specific than those from publicly funded ones. They include more information on what is allowed and what is not.

Commercial news organizations also place a stronger focus on protecting sources. They advise journalists to be careful when using AI tools to analyze large amounts of confidential or background information. This caution is likely due to the legal risks that could affect their business.

Should Journalists Worry About AI?

Recent discussions among tech leaders and experts suggest that generative AI poses an existential threat. This raises significant concerns not only within the media but across broader societal contexts. The impact of generative AI brings a mix of potential benefits and risks, prompting a serious examination of whether it represents a formidable challenge or an opportunity for progress.

To elaborate on these concerns, Harry McCracken suggests that while the idea of AI causing catastrophic harm, such as destroying the world, may be exaggerated, there are real and pressing concerns associated with its development and use.

Journalists gather in a conference room
Courtesy: Online News Association

He said, “I think the worrying about it blowing up the world or killing us all is a little overwrought, particularly because there’s a pretty long list of genuine concerns that are either an issue right now or pretty clearly will be over the next few years. These include things like misinformation. There are huge privacy concerns with a handful of large companies grabbing all our data and synthesizing that for their own benefit.”

He continued, “So, I’d say there are plenty of things to worry about with AI, but not so much about destroying the world. Maybe more like, in the same way that social media has in a lot of ways degraded the human experience, AI might as well, along with all the great things about it as well.”

Gideon Lichfield added to the point, “Look, I think the fact that the people warning of the most severe apocalyptic existential threats are themselves technologists tells you something because it comes from a place of excessive belief in the power of technology. I agree with Harry—the Skynet scenario is not going to happen. The misinformation flooding is definitely a high risk.”

“The general degradation of the internet—the increasing volume of just shared garbage that is out there, that is going to be generated by AI—that’s a real worry. It’s already, even before generative AI, degrading the quality of search engines and the information out there. That is the thing that I worry about. The job displacement part is also a concern because while there is a way to use AI that empowers people and gives them extra tools, it’s also a great temptation for companies and employers to simply look at it as a way to save costs,” said the WIRED former editor.

What Do Journalists Need to Do to Create a Symbiotic Relationship with AI?

“AI should never be used by journalists in place of truth or realism,” stated Alex Gordon, Senior Director for the product team at NBC News Group.

He added to his point, “If we start to push out AI-generated content that changes the narrative of the story, we are no better than any bad actors out there.”

“In terms of defining artificial intelligence, when used properly, it is the automation of human intelligence, making life easier for tasks that currently require a lot of manual effort.”

According to Alex Gordon, NBC News has been using AI for several years, starting with simple tasks like transcription. The team replaced manual transcription with an AI solution that processes MP3s into fully transcribed text. AI has also been used to log image data, such as identifying people in pictures.
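For readers curious what such a transcription step looks like in practice, here is a minimal sketch using the open-source Whisper speech-to-text model. The package, model size, and file name are illustrative assumptions; this is not the tooling NBC News actually uses.

```python
# Minimal sketch of AI-assisted interview transcription, assuming the
# open-source openai-whisper package (pip install openai-whisper); the
# model size and file name are illustrative stand-ins, not NBC's tooling.
# Note: decoding MP3 audio requires ffmpeg to be installed on the system.
import whisper

def transcribe_interview(audio_path: str) -> str:
    """Turn a recorded interview into raw text for a journalist to verify."""
    model = whisper.load_model("base")     # small, CPU-friendly model
    result = model.transcribe(audio_path)  # dict with "text" and "segments"
    return result["text"]

if __name__ == "__main__":
    print(transcribe_interview("interview.mp3"))
```

Consistent with the oversight Gordon describes, the transcript would still be checked by a human before any of it reaches a story.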

AI is viewed not as a replacement technology but as an assistive tool. Its role is to enhance workflows and, where appropriate, replace certain tasks, always with human oversight to ensure proper use.

Generative AI, which creates text, images, and media from human commands, presents both opportunities and risks. While it can generate realistic content, such as fake images or videos, it also poses risks if misused, like altering content to mislead or affect markets.

Alex Gordon said, “Many of our editorial teams are engaged in time-consuming activities to check for fake content. AI can quicken this process. We want to use fire to fight fire, employing AI as the best way to suss out AI used by bad actors. There is more upside in how journalists can use it, such as summarizing large quantities of text, generating conclusions from meetings, or upscaling images. These are things journalists are starting to tackle now to improve their work.”
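As a hedged illustration of the text-summarization use case Gordon mentions, here is a short sketch using the Hugging Face transformers library; the model choice and length settings are assumptions for demonstration, not a newsroom’s actual setup.

```python
# Illustrative sketch of condensing a long document with the Hugging Face
# transformers library (pip install transformers); the model and length
# settings are assumptions for demonstration, not a newsroom's real pipeline.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize(text: str) -> str:
    """Condense a chunk of text; a journalist still verifies the output."""
    # BART's input window is limited, so very long documents should be
    # split into chunks and summarized piece by piece.
    result = summarizer(text, max_length=130, min_length=30, do_sample=False)
    return result[0]["summary_text"]
```

As with transcription, the summary is a starting point for the journalist, not a finished product.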

However, Alex also highlighted what AI should not be used for. AI should not be used in isolation or without proper oversight. Before implementing AI solutions, journalists should discuss them with their team leadership to ensure they are aligned with the overall goals and workflows.

Additionally, relying too heavily on AI for tasks could lead to a situation where your role becomes redundant, as AI might take over your responsibilities. The goal is to use AI as a tool to support and enhance your work, not to replace the need for human involvement and decision-making.

“Don’t use AI in a vacuum; make sure that you have brought up this attempt to use AI to your own team leadership. Don’t let AI do the work for you in such a way that you then become the self-perpetuating replacement of your own job,” he said.

There is a risk of a homogeneous voice as AI use goes further, as shared by Gideon Lichfield. However, the editor highlighted that those who harness the technology to improve their writing, rather than to write their work for them, are the ones who really get an edge.

He said, “But then again, the people who make the effort to be different are the ones who I think get the edge. There’s also the fact that for a lot of uses of writing that are not journalism and not creative writing, it doesn’t matter. For marketing emails or corporate communications, having a homogeneous voice is fine. If AI can save you time in doing those, great. But when it’s necessary to stand out, that’s when you want to rely on your human touch.”
