When the robots take over: The future of AI and journalism
What will journalism look like in an increasingly automated world?
Longtime journalist Kim Trynacity worries that artificial intelligence is “on track to replace people, period.”
“There are some practical applications which I think are beneficial,” said Trynacity, the former president of the Canadian Media Guild’s CBC branch. “But I think taking people away from people is the biggest threat that we see [with] AI in journalism.”
Artificial intelligence and its impacts on writing have garnered public attention this year. With the introduction of large language models like ChatGPT, which can produce vast amounts of text in seconds, the landscape of written media is shifting drastically. Newsrooms are no exception.
A multi-faceted tool
Newsrooms worldwide have already begun incorporating different forms of AI technology into their production.
In a 2021 article published on J-Source, Toronto Metropolitan University journalism professor Adrian Ma wrote about different ways AI has been aiding news production. According to Ma’s article, AI is helping journalists identify and remove gender bias in stories, curate unique multimedia elements and collect data at top speeds.
In an interview, Ma spoke about how these technological advancements can help journalists. “When you save people time, it pays off in dividends,” said Ma. “Things like data and freedom of information in Canada are really terribly organized. A lot of the data you get is inconsistent… I think we’re gonna get a lot more stories now because of the amount of processing that’s available… You can get summaries and visualizations in seconds now.”
While AI in the newsroom might seem like a new phenomenon, the intersection of AI and journalism can be traced to 2013, when the wire service Associated Press became the first to use Automated Insights technology to produce stories from sports and corporate-earnings data.
Following this, both Agence France-Presse and Reuters began using algorithms to speed up data collection and increase news production, while the Los Angeles Times launched Quakebot, the first online bot created to automatically write and publish news reports on earthquakes in California.
Nowadays, AI is used in a myriad of ways by journalists worldwide. Joanna Kocik, a content specialist who helps businesses with online marketing, has used AI throughout her work in both journalism and marketing.
“I know that this revolution we are observing since the release of ChatGPT is… blowing everyone’s minds, but the truth is, we’ve been using AI for years,” said Kocik. “Even Google Translate is artificial intelligence.”
Although AI has been used for a long time, Kocik still believes that new tools like ChatGPT have introduced something valuable to journalism.
“What we’re observing with generative AI is, of course, a slightly different thing,” said Kocik. “It suddenly [became] available for everyone… If there’s a need to speed up work, streamline content production, then it works fine. For summarizing big texts or big data sets, for extracting information, for proofreading.”
Reporting on AI
While journalistic AI tools have evolved through the years, reporting on the subject of AI has undergone little change. In April of this year, The Conversation published an article that was critical of the way journalists are covering AI, arguing that “news media closely reflects business and government interests in AI by praising its future capabilities and under-reporting the power dynamics behind these interests.”
The article, written by three researchers from the Quebec-based research team Shaping AI, said that news coverage of AI is unbalanced. According to the article, “Few critical voices find their way into mainstream coverage of AI. The most-cited critical voice against AI is late physicist Stephen Hawking, with only 71 mentions. Social scientists are conspicuous in their absence.”
Accuracy concerns
One key concern about generative AI in the newsroom is accuracy. While researching the impact of AI on news in February, computer scientist Nick Diakopoulos tested a chatbot’s ability to report news. After asking Microsoft’s Bing chatbot several questions about recent news stories, Diakopoulos found that only 53 per cent of the chatbot’s responses were accurate.
With public trust in journalism already on the decline, it’s clear that AI poses a significant risk to the accuracy and integrity of news. However, postdoctoral law researcher Matteo Monti may have a solution to this issue.
In an article published on Opinio Juris, Monti wrote about different ethical concerns regarding AI in newsrooms and ways journalists might address them. The article suggests that a set of rules and guidelines regarding the use of AI in the newsroom might help tackle issues like the spread of misinformation. Monti wrote that issues of inaccuracy “should be overcome by also applying a code of ethics to programmers who will be part of the new technological world of journalism.”
Since the release of Monti’s article in 2019, some newsrooms have created their own codes of ethics regulating the use of AI. However, according to AI expert David Caswell, these codes are far from covering all the potential risks of the technology’s use, and no universal code of ethics applies across newsrooms.
Caswell said that while many newsrooms already have an AI code of ethics, developing one is an ongoing process. Most codes amount to a statement of intent, outlining goals surrounding transparency and adaptability.
In October, BBC executive Rhodri Talfan Davies published a public statement outlining three principles to keep in mind when working with AI: acting in the public’s best interest, prioritizing “talent and creativity” and being transparent.
“I think we’re a long way from a stable set of guidelines,” said Caswell. “But there are some very good frameworks that are developing.”
An AI education
Another solution to overcoming issues surrounding AI-produced content might be through education. Alfred Hermida, a professor and former director of the School of Journalism at the University of British Columbia, has begun incorporating AI technology into his courses at UBC.
In his journalism research course, for example, Hermida has his students critique and grade literature reviews written by chatbots. “It’s a way of saying generative AI is very powerful at producing content, but what are the limitations?” said Hermida.
While he’s yet to see any courses dedicated solely to teaching about the use of AI in newsrooms, Hermida predicts there will be in the future. “I think [AI] is something that will have to be integrated a lot more,” said Hermida. “When you go to journalism school… you have classes in interviewing, you have classes on how to talk to people, how to build contacts… I think of AI in a very similar way.”
Will the robots take over?
Many journalists worry about how AI will affect their job security. While most AI tools require humans working alongside them to ensure coherence and factual accuracy, Trynacity worries the technology may one day be capable of working alone.
“I think there are some scientists working with the ultimate goal of [AI] becoming more human-like,” said Trynacity. “Not only will human monitors be replaced, but there will be no need to have human monitors.”
Others believe the human side of journalism is inherently irreplaceable. Ma argues that while AI may fulfill tasks such as research, data gathering and transcription, there will always be a need for human journalists to tell well-rounded stories.
“I don’t see AI being able to do multi-faceted investigations into things that hold together multiple human perspectives at this point,” said Ma. “I think those kinds of jobs are safe. It kind of depends on how you want to imagine the industry.”
A complex future
It’s clear that with a lack of stable guidelines, problems with accuracy and fears surrounding job security, journalistic AI tools still have many creases to iron out.
But AI also has the potential to help newsrooms overcome biases, create unique story forms and report data at high speeds. So, should journalists fear these new developments, or embrace them?
Hermida believes one strategy might be for journalists to try to outdo their robot counterparts.
“The risk of AI is not that it’s going to replace every job in journalism, but that it’s going to take the ones that require less skill, less knowledge, less expertise,” said Hermida. “One of the things [journalists] will have to do is increase their expertise, increase their knowledge, increase their uniqueness.”
About the author
Kaitlyn MacNeill
Kaitlyn MacNeill is a fourth-year journalism student at the University of King's College in Halifax, Nova Scotia.