
ChatGPT – an artificial intelligence (AI)-driven chatbot capable of generating written content, computer code, and answers to complex questions through a simple conversation box – has been at the centre of the world’s attention since launching two months ago, setting a record as the fastest app ever to reach 100 million users.
Naturally, as someone who makes a living writing local news, I saw in ChatGPT the same existential threat felt by workers across various sectors in recent years – that technology now had my job in its crosshairs.
That dread was intensified by news that the popular online media company BuzzFeed was cutting 12% of its workforce, opting to use ChatGPT for content creation that would then be reviewed by human editors.
I decided to put ChatGPT to the test myself. Using last week’s Haldimand Council agenda, I picked an agenda item at random and asked ChatGPT to write me an article.
The agenda item I chose was a presentation by Darin Wybenga, Traditional Knowledge and Land Use Coordinator for the Mississaugas of the Credit First Nation (MCFN). Wybenga was scheduled to give the new Council a look into MCFN history.
I gave ChatGPT very specific parameters, asking, “Can you write me an article about the History of Mississaugas of the Credit First Nation presentation that took place on February 7, 2023, with quotes from presenter Darin Wybenga.” Here are a few excerpts from ChatGPT’s article:
“On February 7, 2023, Haldimand County Council in Ontario, Canada hosted a presentation on the history of the Mississaugas of the Credit First Nation. The presentation was delivered by Darin Wybenga, a representative of the Mississaugas of the Credit First Nation and aimed to educate council members and the public about the rich history and culture of this indigenous community.”
“The presenter also spoke about the challenges faced by the Mississaugas in the 19th and 20th centuries, including the loss of their land and resources as a result of the Indian Act of 1876 and the residential school system. ‘The Indian Act of 1876 and the residential school system had a devastating impact on our community,’ said Wybenga. ‘We have been working hard to reclaim our land and preserve our culture.’”
“The presentation concluded with a discussion on the future of the Mississaugas of the Credit First Nation. ‘We want to build bridges between the indigenous and non-indigenous communities and work together to create a more inclusive and equitable society,’ said Wybenga. ‘We hope that events like this one will help raise awareness about our community and our rich history.’”
Not bad, right? The article, while fairly redundant and lacking finesse, included key talking points from the presentation and organized them in a manner that made sense on a basic level. With an editor touching up the story, it was easy to envision news organizations routinely using ChatGPT to generate copy efficiently.
After reading the article, I headed over to Haldimand’s website so I could watch the video presentation for myself and see if ChatGPT had generated accurate coverage of it. Here’s where things get interesting.
While the History of MCFN presentation had initially been scheduled for 11 a.m. last Tuesday, it was rescheduled to the following week due to technical issues with the video feed. ChatGPT’s article, then – though it raised no red flags at first glance – was a complete fabrication, pulling quotes from different sources online and presenting them as a summary of a presentation that never actually took place.
Through this lens, it’s easy to see why so many critics are concerned about the many ways chatbots like ChatGPT can easily and quickly generate misleading information and articles. Rather than list them myself, I thought I would ask ChatGPT to lay out some of its limitations.
“As an AI language model, ChatGPT is capable of generating human-like text, but it still has limitations that can affect its ability to write accurate and trustworthy journalism. Some of the problems with ChatGPT writing journalistic articles include:
Lack of context and background knowledge: While ChatGPT has been trained on a vast amount of text, it doesn’t have a personal understanding of the world and its current events. It can’t comprehend the context or background information necessary to write a nuanced and well-informed article.
Bias in the training data: ChatGPT was trained on text from the internet, which can be biased in terms of perspectives, opinions, and cultural references. As a result, the model may generate text that perpetuates existing biases, even if it’s unintentional.
Limited fact-checking ability: ChatGPT does not have the ability to independently verify the accuracy of information it has been trained on or the information it generates. This means that it may generate articles that contain inaccuracies or falsehoods.
Difficulty with objectivity: Journalism relies on objectivity and impartiality, but ChatGPT is an AI model that doesn’t have personal opinions or emotions. However, the training data it was exposed to may contain biased or subjective perspectives, which can affect the objectivity of the text it generates.”
Speaking in its own defense, the chatbot added, “It’s important to note that these limitations should not detract from the potential of AI language models like ChatGPT in journalism and other creative fields. However, it’s crucial that any text generated by these models be thoroughly reviewed and fact-checked before being published or used for decision-making purposes.”
Hmm, perhaps my fears of a Terminator-like cyborg wearing a reporter’s fedora smashing through the front door are a little premature.
However, there is no denying the historic significance of ChatGPT. It has opened the floodgates in terms of what is possible with technology, and with AI in particular. Along with having no qualms about writing up an event that never took place, ChatGPT is quite adept at generating full-fledged, well-reasoned essays on any topic you ask of it, raising concerns about how its existence could fundamentally change education at all levels.
Its potential in the medical field is also vast – and untested – with many medical students and doctors online discussing the chatbot’s ability to generate astonishingly in-depth and accurate answers to complex medical questions, while also, on occasion, just as confidently offering a similarly in-depth answer that is completely false.
Microsoft co-founder Bill Gates recently said in an interview that the “progress over the next couple of years to make these things even better will be profound,” noting AI’s potential to improve efficiency across a vast array of sectors. He summed it up: “This will change our world.”
Whether Gates is right remains to be seen. While the potential of AI is staggering, the short-term threats it presents – such as the easy spread of misinformation and the disruption of job markets in a variety of sectors – are very real, and very much worth losing a good night’s sleep over.