To GPT, or not to GPT, that is the question…

ChatGPT amassed more than 100 million users within two months of launching (Reuters) and yet it’s still shrouded in controversy, begging the question, should we be embracing it or, rather, keeping it at arm’s length?
Emily Conradi
Senior Strategist & Copywriter

With AI flooding the media – specifically in reference to ChatGPT – more and more people are questioning whether to jump aboard the chatbot train. Companies have been quick to draft in the experts to get their staff ahead of the tech – ensuring they’re fully trained on how to use it, how not to use it, and what they can and can’t input, in order to avoid scandal or breaches of privacy.

So, how do the likes of ChatGPT and Google Bard work? In a nutshell, these chatbots are built on Large Language Models (LLMs), which are trained on vast amounts of text – and, by vast, we’re talking hundreds of billions of words. The model analyses all that text, spotting patterns in how words and phrases relate to each other and learning to predict what’s likely to come next (a bit like the predictive text that suggests words as you type a message). Armed with that training, it knows which words typically follow the ones before them in a given context, enabling it to generate answers and suggestions at the click of a button.
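For the curious, here’s a drastically simplified sketch (in Python) of that “predict what comes next” idea. Real LLMs use enormous neural networks rather than simple word counts, and the mini corpus below is entirely made up, but the basic principle – learning which word tends to follow which – is the same.

```python
from collections import Counter, defaultdict

# Toy "training text" – a stand-in for the hundreds of billions of words
# real LLMs learn from.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which words tend to follow each word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str):
    """Return the word most often seen after `word` in the toy corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("sat"))  # 'on' – the only word ever seen after 'sat'
print(predict_next("the"))  # 'cat' – the most frequent follower of 'the' here
```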

The speed at which ChatGPT has grown is phenomenal. For perspective, it took TikTok nine months to reach 100 million users, and Instagram two and a half years (Sensor Tower). Yet, as with any technological advancement, there are reservations – and rightly so. Some might argue that AI is encouraging a nation of plagiarisers and cheats of the education system. Furthermore, where are these LLMs getting their information from, who really owns the content, and does the end user have the moral – or even legal – right to regurgitate it?

With so many active users, people are also quick to point out the limitations of LLMs – wrong answers, accuracy issues, bias and, worse, serious discrimination – in their responses. Both ethically and factually, it’s something of a minefield.

Is ChatGPT a reliable source? Let’s ask it…

“It’s always a good idea to verify any critical information or seek additional sources when using an AI model like ChatGPT. While I aim to be helpful, I should not be considered a definitive or authoritative source. Consulting reputable sources, experts or conducting further research is advisable when accuracy is crucial.”

No different from working with a sub-editor, there’s a responsibility to scrutinise style, fact-check and cite sources. As a Copywriter, it’s easy to catastrophise at the thought of chatbots generating features in seconds *envisions a cemetery full of Copywriter headstones*. What’s more, as someone who enjoys the research side of writing, it does feel a little like cheating. That aside, I wouldn’t avoid a toll road purely for the satisfaction of arriving at the exact same destination, just three hours later. Plus, unlike a toll road, the basic version of ChatGPT is free – at least for now.

With so many adopting AI technology, will it soon become a case of, if you can’t beat them, join them? For now, I’ll certainly be eyeing any AI-generated information with caution. While it’s tempting to take a shortcut, there’s no compromising on quality. Are the days of carefully researching an article over? For me, not just yet.

Emily Conradi
Senior Strategist & Copywriter
Marketing, media and PR specialist with a background in retail, hospitality and publishing. Foodie, wordsmith, crafter and theatre enthusiast.