Should we be using AI to write our content?

Josip Rozman, AI consultant at Plextek, examines whether we should be using AI to write our content.


In an ever more competitive fight for potential customers’ attention, it is imperative to present the right messages to your target demographics to maximise marketing ROI. Many of the largest companies have built their marketing plans around this advertising model, but could it be taken a step further? In a world where hyper-targeted marketing is the dream goal, could a more personalised approach be taken than supplying a single advertisement aimed at a whole demographic? Additional information about the end user could be used to tailor content to the individual and better describe the value proposition, maximising the chance of engagement and raising conversion rates. Catering to every individual we might want to target with a given product would be unfeasible by traditional means, but AI is a potential solution to this problem.

Getting to the right target customer

A potential customer often struggles to understand how a given product might provide value in their specific circumstances. To produce a personalised, value-driven advert, a marketing executive would need to dedicate time and effort to understanding the person’s needs before producing a tailored sales pitch. User research, persona profiling and other traditional marketing techniques have worked well in the past, but they are time intensive and sometimes very costly. Could this manual background work be replaced by an AI service such as ChatGPT? The AI tool could be given some information about the individual, subject to privacy restrictions, and it would design content specific to that information, which could then be refined and replicated across all of a business’s customer segments.
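As a rough illustration of what that background work might look like in practice, the sketch below assembles a prompt from a product description and a set of (privacy-cleared) customer attributes. The function name and attribute fields are hypothetical, not any particular vendor's API; the call to the AI service itself is omitted.

```python
# Hypothetical sketch: turning curated customer attributes into a
# personalised prompt for a text-generation model. All names and
# fields here are illustrative assumptions, not a real product's API.

def build_personalised_prompt(product: str, customer: dict) -> str:
    """Combine a product description and customer attributes into one prompt."""
    traits = ", ".join(f"{key}: {value}" for key, value in customer.items())
    return (
        f"Write a short advert for {product} aimed at a customer with "
        f"these attributes: {traits}. Focus on the value proposition."
    )

prompt = build_personalised_prompt(
    "a noise-cancelling headset",
    {"segment": "remote worker", "priority": "call quality"},
)
print(prompt)
```

The same template could then be re-run per customer segment, which is where the refine-and-replicate step would happen.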

Are these models good enough for such use cases in their current state? There are a few big concerns. Firstly, there is a problem often referred to as ‘hallucination’, where the AI model produces confident-sounding but factually incorrect output. Microsoft’s recent integration of ChatGPT into Bing is a step in the right direction, as the model is asked to produce accurate references to accompany the generated text.

Secondly, and potentially a larger issue, these are large language models trained to predict the next most plausible token given a context window. In the case of ChatGPT, a short prompt hidden from the user states that it is a helpful large language model developed by OpenAI. This hidden prompt makes the AI adopt the persona of a helpful chatbot and respond in that style. There have been reports that, in longer conversations, the chatbot would occasionally take on a different persona and, for instance, respond as a child would. For personalised marketing, long conversations would not necessarily pose a problem, but the information provided to the model would need to be curated and presented in an appropriate way.
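The hidden prompt described above can be pictured with the role-based message structure used by chat-style APIs such as OpenAI's: a "system" message fixes the persona, and the curated user information goes in a separate "user" message. This is a minimal sketch of that structure only; the actual API call, and the exact wording of any real system prompt, are omitted.

```python
# Minimal sketch of the role/content message convention used by
# chat-style LLM APIs. The persona text and field contents below are
# illustrative assumptions; no request is actually sent anywhere.

def make_messages(persona: str, user_info: str, request: str) -> list:
    """Build a chat transcript: a hidden system message plus the user's turn."""
    return [
        {"role": "system", "content": persona},  # hidden from the end user
        {"role": "user", "content": f"{user_info}\n\n{request}"},
    ]

messages = make_messages(
    "You are a helpful assistant that writes on-brand marketing copy.",
    "Customer segment: small-business owners.",
    "Draft a two-sentence advert for our accounting software.",
)
```

Keeping the curated customer information in its own message, rather than letting it accumulate across a long conversation, is one way to reduce the persona-drift risk mentioned above.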

Finally, there is the issue of bias in responses. The AI training process consists of both supervised learning and reinforcement learning, in which “good” responses are encouraged. The creation of the dataset can introduce unintentional biases into the model, and there have been some reports that ChatGPT expresses a pro-environmental, left-libertarian ideology when subjected to political compass tests. Further research on this topic still needs to be done, but if it is proven true, brands need to make sure that the content produced matches their tone of voice AND their brand values, so they do not alienate their intended audience.

There are obviously risks involved with using AI, ranging from reputational damage to privacy concerns, but do the benefits outweigh the risks? Here at Plextek, we use AI to optimise our Google Ads campaigns so that they target different time zones more effectively. This is a great example of a human marketer generating the initial content while using AI marketing software to strengthen the activity for finer results.

To conclude, ChatGPT and its successors have the potential to see real-world use. I am more comfortable with non-client-facing use cases: for instance, while the code produced by such models is often suboptimal, there could be benefits to using them for tasks such as brainstorming. For client-facing applications, like generating marketing content, great care should be taken in the way the model is prompted so that it does not produce misleading, biased, or offensive content.
