Some of the world’s biggest advertisers, from food giant Nestlé (NESN.S) to consumer goods multinational Unilever (ULVR.L), are experimenting with generative AI software such as ChatGPT and DALL-E to cut costs and increase productivity, executives say. But many companies remain wary of security and copyright risks, as well as the danger of unintended biases baked into the data that feeds the software, meaning humans will remain part of the process for the foreseeable future.
Generative AI can help speed up the creative process by allowing marketers to generate images, text, and videos with the push of a button — a radical departure from when it could take hours or days to develop a single concept. The software can also identify visuals that evoke emotion, which marketers can use to drive online engagement.
Marketing teams hope the technology will result in cheaper, faster, and virtually limitless ways to advertise products, executives at two top consumer goods companies and the world’s largest advertising agency told Reuters. Investment is already ramping up amid expectations that AI will forever alter how advertisers bring new products to market.
The technology has already been used to create ads viewed billions of times on social media and to predict how customers will respond to specific marketing messages. The Associated Press trained AI software to write short earnings news stories automatically, for example, freeing journalists to write more in-depth pieces. The Icahn School of Medicine at Mount Sinai has developed an AI-powered tool called Deep Patient that analyzes a person’s medical history and can detect more than 80 diseases up to a year before symptoms appear.
One of the biggest obstacles to greater adoption of generative AI by marketers is a lack of trust. “If you want a rule of thumb: Consider everything an AI service knows about you as if it were a juicy gossip piece. Would you want it getting out?” said Ben King, VP of customer trust at Okta, a provider of online authentication services.
In the past year, an AI system trained to “discover” hidden content in paintings went viral when the Dutch museum the Rijksmuseum (RIJKSM.AS) shared its research results online. The accompanying YouTube video showed X-ray scans revealing hidden details beneath the surface of Baroque-era artist Johannes Vermeer’s oil painting The Milkmaid.
A more realistic concern is that generative AI systems are susceptible to biases, particularly racial and gender biases, that may be ingrained in the data used to train the software. Earlier this year, Bloomberg News found that the Stable Diffusion model generated imagery reflecting a range of stereotypes, for example producing images with darker skin tones for prompts such as “fast-food worker” and “social worker,” while depicting higher-paying occupations mostly as men. Some researchers have begun to address these issues by incorporating “fairness” constraints into their AI programs, but such tools are still in the early stages of development.