Consumer goods companies need to understand overall requirements of AI applications to avoid failure, says GlobalData

Artificial Intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own or that help users make decisions. Generative AI refers to AI creating content in any shape or format, from writing original text to designing new structures and products. These technologies have developed rapidly over the last 18 months and generated serious hype. However, the benefits and costs of AI applications are poorly understood. Consumer goods companies need to fail – and fail fast – in their AI initiatives to gain this understanding, according to GlobalData, a leading data and analytics company.

GlobalData’s upcoming conference ‘AI in Consumer Goods: Fail Now!’ argues that consumer goods companies need to understand the technical, financial, and organizational requirements of any AI application to reliably assess the level of risk that application represents. Consumer goods companies need to consider how an AI should be trained to enable it to function cost-effectively. They also need to consider which delivery model is the most suitable from a data security and infrastructure cost point of view.

Rory Gopsill, Senior Consumer Analyst at GlobalData, comments: “Industry professionals remain bullish about AI’s potential to disrupt numerous industries (including consumer goods). According to GlobalData’s Q1 2024 Tech Sentiment Poll*, over 58% of respondents believe AI will significantly disrupt their industry. However, consumer goods companies should remember that the technology has limitations and risks. Chatbot failures caused Air Canada and DPD financial and reputational damage, respectively, in the first quarter of 2024. DeepMind’s own CEO warned against AI overhype in April 2024.”

In reality, adopting AI can pose very real financial and security risks. Training an AI can prove very expensive, especially if the task being automated is complex and requires an advanced AI. Furthermore, if an AI application requires training data that is commercially sensitive or confidential, a company may choose to train the AI in a private cloud environment rather than a less secure public cloud. Purchasing and maintaining the necessary IT infrastructure for this would be very expensive and organizationally demanding.

Gopsill continues: “Consumer goods companies need to be aware of these (and other) risks when choosing to develop AI applications. If they are not, their AI initiatives could fail with serious consequences. For example, sensitive data could be exposed, development costs could outweigh the application’s benefits, the quality of the AI application could be diminished, or the project could simply never get finished.”

Gopsill concludes: “Understanding these risks will enable consumer goods companies to fail early and safely and to learn from that failure. This will equip them with the knowledge to implement AI in a way that is safe and profitable. Fostering a culture of transparency around the risks of AI will help drive industry application and protect consumer goods companies and customers from the potential pitfalls of this evolving technology.”

*GlobalData’s Q1 2024 Tech Sentiment Poll was conducted with 352 participants globally.

Media Enquiries

If you are a member of the press or media and require any further information, please get in touch, as we're very happy to help.
