As a CX professional, are you considering deploying generative artificial intelligence? Here’s what to ask yourself when getting started.
Everyone in business is talking about generative artificial intelligence (AI). You’ve already seen countless headlines in recent months covering OpenAI’s ChatGPT and Google’s Bard as the two companies develop AI models capable of creating personalized written content by mining patterns from existing data.
Generative AI is taking off — and fast. ChatGPT alone has over 100 million monthly active users, and its website sees over 1 billion pageviews every month.
Individuals and businesses alike are leveraging generative AI for use cases ranging from composing poetry to writing requests for proposal (RFPs) to creating video advertisements. One thing is common across these use cases: appearing human. These models are increasingly good at it — even if that means inventing references in generated research papers.
Despite benefits that seem endless and risk-free, you shouldn't go all-in on generative AI blindly. As with any technology, especially a new one, you should weigh its advantages and disadvantages. While generative AI may seem attractive on the surface, you may lack the resources necessary to implement it, or you might not know the exact use case that would maximize its impact. And with AI in particular, be mindful that it needs guardrails to remain as responsible as it is effective.
Generative AI is growing rapidly, and it’s expected to be relied on heavily by organizations in several industries in the coming years. In fact, many organizations are already investing time and resources to deploy and leverage the technology. As a customer experience (CX) or employee experience (EX) expert in particular, you might find yourself considering generative AI for your operations.
Here are the top five questions to ask yourself as a CX professional to ensure generative AI models are effective, ethical, and aligned with your organization's values and goals.
1. How will you protect data privacy and security?
Before deploying any AI model, it's essential to consider the privacy and security of training data, especially data containing personally identifiable information (PII). Open source AI models pose particular risks to data privacy and security, stemming from open access to the source code, reliance on third-party libraries, and a lack of control over sharing and distribution.
To mitigate these risks, companies deploying AI models on sensitive data should put strong access controls and safeguards in place over the codebase, libraries, and data storage.
2. How will you identify and mitigate bias?
Generative AI models are particularly susceptible to bias because they're designed to learn from large data sets and produce predictions or recommendations based on that data.
Models must be trained on a diverse and representative set of data. In the context of customer and employee experience, this means having sufficient data to represent the demographics, cultures, and other segments of a company's customer and employee bases. Remember that data-driven bias is more likely when leveraging open source libraries because there's no validation process for the factual accuracy or ethics of the source material.
It's also vital to recognize and investigate potential algorithmic bias. What has the model been trained to optimize? For example, a number of generative AI models have been trained to produce output specifically for social media. This can lead to more polarizing or sensational content, because such content is common, effective, and amplified on those platforms.
Ask how the AI models you're considering have been trained and optimized for CX and EX use cases, as well as how they are being audited and monitored for bias.
3. Do you have the resources to deploy at scale?
Bringing your one-off questions to ChatGPT as an individual seeking quick answers is one thing. It's another to scale generative AI to tens or hundreds of millions of records across the entire enterprise.
To succeed at scale, generative AI requires large amounts of diverse and secure data, infrastructure such as high-performance computing clusters, and research and development resources to monitor, maintain, and grow models.
Cost is a major consideration in the build-versus-buy conversation, so seek to understand how well your organization is set up to take on the costs of scaling. Additionally, ask yourself: What level of effort will it take to integrate home-grown generative AI into business processes and workflows?
4. Is there a strong business case?
To be worth the cost, AI needs to be tied to measurable outcomes. Justifying the need for generative AI models is perhaps the most important question, so assess the associated costs and determine whether a strong business case exists.
Don't get carried away brainstorming potential cost-saving applications for generative AI before thinking through their practicality and implications. It would be great to save millions of dollars by converting contact center agents to chatbots, but consumers — and society at large — don't yet fully trust AI with data privacy or the handling of sensitive issues.
Effective starting points for a generative AI strategy in CX/EX are tied to positive business outcomes, easy to explain to the chief experience officer (CXO) and the rest of the C-suite, and simple to integrate into existing workflows.
5. Will customers and employees adopt it?
Technology solutions are most impactful when adopted at scale, yet AI is a technology that can be difficult to roll out broadly.
Customers and employees may be hesitant to interact with AI-powered systems or recommendations if they don’t trust the technology or feel that their privacy isn’t being respected.
It's important to communicate an AI strategy — especially one potentially impacting employee performance measurement, compensation, and day-to-day tasks — with transparency and honesty. Share modeling methodologies, encourage questions, and make regular audit or bias testing results available. Start with use cases that don't directly impact compensation or employee reviews, focusing instead on quality-of-life improvements such as task automation and workflow optimization.
Finally, to drive both adoption and meaningful business outcomes, your generative AI models must produce outputs that are accurate, relevant, and actionable.
Chances are you and your competitors are already using AI in some capacity. Attention to generative AI and its advantages for business is mounting, from producing content in the blink of an eye to streamlining processes for employees. Now is the time to move past the fad stage and focus this technology on tangible use cases that benefit your employees and your bottom line. Take advantage of the possibilities.