AI Security Needs to Be a Priority for Experience Programs


Emerging AI technologies provide great opportunities to transform how companies deliver customer and employee experiences, but they also come with serious security questions. At Medallia, our top priorities are our customers’ data privacy and security, and the responsible use of that data.

Medallia has been training and employing artificial intelligence (AI) models for more than 15 years with Text Analytics and Speech Analytics — including speech-to-text transcription — for businesses in sensitive industries like financial services, insurance, and healthcare, as well as for government agencies. 

More than one million active users every week use these AI-powered solutions across nearly every industry to understand and deliver personalized experiences to customers and employees. And today, we announced four groundbreaking generative AI solutions at Experience ’24 to accelerate those efforts. Because we’ve been focused on AI for as long as we have, security considerations are — and will always be — foundational to how we design and implement AI models.

When looking to implement the latest and greatest AI methods into your experience programs, it is critical to understand the risks a model could pose to the security of the data your business handles. 

Let’s outline some of the questions you should ask yourself and your potential AI solution vendor to get an idea of how they think about AI security.

Data security: Where is my data going? Who uses it?

Some large language models (LLMs) or other complex models may require your data to leave your own data centers and travel to those run by the model’s owner for analysis. When data flows “out” of your data center and “into” a third-party one, it creates opportunities for attackers to intercept that data and makes your data security dependent on the security controls implemented by the third party.

But there are other data leak risks, too. Data can “leak” at the source or at a third party’s subprocessors, and every additional party that handles your data expands the attack surface. Therefore, the flow of your data — where it comes from and where it goes — and the security controls protecting it at rest and in transit are extremely important when considering how secure your data truly is when using AI models.
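
To make the risk concrete, here is a minimal Python sketch of one common mitigation when data must leave your environment: redact obvious personally identifiable information (PII) and enforce encrypted transport before anything is sent to a third-party model API. The endpoint, payload fields, and regex patterns are illustrative assumptions, not any real vendor’s interface.

```python
import re

import requests  # third-party HTTP library, assumed installed

# Hypothetical third-party model endpoint -- for illustration only.
THIRD_PARTY_URL = "https://api.example-model-vendor.com/v1/analyze"

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")


def redact_pii(text: str) -> str:
    """Mask obvious PII before the text ever leaves your data center."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return PHONE_RE.sub("[PHONE]", text)


def analyze_externally(feedback: str) -> dict:
    """Send redacted text over HTTPS only -- encryption in transit."""
    payload = {"text": redact_pii(feedback)}
    resp = requests.post(THIRD_PARTY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()
```

Redaction at the source narrows what a third party (or an interceptor) could ever see, but it does not remove the need to vet that party’s own controls.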

Model risks: What is my data being used for? How is it being used?

In an ideally data-secure world, businesses would build in-house proprietary models, train them on their own data, and prevent that data from leaving their data centers. Unfortunately, building proprietary AI models tailored to a single business’s data and systems is prohibitively time-consuming and expensive, especially for organizations that are early in their AI journey.

To mitigate these risks, many organizations opt to employ in-house open-source models — that is, open-source models hosted in their own data centers. Even with extremely secure data pipelines, risks remain: models trained on copyrighted content, models no longer supported or upgraded, models built on outdated technology, and so on. These concerns point to another type of security risk: model stability.
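
As an illustration of the in-house pattern, the sketch below loads an open-source model with the Hugging Face transformers library (our example choice; the post doesn’t prescribe a toolkit) and runs inference entirely inside your own environment, so customer text never crosses the network at inference time.

```python
# Minimal sketch of hosting an open-source model in-house, using the
# Hugging Face transformers library (an illustrative choice).
from transformers import pipeline

# Weights are fetched once from a public registry -- itself a supply-chain
# step worth vetting -- then cached locally for all later runs.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

feedback = "The support agent resolved my issue quickly."
print(classifier(feedback))  # inference runs entirely on local hardware
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The trade-off named above still applies: you now own patching, upgrades, and provenance checks for that model yourself.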

How do I mitigate the risks to the security of my data posed by AI?

So what should you, the business, do to mitigate these risks associated with the security of AI models? The answer is to choose partners that are cognizant of these risks and actively dedicate model design, legal contracts, data security, compliance, and other security-relevant resources to their platforms.

For example, our high standards for data storage include encryption at rest, strict access control, data retention policies, right to delete, and compliance with international data security standards, such as ISO 27001. What does all of that mean? It means we comply with the highest security standards, ensuring that data is encrypted, retained only for a set period of time, and deleted upon request.
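
In practice, controls like these come down to concrete mechanisms. Below is a minimal sketch, using the open-source Python cryptography library, of two of them: symmetric encryption at rest and a retention-window check that flags records due for deletion. Key management, access control, and the actual deletion path are simplified away, and the field names are our own illustration.

```python
from datetime import datetime, timedelta, timezone

from cryptography.fernet import Fernet  # pip install cryptography

RETENTION = timedelta(days=365)  # illustrative retention window

key = Fernet.generate_key()  # in production: a managed key service
fernet = Fernet(key)

# Encryption at rest: persist only the ciphertext plus a timestamp.
record = {
    "ciphertext": fernet.encrypt(b"customer feedback text"),
    "stored_at": datetime.now(timezone.utc),
}


def due_for_deletion(rec: dict) -> bool:
    """Retention policy check: has the record outlived its window?"""
    return datetime.now(timezone.utc) - rec["stored_at"] > RETENTION


# Decryption is where strict access control would be enforced.
plaintext = fernet.decrypt(record["ciphertext"])
```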

Medallia also maintains compliance with a number of extremely stringent security certifications and requirements in addition to proprietary security standards not covered by regulations. When using our AI models — Text Analytics, Speech Analytics, speech-to-text, and anything fed by these models — businesses, government agencies, and individuals are protected by many layers of security, down to how we design our AI products.

In addition to those layers, we tackle these issues in several ways, including:

1. Implementing high standards for security of data storage and transmission, with alignment to best standards such as ISO 27001 and SOC 2.

2. Aligning with AI security standards and best practices as recommended by the National Institute of Standards and Technology AI Risk Management Framework — also known as NIST AI RMF — and ISO/IEC 23053 Framework for AI Systems Using Machine Learning (ML), some of the most robust global standards.

3. Creating an internal AI Moderation Council that is responsible for overseeing all AI product development at Medallia. 

4. Launching an external AI Advisory Board that will include participation from Medallia customer and partner communities, focused on the responsible and ethical use of AI as well as exchanging learnings, challenges, and best practices.

Vendors should also always be transparent about how they will use your data in your contracts with them. They should ask explicitly for your consent to the types of data they will use and for what purpose. Medallia ensures that customers give clear consent in our contracts with them. When we change anything about how data is being used, we alert our customers in advance. We will never send your data to an ambiguous subprocessor or anywhere not previously consented to under the terms of the contract.
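
To show what explicit, purpose-bound consent can look like in software, here is a small sketch of a consent check gating data processing. The record structure and purpose names are our own illustration, not Medallia’s internal schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ConsentRecord:
    """One contractual grant: a data type tied to a specific purpose."""
    data_type: str  # e.g. "survey_responses", "call_transcripts"
    purpose: str    # e.g. "text_analytics", "model_training"


def processing_allowed(grants: set[ConsentRecord],
                       data_type: str, purpose: str) -> bool:
    """Deny by default: process only what was explicitly consented to."""
    return ConsentRecord(data_type, purpose) in grants


grants = {ConsentRecord("survey_responses", "text_analytics")}

assert processing_allowed(grants, "survey_responses", "text_analytics")
assert not processing_allowed(grants, "survey_responses", "model_training")
```

The key design choice is the default: anything not covered by an explicit grant is refused, mirroring the contractual consent described above.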

The outcome of all of this is a robust, secure foundation for AI models past, present, and future, as well as for the overall structure of Medallia’s experience management platform. Our dedication to building democratized, personalized, and secure analytics and reporting for enterprise-scale businesses requires us to be at the forefront of security processes and regulations. You get to reap all the benefits of that work, knowing that you have a responsible stakeholder invested in the future of your AI-powered experience programs.

Data security needs to be paramount for your AI vendor

Data security is essential when choosing an AI vendor, a lesson we at Medallia have learned working in sensitive sectors for the past two decades. Safeguarding data is at the core of how we approach developing and delivering software. When considering AI solutions, it’s vital to probe potential vendors on their security measures.

Questions about data privacy, model risks, and ethical considerations should guide your evaluation. Understanding where your data goes and how it’s used is crucial. Mitigating risks requires selecting vendors who prioritize security, compliance, and transparency. Medallia, for example, adheres to stringent data protection standards, complies with international regulations, and ensures explicit customer consent.

Incorporating AI into your experience programs is a powerful move, but it must be done with care. Medallia not only provides cutting-edge AI capabilities, but a commitment that our customers’ data privacy, security, and responsible use of that data are our top priorities.