Zac Trojak

Principal, Public Sector

Improving the interactions everyday Americans have with critical government services is receiving an unprecedented level of focus. This heightened attention to customer experience (CX) is evidenced by a broad spectrum of initiatives, including proposed legislation, the formation of the Office of American Innovation and the GSA Centers of Excellence, and a specific focus within this year’s President’s Management Agenda.

Perhaps nowhere is this more visible, though, than in the June guidance issued by the Office of Management and Budget (OMB) within Circular A-11, Section 280, entitled “Managing Customer Experience and Improving Service Delivery.” Section 280 provides a framework for many of the nation’s critical customer-facing functions, identified as High Impact Service Providers, to establish a system for measuring, reporting on, and improving CX.

You’re likely familiar with the basics – what A-11 generally outlines, the relevant requirements and deadlines, and OMB’s recommendations for getting started. What follows is a different perspective, one that looks at some potentially overlooked points. Below are four specific lines from Section 280 that go beneath the surface and speak to the crucial, underlying intent of how these efforts can improve citizen-facing services.

1. “Obtaining direct feedback from customers is critical to CX performance improvement.”

In a vacuum, this best practice summary statement couldn’t ring more true. What’s perhaps overlooked here, though, is the value inherent in hearing from as many customers as possible.

The pace of change has irrevocably transformed the world, confronting customers with a wide variety of ever-evolving challenges. As such, the government can no longer rely solely upon annual studies or the viewpoints of a few to represent the whole. Customer feedback is the most vital data point any agency can hope for.

Agencies that have historically bristled at the notion of collecting voluminous amounts of feedback should shift their focus away from limiting how much feedback is sought and instead develop a strategy for managing it effectively.

2. “Ensure high-impact programs are receiving and acting upon customer feedback to drive performance improvement and service recovery.”

This line is a bit buried, ranking sixth in a list of eight objectives for implementing the guidance. However, it contains a core, yet easily obscured, principle: it is not enough to simply collect customer feedback. To truly drive excellence in CX, agencies must “act” upon that feedback and do so in a timely manner.

Just as in the private sector, customers are not taking time out of their busy lives to provide feedback in hopes their response will be turned into a data point on a scorecard. They’re doing so because they have something to say; because they have a problem that needs solving; because they have an idea for how to improve. Capturing their inputs simply in order to benchmark performance is not enough.

Agencies must focus on how to systematically, and in as near real time as possible, act upon customer feedback in ways both large (agency- or program-wide) and small (for the individual customer).
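As a purely illustrative sketch, consider what this “large and small” loop could look like in practice. The record fields, threshold, and routing logic below are assumptions for the sake of the example, not anything prescribed by Section 280: low scores trigger individual service recovery, while every response rolls up into a program-level trend.

```python
from dataclasses import dataclass, field
from datetime import datetime
from statistics import mean

@dataclass
class FeedbackResponse:
    customer_id: str
    program: str
    score: int              # e.g., a 1-5 satisfaction rating
    comment: str
    received_at: datetime

@dataclass
class ServiceRecoveryCase:
    customer_id: str
    program: str
    reason: str
    opened_at: datetime = field(default_factory=datetime.utcnow)

def triage(responses, low_score_threshold=2):
    """Route low scores to individual follow-up ("small") and roll all
    scores into a program-level view ("large")."""
    cases = []           # individual service recovery
    by_program = {}      # aggregate trend data
    for r in responses:
        if r.score <= low_score_threshold:
            cases.append(ServiceRecoveryCase(
                customer_id=r.customer_id,
                program=r.program,
                reason=f"Low score ({r.score}): {r.comment[:80]}",
            ))
        by_program.setdefault(r.program, []).append(r.score)
    averages = {p: round(mean(scores), 2) for p, scores in by_program.items()}
    return cases, averages
```

The point is less the code than the shape of the loop: the same response both triggers an individual follow-up and feeds the broader performance picture.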

3. “Data should be coded so that it can be sorted for action by organizational units, such as office location.”

This line is also contained within the best-practice section, identified as a specific best practice in its own right, and it merits discussion because it highlights an overarching key to success that is otherwise not touched upon in the guidance.

As the private sector recognized years ago, improving CX is everyone’s job. As such, individual organizational units, like a specific office or call center team, should have direct, real-time access to customer feedback. They possess critical knowledge and skill sets that are not only unmatched across the enterprise but are also the key to connecting agency policies with the impact those policies have on real-world CX.

Improving CX does not happen as the result of an insights organization crunching numbers day in and day out and then disseminating findings, though that aspect should be part of a more comprehensive approach. Rather, agencies need to enable and empower the frontlines to operationalize CX, allowing them to review and act upon direct feedback and, in turn, bubble their learnings back up for wider consumption and subsequent analysis.
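To make the idea of “coding” feedback concrete, here is a minimal, hypothetical sketch. The office names, record fields, and sample comments are invented for illustration; the point is simply that once each response carries an organizational-unit tag, a frontline team can sort and pull its own feedback directly.

```python
from collections import defaultdict

# Hypothetical feedback records already "coded" with the organizational
# unit that served the customer.
feedback = [
    {"office": "Denver Field Office", "score": 2, "comment": "Waited two hours."},
    {"office": "Denver Field Office", "score": 5, "comment": "Quick and friendly."},
    {"office": "National Call Center", "score": 3, "comment": "Transferred three times."},
]

def feedback_by_unit(records):
    """Sort coded feedback for action by organizational unit."""
    grouped = defaultdict(list)
    for record in records:
        grouped[record["office"]].append(record)
    return grouped

# Each frontline team reviews only its own customers' feedback.
for office, records in feedback_by_unit(feedback).items():
    average = sum(r["score"] for r in records) / len(records)
    print(f"{office}: {len(records)} responses, average score {average:.1f}")
```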

4. “Agencies are welcome to ask additional questions beyond those seven.”

Section 280 lists seven score-based questions each agency should be asking. The guidance similarly supports a best practice of keeping questionnaires short in order to improve response rates and reduce the burden on the customer. This direction is perfectly aligned with industry leaders and should be applauded. However, I’d suggest agencies include one additional question – an open-ended question asking the customer for their “why.”

More powerful than any score is the reason behind it. Agencies should be equally focused, if not more so, on the “why” behind the rating for a particular experience, and customers should be given a forum to share such feedback.

Technology is paramount here, as analyzing unstructured feedback manually does not scale and presents challenges related to interpretation and bias. Even so, agencies will limit their ability to drive change if they look at scores alone. For instance, knowing your score for “Ease” has been trending down for weeks is good information, but what will you do with that data point alone? Asking an open-ended question will enable you to understand why, in this example, customers are finding the experience cumbersome, not just that a problem exists.
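As a simple illustration of the difference the “why” makes, consider the hypothetical sketch below. The sample comments and keyword-to-theme map are invented, and real programs would rely on far more sophisticated text analytics, but even basic theme tagging of open-ended responses points to what is actually making an experience cumbersome rather than only confirming that the “Ease” score is slipping.

```python
# Hypothetical responses to an open-ended "why" question, paired with
# each customer's "Ease" score.
responses = [
    (2, "The form asked for documents I had already submitted."),
    (1, "I could not find where to upload my documents."),
    (3, "The instructions were confusing and the site timed out."),
]

# A deliberately simple keyword-to-theme map; real text analytics would
# go well beyond keyword matching.
themes = {
    "documents": "document handling",
    "upload": "document handling",
    "instructions": "unclear guidance",
    "timed out": "site reliability",
}

def tag_themes(comment):
    """Return the themes whose keywords appear in a comment."""
    lowered = comment.lower()
    return {theme for keyword, theme in themes.items() if keyword in lowered}

theme_counts = {}
for score, comment in responses:
    for theme in tag_themes(comment):
        theme_counts[theme] = theme_counts.get(theme, 0) + 1

# Instead of only knowing that "Ease" is trending down, the agency can
# see which themes are driving the decline.
print(sorted(theme_counts.items(), key=lambda item: item[1], reverse=True))
```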

For more insights into the evolution of CX across government or to see a demonstration of how Medallia is helping to lead CX transformation in the public sector, please reach out to Government@Medallia.com.