Diagnosing the Disconnect
I was brought in to reduce help desk tickets. What I found was something deeper: systemic gaps in training and communication that were quietly overloading every team in the program.
Client
CMS Innovation Center
Role
Human-Centered Design Lead
Team Structure
Embedded into Agile Product Team
Timeline
8 weeks
Methods
Jobs-To-Be-Done · Stakeholder Interviews · SME Interviews · Artifact Analysis · Opportunity Prioritization Workshop
Context
I was embedded as the Human-Centered Design Lead within an agile product team supporting the data submission portal for the CMS (Centers for Medicare & Medicaid Services) Innovation Center, a federal program that runs pilots called Models to test improvements to Medicare and Medicaid.
Each model involves a lot of moving parts. There's a core Model Team, a network of third-party implementation partners, and the healthcare systems actually participating in the pilot — called Model Participants. All of these groups work together to collect, submit, and evaluate data. The outcomes aren't abstract: evaluation results directly affect financial incentives for participating healthcare systems.
My team sat at the intersection of all of it — supporting the portal that model participants used to submit their data, and fielding the fallout when things went wrong.
Challenge
Help desk ticket volume was high. Cases were being misrouted, triaged multiple times by different teams, and taking too long to resolve. Every team was frustrated by the resulting deadline extensions.
Leadership wanted fewer tickets. My job was to figure out what was actually driving them.
Process
Step 1: I pushed back on the brief
When I was brought in, the ask was simple: reduce tickets. But we had no idea how many teams were involved in the operation, what roles each team comprised, or who was accountable for overseeing tickets on the model team. (Hint: nobody. No role existed to oversee ticket operations at all.)
The bigger obstacle was documentation. The lifecycle of a model, the role of each team member in each phase, and the communication protocols had never been documented from a design perspective; everything that existed had been written by the development team, which made the problem hard to conceptualize. I started with the question:
What do model teams need to do their jobs —
and where are those needs going unmet?
So I began by interviewing subject matter experts and internal stakeholders, not to map what people were doing, but to understand what they were fundamentally trying to accomplish. I anchored the research in a Jobs-To-Be-Done framework to understand model implementation conceptually and to document the process in technology-agnostic terms.

I created a job map from the secondary research and five interviews, detailing the model team's goals and needs in each phase of a model. The map let us narrow the project's scope to the model team's communication needs during two phases of model implementation. A simplified sketch of the structure behind such a map appears below.
I framed the problem statement around that scope, which set the goals for this round of research. At this point the goals were largely exploratory rather than confirmatory, because we did not yet understand the problem space well enough.
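For illustration only: the phases, jobs, and needs below are hypothetical placeholders, not the actual CMS model phases or findings. This is a minimal sketch of how a job map can be captured as data, assuming a simple phase-to-jobs structure.

```python
# Illustrative job map as a data structure. Phase names, jobs, and
# unmet needs are hypothetical placeholders, not real findings.
from dataclasses import dataclass, field

@dataclass
class JobStep:
    job: str                # what the team is trying to accomplish
    needs: list[str]        # what they need to get it done
    unmet: list[str] = field(default_factory=list)  # gaps seen in research

job_map: dict[str, list[JobStep]] = {
    "Onboarding": [
        JobStep(
            job="Prepare participants to submit data",
            needs=["clear data requirements", "a submission timeline"],
            unmet=["requirements arrive late and in bulk"],
        ),
    ],
    "Data collection": [
        JobStep(
            job="Answer participant questions at scale",
            needs=["a structured communication channel"],
            unmet=["every team improvises its own channel"],
        ),
    ],
}

# Phases with unmet needs are candidates for narrowing the scope.
focus = [phase for phase, steps in job_map.items()
         if any(step.unmet for step in steps)]
print(focus)
```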
Step 2: I couldn't talk to the people I most needed to reach
Here's where it got complicated. The people generating the most tickets, the model participants (the actual healthcare systems), weren't directly accessible to me. I had to work with subject matter experts as proxy users: people with deep firsthand knowledge of participant experiences, but not the participants themselves. To compensate, I triangulated the interview data against the ticket data, the only record of participant voices I had access to.
I also interviewed 6–8 model team members across different roles, and designed those conversations to go beyond surface-level workflows. I wanted to hear about the workarounds, the frustrations, the things people had simply accepted as normal. Because the interviews ran concurrently with the ticket analysis, we could keep extending the interview protocol, adding not only questions prompted by earlier interviews but also questions uncovered while analyzing tickets.
Step 3: 250 tickets told me things people couldn't
Alongside the interviews, I analyzed 250 help desk tickets, looking for patterns in issue types, routing paths, and resolution steps. What I found was striking. Certain issues kept coming back. Certain routing paths kept looping. The tickets weren't random; they were signals of something structural. The volume wasn't a help desk problem. It was downstream noise from failures happening much earlier in the process. The sketch below shows the kind of tally this boiled down to.
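A minimal sketch of that tally, assuming each ticket has already been hand-coded with an issue type and an ordered routing path. The field names, categories, and routes are hypothetical, not the real ticketing schema.

```python
# Minimal ticket-pattern tally. Fields and categories are hypothetical
# stand-ins for the hand-coded ticket data, not the real schema.
from collections import Counter

tickets = [
    {"issue": "access provisioning", "route": ["help desk", "dev team", "help desk"]},
    {"issue": "file validation error", "route": ["help desk", "model team"]},
    {"issue": "access provisioning", "route": ["help desk", "dev team", "help desk"]},
    # ... roughly 250 coded tickets in the real analysis
]

# Recurring issue types: the "certain issues kept coming back" signal.
issue_counts = Counter(t["issue"] for t in tickets)

# A route that revisits a team it already passed through is a triage
# loop: the "routing paths kept looping" signal.
looping = [t for t in tickets if len(t["route"]) != len(set(t["route"]))]

print(issue_counts.most_common(3))
print(f"{len(looping)} of {len(tickets)} tickets looped back to a prior team")
```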
Based on the interviews and ticket analysis, I developed user profiles and archetypes for each role on the model team. Then I put them in front of the actual people in those roles to validate and refine them. That's when I hit a wall I hadn't anticipated.
Step 4: The archetypes didn't match the system
The conceptual roles that emerged from my research — the way people actually operated — didn't map onto the roles defined in the ticketing system. In the system, roles determined access and privileges. In real life, people wore multiple hats, worked across boundaries, and operated in ways the system didn't account for.
After discussions with the development and business teams, I consolidated the model team roles across the entire data collection platform, mapping official responsibilities to system privileges. That consolidation cleared up a great deal of confusion for the model team and contractors in setting up and using the platform. The sketch below shows the shape of that mapping.
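Sketched with hypothetical role and privilege names; the platform's real roles and permissions aren't reproduced here.

```python
# Consolidated roles mapping official responsibilities to system
# privileges. Role and privilege names are hypothetical placeholders.
CONSOLIDATED_ROLES: dict[str, dict[str, set[str]]] = {
    "Model Lead": {
        "responsibilities": {"approve submissions", "manage deadlines"},
        "privileges": {"portal:approve", "portal:extend-deadline"},
    },
    "Data Coordinator": {
        "responsibilities": {"upload data", "resolve validation errors"},
        "privileges": {"portal:upload", "portal:view-validation"},
    },
    "Support Liaison": {
        "responsibilities": {"triage participant questions"},
        "privileges": {"portal:view-tickets"},
    },
}

def privileges_for(role: str) -> set[str]:
    """Look up the system privileges a consolidated role carries."""
    return CONSOLIDATED_ROLES[role]["privileges"]

print(privileges_for("Data Coordinator"))
```

Keeping responsibilities and privileges side by side in one source of truth is the point: anyone provisioning access can trace a privilege back to the responsibility that justifies it.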
The mismatch wasn't just an interesting observation. It had direct implications for design opportunities, which I structured as 'How might we' questions.
Step 5: I built a roadmap with the team
Once I had findings and design opportunities in hand, I facilitated an Opportunity Prioritization Workshop with stakeholders across development, design, and business. We mapped opportunities against value and effort together, refined the OKRs as a group, and negotiated sequencing in real time. It was messier than presenting a finished plan, but it meant the team walked out of that room with genuine ownership over what came next. A minimal sketch of the value-versus-effort ranking appears below.
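A minimal sketch of that ranking, with hypothetical one-to-five scores standing in for the numbers the group negotiated live.

```python
# Value-vs-effort ranking from the workshop. The 1-5 scores here are
# hypothetical; the real ones were negotiated live by the group.
opportunities = {
    "Usability inspection of the submission portal": {"value": 5, "effort": 2},
    "Comprehensive help desk FAQ": {"value": 4, "effort": 3},
    "Structured participant communication channel": {"value": 5, "effort": 5},
    "Email ticket routing improvements": {"value": 3, "effort": 4},
}

# Rank by value per unit of effort: high-value, low-effort items rise to
# the top, which is how the usability inspection became the first move.
ranked = sorted(opportunities.items(),
                key=lambda kv: kv[1]["value"] / kv[1]["effort"],
                reverse=True)

for name, score in ranked:
    print(f'{name}: value={score["value"]}, effort={score["effort"]}')
```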
Finally, I created an executive summary (goals achieved in this round of research, findings, design opportunities, and the roadmap) to present to the external teams at CMS and third-party partners.
Key Insights
The help desk ticket problem wasn't a help desk problem.
After 8 weeks of research — 250 tickets analyzed, hours of interviews, and one messy prioritization workshop — here's what I found:
Three groups had unmet needs, and those needs were feeding into each other.
Model participants weren't set up to succeed.
Education about data requirements, submission processes, and how evaluation affected their incentives arrived too late, in overwhelming volume, and was often unclear. They weren't reaching out to the help desk because the system was broken. They were reaching out because no one had prepared them well enough to avoid needing help in the first place.
Model teams had no reliable way to communicate with participants at scale.
Important information got lost, duplicated, or delivered inconsistently. Without a structured channel, every team was improvising — and participants were getting different answers depending on who they asked.
The help desk was operating without the right tools.
Staff were managing complex access provisioning, triaging a flood of email-generated tickets, and doing it all without a comprehensive FAQ to lean on.
The Real Reason
Unmet training needs for participants, unmet communication needs for model teams, and unmet informational needs for help desk staff were compounding on each other — generating the coordination breakdowns that produced redundant tickets, unnecessary escalations, and slow resolution times. The help desk was the last stop in a chain of failures, not the source of them.
This reframing unlocked five design opportunities:
- How might we help model participants understand what successful participation looks like?
- How might we help model participants resolve validation and submission issues on their own?
- How might we make access provisioning easier for help desk staff?
- How might we give the help desk a reliable, comprehensive FAQ for triaging?
- How might we improve how email-generated tickets get routed?
Solution
Using the value vs. effort framework from the workshop, we identified a usability inspection of the data submission portal as our first move — cognitive walkthroughs and heuristic evaluations, conducted through the lens of each consolidated role.
High value, lower effort, and something we could start immediately without waiting for new infrastructure. More importantly, it was grounded in evidence — we could point directly from research findings to the decision to do it.
I also delivered a phased product roadmap tying each design opportunity to specific OKRs, sequenced by priority.
Impact
- Delivered validated user archetypes for each model team role, verified by the people in those roles
- Produced a research-driven product roadmap with cross-functional buy-in
- Shifted the organizational narrative from "the help desk needs more capacity" to "we need to address gaps upstream"
- Aligned development, design, and business stakeholders around shared OKRs in a single workshop
- Laid the foundation for an ongoing usability inspection program for the data submission portal
Takeaways
The stated problem is rarely the whole problem.
I was hired to reduce tickets. The actual problem was a fragmented communication ecosystem affecting three groups in three distinct ways. Getting there required staying in the research longer than felt comfortable — and resisting the pull toward solutions before the diagnosis was solid.
Constraints are data.
Not being able to talk directly to model participants forced me to be more rigorous about how I used proxy users and triangulated findings. The mismatch between conceptual roles and system roles was genuinely frustrating — but it turned out to be one of the most important things I found.
Co-creation isn't just good process. It changes outcomes.
The roadmap we built together in that workshop moved faster through the organization than anything I could have presented solo. People advocate for things they helped make.