01 The Problem 
There is a large knowledge gap between veterans and new hires on Pushpay's Customer Success team. This results in siloed knowledge and slower support times. Many resources have been provided to counter this issue, but none have been effective in centralizing knowledge. 

The initial scope of this project focused on the Service Delivery team, but it has since expanded to include research into the broader CS team and the prototyping of a knowledge and resource center spanning multiple teams.

Project Goals
Use user research to collect data on how existing CS members solve cases and handle day-to-day problem solving. 

Identify which resources are being used, and how they can be improved.

Reduce the time SD team members spend integrating feeds within their mobile apps and estimate time saved via reproducible testing. 

Reduce the time SD team members spend solving cases and estimate time saved via reproducible testing.

Reduce the impact of veteran/knowledgeable team members leaving the team.

Reduce the impact of fluctuating task capacity due to sick/vacation leave. 

Provide a centralized location for knowledge and resources, maintained regularly through ownership distributed across the CS team. 
02 Discovery
 
User Interviews 
We spoke with veteran members to understand how they used existing knowledge resources to answer tickets. This helped us understand user attitudes towards tickets. 

Some of our questions: 
“How do you normally answer tickets?” 
“What was particularly challenging or easy about working in Confluence?”

Contextual Inquiries 
Next, we wanted to observe how new hires (< 90 days) answered tickets. This helped us understand user behavior.

Our guiding questions were: 
How do participants find the answer to questions?
What is their process for troubleshooting tickets?
Why do they reach out for help?
Why/How do participants share learnings? 

Surveys 
To extract quantitative data, we also asked the ADS team to fill out a brief questionnaire after solving a ticket.
Our guiding questions were: 
Which resources were utilized?
Where did participants learn about the relevant information?
03 Analysis and Insights
Our team created an affinity diagram with key takeaways from both the interviews and contextual inquiries. We later created a Miro storm board in order to more easily discern patterns and qualitative insights. The user surveys provided quantitative support for the patterns our affinity diagram uncovered, allowing us to put statistics alongside user quotes and journey narratives.

Summary
SD team members find the most touted knowledge resource to be too difficult, frustrating, and intimidating to navigate.
Instead, members relied on reaching out to other team members when troubleshooting customer and internal issues. 
However, a culture of self-sufficiency has been encouraged, which delays asking for help and costs members time as they fall back on suboptimal resources.
04 MVP
Now that we had completed our first round of user research, we were able to present our findings to senior members of the CS team and devise a plan for moving forward. We knew that we could consolidate many existing knowledge resources into a more navigable experience.
I led an ideation sync with the team to pitch possible solutions for what content would best serve a minimum viable product. The goal was to create a framework that could be tested as soon as possible so the cycle of testing and iteration could get underway. 
Based on our research findings I created an MVP which would require minimal effort to navigate, meaning all the necessary information would be contained on the same screen.
05 Usability Testing
I recruited internal users, designed the testing script, and led the in-person tests.
Goal
The goal of this study was to compare how long it takes users to integrate feeds, or to gain the knowledge necessary to solve cases. Users first completed a task using the currently available resources, then completed the same task using the new resource. Time to complete each task was recorded for the old and new resources to determine whether the new resource improved case-answering efficiency. User errors with the new resource were also recorded to direct future iterations.
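As a rough illustration of how the recorded timings can be summarized, here is a minimal sketch. The numbers and the `summarize_savings` helper are hypothetical, not actual study data; the study's real analysis may differ.

```python
from statistics import mean

# Hypothetical paired timings, in minutes, for the same task:
# each tuple is (time with old resources, time with new resource).
paired_times = [
    (18.5, 9.0),
    (22.0, 11.5),
    (15.0, 10.0),
    (30.0, 14.5),
]

def summarize_savings(pairs):
    """Return mean time saved per task and the percentage reduction.

    Uses a paired comparison: each participant's old-resource time is
    compared against their own new-resource time on the same task.
    """
    savings = [old - new for old, new in pairs]
    mean_saved = mean(savings)
    pct_reduction = mean_saved / mean(old for old, _ in pairs) * 100
    return mean_saved, pct_reduction

saved, pct = summarize_savings(paired_times)
print(f"Mean time saved: {saved:.1f} min ({pct:.0f}% reduction)")
```

The paired design matters here: comparing each user against themselves controls for individual speed differences, so fewer participants are needed to see a reliable effect.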
06 Findings & Next Steps
The resource has undergone updates based on the feedback received. The next version has expanded in scope, bringing in members of other teams who will help populate the resource with information relevant to their departments. We are currently mapping out the structure of the expanded resource, a task that the previous testing has greatly expedited. The project is ongoing and will continue to evolve through a cycle of iteration and usability testing.
Thanks for looking!
