How we improved the collection and analysis of customer feedback

Alessio Casella
The Memrise Engineering Blog
6 min read · Jul 12, 2021


Collecting and addressing customer feedback is a core part of how any company improves in the long term. Luckily, here at Memrise we’re always listening. Every month we gather lots of feedback from various channels, and we’ve been working non-stop to improve how we organise and make sense of this trove of data, and how we share it with the wider team.

In this blog post, I’ll describe the challenges we faced and how we solved them, especially with the move from our previous Customer Support tool to our new one (Zendesk).

Behind the scenes

Our feedback-collection processes effectively run on Zendesk. We funnel all support emails, in-app messages, store reviews, social media DMs and forum posts into Zendesk, and we then categorise and analyse inbound messages from there.

In practice, we collect feedback by adding one or more text strings (aka tags) to each ticket we receive. This happens either manually or automatically, based on keywords or on specific actions we take when resolving customer messages.
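To give a rough idea of what the automatic part looks like, here is a minimal sketch of keyword-based tagging. The keywords and tag names below are made up, and in reality this is configured through the CS tool’s triggers rather than custom code:

```python
# Minimal sketch of keyword-based auto-tagging (keywords and tag names
# are made up; in practice this lives in the CS tool's trigger rules).
KEYWORD_TAGS = {
    "refund": "payments_refund_request",
    "font": "font_size_too_small",
    "crash": "app_crash_report",
}

def auto_tags(message: str) -> set:
    """Return the tags whose keyword appears in the message."""
    text = message.lower()
    return {tag for keyword, tag in KEYWORD_TAGS.items() if keyword in text}

print(auto_tags("The font is way too small on my phone"))
# -> {'font_size_too_small'}
```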

This creates a vast tapestry of complaints and observations on the product and the customer experience, as well as internal data on our operations, which then need to be categorised and analysed.

Issues and limitations with our old process

Back when we were using our previous CS tool, collecting and analysing feedback was somewhat confusing and time-consuming, due to a few problems with both the tool and our processes:

1. Analysing feedback data was a long and manual process prone to errors.

Our previous CS tool didn’t have its own analytics software that could handle this kind of data, so we had to export the list of tags and the number of reports every month and analyse them in Google Sheets.

This meant manually adding all the numbers to a spreadsheet every month, and then calculating percentages and trends with complex formulas. Generating a monthly Voice of the Customer report thus became a very long task that was neither efficient nor scalable.

2. Performance metrics and feedback couldn’t be analysed and presented in the same space

As above, limitations with the tool meant that we couldn’t report on operational metrics (such as ticket volume) alongside feedback data. This posed an extra obstacle to creating a single report that brings multiple data sources together in the same tool.

3. We didn’t have a clear taxonomy of the feedback we were collecting

Although we had some general categories for tags (e.g. Design, Content, Payments), we were missing a clear overview of where each piece of feedback should sit. Due to limitations with our previous software, we also opted to create ‘umbrella tags’ for specific categories (e.g. feedback, feature-requests), which in practice created more confusion than benefit.

4. Feature requests were getting lost among the rest of the feedback

Until a few months ago, we were collecting feature requests infrequently and without a specific process. Moreover, since we were recording these requests as tags (as we did with the rest of the feedback), we struggled to report on them easily and clearly.

Zendesk and improvements to the process

When we moved over to Zendesk, one of the first challenges we wanted to tackle was to make improvements to the feedback collection and analysis. Zendesk has lots of robust tools and features that allowed us to scale up what we could do with the data and make the whole process speedier and easier for us.

Based on the pain points and limitations mentioned before, we set ourselves a few goals:

  1. Have a better overview and understanding of the feedback we’re collecting (and, as part of this, increase its coverage)
  2. Give feature requests their own space
  3. Save time and effort when collecting and analysing feedback
  4. Save time and effort when visualising data and creating dashboards and reports for the wider team

Achieving these would allow us not only to make everything more efficient, but also to improve the quality and visibility of these insights.

How we approached the project

Improving our process

First of all, it was crucial to develop a new process based on the new tool, and to clarify internally how we collect and categorise feedback. In Q1 2021, the Customer Support team ran a workshop where we came together to:

1. Define a new taxonomy for feedback

We mainly reviewed the existing categorisation and added sub-categories to ensure no feedback was orphaned (i.e. every tag should have both a parent category and a sub-category).

We now have a better structure that aims to cover all main areas of the Memrise product, which roughly correspond to specific teams or departments.

The hierarchy now has:

  • Level 1: Parent category (e.g. Product)
  • Level 2: Child categories (e.g. Design, Features, Learning/Tests)
  • Level 3: Individual tags/Topics (e.g. font_size_too_small)

A peek into our feedback taxonomy, with parent categories and their child categories:

  • Product: Design, Features, Learning/Tests
  • Marketing: Payments/Subscriptions, CRM
  • Content: Localisation, Official content, UGC
  • Customer Support: Internal tracking, Bugs
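As an illustration, the same three-level hierarchy could be represented roughly like this. The mapping is simplified, and only the example tags mentioned in this post are shown:

```python
# Simplified sketch of the three-level taxonomy: parent -> child -> tags.
# Only a couple of illustrative tags are shown; most lists are left empty.
TAXONOMY = {
    "Product": {
        "Design": ["font_size_too_small"],
        "Features": [],
        "Learning/Tests": [],
    },
    "Marketing": {"Payments/Subscriptions": [], "CRM": []},
    "Content": {
        "Localisation": [],
        "Official content": ["translation_errors_official_courses"],
        "UGC": ["translation_errors_community_courses"],
    },
    "Customer Support": {"Internal tracking": [], "Bugs": []},
}

def categories_of(tag: str):
    """Find the (parent, child) categories a given tag belongs to."""
    for parent, children in TAXONOMY.items():
        for child, tags in children.items():
            if tag in tags:
                return parent, child
    return None

print(categories_of("font_size_too_small"))  # ('Product', 'Design')
```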

2. Define a better naming convention for the tags

We agreed on a few formatting rules and naming conventions to ensure that:

  • All tags could be read and understood by anyone in the company, e.g. avoiding acronyms wherever possible.
  • Tags for similar topics could be disambiguated easily. For instance, translation_errors_official_courses vs translation_errors_community_courses
  • We stuck to a consistent format going forward. For instance, we agreed on using underscores rather than dashes, e.g. font_size_too_small instead of font-size-too-small (a rough check for this format is sketched below).
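As an illustration of those formatting rules (we don’t actually enforce them in code), a simple check like this would flag dashes, spaces or uppercase letters in a tag:

```python
import re

# Hypothetical check for the naming convention: lowercase words
# separated by underscores, e.g. font_size_too_small.
TAG_FORMAT = re.compile(r"^[a-z0-9]+(?:_[a-z0-9]+)*$")

def is_valid_tag(tag: str) -> bool:
    return bool(TAG_FORMAT.match(tag))

assert is_valid_tag("font_size_too_small")
assert not is_valid_tag("font-size-too-small")  # dashes aren't allowed
```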

3. Define a new process for feature requests

Feature requests were being tracked in the same space as the rest of the feedback, which made it difficult to report on them separately. As we value them as much as other kinds of feedback, we decided to start tracking them as tickets on a dedicated Jira board, where we could also collect richer information.

We now track explicit or assumed user needs alongside the description of the actual request, to give designers and product managers more insight into them. Finally, we also group individual tickets into thematic Epics so all requests can be read and considered in context.
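To illustrate the kind of information a feature-request ticket now carries, the fields look roughly like this. The field names and values are hypothetical, not our actual Jira configuration:

```python
# Hypothetical shape of a feature-request ticket on the Jira board.
feature_request = {
    "summary": "Allow downloading courses for offline study",
    "description": "User asks to keep learning during a commute without data.",
    "user_need": "Continue learning without an internet connection",  # explicit or assumed
    "source": "Zendesk ticket",   # where the request originally came from
    "epic": "Offline learning",   # thematic Epic grouping related requests
}
```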

Producing the new Voice of the Customer report

Once we had solved these issues, what was left was to build reports and dashboards to organise and surface our data (a detailed view for our team, and a digested version for the wider organisation). Luckily, Zendesk lets us create queries and dashboards within its own data-analysis tool (Zendesk Explore), without having to export the feedback elsewhere.

Learning how Zendesk Explore works proved trickier than expected, but in the end we managed to create all the necessary queries and dashboards to address our needs.

Since tags now come live from the Zendesk datasets and refresh on their own every month, there’s no more copy-pasting endless lists of tags and numbers into Google Sheets, and less chance of errors or stale data. The calculations and queries Zendesk offers out of the box (as well as custom ones we created and reuse) also mean no more writing complex formulas to analyse month-on-month trends. All of this has been saving us precious time every month.
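For context, the month-on-month trend we used to compute by hand in Google Sheets boils down to something like this (the numbers are illustrative):

```python
# Illustrative month-on-month change for a single tag (made-up numbers).
def month_on_month_change(current: int, previous: int) -> float:
    """Percentage change in reports versus the previous month."""
    if previous == 0:
        return float("inf") if current else 0.0
    return (current - previous) / previous * 100

print(month_on_month_change(current=130, previous=100))  # 30.0
```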

Issues and limitations with Zendesk

Although Zendesk Explore is a powerful tool that has solved most of our issues when it comes to feedback, we still face some obstacles:

  • The learning curve was steep, especially to build more complex queries.
  • We had to find workarounds for specific calculations and visualisations that weren’t supported out-of-the-box by Zendesk, but that we wanted to have in the report.
  • Some of the external data that we don’t have in Zendesk (e.g. store ratings and reviews) still needs to be added manually — however this doesn’t take us long.

Next steps and conclusions

Since we first implemented Zendesk Explore in Q1 2021, we’ve been hard at work to (1) learn how to best use it to address our needs, (2) add more metrics to our reports (such as customer satisfaction score), (3) tweak the queries and dashboard based on feedback and (4) automate the data analysis as much as possible. We still have plans to add even more granular data, e.g. further segmentation by Android, iOS and web customers, to make the reports even more insightful and useful to the wider team.

To conclude, at Memrise we consider customer feedback to be invaluable data when it is heard and acted upon, and this is why we decided to spend time and resources on this project. In the end, we successfully solved the pain points we had identified and nailed down a process to collect, analyse and share data in a much more efficient way.
