Featured Project of the Month: HaTS @ CARFORYOU

Our monthly feature on cross-company product development at Tamedia.

What is HaTS?

HaTS is a tool to determine customer satisfaction from attitudinal data collected within the product over a certain period, or, more simply put, a Happiness Tracking Survey. Google developed it and defined happiness as the “overall satisfaction, likelihood to recommend, perceived frustrations, and attitudes towards common product attributes, among others” (cf. Sedley & Müller). This survey-based methodology serves the desire to benchmark user satisfaction and track changes over time. Its advantage over the NPS (Net Promoter Score) is that it works with subdivisions, enabling the researcher to pinpoint the origin of a negative perception of the product.

HaTS includes open answers, which makes it possible to collect qualitative feedback. This feedback can be combined with the quantitative findings and used to explain them. With its 7-point Likert scale (Likert = Rensis Likert, the guy who first created a scale to track people's subjective attitudes on a topic), it is a seamless solution for expressing satisfaction levels accurately while avoiding inconsistent borders between the numbers, as opposed to the 11-point scale of the NPS.

HaTS analyzes the following components: The overall product satisfaction and likelihood to recommend the platform, areas of frustration and appreciation via open-ended questions, satisfaction with common attributes of product-specific tasks or features, and the respondents' characteristics.

HaTS implemented on CAR FOR YOU

CAR FOR YOU (CFY) has been public since early this year. To secure its place in the market, the new CFY product is built around the target group’s needs. CFY’s strategy for establishing its position is to take iterative steps toward the maximum possible conversion rate. That is why it was decided to implement HaTS for consistent tracking of the users’ perception of the product. Under the management of Isabel Steiner, CTO at CFY, the PUX team created a customized HaTS survey with the support of a research team at the University of Basel.

Isabel sees HaTS as a more actionable tool than the NPS because it provides results from which product owners can take direct measures for improvement, as opposed to the NPS, whose outcomes can only be improved over the long term. She was glad that Yanira from PUX was so persistent and pushed the implementation. After all, if you haven’t yet noticed, Yanira hates NPS...

The goal of implementing HaTS on the platform is primarily to measure progress towards the product’s goals by responding to the users’ needs. The inventory of the new website is tested together with its ease of use: Can the users find what they are looking for? How clear is the workflow? How is the search experience? Moreover, the product detail pages are tested, and conversion is tracked based on the outcomes of HaTS. And, last but not least, the trust component is analyzed: How much do users trust the platform and the dealers?

How did we implement HaTS and what’s important to consider?

Together with the Human-Computer Interaction Department at the University of Basel, Yanira and Laura (also a UX researcher at PUX) developed a questionnaire with standardized and validated questions to cover the requirements mentioned above.

General questions are asked, such as: “All in all, how satisfied or dissatisfied are you with CAR FOR YOU?”, with a 7-point Likert scale from “very unsatisfied” (1) through “neither unsatisfied nor satisfied” (4) to “very satisfied” (7). This is followed by usability-specific items, which stem from a validated questionnaire broadly applied in UX research and provide deeper insight into the user’s satisfaction with specific elements or features of the website: “Please indicate which of the following statements you approve.” On a 7-point Likert scale from “I do not approve at all” to “I approve completely”, users are asked whether the functions of the platform are in accordance with their expectations, whether the user interface is a frustrating experience or easy to use, and whether they need a long time to navigate through the website.
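For the technically curious, here is a minimal sketch (in Python) of how such Likert items could be represented. The item texts are paraphrased from the description above, and all identifiers are hypothetical, not CFY’s actual survey definition:

    # Hypothetical sketch of the usability-specific HaTS items (paraphrased,
    # not CFY's actual survey). Each item uses a 7-point Likert scale;
    # negatively worded items are flagged so they can be reverse-coded later.
    LIKERT_MIN, LIKERT_MAX = 1, 7

    USABILITY_ITEMS = [
        # (item id, statement, negatively worded?)
        ("expectations", "The functions of the platform match my expectations.", False),
        ("frustrating",  "Using the interface is a frustrating experience.",     True),
        ("easy_to_use",  "The platform is easy to use.",                         False),
        ("slow_nav",     "I need a long time to navigate through the website.",  True),
    ]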

This granularity provides quantitative data that can be matched with the qualitative analysis of open-ended questions such as “What, if anything, do you find frustrating or unattractive about CFY?”, “Which new possibilities would you like to see on the CFY platform?” and “What do you like about CFY?”

HaTS is sent to a representative group of users, currently 25 percent of the total user base, with the goal of receiving at least 100 valid responses for each monthly analysis. The overall goal is to set up the sampling quarterly, contacting 10 percent of the user base. That way, we can track the changes in users' attitudes and perceptions over longer periods and associate possible shifts with any changes made in the product.
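As a rough illustration of that sampling logic, here is a minimal sketch in Python. The function names and user-id handling are invented for illustration and rest only on the figures mentioned above (a 25 percent sample and at least 100 valid responses per monthly analysis):

    import random

    # Illustrative sketch of the monthly HaTS sampling described above
    # (helper names are hypothetical, not CFY's actual code).
    SAMPLE_RATE = 0.25         # currently 25% of the user base per month
    MIN_VALID_RESPONSES = 100  # target for each monthly analysis

    def draw_monthly_sample(user_ids):
        """Randomly pick 25% of the user base to receive the survey."""
        k = round(len(user_ids) * SAMPLE_RATE)
        return random.sample(user_ids, k)

    def enough_responses(valid_responses):
        """Check whether the month's analysis has a usable sample."""
        return len(valid_responses) >= MIN_VALID_RESPONSES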

The entry points play a key role in conducting HaTS. To meet users at particular moments in their journey, we decided to implement the survey at four points. For now, the first entry point is live; the others will follow soon. The PUX team worked closely with CFY designer Michel, who created mockups of how HaTS should look on the platform.

First outcomes of the test run

The beta testing program of CFY provided a welcome opportunity to test our questionnaire on a small scale. We sent it out three times, asking the users to fill out the 5-to-10-minute survey, and then analyzed all three rounds. In round I we also received valuable feedback about the questions in the survey and adjusted some wording to make them easier to understand. In rounds II and III we got only positive responses about the clarity and appropriateness of the questions.

Laura analyzed the three pre-HaTS rounds and compared the outcomes. Round III was closed on 31.01.19 and was analyzed just this week. Here’s a shining example of what can be found with HaTS, referring to the questions presented above (though we do have to point out that this was beta testing, and the numbers are not yet fully representative of the product’s perception). The total number of participants across all three survey rounds was 64, while round III had only 11, limiting the generalizability of the findings. Evaluating the responses to the usability-specific questions, Laura found an overall round III score of 5.35 out of 7.00. This indicates high overall usability, although the item on whether the functions fulfill the users’ needs scored very low compared to the other ratings. Laura therefore recommended acting on the qualitative feedback to better cater to the users’ needs, which would reinforce their perception that the platform is useful and fulfills their requirements.
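To make the arithmetic behind such an overall score concrete, here is a minimal sketch of how a mean usability score can be computed from 7-point Likert answers. The reverse-coding of negatively worded items (such as the frustration statement) is a standard survey-analysis step we assume here, and the sample answers are invented:

    LIKERT_MAX = 7

    # Negatively worded items (e.g. "the interface is frustrating") are
    # reverse-coded so that 7 always means "good" before averaging.
    def reverse_code(score):
        return LIKERT_MAX + 1 - score

    def overall_usability(responses, negative_items):
        """Mean across all items and respondents on the 1-7 scale.

        responses: list of dicts mapping item id -> raw Likert answer.
        negative_items: item ids that must be reverse-coded first.
        """
        scores = [
            reverse_code(ans) if item in negative_items else ans
            for resp in responses
            for item, ans in resp.items()
        ]
        return sum(scores) / len(scores)

    # Invented example answers from three respondents:
    responses = [
        {"expectations": 5, "frustrating": 2, "easy_to_use": 6},
        {"expectations": 6, "frustrating": 1, "easy_to_use": 6},
        {"expectations": 4, "frustrating": 3, "easy_to_use": 5},
    ]
    print(overall_usability(responses, negative_items={"frustrating"}))  # ≈ 5.56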

Another example where HaTS gave us valuable information about the users’ perception is the satisfaction rating for the “price valuation function” criterion, where the score turned out a lot lower than expected. When comparing those measurements to the open answers, it became clear that the low score is due more to the users’ lack of understanding of how the price valuation works than to any fault with the feature itself.

Laura believes that matching the quantitative data with the qualitative results in order to understand the users’ satisfaction score is the most challenging part, but also the most rewarding one. She sees the possibility of associating open-ended answers with numeric feedback as the greatest value of HaTS. She describes her favorite moment in the analysis as the point “when qualitative and quantitative data add up and make sense, and I understand the users’ thoughts based on their answers in a way that just wouldn't have been possible otherwise”.

So hats off to HaTS, and to Laura: she'll definitely be keeping her HaTS on!

For more questions regarding HaTS, please reach out to us.
