External
(based on user data)
  • Analysis of ratings and reviews;
  • Competitor analysis;
  • Interviewing key users;
  • User survey;
  • Analysis of user behavior.


What is competitor analysis and how is it done?

Competitor analysis (comparative testing) is a direct comparison of your product with more successful competitors in your niche. This comparison reveals interface drawbacks, your product's advantages over competing solutions, and successful competitor practices worth taking into account when developing a product.
In our laboratory, respondents complete a set of similar user tasks in different products or different versions of the same product. We analyze the task success rate for each product / version and identify problems and successful solutions in order to give recommendations for eliminating the drawbacks.
The result is a report comparing the products by usability parameters. It includes the best solutions found in the products tested, a list of each product's drawbacks, and recommendations for eliminating them.
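As an illustration, the per-product success-rate comparison at the heart of comparative testing could be computed like this (the product names and results below are hypothetical, not data from an actual study):

```python
# Illustrative sketch: comparing task success rates across two products
# in a comparative usability test. All data below is made up.

def success_rate(results):
    """Fraction of task attempts that succeeded (1 = success, 0 = failure)."""
    return sum(results) / len(results)

# One entry per respondent attempt on the same task (hypothetical data)
task_results = {
    "Our product":  [1, 0, 1, 1, 0, 1, 1, 0],
    "Competitor A": [1, 1, 1, 0, 1, 1, 1, 1],
}

for product, results in task_results.items():
    print(f"{product}: {success_rate(results):.0%} task success")
```

A gap between the two rates on the same task is what points the analyst toward a concrete interface drawback to investigate.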
What are the types of external testing?
  • By the moderator’s involvement:
    • Moderated testing: a moderator sets tasks, monitors their implementation progress, and asks clarifying questions;

    • Unmoderated: a special service sets tasks, collects metrics and feedback automatically, without involving a moderator.
  • By the respondent’s location:
    • On-site testing: the respondent and the moderator are in the same room (usually in a laboratory) and communicate immediately;

    • Remote testing: the respondent participates in testing from home or workplace.
  • By purposes:
    • Exploratory testing: it is carried out at the stage of developing the interface concept to check how understandable to users the overall concept is and if it meets their needs and expectations;

    • Control testing: it is carried out to find and eliminate usability problems or to evaluate the performance indicators of the finished product or its prototype (time spent on task completion, user satisfaction, etc.);

    • Comparative testing: it is carried out to compare the performance of the new and old versions or two competing products (requires involving a large number of respondents to draw significant conclusions). It is often conducted in focus groups.

How is user testing carried out?

Interview with the customer
Before starting testing, we interview the customer to learn more about the product to be tested, its audience, the purposes of testing, and the hypotheses you would like to test. We also request access to the product's analytics data (e.g. Google Analytics or Yandex Metrica).

Developing a scenario
Having collected the requirements for the project, the specialist draws up a testing scenario, since we approach each system individually. The scenario is a document that contains instructions for the respondent, a list of tasks to be tested, and questions to be asked after each task and at the end of testing. The scenario is coordinated with the customer. While the scenario is being developed, we select respondents. Once the scenario has been approved and the respondents selected, we start the testing process.

Testing Process
The respondent arrives at our office. We explain the testing procedure; the respondent then reads the tasks outlined in the scenario, and we monitor their completion. We record various usability metrics: the time and success of each task, the respondent's satisfaction, and some operational psychological metrics. The set of metrics is discussed with the customer before launching the project. During testing, we record a video that captures everything happening on the respondent's screen, the respondent's face, and his or her comments.

Remote Testing
In the case of remote usability testing, the respondent participates from home or the workplace. If a moderator's involvement is required, the analyst maintains video communication with the respondent. In all other respects, the testing process is the same.

Drawing up the Report
Upon completion of the testing sessions, a UX analyst prepares a report for the customer. Generally, the expert evaluation coincides by 70–80% with the user testing results.
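As a rough sketch, the per-task metrics recorded during a session (task time, success, satisfaction) could be represented like this (the field names, tasks, and values are illustrative, not a real schema used by any lab):

```python
# Hypothetical per-task session record for a usability test.
from dataclasses import dataclass

@dataclass
class TaskRecord:
    task: str
    time_seconds: float   # time spent on the task
    completed: bool       # task success
    satisfaction: int     # e.g. post-task rating on a 1-7 scale

# One respondent's session (made-up data)
session = [
    TaskRecord("Find the return policy", 42.5, True, 6),
    TaskRecord("Add an item to the cart", 17.0, True, 7),
    TaskRecord("Apply a promo code", 95.3, False, 3),
]

completion = sum(r.completed for r in session) / len(session)
print(f"Session completion rate: {completion:.0%}")
```

Keeping every task as a structured record makes it straightforward to aggregate the metrics agreed with the customer into the final report.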
How many respondents do we need for testing?
  • For qualitative tests: 3–5;
  • For quantitative tests: 5–20;
  • For eye-tracking: 40.
These are the standard numbers; in rare cases, we need to increase the number of respondents.

What are the qualitative tests?
These tests are chosen to collect many varied comments, which makes it easier to understand users' thinking and reveal hidden problems. Testing is based on clear yet flexible questions. During testing, we conduct an interview that shows the respondent's satisfaction. All respondents' answers are converted into scores and rated on an expectation scale from "I like it" and "I expect it" to "I don't like it and cannot accept it". Based on the results, a designer builds a graph that shows exactly what users consider to be:
  • Taken for granted;
  • A competitive advantage of the product;
  • Functions that delight them;
  • Irrelevant.

What are the quantitative tests?
These are tests that are always precise and aimed at obtaining numerical indicators, such as the time spent completing actions on the website or the percentage of respondents who completed a task. Testing participants are generally given the same tasks to make the indicators more reliable.

Where to find respondents?
After determining the customer's target audience, we choose the best selection method:
  • Contacting special agencies to select respondents;
  • Surveying the customer's clients;
  • Communicating directly with the target audience via forums and interest groups.
The customer pays for the respondents' activities. A common practice is crediting a certain amount of money (points, miles, bonuses) to the respondent's internal account or bonus card in an online store or service, so that the respondent can spend it on the service being tested.
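As an illustration, the two quantitative indicators mentioned above (time on task and the percentage of respondents who completed it) could be computed like this (the per-respondent numbers are made up):

```python
# Illustrative sketch: quantitative usability metrics for one task,
# computed over hypothetical per-respondent data.
from statistics import mean

# (time in seconds, completed?) per respondent -- hypothetical numbers
attempts = [(38.2, True), (51.0, True), (47.5, False),
            (29.9, True), (63.1, True)]

def completion_rate(attempts):
    """Percentage of respondents who completed the task."""
    return 100 * sum(done for _, done in attempts) / len(attempts)

def mean_time(attempts):
    """Average time spent on the task, in seconds."""
    return mean(t for t, _ in attempts)

print(f"Completion rate: {completion_rate(attempts):.0f}%")
print(f"Mean task time:  {mean_time(attempts):.1f} s")
```

Because every participant gets the same task, these numbers can be compared directly between versions of a product or across competing products.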