Leveraging Response Quality Dashboard

Survey Analytics play a crucial role in generating actionable insights, and the Survey Response Quality Detection feature enhances this process by detecting and flagging low-quality responses. This ensures that all insights are based on the most trustworthy and accurate data. As soon as responses are collected, regardless of the channel, the feature begins analyzing each one using several key criteria: Survey Completion Time, to verify if responses were submitted within expected time frames; Open-Text Quality, to evaluate the relevance and depth of written answers; Answering Pattern Identification, to spot inconsistent or suspicious response patterns; and Bot Response Detection, to determine whether a response may have been generated by a bot.

Business Use Cases

  • Improving Response Quality for Accurate Insights: Survey Response Quality Detection helps filter out low-quality responses, such as irrelevant open-text answers or surveys completed too quickly. This ensures the analysis is based on reliable data and accurately reflects customer sentiment.

  • Preventing Skewed Analytics: The Survey Response Quality Detection feature identifies suspicious response patterns, such as selecting the same rating across all questions. This allows teams to exclude unreliable data from their analysis, ensuring more accurate and trustworthy insights for decision-making.

Filtering out low-quality responses ensures that analytics reflect genuine customer opinions, leading to more accurate and reliable insights. This, in turn, boosts decision-making confidence, as teams can implement strategies knowing they are based on trustworthy data. Organizations prioritizing high-quality feedback are better equipped to address real customer concerns, ultimately enhancing customer satisfaction and building long-term trust.

Prerequisites

To access Response Quality Detection, you need the View Response And Analytics permission at the Survey level under the CFM App.

Navigation

  1. Navigate to Sprinklr Insights and go to Customer Feedback Management.

  2. Select a Survey and click View against it.

  3. Go to Responses.

  4. Click View Response Quality to find the Survey Summary. You can view the response quality of each individual response from the Responses tab within the survey.

  5. A dedicated Response Quality column is automatically included in the default view of all survey responses. Each response is assigned a Response Quality of High, Medium, Low, or Bot.

  6. An overall response quality score is also assigned based on the average quality of individual survey responses. This score can be viewed in the bar displayed at the top of the Responses tab.

Calculation for Response Quality

Response quality calculation begins once the first 100 responses are received, allowing the AI to establish benchmarks based on observed trends. Each response submitted after this point is evaluated across four key parameters:

  • Survey Taking Time

  • Quality of Open Text Responses

  • Logical Correctness

  • Bot Responses

Each response receives an individual score for the first three parameters. These scores are then weighted and averaged to calculate an overall score, which determines the response quality tag assigned to that response. A minimal, illustrative scoring sketch appears after the parameter descriptions below.

Assigning individual scores for each parameter

  • Survey Taking Time: Survey Taking Time refers to the amount of time you spend completing a survey in a single session. It starts when you access the survey through a link and ends when you click the submit button.

    The scoring and labeling of your Survey Taking Time are based on the average completion time and standard deviation, which are calculated after at least 100 responses have been collected.

    Note: The minimum threshold after which the response quality calculation starts can be changed for each partner, based on request.

  • Completed within Expected Time Frame (Normal):

    1. Condition: Completion time falls within one standard deviation (SD) of the average completion time.

      1. Range: [μ - SD to μ + SD]

    2. Score: 10/10

    3. Label: “Normal”

  • Completed Faster than Expected (Fast):

    1. Condition: Completion time falls between one and two standard deviations below the average completion time.

      1. Range: [μ - 2SD to μ - SD]

    2. Score: 8/10

    3. Label: “Fast”

  • Completed Excessively Faster than Expected (Too Fast):

    1. Condition: Completion time falls between two and three standard deviations below the average completion time.

      1. Range: [μ - 3SD to μ - 2SD]

    2. Score: 5/10

    3. Label: “Too Fast”

  • Instant Survey Completion (Possibly Fake/Bot):

    1. Condition: Completion time is more than three standard deviations below the average completion time.

      1. Range: [< μ - 3SD]

    2. Score: 0/10

    3. Label: “Instant”

    4. Special Note: Surveys falling into this category are not evaluated for other parameters.

  • Too Slow:

    1. Condition: Completion time is more than three standard deviations above the average completion time.

      1. Range: [> μ + 3SD]

    2. Score: 2/10

    3. Label: “Too Slow”

  • Quality of Open Text Responses: Open Text Quality measures how relevant and appropriate your responses are to open-ended survey questions. This assessment ensures that what you write offers meaningful insights and avoids inappropriate or nonsensical content.

    Sub-Parameters for Evaluation:

    1. Profanity:

      • Description: The response includes curse words or offensive language.

      • Impact: Such responses are flagged as inappropriate and scored accordingly.

    2. Gibberish:

      • Description: The response contains nonsensical or random text, such as "asdfjkl" or similar patterns.

      • Impact: These responses indicate low engagement or potentially automated inputs.

    3. Non-Sensical Answers:

      • Description: Responses that are irrelevant to the question’s context.

        • Example: If the question asks for feedback on food and the response is, "I live in Paris," it would be considered non-sensical.

      • Impact: These responses reflect a lack of engagement or understanding of the question.


    Scoring Mechanism:

    • Process:

      • The evaluation leverages AI+ technology to analyze responses based on the sub-parameters above.

    • Outcome:

      • Score:

        • By default, a score of "10" is assigned to a response.

        • If one of the three sub-parameters is detected, 3.33 is subtracted from the score (10 − 3.33 = 6.67).

        • If two sub-parameters are detected, 6.66 is subtracted (10 − 6.66 = 3.34).

        • If all three parameters are detected, the score is "0."

      • Label:

        • 0: “Bad”

        • 10: “Good”

      • Tagging:

        • The sub-parameters detected in a response are tagged to that response.

    Note: By default, each sub-parameter—Profanity, Gibberish, and Nonsensical Answer—is given equal weight, at 33% each. However, you can reconfigure these weights based on your specific needs. For example, you might choose to assign a higher weight to gibberish compared to the other two.

  • Logically Incorrect Answers: Logically incorrect answers are responses that show inconsistencies or contradictions across related survey questions. These kinds of answers may suggest confusion, a lack of attention, or possibly dishonest feedback.

    Examples of Logically Incorrect Answers:

    1. Example 1:

      • Question 1: On a scale of 1-10, how satisfied are you with our product? [slider]

        • Reply: 10

      • Question 2: What did you like the most about the product? [text entry]

        • Reply: I am extremely disappointed with the product.

    2. Example 2:

      • Question: How satisfied are you with your new smartphone? [slider]

        • Reply: 10

      • Question: Rate out of 5, the following features of your smartphone. [matrix]

        • Reply:

          • Display: 1

          • Performance: 1

          • Battery: 1

          • Camera: 1


    Evaluation Criteria:

    1. Sentiment Consistency:

      • Responses across related questions should reflect consistent sentiment.

      • Example: A high satisfaction score should not be followed by a highly negative comment.

    2. Contextual Relevance:

      • Answers should align with the context of the question.

      • Example: If a product is rated highly overall, individual feature ratings should not all be poor unless justified.

    3. Response Integrity:

      • Contradictory responses are flagged as logically incorrect.


    Scoring Mechanism:

    • Process:

      • Responses are evaluated using AI to identify sentiment inconsistencies and contradictions.

      • Contextual analysis ensures logical alignment between answers.

    • Outcome:

      • Score:

        • 0: Logically Incorrect

        • 10: Logically Correct

      • Label:

        • 0: “Incorrect”

        • 10: “Correct”

  • Bot Response Detection (independent of all other parameters): To ensure your survey responses are authentic, a system is in place to detect and flag submissions made by bots. This helps maintain the integrity of the data and prevents survey results from being skewed by automated or malicious inputs. By filtering out non-human responses, the process ensures that only genuine feedback is included in the survey analysis, keeping the results reliable and accurate.

    Detection Mechanism: We utilize Google's reCAPTCHA v3, which operates seamlessly in the background while a respondent is interacting with the survey link. This technology analyzes user behavior to differentiate between human users and bots.


    Scoring and Labeling:

    • If a bot is detected as the entity filling out the response:

      • The response is immediately assigned a score of 0.

      • The response quality is labeled as "Bot".

    • Such responses are excluded from further evaluation across all other parameters, as they are deemed to be bot submissions.
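For illustration only, the per-parameter checks described above could be sketched roughly as follows in Python. This is a minimal sketch, not Sprinklr's actual implementation: the function names (score_taking_time, score_open_text, score_logical_consistency, is_bot), the reCAPTCHA score threshold of 0.5, and the handling of completion times between one and three standard deviations above the mean are assumptions; the thresholds, deductions, and labels simply mirror the ranges documented in this section.

```python
import statistics

import requests  # used only for the optional reCAPTCHA check below


def score_taking_time(duration_s, prior_completion_times):
    """Score and label a completion time against the observed mean and SD.

    Mirrors the documented ranges; assumes at least 100 prior responses.
    """
    mu = statistics.mean(prior_completion_times)
    sd = statistics.stdev(prior_completion_times)
    if duration_s > mu + 3 * sd:
        return 2, "Too Slow"
    if duration_s >= mu - sd:
        # Times from (mu - SD) up to (mu + 3 SD) are treated as "Normal" here;
        # the documentation only labels the slow tail beyond +3 SD separately.
        return 10, "Normal"
    if duration_s >= mu - 2 * sd:
        return 8, "Fast"
    if duration_s >= mu - 3 * sd:
        return 5, "Too Fast"
    return 0, "Instant"  # below (mu - 3 SD); skips all remaining checks


def score_open_text(has_profanity, is_gibberish, is_nonsensical,
                    weights=(0.33, 0.33, 0.33)):
    """Start at 10 and deduct each detected sub-parameter's share of the weight.

    The boolean flags stand in for the AI+ classifiers; the weights are the
    configurable 33/33/33 defaults, so one flag deducts 3.33, two deduct 6.66,
    and all three bring the score to 0.
    """
    flags = (has_profanity, is_gibberish, is_nonsensical)
    flagged = sum(w for flag, w in zip(flags, weights) if flag)
    score = 10 - 10 * flagged / sum(weights)
    return round(max(score, 0), 2)


def score_logical_consistency(is_consistent):
    """Binary outcome: 10 ("Correct") if consistent, 0 ("Incorrect") otherwise."""
    return 10 if is_consistent else 0


def is_bot(recaptcha_token, secret_key, threshold=0.5):
    """Hypothetical bot check built on Google's reCAPTCHA v3 siteverify endpoint.

    The 0.5 score threshold is an assumption for illustration only.
    """
    result = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": secret_key, "response": recaptcha_token},
        timeout=5,
    ).json()
    return (not result.get("success")) or result.get("score", 0.0) < threshold
```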

Overall response quality score and label calculation based on all parameters together

  • Each parameter's score is given a weight (near-equal by default), and a final score is calculated for each response from the weighted combination.

  • Default weightage values:

    • Survey Taking Time: 33%

    • Open Text Quality: 33%

    • Logical Correctness: 34%

  • Response Quality = (Survey Taking Time score × 0.33) + (Open Text Quality score × 0.33) + (Logical Correctness score × 0.34)

  • Labels

    • High: Score > 7

    • Medium: 3 < Score ≤ 7

    • Low: Score ≤ 3

    • Bot: Score = 0

  • If Bot is detected, Response Quality Score is set to “0”, irrespective of other scores.

    Note: The weightage for each parameter is configurable and can be adjusted based on specific requirements. A minimal sketch of this calculation follows.
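As a rough illustration, the weighted combination and label thresholds above could be expressed as follows. This is a minimal sketch; the overall_quality function name is hypothetical, while the weights and cut-offs are the documented defaults.

```python
def overall_quality(time_score, open_text_score, logic_score,
                    bot_detected=False, weights=(0.33, 0.33, 0.34)):
    """Combine the three parameter scores with the default 33/33/34 weights
    and map the result to the documented High/Medium/Low/Bot labels."""
    if bot_detected:
        return 0.0, "Bot"  # a detected bot overrides every other score
    score = (time_score * weights[0]
             + open_text_score * weights[1]
             + logic_score * weights[2])
    if score > 7:
        label = "High"
    elif score > 3:
        label = "Medium"
    else:
        label = "Low"
    return round(score, 2), label


# Example: a normally timed response (10) with one open-text issue (6.67) that is
# logically consistent (10) scores 10*0.33 + 6.67*0.33 + 10*0.34 ≈ 8.9 -> "High".
```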

Response Quality Manager Dashboard

In addition to the tags shown for each response, you can access a detailed dashboard to analyze overall response quality by clicking the View Response Quality option in the top bar of the response table.

  1. Navigate to the Responses tab and click the Responses option at the top of the dashboard.

  2. Use the Date Range option to select the time range for which you want to filter the Response Quality Manager results.

  3. You have access to a standard dashboard that helps you analyze the overall quality of the responses your survey is receiving. You will find the following widgets (a minimal aggregation sketch appears after the chart list below):

    1. Survey Response Count: Shows the total number of responses.

    2. High Quality Responses: Shows the total number of high-quality responses.

    3. Medium Quality Responses: Shows the total number of medium-quality responses.

    4. Low Quality Responses: Shows the total number of low-quality responses.

    5. Bot Responses: Shows the total number of bot responses detected.

    6. Response Quality Score: You’ll see a score out of 10 that reflects the overall quality of the responses collected for your survey.

  4. You can view trend charts for each parameter used to assess response quality. These trend charts help you track how each aspect of quality changes over time. The charts include:

    1. Response Quality Score by Time: This chart represents the average response quality score over time.

    2. Bot Responses Over Time: This chart shows the trend in the number of bot responses detected in your survey over time, helping you identify any spikes or patterns in automated submissions.

    3. Survey Taking Time Trend: This chart displays the number of responses falling under each tag used in the Survey Taking Time check, allowing you to monitor how response durations are distributed.

      1. Normal

      2. Fast

      3. Too Fast

      4. Instant

      5. Too Slow

    4. Quality of Open Text Responses Trend: This chart shows the trend in open-text response quality over time, broken down by the following tags:

      1. Profanity

      2. Gibberish

      3. Non-Sensical Answers

    5. Logically Correct Answers: This binary trend chart displays the consistency of responses over time. It uses the following tags:

      1. True: Indicates logically correct answers across related survey questions.

      2. False: Indicates logically incorrect answers, showing inconsistencies or contradictions.
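Conceptually, the headline widgets above are simple aggregations over the tagged responses. The sketch below assumes each response record carries a quality label and a numeric score field (names hypothetical) and that Bot responses are excluded from the average score; the actual dashboard behaviour may differ.

```python
from collections import Counter


def summarize(responses):
    """Aggregate per-response quality tags into the dashboard's headline numbers."""
    labels = Counter(r["quality"] for r in responses)
    scored = [r["score"] for r in responses if r["quality"] != "Bot"]
    return {
        "Survey Response Count": len(responses),
        "High Quality Responses": labels["High"],
        "Medium Quality Responses": labels["Medium"],
        "Low Quality Responses": labels["Low"],
        "Bot Responses": labels["Bot"],
        "Response Quality Score": round(sum(scored) / len(scored), 1) if scored else 0,
    }
```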

Note:

  • No upfront setup is required; you can immediately view the response quality of your survey submissions and access the response quality dashboard.

  • If you want to make changes to the scoring weight configuration, you’ll need to request them through the Sprinklr team. These adjustments can be applied as needed.

Manage Response

Click the Vertical Ellipsis (three dots) icon to access the following options:

  1. Archive: You can archive a response; this hides the response from the Responses tab.

  2. Update Tag: This action can be used to update the tags attached to each row. You can select from an existing list, or, to create a new tag, enter the tag name in the search bar and an option to create the new tag will appear in the dropdown.

  3. Edit Response Quality: You also have the option to change the quality of a response if you feel it was wrongly tagged by Sprinklr’s AI. To edit a response, click the select box to select it, then click the vertical ellipsis (three dots menu) to edit. You can then choose the appropriate quality based on your knowledge and save it; this changes the quality tag for that particular response.

    Note: Currently, changing the response quality doesn’t have an effect on the AI’s future results.

  4. View survey response: Click the eye icon to view how the respondent filled out the survey; this opens the survey’s respondent UI and shows all of their responses.

Response Quality Filters

You have the option to use response quality filters to categorize responses and metrics/dimensions according to factors like survey completion time, the quality of open-text responses, logical consistency, and overall response quality. These filters can be found in the Responses tab, Standard Analytics Dashboard, Custom Dashboards, and more.

Column Actions

To access column actions, click the icon at the top of a column. You can perform the following column actions:

  1. Sort ascending/descending: This action allows you to sort the responses tab based on column values.

  2. Hide column: This action hides the column.

  3. Freeze column: This action freezes the column.

  4. Add column to left/right: This action adds a new column to the left/right of the current column (the added column will be a custom field or custom metric).

  5. Filtering: Low-scoring responses are flagged as potential low-quality entries. Flagged responses can be filtered out from the final analysis, ensuring only high-quality data is used for insights.

Additional Information

  • Once response quality calculation begins, you can manually verify the quality of a few responses.

  • Review a few samples tagged with each type of quality label. If you find multiple responses that appear to be incorrectly tagged, please report the issue to the Sprinklr team.

  • Each response is labeled with a quality tag of High, Medium, or Low to give you a clear and simplified view of its quality; however, the exact numerical score assigned to the response will not be visible to you.

Key points to note:

  • The actual score assigned to each response is not visible to the user; instead, a tag showing the quality of the response as High, Medium, or Low is displayed.

  • Response Quality enrichments are currently not supported for imported responses.

  • The Response Quality feature is not supported for in-progress responses; response quality is only assigned once the response is marked as partially completed.

  • The response quality dashboard and the overall response quality are visible only after the response threshold (100 responses by default) has been reached.


FAQs

How is response quality analysis performed?

Sprinklr AI+ automatically reads all responses and performs this analysis across multiple criteria configured in the backend.