Introduction to AQM Insights
Quality Managers and Supervisors derive actionable insights from the AI-generated quality scores that help them understand the reasons behind agent performance ratings. This enables them to identify areas for coaching and performance improvement. Additionally, agents can use these insights, along with AI-generated corrective actions and recommendations, to address their weak areas.
Understanding AI Insights
To explore detailed insights behind the AI-generated quality scores, refer to the following points:
AI Score Breakdown Widget
In the AI Score Breakdown widget, click the AI score category. This displays the distribution of AI scores across different parameters, such as:
Grammar
Agent Introduction
Tonality
Empathy
Feedback
And more
Detailed Explanations
This section provides in-depth explanations of how each parameter is evaluated, along with practical examples of behaviors that contribute to good performance. For instance:
Courtesy: Learn which phrases are perceived as courteous by AI and see examples to emulate.
Grammar: Gain insights into grammatical mistakes made by agents, with specific corrections to guide coaching.
Empathy: Discover which phrases customers find empathetic and what makes a strong opening and closing statement.
AI Score Colors in AI Breakdown Widget
Currently, the color representation for scores in the AI Breakdown widget is predefined and not configurable:
Quality Score 0–40: Red
Quality Score 41–70: Yellow
Quality Score 71–100: Green
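The bands above can be expressed as a simple lookup. The following is a minimal sketch; the function name and return strings are illustrative, and the thresholds simply mirror the predefined, non-configurable ranges listed above.

```python
def score_color(score: int) -> str:
    """Map an AI quality score (0-100) to its predefined display color.

    Illustrative only; the bands mirror the documentation above and are
    not configurable in the product.
    """
    if not 0 <= score <= 100:
        raise ValueError("Quality score must be between 0 and 100")
    if score <= 40:
        return "Red"
    if score <= 70:
        return "Yellow"
    return "Green"

# Example: a score of 65 falls in the 41-70 band and renders as Yellow.
print(score_color(65))  # Yellow
```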
Using the AI Recommendations
Agents can use the detailed corrective actions and AI-generated recommendations to work on their weaknesses. By understanding what constitutes success in each parameter, agents can improve their performance, leading to higher customer satisfaction and quality scores.

Note: For steps to add feedback on AI Scoring in AI Insights, refer to Feedback on AI Scoring in AI Insights.
Once the checklist has been configured, the next step is to set up AI Scoring and begin generating scores for agent interactions. In this stage, you will define the scoring rule, apply it to real cases, and review the AI-generated evaluations. This enables the system to assess conversations based on the checklist logic you have established.
After scores are generated, it is important to interpret the resulting insights. The platform provides detailed outputs that help you understand the rationale behind each score and analyze agent performance effectively.
Additionally, you can provide feedback on the AI-generated results. This feedback plays a crucial role in improving scoring accuracy and can be used to refine and optimize checklist rules over time.
For more details on enabling scoring, analyzing AI insights, and submitting feedback, refer to AI Scoring and Insights.
Various Checklist Items
For each Checklist Item, a detailed explanation is provided within the insight card. An insight card can contain the following types of insights:
ML Insights
If AI or AI+ models are used in a checklist item, ML insights will be generated. ML insights contain the following details:
Proof Messages: These are messages from the original conversation based on which the insight was detected. They may or may not be present and are always highlighted in yellow. There can be multiple proof messages from a single AI or AI+ model.
Explanation: This is the ML-generated explanation. Each AI/AI+ model produces exactly one explanation.
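For orientation, the structure of an ML insight can be pictured as a small record with one explanation and zero or more proof messages. The sketch below is illustrative only; the class and field names are assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MLInsight:
    """Illustrative shape of an ML insight; field names are assumptions,
    not the platform's actual schema."""
    explanation: str                                          # exactly one explanation per AI/AI+ model
    proof_messages: List[str] = field(default_factory=list)   # zero or more, always highlighted in yellow

insight = MLInsight(
    explanation="The agent acknowledged the customer's frustration before offering a fix.",
    proof_messages=["I completely understand how inconvenient this must be."],
)
```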

Keyword/Regex-Based Insights
Only the matching keyword/regex is highlighted. The highlighting color is based on the user-defined sentiment, for example, Positive → Green, Negative → Red, Neutral → Yellow.
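A minimal sketch of this behavior is shown below, assuming the sentiment-to-color convention described above; the function name and the pattern-matching approach (Python's re module) are illustrative, not the platform's implementation.

```python
import re

# Hypothetical mapping of user-defined sentiment to highlight color,
# mirroring the convention described above.
SENTIMENT_COLOR = {"Positive": "Green", "Negative": "Red", "Neutral": "Yellow"}

def highlight_matches(message: str, pattern: str, sentiment: str):
    """Return (matched_text, color) pairs for every keyword/regex hit.

    Only the matching span is highlighted; the color follows the
    user-defined sentiment. Illustrative only."""
    color = SENTIMENT_COLOR.get(sentiment, "Yellow")
    return [(m.group(0), color) for m in re.finditer(pattern, message, re.IGNORECASE)]

print(highlight_matches("Thanks for your patience today!", r"\bthanks\b", "Positive"))
# [('Thanks', 'Green')]
```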

Range-Based Insights
When a Checklist Item such as Dead Air, Hold, Interruption, or Mute is identified, the corresponding range is highlighted on the horizontal timeline, and a descriptive message is displayed within the Insight Card.
Insight Card Message format: "<Instance Type> has been detected for <time-duration>." [Example: Dead Air has been detected for 2 mins 10 secs] (See the sketch after this list.)
Instance Type can be one of the following:
Dead Air
Hold
Mute
Interruption
The message will always be highlighted in yellow.
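The following sketch illustrates how such an insight card message could be composed from an instance type and a duration; the function name and duration formatting are assumptions used only to mirror the format and example above.

```python
def insight_card_message(instance_type: str, duration_secs: int) -> str:
    """Compose the range-based insight card text, e.g.
    'Dead Air has been detected for 2 mins 10 secs'. Illustrative only."""
    allowed = {"Dead Air", "Hold", "Mute", "Interruption"}
    if instance_type not in allowed:
        raise ValueError(f"Unsupported instance type: {instance_type}")
    mins, secs = divmod(duration_secs, 60)
    parts = []
    if mins:
        parts.append(f"{mins} mins")
    if secs:
        parts.append(f"{secs} secs")
    duration = " ".join(parts) or "0 secs"
    return f"{instance_type} has been detected for {duration}"

print(insight_card_message("Dead Air", 130))
# Dead Air has been detected for 2 mins 10 secs
```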
Highlight in horizontal timeline: The corresponding time duration is highlighted in the horizontal timeline for both voice and digital interactions. Hover message texts for the various instances are listed below, followed by a sketch of how they are composed.
Dead Air message format:
Break between 2 Agent Messages (<Duration>)
Break by Customer after Agent’s Message (<Duration>)
Break by Agent after Customer’s Message (<Duration>)
Break between 2 Customer Messages (<Duration>)
Hold message format:
Agent Initiated Hold Instance (<Duration>)
Customer Initiated Hold Instance (<Duration>)
Mute message format:
Agent Initiated Mute Instance (<Duration>)
Customer Initiated Mute Instance (<Duration>)
Interruption message format:
Agent Interrupting (<Duration>)
Customer Interrupting (<Duration>)
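A lookup-style sketch of these hover texts is shown below; the template keys and variant identifiers are hypothetical and exist only to mirror the formats listed above.

```python
# Hypothetical lookup of hover-text templates for the horizontal timeline,
# mirroring the formats listed above. "{duration}" is filled in at render time.
HOVER_TEMPLATES = {
    ("Dead Air", "agent_agent"): "Break between 2 Agent Messages ({duration})",
    ("Dead Air", "customer_after_agent"): "Break by Customer after Agent's Message ({duration})",
    ("Dead Air", "agent_after_customer"): "Break by Agent after Customer's Message ({duration})",
    ("Dead Air", "customer_customer"): "Break between 2 Customer Messages ({duration})",
    ("Hold", "agent"): "Agent Initiated Hold Instance ({duration})",
    ("Hold", "customer"): "Customer Initiated Hold Instance ({duration})",
    ("Mute", "agent"): "Agent Initiated Mute Instance ({duration})",
    ("Mute", "customer"): "Customer Initiated Mute Instance ({duration})",
    ("Interruption", "agent"): "Agent Interrupting ({duration})",
    ("Interruption", "customer"): "Customer Interrupting ({duration})",
}

def hover_text(instance_type: str, variant: str, duration: str) -> str:
    """Fill a hover-text template; the variant keys are hypothetical identifiers."""
    return HOVER_TEMPLATES[(instance_type, variant)].format(duration=duration)

print(hover_text("Hold", "agent", "45 secs"))  # Agent Initiated Hold Instance (45 secs)
```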

Non-Text Type Insights
Current Setup: For non-text type messages, hardcoded messages are sent to ML to provide additional context. For example, "Call is put on hold" is sent to ML when the agent puts the customer on hold on a voice call.
Non-text type messages, excluding Transfer messages
A reference sheet lists all the non-text type contents for which hardcoded messages are sent to ML, along with the corresponding hardcoded messages. These non-text type messages have UMIDs, which are used to generate insights. If such a message is detected as a proof message in the ML response for the Agent Quality Model, "This contains non-text type messages" is displayed in the insight card along with the ML explanation.
Transfer Messages in Voice Calls
General behavior: Whenever a transfer is initiated in a voice call, a UMID is created for the transfer event. When the transcript is sent to ML (this happens if you use an AI/AI+ model in "Hit Sprinklr AI/AI+ model") and the "Include Transfer Activity" toggle is disabled in the AI+ model, the hardcoded message sent to ML is "Agent Initiated Transfer of call".
Insight Card Behavior: If a transfer message is detected as a proof message in the ML response for the Agent Quality Model, the hardcoded message (which depends on whether the toggle was enabled for that model) is used in the insight card along with the ML explanation. Refer to the screenshot. Note that this enrichment is only supported in Agent Quality Models when the transfer toggle is enabled.

When the Transfer toggle is enabled:
Example: For voice cases where transfer-related details (such as Transfer Type, Transfer Queue, etc.) are needed for ML to accurately answer the question and description, this toggle should be enabled for that Agent Quality Model. One example of such a scenario is "Check if the customer is reaching out to the contact center regarding payment issues; if so, the agent should send it to 'Payment Issue IVR'."

If the toggle is enabled, the hardcoded message is enriched with all relevant details: Agent initiated {Type of Transfer} transfer to {Call Transferred To}{ from {Initial Queue}}{ to {Transfer Queue}}{ ({Transfer IVR})}{ after selecting the skill {Skills}}. Some examples are listed below, followed by notes and a sketch of how the optional segments are assembled.
Agent initiated IVR transfer to IVR (ivr1).
Agent initiated Warm transfer to Queue from queue1 to queue2.
Agent initiated Warm transfer to Agent after selecting the skill Spanish.
Note
{ from {Initial Queue}} will only be populated if Initial Queue is not empty.
{ to {Transfer Queue}} will only be populated if Transfer Queue is not empty.
{ ({Transfer IVR})} will only be populated if Transfer IVR is not empty.
{ after selecting the skill {Skills}} will only be populated if Skills is not empty.
{ ({Transfer Queue})} will only be populated if Transfer Queue is not empty.
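The enrichment logic can be sketched as a small string builder in which each optional segment is appended only when its value is non-empty, per the notes above. Parameter names are illustrative; this is not the platform's actual implementation.

```python
def transfer_message(transfer_type: str, transferred_to: str,
                     initial_queue: str = "", transfer_queue: str = "",
                     transfer_ivr: str = "", skills: str = "") -> str:
    """Build the enriched transfer message; each optional segment is added
    only when its value is non-empty. Illustrative only."""
    msg = f"Agent initiated {transfer_type} transfer to {transferred_to}"
    if initial_queue:
        msg += f" from {initial_queue}"
    if transfer_queue:
        msg += f" to {transfer_queue}"
    if transfer_ivr:
        msg += f" ({transfer_ivr})"
    if skills:
        msg += f" after selecting the skill {skills}"
    return msg + "."

print(transfer_message("Warm", "Queue", initial_queue="queue1", transfer_queue="queue2"))
# Agent initiated Warm transfer to Queue from queue1 to queue2.
print(transfer_message("IVR", "IVR", transfer_ivr="ivr1"))
# Agent initiated IVR transfer to IVR (ivr1).
```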
Activity Based Insights
Case Assignment Activity for digital interactions is also sent to ML for additional context when the “Include Transfer Activity” toggle is enabled.
Example: For digital cases where assignment-related details (such as assignment type, assignment queue, etc.) are needed for ML to accurately answer the question and description, this toggle should be enabled for that Agent Quality Model. One example of such a scenario is "Check if the customer is reaching out to the contact center regarding payment issues; if so, the agent should send it to 'Payment Issue IVR'."
Hardcoded Message format: Interaction was assigned to {Transferred To}{ ({Transfer Queue})}. Some examples are:
Interaction was assigned to Agent
Interaction was assigned to Queue (Queue 1)
Note: { ({Transfer Queue})} will only be populated if Transfer Queue is not empty.
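A minimal sketch of this assignment message format, assuming the placeholder rule above; the function name is illustrative.

```python
def assignment_message(transferred_to: str, transfer_queue: str = "") -> str:
    """Compose the assignment-activity message for digital interactions;
    the queue segment appears only when Transfer Queue is non-empty.
    Illustrative only."""
    msg = f"Interaction was assigned to {transferred_to}"
    if transfer_queue:
        msg += f" ({transfer_queue})"
    return msg

print(assignment_message("Agent"))             # Interaction was assigned to Agent
print(assignment_message("Queue", "Queue 1"))  # Interaction was assigned to Queue (Queue 1)
```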
Insight Card Text: For assignment activity in digital interactions, no UMID is created. A dummy UMID is used for this in AI scoring, and if the proof message in the ML response of the Agent Quality Model includes this dummy UMID, then the above-mentioned hardcoded message will be used within the Insight Card.
This enrichment is only supported for assignment scenarios in digital interactions and not in voice interactions, as for voice, we use “Transfer Messages” to get insights.
This activity is only sent to ML if the toggle is enabled. If disabled, no context is sent to ML regarding assignment activity.
There will be no highlighting in the horizontal timeline, as there is no UMID for activities.
Message Custom Field-Based Insights
If the message-level custom field condition matches any message in the conversation, that message is highlighted based on the user-defined value for "Highlight Text". If text is to be highlighted, the highlighting color depends on the "Show as insight with Sentiment" value, e.g., Positive → Green, Negative → Red, Neutral → Yellow. Refer to the table below to understand the behavior for different scenarios.


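As a rough sketch of the behavior described above, the following maps a matched message-level custom field condition to a highlight color; the field names, parameters, and return convention are assumptions for illustration only.

```python
SENTIMENT_COLOR = {"Positive": "Green", "Negative": "Red", "Neutral": "Yellow"}

def custom_field_highlight(message_fields: dict, field_name: str, expected_value: str,
                           highlight_text: bool, sentiment: str):
    """Return the highlight color for a message whose custom field matches the
    condition, or None when no text highlight applies. Names are illustrative."""
    if message_fields.get(field_name) != expected_value:
        return None                                  # condition not met
    if not highlight_text:
        return None                                  # insight shown without text highlighting
    return SENTIMENT_COLOR.get(sentiment, "Yellow")  # color follows the configured sentiment

print(custom_field_highlight({"Issue Type": "Billing"}, "Issue Type", "Billing", True, "Negative"))
# Red
```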
AI Insights Card Architecture: Grouping by Node/Element
Each Condition and Action Element that generates insights is treated as a distinct group.


The group name corresponds to the Condition and Action Element title configured at the checklist level.
Groups appear in the Insight Card following the chronological order of node execution.
To enable this grouping functionality, turn on the Enable grouping-based visualisation for item insights toggle at the checklist level.
A dropdown on the insight card lists all elements that have generated insights.
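A minimal sketch of the grouping behavior, assuming insights arrive as a chronologically ordered list tagged with the title of the element that produced them; the input shape and function name are illustrative.

```python
from collections import OrderedDict

def group_insights_by_element(insights):
    """Group insight entries by the Condition/Action Element that produced them,
    preserving the chronological order of node execution. The input shape
    (list of dicts with 'element_title' and 'text') is an assumption."""
    groups = OrderedDict()
    for item in insights:
        groups.setdefault(item["element_title"], []).append(item["text"])
    return groups

insights = [
    {"element_title": "Greeting Check", "text": "Agent greeted the customer by name."},
    {"element_title": "Empathy Check", "text": "Agent acknowledged the customer's frustration."},
    {"element_title": "Greeting Check", "text": "Opening statement matched the brand script."},
]
for element, texts in group_insights_by_element(insights).items():
    print(element, texts)
```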
