Knowledge Evaluation for Sprinklr AI Agents

The Evaluate Knowledge feature lets you bulk-test the accuracy and effectiveness of your AI Agent configurations. Using the Generate Q&A capability in AI Agent Studio, the system creates question-and-answer (Q&A) pairs from knowledge articles, validates agent responses against them, and integrates with the RAG Evaluation Framework for continuous improvement.

Steps to Set Up Knowledge Evaluation

  • Create a new AI Agent or click the Manage icon on an existing one.

  • In the Manage window, click View next to Evaluate or select Evaluate Knowledge from the left pane.

  • In the Knowledge Evaluations window, click the Caret (˅) icon next to the Generate Q&A button and select one of the following:

    • Generate Q&A: Automatically create relevant question-and-answer pairs using Sprinklr AI.

    • Upload Q&A: Import Q&A pairs from an Excel file (a sketch of a possible file layout follows this list).

    • Add Q&A: Manually enter Q&A pairs.

  • Click Save.

    Note: When using FAQ+, you can add multiple Q&A pairs and leave the Answer field blank. If no answer is provided, the system auto-generates one using the Generate Expected Answer feature; a banner within the interface will indicate this.
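
If you choose Upload Q&A, the import file is a simple spreadsheet of Q&A pairs. Sprinklr's exact template isn't reproduced here, so the layout below is a minimal sketch assuming one pair per row with Question and Answer column headers; adjust it to match the template provided in AI Agent Studio.

```python
# Minimal sketch of building an Upload Q&A spreadsheet with pandas.
# Assumption: one row per pair with "Question" and "Answer" columns;
# the actual template in AI Agent Studio may use different headers.
import pandas as pd  # writing .xlsx also requires the openpyxl package

qa_pairs = [
    {"Question": "How do I reset my password?",
     "Answer": "Open Settings > Security and select Reset Password."},
    {"Question": "Which channels does the AI Agent support?",
     "Answer": ""},  # a blank answer can be auto-generated in FAQ+ (see the note above)
]

pd.DataFrame(qa_pairs).to_excel("qa_pairs.xlsx", index=False)
```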

Generate Q&A Automatically

When you select Generate Q&A, the system initiates an automated background job. No further action is required. While processing:

  • The Generate Q&A button becomes disabled, and a loading indicator appears to show that the job is in progress. This prevents duplicate submissions and maintains system stability.

  • A pop‑up window displays each step being executed for full transparency (a conceptual code sketch of these steps follows the list):

    • Extract knowledge articles: Identifies the relevant content sources.

    • Generate Q&A pairs: Creates questions along with their expected answers.

    • Validate alignment: Confirms the generated pairs correspond only to the appropriate knowledge content.

    • Save dataset: Stores the final Q&A pairs in the RAG Evaluation Framework.
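
To make the four steps concrete, here is a minimal Python sketch of the pipeline's shape. It is illustrative only: every function, data shape, and value below is a hypothetical stub standing in for internal Sprinklr behavior.

```python
# Conceptual sketch of the four background steps; not Sprinklr's actual
# implementation. All functions and data shapes here are hypothetical stubs.

def extract_knowledge_articles(agent_id: str) -> list[str]:
    """Step 1: identify the relevant content sources (stubbed with sample text)."""
    return ["Password resets are handled under Settings > Security."]

def generate_qa_pairs(articles: list[str]) -> list[dict]:
    """Step 2: draft questions with expected answers (a real system would call an LLM)."""
    return [{"question": "Where are password resets handled?",
             "answer": "Under Settings > Security.",
             "source": articles[0]}]

def is_aligned(pair: dict, articles: list[str]) -> bool:
    """Step 3: keep only pairs grounded in the supplied articles."""
    return pair["source"] in articles

def save_dataset(agent_id: str, pairs: list[dict]) -> None:
    """Step 4: persist the validated pairs (stand-in for the RAG Evaluation Framework store)."""
    print(f"Saved {len(pairs)} pair(s) for agent {agent_id}")

def run_generate_qa_job(agent_id: str) -> list[dict]:
    articles = extract_knowledge_articles(agent_id)
    pairs = generate_qa_pairs(articles)
    validated = [p for p in pairs if is_aligned(p, articles)]
    save_dataset(agent_id, validated)
    return validated

run_generate_qa_job("agent-123")
```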

Once processing completes, the generated Q&A pairs appear in the dataset view.

From the dataset view, you can:

  • Review the generated pairs for relevance and accuracy.

  • Edit the questions or answers to refine clarity, context, or tone.

  • Export the dataset for evaluation workflows or integration with other systems (see the sketch after this list).
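
As one example of the export step, the sketch below writes a reviewed dataset to CSV for downstream tooling. The field names and file format are assumptions; use whichever export format your evaluation workflow expects.

```python
# Illustrative export of a reviewed Q&A dataset to CSV.
# The "question"/"expected_answer" field names are assumptions.
import csv

dataset = [
    {"question": "Where are password resets handled?",
     "expected_answer": "Under Settings > Security."},
]

with open("qa_dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["question", "expected_answer"])
    writer.writeheader()
    writer.writerows(dataset)
```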

Steps to Run a Knowledge Evaluation

  1. Complete the steps outlined in Steps to Set Up Knowledge Evaluation above.

  2. In the Knowledge Evaluations window, click Run Evaluation in the top‑right corner.

  3. Enter a clear Prompt, add any required Additional Fields, or select from the available resources. Then click Save to begin the evaluation (a conceptual sketch of what a run measures follows these steps).
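
Conceptually, an evaluation run asks the agent each stored question and scores the response against the expected answer. The sketch below illustrates that loop with a naive token-overlap score; ask_agent and overlap_score are hypothetical stand-ins, not Sprinklr APIs or the metrics the RAG Evaluation Framework actually applies.

```python
# Hypothetical sketch of what an evaluation run measures. ask_agent and
# overlap_score are illustrative stand-ins, not Sprinklr APIs or metrics.

def ask_agent(question: str) -> str:
    """Placeholder for querying the AI Agent under test."""
    return "Password resets live under Settings > Security."

def overlap_score(expected: str, actual: str) -> float:
    """Naive token-overlap score in [0, 1]; real frameworks use richer metrics."""
    exp, act = set(expected.lower().split()), set(actual.lower().split())
    return len(exp & act) / len(exp) if exp else 0.0

dataset = [{"question": "Where are password resets handled?",
            "expected_answer": "Under Settings > Security."}]

for pair in dataset:
    response = ask_agent(pair["question"])
    score = overlap_score(pair["expected_answer"], response)
    print(f"{pair['question']!r} -> score {score:.2f}")
```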