Content Instructions in AI Agent

Content Instructions define how your AI Agent processes, interprets, and communicates using your training content. These instructions act as predefined rules that guide the agent’s tone, behavior, and response style when interacting with customers.

By configuring Content Instructions, you can:

  • Ensure consistent use of business-approved language.

  • Control how the agent references product details.

  • Define rules for handling sensitive topics.

  • Customize prompts to align with your organization's voice.

Once configured, the AI Agent automatically applies these instructions during conversations.

Steps to Configure Content Instructions

  1. Access the Knowledge Content Section

    • Expand the Build section of your AI Agent.

    • Click View under the Knowledge Content section.

  2. Open Content Instructions

    • Click Configure next to Content Instructions.

    • You will be redirected to the configuration page.

  3. Edit Default Instructions

    • In the Content Instructions window, click any default instruction title or prompt you want to modify.

    • Default Instruction Sets

      The following instruction sets are preconfigured:

      • Objective: Defines the primary purpose of the AI Agent, outlining what it should achieve during customer interactions.

      • Information Sourcing: Specifies how the AI Agent should retrieve and use information from available knowledge sources to provide accurate responses.

      • Clarifying Questions: Guides the agent on when and how to ask follow-up questions to better understand customer intent and provide relevant answers.

      • Communication Principles: Establishes rules for tone, language style, and professionalism to ensure consistent and brand-aligned communication.

      • Absolute Restrictions: Lists topics, phrases, or actions that the AI Agent must never use or perform under any circumstances.

      • Conversation Management and Tone: Provides guidelines for maintaining a smooth, engaging conversation flow while adhering to the desired tone (e.g., friendly, formal, empathetic).

      • Special Instructions and Edge Cases: Includes handling rules for unique scenarios or exceptions that fall outside standard conversation patterns.

  4. Customize Instructions

    Update details such as brand name, specific terminology, tone guidelines, or any business-specific rules.

  5. Add New Instructions

  • Click the + Add Instructions button.

    • Enter an Instruction Title and define your content instruction.

  6. Click Save to apply your updates.
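
Conceptually, steps 3 through 5 amount to filling business-specific values into instruction prompts. The sketch below illustrates that idea in Python; the instruction titles are taken from the defaults above, but the prompt texts, the {placeholder} syntax, and the customize helper are hypothetical illustrations, not the platform's actual storage format.

```python
# Hypothetical instruction prompts using {placeholder} syntax;
# the platform's actual internal format may differ.
instructions = {
    "Communication Principles": (
        "Always refer to the company as {brand_name} and keep a "
        "{tone} tone in every reply."
    ),
    "Absolute Restrictions": (
        "Never discuss {restricted_topic} or share internal pricing."
    ),
}

def customize(instructions, **values):
    """Fill business-specific values into each instruction prompt."""
    return {title: prompt.format(**values)
            for title, prompt in instructions.items()}
```

For example, calling customize(instructions, brand_name="Acme", tone="friendly", restricted_topic="legal advice") returns the same prompts with each placeholder replaced by the supplied value.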

Smart FAQ Additional Configurations

The Smart FAQ node in AI Agents provides advanced configuration options that allow you to fine‑tune how FAQ+ questions are interpreted and answered by the Large Language Model (LLM). These options are especially useful when testing or upgrading LLM models to improve response quality and consistency.

Changing models or experimenting with new configurations may trigger evaluation flags. You can handle these flags by adding custom Groovy logic, which gives you precise control over model behavior and response optimization.

Configure Additional Settings

You can add custom Groovy code in the Additional Configurations section of the Content Instructions window. Along with custom logic, you can configure the following AI and governance settings:

  • LLM Provider and Model: Specify the provider and model used by the AI Agent for generating responses.

  • PII Masking Template: Apply a masking template to protect sensitive or personally identifiable information.

  • Guardrails: Enforce response boundaries to ensure outputs remain compliant, safe, and on topic.
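
The platform applies the selected masking template automatically, but the underlying idea is straightforward to sketch. The Python example below illustrates regex-based PII masking in general; it is not the platform's implementation, and the patterns, placeholder labels, and mask_pii helper are assumptions for demonstration only.

```python
import re

# Hypothetical masking rules: each pattern is replaced with a placeholder.
# A real PII masking template typically covers many more entity types.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{4}[-\s]?\d{4}[-\s]?\d{4}[-\s]?\d{4}\b"), "[CARD]"),
]

def mask_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    for pattern, placeholder in PII_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text
```

With these rules, mask_pii("Reach me at jane.doe@example.com or 555-123-4567.") returns "Reach me at [EMAIL] or [PHONE].", so sensitive values never reach the model or the transcript.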

Configure LLM Parameters

In addition to selecting the provider and model, you can adjust key LLM parameters to control response behavior:

  • Max Tokens: Specifies the maximum number of tokens allowed across the combined request and response.

  • Temperature: Controls output variability. Lower values produce more deterministic responses, while higher values allow greater creativity.

  • Frequency Penalty: Reduces repetition by penalizing tokens that appear frequently in the generated text.

  • Presence Penalty: Encourages the model to introduce new topics by penalizing tokens that have already appeared.
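
To build intuition for how these parameters interact, here is a self-contained Python sketch of temperature-scaled, penalty-adjusted sampling probabilities. It is a simplified model of the general technique, not the exact behavior of any particular provider; the token names and scores are invented.

```python
import math
from collections import Counter

def next_token_probs(logits, generated, temperature=1.0,
                     frequency_penalty=0.0, presence_penalty=0.0):
    """Turn raw logits into sampling probabilities, applying
    temperature scaling and frequency/presence penalties."""
    counts = Counter(generated)
    adjusted = {}
    for token, logit in logits.items():
        # Frequency penalty scales with how often the token already appeared;
        # presence penalty is a flat cost for any token that appeared at all.
        logit -= frequency_penalty * counts[token]
        logit -= presence_penalty * (1 if counts[token] else 0)
        adjusted[token] = logit / temperature
    # Softmax over the adjusted logits.
    peak = max(adjusted.values())
    exp = {t: math.exp(v - peak) for t, v in adjusted.items()}
    total = sum(exp.values())
    return {t: v / total for t, v in exp.items()}
```

Lowering temperature concentrates probability on the highest-scoring token, while raising frequency_penalty pulls probability away from tokens that already appear in the generated text.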