Prompt Engineering Best Practices
This guide outlines best practices for writing effective prompts in AI+ Studio, including how to construct system and base prompts, test deployments, and optimize prompt performance across models and use cases.
What Is Prompt Engineering?
Prompt engineering is the practice of writing clear and effective instructions for a model so that it consistently generates relevant and usable responses. Because model outputs are non-deterministic, prompt engineering combines structured design with iterative testing to achieve reliable results.
In AI+ Studio, each AI deployment requires two types of prompts:
System Prompt – Defines global behavior and tone.
Base Prompt – Contains task-specific instructions.
Both types of prompts influence how the model interprets inputs and generates responses.
System Prompt
A system prompt sets the model's overarching persona, tone, and response boundaries for a given deployment. It is typically authored by admins or developers to enforce consistency across prompts.
Example:
“You are a helpful, concise assistant trained to provide enterprise-ready responses. Avoid speculation and cite data where applicable.”
Base Prompt
A base prompt is the user-level instruction that defines the specific task the model should perform. It is usually dynamic and incorporates contextual data, user inputs, or application logic using placeholders.
Example:
“Summarize the following customer conversation and highlight any negative sentiment expressed by the user.”
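Because base prompts typically incorporate runtime data through placeholders, they can be thought of as string templates. The sketch below is illustrative only; the `{conversation}` placeholder name and the `build_base_prompt` helper are assumptions, not AI+ Studio syntax.

```python
# Hypothetical sketch of a base prompt with a dynamic placeholder.
# The placeholder name and helper are illustrative, not AI+ Studio syntax.
BASE_PROMPT = (
    "Summarize the following customer conversation and highlight any "
    "negative sentiment expressed by the user.\n\n"
    "Conversation:\n{conversation}"
)

def build_base_prompt(conversation: str) -> str:
    """Insert the runtime conversation text into the template."""
    return BASE_PROMPT.format(conversation=conversation)
```

At request time, the application fills the placeholder with the actual conversation before the prompt is sent to the model.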
Best Practices for Prompt Engineering
Well-structured prompts improve the accuracy, consistency, and safety of AI outputs. This section outlines best practices for writing both system and base prompts in AI+ Studio.
Best Practices for System Prompts
1. Set a Clear Role and Persona
Clearly define the model’s function.
❌ Less effective: “You help with cases.”
✅ Better: “You are a multilingual support assistant who specializes in summarizing Care case conversations for customer service agents.”
2. Limit Ambiguity
Avoid vague instructions and provide specific expectations.
❌ Less effective: “Fix the text and make it better.”
✅ Better: “You are a professional writing assistant. Fix spelling and grammar while keeping the original tone and meaning unchanged.”
3. Define the Area of Expertise
Give the model domain-specific direction.
❌ Less effective: “You know how to write social posts.”
✅ Better: “You are a social media expert skilled in crafting hashtags based on trending topics and keyword relevance.”
Best Practices for Base Prompts
1. Use Separators
Clearly separate instructions from input context using visual markers like ### or """.
❌ Less effective: "Summarize the text below."
✅ Recommended:
“Summarize the text below as a bullet point list of the most important points.
###
[Insert text here]”
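As a rough illustration, separator-based prompts can be assembled mechanically. The `build_prompt` helper below is hypothetical, not an AI+ Studio function; it simply shows the instruction/context split the practice describes.

```python
def build_prompt(instruction: str, context: str, sep: str = "###") -> str:
    """Wrap the input context in separator lines so the model can tell
    the instruction apart from the text it should operate on."""
    return f"{instruction}\n{sep}\n{context}\n{sep}"

prompt = build_prompt(
    "Summarize the text below as a bullet point list of the most important points.",
    "Customer reported repeated login failures after updating the mobile app.",
)
```

Using a consistent separator (`###` or `"""`) across all your base prompts makes them easier to read and harder for injected input to confuse.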
2. Define Input and Expected Output
Be explicit about both input structure and output format.
❌ Less effective: “Process this text.”
✅ Better: “Given the campaign brief below, generate a LinkedIn caption under 30 words that highlights the product's key benefit.”
3. Provide Context
Include platform, purpose, and tone to avoid misinterpretation.
❌ Less effective: “Generate hashtags for this post.”
✅ Better: “Generate 5 relevant hashtags for the following brand marketing post written for Instagram. Prioritize engagement and relevance.”
4. Avoid Ambiguity
Use precise instructions for constraints.
❌ Less effective: “Make it short.”
✅ Better: “Rewrite the text in under 50 words, keeping all key points intact.”
5. Use Examples When Appropriate
If the task may be interpreted in different ways, provide a sample.
❌ Less effective: "Generate hashtags."
✅ Recommended: “Generate hashtags based on the following description:
‘Just launched our new vegan skincare line!’
Expected Output: #VeganBeauty #SkincareLaunch #EcoFriendly”
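Few-shot examples like the one above can be assembled programmatically. The `few_shot_prompt` helper and its field labels are assumptions for illustration, not part of AI+ Studio.

```python
def few_shot_prompt(task, examples, new_input):
    """Build a prompt that shows sample input/output pairs before the
    real input. `examples` is a list of (input, expected_output) tuples."""
    lines = [task, ""]
    for sample_in, sample_out in examples:
        lines += [f"Description: {sample_in}", f"Hashtags: {sample_out}", ""]
    lines += [f"Description: {new_input}", "Hashtags:"]
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Generate hashtags based on the description.",
    [("Just launched our new vegan skincare line!",
      "#VeganBeauty #SkincareLaunch #EcoFriendly")],
    "Announcing our winter sale on outdoor gear.",
)
```

Ending the prompt with an empty `Hashtags:` label nudges the model to complete the pattern established by the examples.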
6. Provide Positive Guidance
Rather than listing what to avoid, describe what to do.
❌ Less effective: “Don’t add filler words.”
✅ Better: “Rewrite the paragraph in under 30 words. Focus on the core message and remove filler content.”
Testing Prompts and Deployments
Prompt testing ensures reliability across real-world inputs and different model providers. AI+ Studio provides tools to validate prompt behavior before going live.
Best Practices for Testing
1. Use Varied Input Types
Test with diverse inputs including:
Short and long text
Structured data (tables, lists)
Code snippets or URLs
Numerical data
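One lightweight way to organize these variations is a labeled set of test inputs that you run through the same prompt. The categories mirror the bullets above; the structure and sample texts are purely illustrative.

```python
# Illustrative test inputs covering the input categories above.
TEST_INPUTS = {
    "short_text": "Order delayed.",
    "long_text": "The customer first contacted support on Monday about billing. " * 20,
    "structured_data": "| case_id | status |\n| 101 | open |\n| 102 | closed |",
    "code_or_url": "https://example.com/case/101",
    "numerical": "Refund of $42.50 issued; CSAT dropped from 4.8 to 3.9.",
}

for label, text in TEST_INPUTS.items():
    # Replace this print with a call to your deployment and a check
    # of the response for each input category.
    print(f"[{label}] {text[:60]}")
```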
2. Test Edge Cases
Include unusual or incomplete inputs, special characters, or contradictory content to see how the model responds.
3. Check for Output Consistency
Test the same input multiple times to verify that the output is consistent. This is especially important for use cases like summarization and translation.
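A simple consistency check can be sketched as below. The `generate` argument stands in for whatever client call your deployment exposes; it is not an AI+ Studio API.

```python
def check_consistency(generate, prompt, runs=5):
    """Call the deployment `runs` times with the same prompt and count
    how many distinct outputs come back. `generate` is a stand-in for
    your deployment's client call."""
    outputs = [generate(prompt) for _ in range(runs)]
    return len(set(outputs)), outputs

# Example with a stubbed-out model call:
distinct, outputs = check_consistency(
    lambda p: "Summary: customer reported a login issue.",
    "Summarize the conversation below...",
)
```

A distinct-output count close to 1 suggests stable behavior; a high count signals that the prompt (or model temperature) needs tightening.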
4. Use Context Variations
Ensure the model handles changes in tone, language, or direction correctly, especially for dynamic inputs.
5. Validate Guardrails
Input sensitive or inappropriate test prompts to ensure the model:
Rejects unsafe content
Applies content filters
Complies with platform guidelines
6. Ensure PII Masking
If PII masking is enabled, verify that inputs are anonymized in requests. If de-masking is enabled, ensure that responses restore masked information correctly.
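As a mental model of mask/de-mask round-tripping, the toy sketch below handles a single PII type (email addresses) with a regex. AI+ Studio's actual PII masking is configured in the platform and covers far more than this; the helpers here are assumptions for illustration only.

```python
import re

# Toy pattern for one PII type; real masking covers many more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(text: str, store: dict) -> str:
    """Replace each email with a token and remember the original value."""
    def _repl(match):
        token = f"<PII_{len(store)}>"
        store[token] = match.group(0)
        return token
    return EMAIL.sub(_repl, text)

def demask_pii(text: str, store: dict) -> str:
    """Restore masked values in the model's response."""
    for token, original in store.items():
        text = text.replace(token, original)
    return text
```

When testing, confirm that the request leaving your system contains only tokens, and that the de-masked response matches the original values exactly.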
Provider and Model Specific Testing
Prompt behavior can vary across different providers such as OpenAI, Amazon Bedrock, and Google Vertex AI. A prompt that performs well with one model may behave differently with another.
Best Practice
If you change the model or provider, always re-test your prompts to ensure the deployment behaves as expected.
Using Templates for Prompt Design
AI+ Studio offers predefined templates for common use cases such as summarization, rewriting, classification, and hashtag generation.
Best Practice
Use these templates as a starting point. They reflect prompt construction best practices and are optimized by the Sprinklr AI team. You can modify them to suit your specific use case, but they provide a solid foundation for accuracy and stability.
Prompt engineering is an ongoing, iterative process. As models evolve and use cases become more complex, even well-performing prompts may require updates. Continually test, monitor, and refine your prompts to maintain accuracy, reliability, and safety.
Treat prompts as configurable assets that are critical to the quality of your AI deployment. With proper design and testing, AI+ Studio can help deliver consistent, high-quality outcomes at scale.