Get Started with BYOM in AI+ Studio
Sprinklr follows an LLM-agnostic approach to Generative AI, giving you the flexibility to choose the language model that best fits your business needs. Along with Sprinklr’s in-house and partner models, you can use the Bring Your Own Model (BYOM) option to integrate your internally developed LLMs directly into AI+ Studio.
Why BYOM?
Leverage existing investments – Maximize ROI on your in-house LLM development.
Domain customization – Use models fine-tuned to your industry or brand-specific context.
Flexibility and control – Decide how your model is hosted, maintained, and scaled.
Note: Your organization is responsible for hosting and maintaining the LLM infrastructure, including managing compute costs, latency, and model accuracy. While you can refine prompts via the Sprinklr UI, the overall performance of a BYO LLM ultimately depends on the quality of your model.
Prerequisites
To integrate a BYOM with AI+ Studio, Sprinklr requires:
Working API Endpoint – Your LLM must expose an API endpoint compatible with Sprinklr’s backend. If the formats differ, Sprinklr will provide request-response adapters to translate between your schema and Sprinklr’s (see the illustrative adapter sketch after the sample response below).
API Details – Share input format, response structure, and a sample request/response.
Example Request
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Hello!" }
    ]
  }'
Example Response
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "choices": [
    {
      "message": { "role": "assistant", "content": "Hello there, how may I assist you today?" }
    }
  ],
  "usage": { "prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21 }
}
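For illustration, suppose your in-house endpoint uses a different request schema than the OpenAI-style example above. The endpoint URL, field names, and model ID in the sketch below are hypothetical; in a case like this, Sprinklr’s request-response adapters would translate the outbound request into your format and map your response back into the expected structure.

curl https://llm.example.com/v1/generate \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $MY_LLM_API_KEY" \
  -d '{
    "model_id": "acme-llm-v2",
    "system_prompt": "You are a helpful assistant.",
    "prompt": "Hello!",
    "max_tokens": 256
  }'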
Note: BYOM APIs must support function calling/tool calling to leverage features of Sprinklr’s Agentic AI.
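For reference, the sketch below shows what a tool-calling request might look like in the OpenAI-compatible format used above. The get_order_status function is a hypothetical example, not part of Sprinklr’s API; when the model decides to invoke it, the response includes a tool_calls entry instead of plain message content.

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      { "role": "user", "content": "What is the status of order #12345?" }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_order_status",
          "description": "Look up the current status of an order by its ID.",
          "parameters": {
            "type": "object",
            "properties": {
              "order_id": { "type": "string", "description": "The order ID to look up." }
            },
            "required": ["order_id"]
          }
        }
      }
    ]
  }'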
Additional Notes
You can use BYOM alongside Bring Your Own Key (BYOK), or opt for Sprinklr-provided AI instead.
You cannot combine Sprinklr AI with BYOM/BYOK in the same deployment due to commercial implications.
Model switching is supported at the use-case, agent, or copilot level; dynamic switching at runtime is not available.
By configuring Bring Your Own Model (BYOM) in AI+ Studio, you can extend Sprinklr’s generative AI capabilities with your own internally developed LLMs. This setup gives you control over hosting and performance while ensuring seamless integration into Sprinklr workflows.
Refer to Implementation Steps for BYOM in AI+ Studio for more details.