Configure CRON Scheduling in Sprinklr

Cron‑based scheduling enables flexible, time‑based execution for importing data from external APIs, giving you precise control over when data synchronization runs. Previously, the Source Import Data Connector supported only slot‑based scheduling, which restricted execution to predefined time windows and limited fine‑grained control.

With cron‑based scheduling, you can now define custom execution frequencies, such as minute‑level intervals, directly from the scheduling configuration. 

This capability allows more precise control over how often data is fetched, transformed, and ingested into Sprinklr, improving data freshness and synchronization reliability.

Cron‑based scheduling enables:

  • Minute‑level or custom execution intervals

  • Timezone‑aware scheduling

  • Precise start‑date control

  • Replacement of rigid slot‑based execution windows

  • Better alignment with external API rate limits and data availability
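Minute‑level scheduling can be illustrated with a short sketch. The function below is not Sprinklr's scheduler; it is a hypothetical stand‑in that computes the next execution times for an every‑N‑minutes schedule, the same behavior a cron field like `*/5 * * * *` expresses.

```python
from datetime import datetime, timedelta

def next_runs(start, every_minutes, count):
    """Yield the next `count` execution times for a minute-level
    interval schedule (e.g. every 5 minutes), starting from `start`.
    Illustrative only -- mimics a "*/5 * * * *" cron minute field."""
    # Align to the next interval boundary after `start`.
    minute = (start.minute // every_minutes + 1) * every_minutes
    run = start.replace(second=0, microsecond=0) + timedelta(
        minutes=minute - start.minute
    )
    for _ in range(count):
        yield run
        run += timedelta(minutes=every_minutes)

runs = list(next_runs(datetime(2024, 1, 1, 9, 3), 5, 3))
print([r.strftime("%H:%M") for r in runs])  # ['09:05', '09:10', '09:15']
```

A slot‑based scheduler could only pick from predefined windows; a cron schedule derives each run time directly from the interval.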

Prerequisites

Before you begin with CRON configuration, ensure that the following components are configured correctly.

Custom Entity

Create a Custom Entity in Sprinklr's Entity Studio to store the imported external data.

Note: Field data types should align with the API response to avoid transformation errors.

Purpose

  • Acts as the target schema for imported records

  • Defines attributes that map to API response fields

For detailed steps, refer to Create a Custom Entity.

External API

Define the External API that the Data Connector will invoke.

This External API will:

  • Store endpoint details, authentication, and request configuration

  • Act as the data source for the connector

For detailed steps, refer to Extensions Library.

Configure Response Adapter in External API

Configure a Response Adapter in the External API to enable the Data Connector to consume responses in a paginated and normalized format.

Responsibilities of the Response Adapter

  • Extract data records from the API response

  • Provide cursor or offset values (if applicable)

  • Return a standardized response structure consumable by the Data Connector
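The adapter's responsibilities above can be sketched as a small normalization function. The payload keys (`items`, `next_cursor`) are illustrative assumptions, not the actual API contract; the point is the standardized shape (records, cursor, size) the connector consumes.

```python
def adapt_response(raw):
    """Normalize a raw API payload into the shape the connector consumes.
    The input keys 'items' and 'next_cursor' are hypothetical -- adjust
    them to the real API response."""
    records = raw.get("items", [])
    return {
        "records": records,
        "cursor": raw.get("next_cursor"),  # None when no cursor is provided
        "size": len(records),
    }

page = adapt_response({"items": [{"id": 1}, {"id": 2}], "next_cursor": "abc"})
print(page["size"], page["cursor"])  # 2 abc
```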

Configuring these prerequisites ensures that:

  • Data ingestion runs reliably on a CRON schedule

  • Pagination works correctly across large datasets

  • Imported data maps cleanly to platform entities

  • Scheduling failures due to schema or API misconfiguration are avoided

Configure CRON Scheduling in Unified Data Connector

Follow these steps to configure CRON‑based scheduling for an External API data connector.

Step 1: Navigate to Data Connector

  1. Open the Sprinklr Launchpad.

  2. Navigate to Data Connector.

Step 2: Select the Target Entity

On the Entity Selection screen:

  1. Select the Custom Entity that you configured for CRON‑based scheduling.

  2. Select Next.


Step 3: Select Integration Type

On the Entity‑Specific Settings screen:

  1. Select the integration type you want to create.

  2. For details on supported integration types, see Supported Entity Types.

  3. Select Next.

Step 4: Configure Source Selection

On the Source Selection screen, configure the following fields:

  • Entity Source: Select External API.

  • Connector Name: Enter a meaningful name for the connector.

  • Description: Provide a brief description of the data pipeline.

Select Next.


Step 5: Configure Source‑Specific Settings

On the Source‑Specific Settings screen, configure the following:

  • Endpoint ID: Select the External API endpoint to fetch data from.

  • Pagination Type: Select Cursor‑based.

  • Pagination Logic: The connector uses cursor‑based pagination, driven by the response size.

  • Pagination Rule: More data is available as long as the response size equals the limit passed in the request.

    Behavior

    • If response_size == request_limit, the connector fetches the next page

    • If response_size < request_limit, pagination stops

These settings determine how data is fetched and iterated during execution.
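The pagination rule above can be expressed as a simple loop. This is a sketch, not the connector's implementation: `fetch_page` stands in for the External API call, and the 250‑record `DATA` list simulates the remote source.

```python
def fetch_all(fetch_page, limit=100):
    """Cursor-based pagination: keep fetching while the response size
    equals the request limit; a short page means no more data."""
    records, cursor = [], None
    while True:
        page = fetch_page(cursor, limit)
        records.extend(page["records"])
        if len(page["records"]) < limit:  # response_size < request_limit -> stop
            break
        cursor = page["cursor"]           # response_size == request_limit -> next page
    return records

# Simulated endpoint over 250 records (stand-in for the External API).
DATA = [{"id": i} for i in range(250)]

def fake_page(cursor, limit):
    start = cursor or 0
    return {"records": DATA[start:start + limit], "cursor": start + limit}

print(len(fetch_all(fake_page, limit=100)))  # 250
```

With `limit=100`, the loop fetches pages of 100, 100, and 50 records; the final short page stops pagination.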

Once the endpoint is successfully validated, the system automatically opens the Mapping Configuration screen.


Step 6: Configure Data Mapping

On the Mapping Configuration screen:

  1. Map External API fields to the corresponding Custom Entity attributes.

  2. Ensure data types align correctly to prevent ingestion errors.

For detailed guidance, see Mapping Configuration Screen.

Select Next to continue.
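The mapping step can be sketched as a field map with type checks. The field and attribute names here are hypothetical examples; the check mirrors the guidance above that data types must align to prevent ingestion errors.

```python
# API field -> (Custom Entity attribute, expected type). Names are illustrative.
FIELD_MAP = {
    "ticket_id": ("case_number", int),
    "subject":   ("title", str),
}

def map_record(api_record):
    """Map one API record onto entity attributes, verifying that each
    value matches the attribute's expected type before ingestion."""
    entity = {}
    for api_field, (attr, expected) in FIELD_MAP.items():
        value = api_record[api_field]
        if not isinstance(value, expected):
            raise TypeError(f"{api_field}: expected {expected.__name__}, "
                            f"got {type(value).__name__}")
        entity[attr] = value
    return entity

row = map_record({"ticket_id": 42, "subject": "Login issue"})
print(row)  # {'case_number': 42, 'title': 'Login issue'}
```

Rejecting a mismatched record (for example, a string where an integer is expected) at mapping time surfaces the problem immediately instead of during a scheduled run.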


Step 7: Configure Additional Settings

On the Additional Settings screen, configure the following:

  • Share Settings: Define access and visibility for the connector.

  • Schedule Settings: Define when and how often the connector runs. Configure the following fields in the Schedule Settings section:

  • Schedule Type: Determines the scheduling mechanism used for execution. Select the CRON Scheduling option to configure flexible, time‑based schedules such as daily, hourly, or minute‑level execution.

  • Start Date: The date on which the schedule becomes active.

  • End Date: The date on which the schedule stops executing. If specified, the job no longer runs after this date; if left empty, the schedule continues indefinitely.

  • Time Zone: Determines how the system interprets the schedule timing. All scheduled executions follow the selected timezone, ensuring consistent execution regardless of user location.

  • Schedule Frequency: Defines how often the job runs.

    • Schedule Type: Select the recurrence pattern.

    • Time Configuration: Specify the exact execution time using Hour, Minute, and AM/PM.

Example: Daily at 12:00 AM means the job runs once every day at midnight in the selected timezone.
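The "Daily at 12:00 AM" example can be computed with Python's standard `zoneinfo` module. This is a minimal sketch, assuming an IANA timezone name (`Asia/Kolkata` is just an example); it shows how a timezone‑aware schedule resolves the next midnight regardless of where the current time was observed.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_daily_midnight(now, tz="Asia/Kolkata"):
    """Next run for a 'Daily at 12:00 AM' schedule, interpreted in the
    selected timezone. The timezone name is an illustrative example."""
    local = now.astimezone(ZoneInfo(tz))          # view "now" in the schedule's zone
    return (local + timedelta(days=1)).replace(   # midnight of the next local day
        hour=0, minute=0, second=0, microsecond=0
    )

run = next_daily_midnight(datetime(2024, 1, 1, 15, 30, tzinfo=ZoneInfo("UTC")))
print(run.isoformat())
```

Because the arithmetic happens in the schedule's timezone, 15:30 UTC (21:00 in Kolkata) resolves to midnight of January 2 local time, not midnight UTC.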

After completing these steps:

  • The connector runs according to the defined CRON schedule.

  • Data is fetched from the External API, processed using pagination logic, and ingested into the Custom Entity.

Cron‑based scheduling significantly enhances data ingestion by replacing rigid slot‑based execution with flexible, time‑driven control. By combining cron scheduling with External APIs, Response Adapters, and Custom Entities, you can build reliable, scalable, and precisely timed data ingestion pipelines.