Importing Responses Data
The Import Survey Responses feature allows users to upload response data from external sources directly into a survey. This functionality is particularly useful for consolidating survey data collected through multiple channels or transitioning responses from legacy systems into a centralized platform. By supporting imports, the feature helps ensure that all relevant data is accounted for and accessible in one place.
You can map imported data to the survey's questions, ensuring consistency and accuracy in the dataset. This alignment preserves the integrity of analysis and reporting, even when responses are gathered outside the platform. Importing responses also enables you to incorporate historical data into your surveys, creating a more comprehensive view of trends and patterns over time.
By simplifying data consolidation and ensuring compatibility with the survey structure, this feature helps you to maintain organized and complete datasets. It supports use cases such as retrospective analysis, data migration, and multi-source data integration, all while ensuring the continuity of workflows and insights. Refer to the Survey Response Overview article for details on use cases and value additions.
Prerequisites
To import responses, you require View and Edit permissions at the Survey level under the CFM App.
To set up a data connector, the following permissions are required under Setup: View, View Audit Logs, Create, Edit, and Delete for Data Connectors.
Setting Up Data Connectors
Files can be imported through two methods:
Data Connector: A data connector can be set up for file-based data ingestion.
Response API: Responses can also be uploaded via an API. This will be covered in a separate article.
Let us have a look at the steps:
Navigate to the Global Settings and select Data Connectors under Program Settings.
After clicking the “Data Connectors” option, you can see the list of all previously configured data connectors for Customer Feedback Management and related entities such as Hierarchy, Transaction, and Response.
Click the “+ Install Connector” option to create a new connector.
Go to the Entity Selection page and select an entity from the dropdown. The response entity is listed as “CFM Response”. Select it and click Next.
Go to Entity Specific Settings and select the details:
Survey: You can select the survey from the dropdown.
Integration Type: Select “Insert”, which is the only supported type; the connector can only create new records.
Click Next.
Go to Source Selection page and fill in the details:
Entity Source: Currently, File Upload, SFTP, Azure Blob, S3, and GCS are supported through the data connector. For file-based upserts, use the native “Update Data” flow within the Hierarchy screens.
Connector Name: Enter a name for the data connector.
Description: Add a description for the data connector.
Click Next.
Go to the Source Specific Settings page and fill in the details. Here, you must enter the actual connector details based on the type of source being used:
Refer to these articles for more details:
File Upload
Configuring a Google Cloud Storage (GCS) Connection
Configuring Azure Blob Connection
Selecting Directories for SFTP/S3/GCS/Azure Blob Connections
Bucket: Enter a bucket name.
Authentication Details: Select the authentication type from the dropdown: Access Key ID and Secret, or IAM Role.
Access Key ID and Secret: Enter the access key ID and access key secret.
IAM Role: Enter the IAM Role ARN.
Go to Source Files' Directory Path and browse and select.
Go to Successful Files' Directory Path and browse and select.
Go to Failed Files' Directory Path and browse and select.
Go to Directory Delimiter and add a delimiter.
You can upload a sample file: download the template, fill in the details, and upload the completed file.
You can add a Source File Prefix.
Go to Advanced Settings and toggle Is Data Encrypted:
Encryption Algorithm: Select either RSA or PGP from the dropdown.
Original File Extension: Select either XLSX or XLS from the dropdown.
Click Next.
Go to Mapping Configuration page and fill in the details:
The Mapping Screen provides the functionality to align headers from the source files with the appropriate destination fields in Sprinklr. This process is crucial for ensuring the accuracy and integrity of the data being imported.
Moreover, the Mapping Screen offers more than basic mapping; it includes robust tools for validating, enriching, and modifying data before it enters Sprinklr. These features improve the reliability and applicability of imported data across different workflows.
Header: This column displays all the headers fetched from the source, providing a comprehensive view of incoming data.
Destination Field: This column contains all the destination fields available in Sprinklr, allowing users to map the source data to the appropriate fields.
You can map the header to a field present in one of the following buckets:
Standard Fields: These include Distribution Channel, Response Time and Tags.
Survey Questions: These include all the dimensions for the selected survey.
Response Fields: Response Custom Fields.
Transaction Fields: Transaction Custom Fields.
Profile Fields: Profile Custom Fields.
If a question is marked as “Mandatory” in the survey builder, you cannot create the data connector without mapping that question.
If no value is received for a “Mandatory” question in the ingestion file, then the response is tagged as “Partially Completed”.
Sample Value Field: The Sample Value Field offers a glimpse of the data that is being mapped. These sample values are taken from the file uploaded by the user. For example, the sample value of the "ID" column in the header field matches the value found in the first row of the "ID" column in the uploaded file. This field acts as a quick reference to verify that the data is properly aligned with the intended destination fields.
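The mapping step above can be sketched in code. This is a minimal illustration, not Sprinklr's implementation; the header names, destination fields, and sample file below are hypothetical.

```python
import csv
import io

# Hypothetical mapping of source-file headers to destination fields.
HEADER_TO_DESTINATION = {
    "ID": "Response ID",                # Response Custom Field (assumed)
    "Channel": "Distribution Channel",  # Standard Field
    "Q1": "How satisfied are you?",     # Survey Question (assumed)
}

def preview_mapping(file_text: str) -> list:
    """Return one row per source header: its destination field and a
    sample value taken from the first data row of the uploaded file."""
    reader = csv.DictReader(io.StringIO(file_text))
    first_row = next(reader, {})
    return [
        {
            "header": h,
            "destination": HEADER_TO_DESTINATION.get(h),
            "sample": first_row.get(h),
        }
        for h in (reader.fieldnames or [])
    ]

sample_file = "ID,Channel,Q1\n101,WEBSITE,Strongly Agree\n"
for row in preview_mapping(sample_file):
    print(row)
```

As on the Mapping Screen, the sample value for the "ID" header comes from the first row of the "ID" column in the uploaded file.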
Validation Rule: The Validation Rule Field on the Mapping Screen enables users to verify the accuracy and consistency of imported data by utilizing one of three validation types.
Regex Validation:
If you select Regex, a screen pops up where you can define and add the specific regular expression to validate the data format.
This option provides flexibility to create custom validation rules tailored to specific requirements.
Email Validation
This is a static validation that ensures the data in the selected field adheres to a standard email format.
Phone Number Validation
Similarly, this static validation checks that the data matches the structure of a valid phone number.
No Duplicates
This rejects records when two or more records have identical values in this field during a single connector execution.
Note: From the toggle, you can also mark a field as mandatory.
If you mark a field as mandatory but no value is provided for it, the record is skipped in the final result.
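The validation types above can be sketched as follows. This is an illustrative model only: the exact email and phone patterns Sprinklr applies are not documented, so the ones below are assumptions.

```python
import re

# Assumed patterns; the platform's actual static validations may differ.
EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")
PHONE_RE = re.compile(r"\+?[0-9 ()-]{7,15}")

def validate(value, rule, custom_regex=""):
    """Apply one of the three validation types to a single value."""
    if rule == "regex":
        # Custom rule: the user-supplied regular expression must match fully.
        return re.fullmatch(custom_regex, value) is not None
    if rule == "email":
        return EMAIL_RE.fullmatch(value) is not None
    if rule == "phone":
        return PHONE_RE.fullmatch(value) is not None
    raise ValueError(f"unknown rule: {rule}")

def no_duplicates(values):
    """Return values seen more than once in a single connector run;
    such records would be rejected under the No Duplicates rule."""
    seen, dupes = set(), []
    for v in values:
        if v in seen:
            dupes.append(v)
        seen.add(v)
    return dupes
```

Records failing any of these checks would land in the failure file rather than being imported.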
The Mapping Screen also provides various other functionalities for a seamless integration. Refer to Functionalities of Mapping Screen for details.
Value Mapping in Response Data Connector
The Value Mapping feature enables you to standardize and normalize survey response data received from external systems. Many external systems record responses in coded formats (such as A1, Option 1, 001). Through Value Mapping, these codes can be translated into human-readable terms (such as Strongly Agree, Option 1) to ensure uniform reporting and analysis within Sprinklr.
Example:
External system sends: A1
Value Mapping rule maps A1 → Option 1
Sprinklr stores and reports: Option 1
This ensures clean data ingestion and eliminates inconsistencies caused by different coding formats.
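The translation above amounts to a lookup table per field. A minimal sketch, with a hypothetical mapping table (the codes and labels are illustrative):

```python
# Hypothetical value-mapping table for one survey field.
VALUE_MAP = {
    "A1": "Strongly Agree",
    "A2": "Agree",
    "A3": "Neutral",
}

def apply_value_mapping(raw, mapping):
    """Translate a coded value into its destination value.
    None signals an unmapped value; per the behaviour described later in
    this article, such a record fails and lands in the failure file."""
    return mapping.get(raw)

print(apply_value_mapping("A1", VALUE_MAP))
```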
How does it work?
Enable Value Mapping
Go to the Response Data Connector setup.
Select the relevant survey field that supports value mapping.
Click the downward arrow next to the validation rule to expand the Value Mapping option.
Define Mappings
In the Value column, enter the coded value received from the external system (e.g., A1).
In the Destination Value column, select the corresponding response option (e.g., Strongly Agree).
Add multiple mappings as required.
Note:
Value Mapping is applicable only to fields that are compatible with this feature.
The dropdown menu in the Destination Value column displays all the valid response options that are available for the chosen field.
The arrows indicate the number of value mappings configured for a specific field.
Mapping guarantees uniformity in reporting, filters, and drilldowns, no matter how external systems transmit response data.
Mappings are implemented in real-time as data is ingested.
List of fields that support value mapping option
Distribution Channel - Standard Field
CSAT - Single and Multi Statement
MCQ - “,” used as default delimiter
NPS - Single and Multi Statement
Rating Scale - Single and Multi Statement
Matrix - Single and Multi Statement
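For MCQ fields, the list above notes that “,” is the default delimiter, which implies a multi-select cell is split before each option is value-mapped. A sketch under that assumption (the fallback to the raw value is a simplification for illustration):

```python
def map_mcq_cell(cell, mapping, delimiter=","):
    """Split a multi-select MCQ cell on the delimiter, then value-map
    each option individually. Unmapped options are passed through here
    for simplicity; in the product an unmapped value fails the record."""
    return [mapping.get(part.strip(), part.strip())
            for part in cell.split(delimiter)]

print(map_mcq_cell("A1, A2", {"A1": "Red", "A2": "Blue"}))
```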
Running and Activating the connector
You can click the Vertical Ellipsis (⋮) to access the data connector management options:
Edit: You can edit the data connector details and run intervals. Once set up, the source type cannot be changed.
Delete: You can delete a connector.
Run: You are prompted to upload a file matching the connector's configuration. The system processes this file to create or update the required entities in a one-time, on-demand execution.
Activate: You can enable the connector using this option, and once activated, it will automatically operate according to the time intervals specified in the additional settings screen.
View Activity: You can access the logs detailing the modifications made to the connectors, as well as the run history, which includes success and failure files for each execution.
If a record contains a value that is unacceptable for the chosen field, is not included in the value mapping, or does not comply with the validation rule, the entire record will not be imported into the platform. These records can be assessed using the failure file.
After the connector run, the responses are brought into the survey.
The “Ingestion Connector Name” field is populated with the name of the data connector responsible for importing that response.
The "Response Type" is configured as "Imported."
If the "Distribution Channel" field is left blank, it will automatically be filled in as "Data Ingestion."
Note: Open Responses Tab to check whether the new survey responses have been ingested.
Note: This feature is still in development, so this document only includes information about the details that have been implemented so far.
Key points to note
Upload Codes for Distribution Channel:
EXTERNAL_APPLICATION
QR_CODE
WEBSITE
WHATSAPP_BUSINESS
IN_APP
PERSONALISED_LINK
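The upload codes above can be checked before ingestion. This helper is illustrative, not part of the product; the blank-value default mirrors the “Data Ingestion” behaviour described earlier in this article.

```python
# The accepted Distribution Channel upload codes listed above.
DISTRIBUTION_CHANNEL_CODES = {
    "EXTERNAL_APPLICATION",
    "QR_CODE",
    "WEBSITE",
    "WHATSAPP_BUSINESS",
    "IN_APP",
    "PERSONALISED_LINK",
}

def normalize_channel(value):
    """Blank channels default to "Data Ingestion"; unknown codes are
    rejected, which in the connector would fail the whole record."""
    if not value:
        return "Data Ingestion"
    if value not in DISTRIBUTION_CHANNEL_CODES:
        raise ValueError(f"invalid distribution channel code: {value}")
    return value
```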
Accepted Date and Time Formats
The following date and date-time formats are supported when uploading or mapping data in the Response Data Connector.
If no time is provided, 00:00 GMT is assumed while ingesting the response.
If no time zone is provided, GMT is assumed.
Date Only
dd MMMM yyyy → 25 December 2025
MMM dd yyyy → Dec 25 2025
MMM d yyyy → Dec 5 2025
EEE, MMM dd, yyyy → Mon, Dec 25, 2025
dd-MM-yyyy → 25-12-2025
dd/MM/yyyy → 25/12/2025
MM/dd/yy → 12/25/25
dd/MM/yy → 25/12/25
MM-dd-yyyy → 12-25-2025
yyyy-MM-dd → 2025-12-25
Date + Time (24-Hour)
dd-MM-yyyy HH:mm:ss → 25-12-2025 14:30:59
yyyy-MM-dd HH:mm:ss → 2025-12-25 14:30:59
dd-MM-yyyy HH:mm → 25-12-2025 14:30
dd-MM-yy HH:mm → 25-12-25 14:30
MM-dd-yy HH:mm → 12-25-25 14:30
MM/dd/yy HH:mm → 12/25/25 14:30
MM-dd-yyyy HH:mm:ss → 12-25-2025 14:30:59
MM/dd/yyyy HH:mm → 12/25/2025 14:30
MM/dd/yyyy HH:mm:ss → 12/25/2025 14:30:59
Date + Time (12-Hour with AM/PM)
MM/dd/yyyy hh:mm:ss aa → 12/25/2025 02:30:59 PM
dd-MM-yyyy hh:mm:ss aa → 25-12-2025 02:30:59 PM
dd-MM-yyyy hh:mm aa → 25-12-2025 02:30 PM
dd/MM/yyyy hh:mm:ss aa → 25/12/2025 02:30:59 PM
MM-dd-yyyy hh:mm:ss aa → 12-25-2025 02:30:59 PM
Extended Formats
ISO 8601 full format → 2025-12-25T14:30:59Z
ISO with milliseconds (GMT) → 2025-12-25T14:30:59.123Z
Timestamp with milliseconds → 2025-12-25 14:30:59.123
With timezone → 25-12-2025 14:30:59 GMT+05:30
With offset → 25-12-2025 14:30:59 +0530
yyyy-MM-dd HH:mm:ss.SSSZ → 2025-12-25 14:30:59.123+0530
Human readable format → 25 Dec 2025, 2:30 PM
h:mm a d MMM yyyy → 2:30 PM 25 Dec 2025
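A subset of the formats above can be parsed by trying patterns in order, applying the GMT defaults described earlier. The article lists patterns in Java-style letters; below they are translated to Python `strptime` codes. This is a sketch, not the platform's parser.

```python
from datetime import datetime, timezone

# A subset of the accepted formats, as Python strptime patterns.
FORMATS = [
    "%Y-%m-%dT%H:%M:%S%z",   # ISO 8601: 2025-12-25T14:30:59Z
    "%Y-%m-%d %H:%M:%S",     # 2025-12-25 14:30:59
    "%d-%m-%Y %H:%M:%S",     # 25-12-2025 14:30:59
    "%m/%d/%Y %I:%M:%S %p",  # 12/25/2025 02:30:59 PM
    "%d %B %Y",              # 25 December 2025
    "%Y-%m-%d",              # 2025-12-25
]

def parse_response_time(value):
    """Try each format in order. Date-only values default to 00:00 and
    zone-less values default to GMT, per the defaults described above."""
    for fmt in FORMATS:
        try:
            dt = datetime.strptime(value, fmt)
        except ValueError:
            continue
        if dt.tzinfo is None:
            dt = dt.replace(tzinfo=timezone.utc)  # default time zone: GMT
        return dt
    raise ValueError(f"unsupported date format: {value}")
```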