How Sprinklr Helps Identify and Measure Toxic Content with AI

Pavitar Singh

March 21, 2023


A new AI model can analyze publicly available digital data to detect the presence of toxicity and provide context on the scope and impact of toxic content.

The challenges brands face in capturing and analyzing online conversations, and in advertising alongside them, continue to mount. The sheer volume of conversation on social and digital platforms is a daunting barrier to entry for many. Toxicity on social media – rhetoric motivated by hatred, prejudice or intolerance – is unfortunately another persistent reality and a growing concern for brands, especially as it relates to “brand safety”.

Brand safety refers to how well a brand is protected from being associated with inappropriate or offensive content. Offline, brand safety concerns are why advertisers may drop celebrity endorsements due to personal behavior or controversy.  Online, brand safety is of paramount concern to advertisers selecting where to allocate their digital marketing budgets. Sprinklr platform partners, advertisers, and customers all need a way to evaluate and understand toxicity in online conversations and the implications for brand safety. 

To solve this challenge, Sprinklr is excited to announce a new AI-based Toxicity Model that can help any organization detect the presence of toxicity and measure its reach wherever data is available. Rather than simply matching keywords or terms, the model identifies toxic messages by understanding the context and intent behind them.


Understanding Toxicity Using Sprinklr’s AI

Sprinklr’s proprietary artificial intelligence analyzes unstructured customer experience data from 30+ digital channels and millions of data sources every day. We help enterprises capitalize on actionable insights at unprecedented scale, and our customers have used our AI to understand customer sentiment and intent for years. In 2022, we realized that brands needed a way to measure more than brand sentiment: measuring toxicity and its reach has become critical for brand safety and advertising decisions.

Sprinklr’s new toxicity model analyzes data and categorizes content as “toxic” if it is used to demean an individual, attack a protected category or dehumanize marginalized groups. Accounting for factors such as reclaimed language and surrounding context also allowed our model to reduce false positives and false negatives. The model uses AI to determine the intent and context around flagged keywords, helping brands understand what is genuinely toxic.
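Sprinklr has not published the model’s internals, but the approach described above can be sketched as a two-stage pipeline: a cheap keyword filter followed by a context-aware classifier that scores the whole message. In the illustrative Python sketch below, the slur list is a placeholder, the open-source unitary/toxic-bert checkpoint stands in for Sprinklr’s proprietary model, and the 0.5 threshold is likewise an assumption.

```python
# Illustrative sketch only: a two-stage toxicity check. The slur list,
# threshold, and the public unitary/toxic-bert checkpoint are stand-ins
# for Sprinklr's proprietary model, whose internals are not published.
import re

from transformers import pipeline

SLUR_TERMS = {"example_slur"}  # placeholder; real lists are curated with partners

# Stage 2 model: scores the full message, so benign uses of a flagged
# term (place names, reclaimed speech, quoted news) can score non-toxic.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def contains_flagged_term(text: str) -> bool:
    """Stage 1: cheap word-boundary keyword filter over the term list."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(t)}\b", lowered) for t in SLUR_TERMS)

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    """Flag a message only if it contains a listed term AND the
    context-aware classifier scores it as toxic."""
    if not contains_flagged_term(text):
        return False
    top = classifier(text)[0]  # e.g. {'label': 'toxic', 'score': 0.97}
    return top["label"] == "toxic" and top["score"] >= threshold
```

Keyword matching alone would flag every message containing a listed term; the second stage is what separates a slur used as an attack from the same string appearing in, say, a town’s name.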

We will continuously evolve this model to adapt to additional social channels and languages, thereby expanding the scope of toxicity measurements. 

Sprinklr’s AI-based Toxicity Model: X (formerly Twitter) Case Study

The goal of our partners at X (formerly Twitter) is to understand, measure and reduce toxicity on the platform and to promote brand safety for advertisers. This made X a great early customer and partner for this capability.

X provided Sprinklr with a list of 300 English-language slur words. The list was designed to capture hateful slurs and language that targets marginalized and minority voices. Sprinklr analyzed every English-language public tweet between January and February 2023 and identified 550,000 tweets that included at least one word from the list.
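The candidate-filtering step of such an analysis can be approximated with a single compiled alternation over the term list. In this sketch, the term list and the `text` field of the tweet records are hypothetical placeholders, not X’s actual schema.

```python
# Sketch of the candidate-filtering step over a tweet stream. The term
# list and the 'text' field of the tweet dicts are assumptions, not
# X's actual data format.
import re
from typing import Iterable, Iterator

def build_matcher(terms: Iterable[str]) -> re.Pattern:
    # Longest terms first so multi-word entries are not shadowed by
    # shorter ones; one compiled pattern scans each tweet in one pass.
    ordered = sorted(terms, key=len, reverse=True)
    alternation = "|".join(re.escape(t) for t in ordered)
    return re.compile(rf"\b(?:{alternation})\b", re.IGNORECASE)

def candidate_tweets(tweets: Iterator[dict], matcher: re.Pattern) -> Iterator[dict]:
    """Yield only tweets containing at least one listed term; in the
    study, this pass reduced the corpus to ~550,000 candidates for
    classification."""
    for tweet in tweets:
        if matcher.search(tweet["text"]):
            yield tweet
```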

Our analysis of the X data set produced these findings:

  • When compared to non-toxic tweets in the same slur-keyword data set, toxic tweets received roughly one-third the impressions on average.

    [Chart: average impressions, toxic vs. non-toxic tweets]

  • In multiple instances, a user’s toxic tweets received fewer impressions than that same user’s non-toxic tweets. For example, one user’s toxic tweet received 22 views, compared with an average of 5K+ views on their other tweets.

  • Because the Sprinklr Toxicity Model understands the context in which a slur keyword is used, the majority of tweets containing slurs were accurately classified as non-toxic. Some examples:

    • Tweet from a major US news outlet - The slur keyword is part of the name of a town

    • Tweet from a popular US sports influencer - The slur keyword is part of the name of an individual

    • We also identified examples of toxic speech used in the context of a news piece, and examples of reclaimed speech from members of marginalized communities.

  • The share of tweets identified as toxic in the slur-keyword data set held at roughly 15% over the analyzed time frame. Although every tweet in the set contained an identified slur word, most used it in a non-toxic context such as reclaimed speech or a casual greeting (see the metrics sketch after this list).

    [Chart: distribution of toxic vs. non-toxic tweets]
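The reported figures reduce to simple aggregates over the labeled candidate set. A minimal pandas sketch, assuming hypothetical `is_toxic` and `impressions` columns:

```python
# Minimal sketch of the reported aggregates. Column names ('is_toxic',
# 'impressions') are hypothetical, not Sprinklr's actual schema.
import pandas as pd

def summarize(df: pd.DataFrame) -> dict:
    toxic = df[df["is_toxic"]]
    non_toxic = df[~df["is_toxic"]]
    return {
        # Share of slur-containing tweets labeled toxic (~15% in the study).
        "toxic_share": len(toxic) / len(df),
        # Avg impressions on non-toxic vs. toxic tweets (~3x in the study).
        "impression_ratio": non_toxic["impressions"].mean()
        / toxic["impressions"].mean(),
    }
```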

Michael O'Herlihy, Director of Product for Trust & Safety at X, had this to share:

"For months, we’ve been publishing data on the reach of Tweets that X deems toxic and that contain slurs. Sprinklr’s analysis shows that the reach of toxic content is actually lower than X’s own first-party estimates. Refining our approach to reducing hate speech remains an ongoing commitment."

Toxicity Detection and Beyond

Every technology is a tool, with the potential for great advantage and serious abuse. Developers have to commit to building technology that is ethically responsible and to providing guardrails that allow innovation to flourish safely. Ultimately, every brand and advertiser has to make its own decisions about where to spend money to reach customers while safeguarding the brand.

Sprinklr’s goal is to provide our customers and partners with the best possible tools to reach customers online and to capture, analyze and act on conversations and customer feedback to improve their marketing, service and sales. With the Sprinklr AI-based Toxicity Model, users will be empowered to conduct an independent assessment of conversation on any social or digital platform. We will also support our channel partners as they work to understand toxicity on their platforms.

Sprinklr’s Toxicity Model will be available to all our platform partners and advertising partners immediately. We will also provide this feature to a select number of customers interested in a design partnership during this first phase of early release, with general availability to follow.
