Configuring Grounding in FAQ+
In this article, we will explore how you can configure grounding to ensure that the LLM’s responses are based on the knowledge content provided.
Note:
Grounding reduces, but does not completely eliminate, the risk of hallucinations.
Brands will lose the ability to engage in small talk with FAQ+.
The solution presented below is based on the following behavior: if a knowledge article chunk was used to generate the answer, the rawResponse component of the Generative AI API response (more about locating the field here) contains references to it in the form 【2】.
Example
"rawResponse": "Yas Island is a vibrant destination offering a variety of activities. You can enjoy thrilling experiences at theme parks like Warner Bros. World™ Abu Dhabi, Yas Waterworld, and Ferrari World Abu Dhabi【9】. For outdoor fun, visit Yas Beach or stroll along Yas Marina【2】. The island also hosts exciting events and offers over 160 dining options【2】【1】. For a luxurious stay, consider W Abu Dhabi – Yas Island【7】. \n\n🚗 **Transportation Tip:** Use the complimentary Yas Express shuttle to explore the island effortlessly!"
Steps to configure Grounding
Parse the raw response (variable used is json).
return JSON_UTILS.parseJson(fullResponse);
Extract the rawResponse component from the API raw response (variable used is text2).
def res = fullResponse.result.suggestedResponses.addtional.rawResponse[0]
return res;
Check if rawResponse contains a pattern such as 【2】 (variable used is kbArticleExists).
def tempText=text2
def regex= /【\d+】/
def matches = tempText.findAll(regex)
if (matches.size() > 0) {
return 'TRUE'
} else {
return 'FALSE'
}
If TRUE, the response is grounded in the knowledge content, and you can publish the response to the end user.
If FALSE, treat the response as a fallback.
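The three steps above can be sketched end to end in plain Python. This is a minimal illustration only, assuming the raw API response is available as a JSON string; the field path is copied from the snippets above and is not an official schema, so verify it against your actual API response.

```python
import json
import re

# Reference markers such as 【2】 that indicate a knowledge citation
REF_PATTERN = re.compile(r"【\d+】")

def is_grounded(full_response: str) -> bool:
    """Return True when rawResponse contains knowledge references."""
    # Step 1: parse the raw response
    parsed = json.loads(full_response)
    # Step 2: extract the rawResponse component
    # (path mirrors the snippets above; confirm it for your platform)
    text = parsed["result"]["suggestedResponses"]["addtional"]["rawResponse"][0]
    # Step 3: check for reference markers like 【2】
    return bool(REF_PATTERN.search(text))

sample = json.dumps({
    "result": {"suggestedResponses": {"addtional": {
        "rawResponse": ["Visit Yas Beach or stroll along Yas Marina【2】."]
    }}}
})
print(is_grounded(sample))  # True → publish; False → treat as fallback
```

A TRUE result maps to publishing the response; FALSE maps to the fallback path described above.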