
Overview of Trusted NLG – SalesforceAI Research

Posted: Sun May 25, 2025 4:53 am
by rochona
At Salesforce, trust is our #1 value, and Salesforce AI Research has helped lead the way in understanding what it means to create trusted natural language generation (NLG): analyzing existing methods, improving modeling approaches, developing fine-grained evaluation, and implementing solutions in the Einstein Trust Layer. The Einstein Trust Layer is equipped with best-in-class security guardrails, from the product to our policies. Designed for enterprise security standards, it allows teams to benefit from generative AI without compromising their customer data.

AI has long been at the core of Salesforce products, as shown in the image below. Even before the recent wave of LLM work, Salesforce was a leader in AI, producing impactful research to make AI models more trusted.

[Image: overview of AI across Salesforce products]
For example, our earlier work focused on reducing gender bias in AI models. We also called for more robust evaluation of these models and a deeper understanding of the factual errors found in tasks such as summarization. To further help the community evaluate and understand their models, we have released open-source tools such as Robustness Gym and SummVis. Robustness Gym is a simple, extensible evaluation toolkit that unifies several robustness evaluation paradigms. SummVis is a visualization tool for summarization that lets users inspect metrics such as factual consistency, which measure how well a summary is grounded in its input context.
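To make the idea concrete, here is a minimal sketch of the evaluation paradigm such toolkits unify: score a model's output with a grounding metric, then re-score it under an input perturbation and compare. This is not the Robustness Gym or SummVis API; the summarizer, metric, and perturbation below are toy stand-ins invented for illustration.

```python
# Toy sketch of robustness evaluation: compare a grounding metric's score
# on clean inputs vs. perturbed inputs. All components here are illustrative
# stand-ins, not the actual Robustness Gym / SummVis implementations.

def toy_summarizer(text: str) -> str:
    """Stand-in model: return the first sentence as the 'summary'."""
    return text.split(". ")[0].strip().rstrip(".") + "."

def token_overlap(summary: str, source: str) -> float:
    """Crude grounding metric: fraction of summary tokens found in the source."""
    source_tokens = set(source.lower().split())
    summary_tokens = summary.lower().split()
    return sum(t in source_tokens for t in summary_tokens) / max(len(summary_tokens), 1)

def reverse_sentences(text: str) -> str:
    """Perturbation: reverse sentence order to probe the model's position bias."""
    sentences = [s for s in text.split(". ") if s]
    return ". ".join(reversed(sentences))

def evaluate(model, metric, docs, perturbation=None):
    """Average the metric over docs, optionally perturbing each input first.
    The metric always grounds the summary against the ORIGINAL document."""
    scores = []
    for doc in docs:
        model_input = perturbation(doc) if perturbation else doc
        scores.append(metric(model(model_input), doc))
    return sum(scores) / len(scores)

docs = ["The launch succeeded. Engineers cheered. Data arrived later."]
baseline = evaluate(toy_summarizer, token_overlap, docs)
perturbed = evaluate(toy_summarizer, token_overlap, docs, reverse_sentences)
```

A real evaluation would swap in a trained summarizer, a learned factual-consistency metric, and a suite of perturbations; the comparison of `baseline` against `perturbed` scores is the core pattern.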