Originally used in military, nuclear energy, and aviation contexts, “human in the loop” (HITL) referred to the ability of humans to intervene in automated systems to prevent disasters. In the context of generative AI, HITL is about giving humans an opportunity to review and act upon AI-generated content.
Our recent studies reveal, however, that in many cases, humans — employees at your organization or business — should be even more thoughtfully and meaningfully embedded into the process as the “human-at-the-helm.”
[Diagram of “human in the loop” describing the active roles people play within AI systems. Icons of a bot, a human, and a group of humans are connected by arrows in a feedback loop.]
A sample “human in the loop” scenario might be: A customer service agent prompts the generative AI system to generate a reply for a customer support conversation, reviews the output, prompts the system with feedback or more context, and then sends the revised output to a customer. The customer interaction, in turn, can yield more feedback that influences the system. [Graphic/Salesforce]
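The review-revise-send cycle above can be sketched in code. This is a minimal illustration, not a real product API: every name here (`generate_reply`, `Review`, `human_in_the_loop`) is hypothetical, and the model call is a stub standing in for an actual generative AI system.

```python
# Hypothetical sketch of the HITL flow: generate a draft, have a human
# review it, feed their corrections back as context, and repeat until
# the human approves (or a round limit is reached).
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Review:
    approved: bool
    feedback: str = ""


def generate_reply(prompt: str, context: List[str]) -> str:
    # Stand-in for a generative AI call; a real system would invoke a model
    # with the prompt plus any accumulated human feedback.
    return f"Draft reply to: {prompt} (context: {len(context)} notes)"


def human_in_the_loop(prompt: str,
                      review: Callable[[str], Review],
                      max_rounds: int = 3) -> str:
    """Agent prompts the system, reviews the draft, and either approves it
    or sends corrections back into the loop as extra context."""
    context: List[str] = []
    draft = generate_reply(prompt, context)
    for _ in range(max_rounds):
        decision = review(draft)
        if decision.approved:
            return draft                    # human approved: send to customer
        context.append(decision.feedback)   # human feedback re-enters the loop
        draft = generate_reply(prompt, context)
    return draft  # fall back to the last draft after max_rounds
```

The key design point is that the human is not just a final gate: each rejection carries feedback that changes the next generation, which is what distinguishes this loop from a simple approve/deny checkpoint.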
Human involvement increases quality and accuracy, and is the primary driver of trust
An important lesson from this research is that customers and users understand generative AI isn’t perfect: it can be unpredictable, inaccurate, and at times makes things up. Not knowing the data source, or the completeness and quality of that data, also erodes customers’ and users’ trust. And the learning curve for prompting generative AI effectively is steep.