Reasoning via Planning (RAP): This strategy uses an LLM as both the reasoning engine and the world model, predicting the state of the environment and simulating the long-term impact of actions. It integrates multiple concepts — exploring alternative reasoning paths, anticipating future states and rewards, and iteratively refining existing reasoning steps — to achieve better reasoning performance. RAP-based reasoning has shown superior performance over various baselines on tasks involving planning, math reasoning, and logical inference.
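The core RAP idea can be sketched as a search over alternative reasoning paths, where the LLM would normally serve as both the policy (proposing next steps) and the world model (scoring the resulting states). In this minimal, hypothetical sketch, both roles are stubbed with deterministic functions over a toy numeric state space so the search loop itself is runnable; `propose_steps` and `world_model_reward` are placeholders, not part of any real RAP implementation.

```python
import heapq

def propose_steps(state):
    """Stand-in for the LLM policy: candidate next states from a state."""
    return [state + 1, state + 2, state * 2]

def world_model_reward(state, goal):
    """Stand-in for the LLM world model: score how promising a state is.
    Closer to the goal means a higher (less negative) reward."""
    return -abs(goal - state)

def rap_search(start, goal, max_expansions=50):
    """Best-first exploration of alternative reasoning paths.
    Returns the full reasoning trace (list of states) if the goal is reached."""
    # Priority queue ordered by negated reward (lowest distance first).
    frontier = [(-world_model_reward(start, goal), start, [start])]
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            break
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path  # a complete reasoning trace
        for nxt in propose_steps(state):
            if nxt not in seen and nxt <= goal * 2:  # prune runaway states
                seen.add(nxt)
                heapq.heappush(
                    frontier,
                    (-world_model_reward(nxt, goal), nxt, path + [nxt]),
                )
    return None

# Usage: find a chain of "reasoning steps" from 1 to 10.
print(rap_search(1, 10))
```

A real RAP system replaces the stubs with LLM calls and typically uses Monte Carlo Tree Search rather than plain best-first search, but the structure — propose, simulate, score, expand — is the same.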
These are just a few of the most promising strategies today. Applying them to a real-world AI application is an iterative process of tweaking and combining strategies until you reach optimal performance.
It is quite exciting to have LLMs function as reasoning engines, but how do you make them useful in the real world? To draw an analogy with humans: if LLMs are like the brain, with reasoning, planning, and decision-making abilities, we still need hands and legs to take action. Cue the "AI agent" — an AI system that combines reasoning with action-taking abilities. Prevalent terms for action-taking include "tools," "plug-ins," and "actions."
There are two kinds of AI agents: fully autonomous and semi-autonomous. Fully autonomous agents make decisions and act on them without any human intervention; these are currently experimental. Semi-autonomous agents keep a "human in the loop" to trigger requests. We are starting to see the adoption of semi-autonomous agents primarily in conversational AI applications such as Agentforce Assistant, ChatGPT, and Duet AI.
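The semi-autonomous pattern can be sketched as a simple loop: a reasoning engine chooses a tool and an argument, and a human-in-the-loop callback must approve the action before it executes. This is a minimal, hypothetical sketch — `plan_action` stands in for a real LLM call, and the tool registry and approval signature are assumptions, not any product's actual API.

```python
# Toy tool registry: name -> callable. A real agent would register
# API clients, search, code execution, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def plan_action(request):
    """Stand-in for the LLM reasoning engine: map a user request
    to a (tool_name, argument) pair."""
    if request.startswith("compute "):
        return "calculator", request[len("compute "):]
    return "echo", request

def run_agent(request, approve):
    """Semi-autonomous loop: the agent plans an action, but only
    executes it if the human-in-the-loop callback approves."""
    tool, arg = plan_action(request)
    if not approve(tool, arg):
        return "Action declined by user."
    return TOOLS[tool](arg)

# Usage: an auto-approving human stub; interactively, approve() would
# prompt the user before the tool runs.
print(run_agent("compute 2 + 3", approve=lambda tool, arg: True))
```

The key design point is that the approval gate sits between planning and execution — the agent can reason freely, but nothing irreversible happens without a human decision.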