
ANTICIPATING PERSONAL AGENTS: FEASIBILITY

Consider the core function of the "planning skills" Lilian Weng and others refer to when it comes to the agents of today. Heuristic search and reinforcement learning have limitations in dynamic, real-life environments. For agents to truly deliver Samantha-like capabilities, they'll need to move from heuristics to inference, reasoning, and metacognition.

One can argue that the latest LLMs already exhibit emergent behaviors that sometimes look like reasoning. According to a survey by Huang & Chang, a number of past research papers have already made claims for the reasoning capabilities of some models. Prompting to generate intermediate steps to solve problems (e.g. Chain of Thought and Tree of Thought) has produced promising results. However, LLMs still struggle with tasks that are easy for humans, such as performing complex math and generating action plans to complete tasks in given environments.

Engineers are attempting to overcome these limitations by incorporating planners that break down tasks into steps and chaining multiple systems together, a pattern sketched at the end of this section. Early tools like BabyAGI and AutoGPT triggered questions about trust, privacy, accuracy, and reliability, but they inspired an innovation race to deliver greater autonomy using similar methods.

An opposing view suggests LLMs, due to their lack of an internal world model to predict states, may never achieve a level of reasoning sufficient to power autonomous agents. Those who believe this are pursuing different approaches. For example, some researchers are developing models that train AIs to learn about the world more the way humans do, which may provide an alternative path towards generalized reasoning and planning. In the future, we expect autonomous agents to have more sophisticated reasoning and problem-solving capabilities.

Daniel Kahneman popularized a simplified model of how thinking works in the brain, using the idea of two different modes of thinking: System 1, "thinking fast", is intuitive and unconscious; System 2, "thinking slow", is deliberate and rational. This model offers a useful way of thinking about agentive capabilities. These systems coordinate so that when a problem is too complex for System 1, System 2 kicks in with more sophisticated logic and reasoning. Characteristics of System 1 are analogous to the strengths of current deep learning models, while System 2 maps to the weaknesses of the existing LLMs.
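To make the planner-and-chaining pattern concrete, here is a minimal sketch in Python. The `call_llm` helper is a hypothetical placeholder for whichever model API is used, and the prompts only illustrate the "break the goal into steps, then execute them one at a time" idea; this is not the implementation of any specific tool named above.

```python
# A minimal sketch of the planner + chained-execution pattern described above.
# `call_llm` is a hypothetical stand-in for any text-generation API.

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a language model (wire to a real API)."""
    raise NotImplementedError("Connect this to the model of your choice.")

def plan(goal: str) -> list[str]:
    """Ask the model to break a goal into sub-tasks (deliberate, System 2-style step)."""
    response = call_llm(
        f"Goal: {goal}\n"
        "Think step by step and list the sub-tasks needed, one per line."
    )
    return [line.strip() for line in response.splitlines() if line.strip()]

def execute(step: str, context: str) -> str:
    """Ask the model to carry out a single sub-task, given the results so far."""
    return call_llm(
        f"Results so far:\n{context}\n\n"
        f"Carry out this step and report the result: {step}"
    )

def run_agent(goal: str) -> str:
    """Chain planning and execution: plan once, then work through each step in order."""
    context = ""
    for step in plan(goal):
        result = execute(step, context)
        context += f"\n- {step}: {result}"
    return context
```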
