Definition - What does Deliberative Agent mean?
A deliberative agent is a software agent that maintains an explicit, symbolic model of the world and makes decisions via symbolic reasoning.
A deliberative agent has two types of attitudes: information attitudes, which concern the agent's beliefs about its environment, and pro-attitudes, which guide the agent's actions.
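The two attitude types can be sketched as simple data structures; this is a minimal illustration, assuming Python data classes, and the field names (beliefs, desires, intentions) follow the common BDI convention rather than any standard API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    # Information attitudes: what the agent believes about the environment.
    beliefs: set = field(default_factory=set)
    # Pro-attitudes: outcomes the agent wants (desires) and has
    # committed to pursuing (intentions).
    desires: set = field(default_factory=set)
    intentions: set = field(default_factory=set)

state = AgentState(beliefs={"door_closed"}, desires={"be_outside"})
state.intentions.add("open_door")  # a commitment that will guide action
```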
Safeopedia explains Deliberative Agent
Unlike a reactive agent, which responds automatically to stimuli from the external environment, a deliberative agent reaches its goals through reasoning and planning. The main difference between a reactive and a deliberative agent is that the latter maintains a symbolic representation of the world: it can build an internal model of the external environment, which enables it to plan its actions.
Intentional systems are the basis for deliberative agents. Deliberation decides what should be achieved, and means-end reasoning decides how to achieve it. Both deliberation and means-end reasoning consume computation, so computational resources should be used efficiently and committed to achieving the chosen state of affairs.
The state of affairs that a deliberative agent chooses and commits to is known as an intention. Intentions drive means-end reasoning and stimulate attempts at execution, and hence they lead to action. If one plan of action fails, the agent will try another course of action to achieve the intention. An agent does not give up an intention unless it believes the intention cannot be achieved or there is no longer a good reason to achieve it. Intentions can also influence beliefs.
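Intention persistence can be sketched as a loop that tries alternative plans and drops the intention only when it is no longer worthwhile or no plan remains. This is a hypothetical illustration; the plan names and predicate functions are made up for the example.

```python
def pursue(intention, candidate_plans, succeeds, still_worthwhile):
    """Try plans in order until one achieves the intention."""
    for plan in candidate_plans:
        if not still_worthwhile(intention):
            return "dropped"      # no good reason left to pursue it
        if succeeds(plan):
            return "achieved"     # the intention led to effective action
    return "impossible"           # all known plans failed

# Only the third plan works here, so the agent falls back to it.
result = pursue(
    "open_door",
    ["push", "pull", "use_key"],
    succeeds=lambda plan: plan == "use_key",
    still_worthwhile=lambda intention: True,
)
# result == "achieved"
```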
Means-end reasoning is also known as automated planning: it takes a task to achieve, the agent's beliefs, and the available actions, and produces a plan. In summary, a deliberative agent operates by observing the world, updating its beliefs, deliberating over intentions, determining its available options, filtering out the best option, and making and executing a plan.
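The observe-update-deliberate-plan-execute cycle described above can be sketched as a control loop. This is a minimal sketch with toy helper functions of my own invention; real deliberative architectures implement each step far more elaborately.

```python
def deliberative_loop(percepts, beliefs):
    for percept in percepts:
        beliefs = beliefs | {percept}          # observe and update beliefs
        options = generate_options(beliefs)    # determine available options
        intention = filter_best(options)       # deliberation: what to achieve
        plan = make_plan(intention, beliefs)   # means-end reasoning: how
        execute(plan)
    return beliefs

def generate_options(beliefs):
    # Toy option generator: one candidate goal per belief.
    return {f"handle_{b}" for b in beliefs}

def filter_best(options):
    # Trivial selection policy for illustration only.
    return min(options) if options else None

def make_plan(intention, beliefs):
    # A one-step "plan" standing in for a real planner.
    return [intention] if intention else []

executed = []
def execute(plan):
    executed.extend(plan)

final_beliefs = deliberative_loop(["alarm"], set())
# final_beliefs == {"alarm"}; executed == ["handle_alarm"]
```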