Durfee gives another example of ``Blissful Ignorance,'' mentioning it explicitly in the title of his paper: ``Blissful Ignorance: Knowing Just Enough to Coordinate Well'' [24]. This time, rather than referring to resource usage, the saying applies to limiting the Recursive Modeling Method (RMM). As mentioned above in the context of the pursuit domain, RMM could recurse indefinitely. Even though further information can be obtained by reasoning about what agent A thinks agent B thinks agent A thinks ..., endless reasoning can lead to inaction. Durfee contends that for coordination to be possible, some potential knowledge must be ignored. In addition to illustrating this concept in the pursuit domain [90], Durfee goes into more detail and offers a more generally applicable methodology in [24].
The point of the RMM is to model the internal state of another agent in order to predict its actions. Even though the agents know each other's goals and structure (they are homogeneous), they may not know each other's future actions. The missing pieces of information are the internal states (for deliberative agents) and sensory inputs of the other agents. How and whether to model other agents is a ubiquitous issue in MAS. In the more complex multiagent scenarios presented in the next sections, agents may have to model not only the internal states of other agents, but also their goals, actions, and abilities.
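To make this concrete, the following sketch shows depth-limited recursive modeling for a toy two-action coordination game. The game, the prior beliefs, and all function names are illustrative assumptions rather than Durfee's actual RMM formulation; the depth parameter plays the role of the knowledge that is deliberately ignored.

\begin{verbatim}
# A minimal, runnable sketch of depth-limited recursive modeling in the
# spirit of RMM. The coordination game, the priors, and the function names
# are illustrative assumptions, not Durfee's actual formulation.

# Two agents try to pick the same meeting point; they succeed only if they match.
ACTIONS = ["left", "right"]

def default_action(prior):
    """Depth-0 fallback: predict the agent from its prior beliefs alone
    (the "blissful ignorance" cutoff)."""
    return max(ACTIONS, key=lambda a: prior[a])

def best_response(predicted_other_action):
    """In a pure coordination game, the best response is simply to match."""
    return predicted_other_action

def predict_action(own_prior, other_prior, depth):
    """Predict the other agent's action by modeling its reasoning about us,
    bounding the A-thinks-B-thinks-A-... regress at `depth` levels."""
    if depth == 0:
        return default_action(other_prior)
    # One level down: what does the other agent think *we* will do?
    our_predicted_action = predict_action(other_prior, own_prior, depth - 1)
    return best_response(our_predicted_action)

# Agent A leans toward "left"; agent B leans toward "right".
prior_a = {"left": 0.8, "right": 0.2}
prior_b = {"left": 0.4, "right": 0.6}

prediction = predict_action(prior_a, prior_b, depth=2)
print("A predicts B will choose:", prediction)   # -> "right"
print("A therefore chooses:", best_response(prediction))
\end{verbatim}

In this toy example the prediction actually alternates between the two actions as the depth grows, which illustrates why the regress cannot simply be run until it converges: the cutoff itself determines which knowledge the agent chooses to ignore.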
Although it may be useful to build models of other agents in the environment, agent modeling is not done universally. Schmidhuber advocates a form of multiagent reinforcement learning (RL) in which agents do not model each other as agents [77]. Instead, they treat each other as parts of the environment and affect each other's policies only as sensed objects. The agents track the reward they receive under a given policy and checkpoint their policies so that they can return to successful ones. Schmidhuber shows that the agents can learn to cooperate without modeling each other.
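The following toy sketch illustrates the two ingredients of this approach: each agent treats the other purely as part of the environment, sensed only through its effect on the reward, and each agent checkpoints its policy so that it can fall back to the last version that earned high reward. The matching game, the stochastic policy representation, and the perturbation scheme are assumptions made for illustration; this is not Schmidhuber's actual algorithm.

\begin{verbatim}
# A simplified, runnable sketch of reward-driven policy checkpointing.
# This is NOT Schmidhuber's actual algorithm; the matching game, the
# policy representation, and the update scheme are illustrative assumptions.

import random

def run_episode(policy, other_policy):
    """Toy environment: each agent samples an action from its stochastic
    policy; both are rewarded only when the actions happen to match.
    The other agent is 'sensed' only through its effect on the reward."""
    a = random.choices([0, 1], weights=policy)[0]
    b = random.choices([0, 1], weights=other_policy)[0]
    return 1.0 if a == b else 0.0

def adapt(policy, other_policy, steps=200, trials=20):
    """Perturb the policy, checkpoint it whenever the average reward improves,
    and otherwise fall back to the last successful checkpoint."""
    checkpoint = list(policy)
    best_avg = sum(run_episode(checkpoint, other_policy) for _ in range(trials)) / trials
    for _ in range(steps):
        candidate = [max(0.05, w + random.uniform(-0.1, 0.1)) for w in checkpoint]
        avg = sum(run_episode(candidate, other_policy) for _ in range(trials)) / trials
        if avg > best_avg:                # success: record a new checkpoint
            checkpoint, best_avg = candidate, avg
        # on failure the candidate is discarded, i.e., we return to the checkpoint
    return checkpoint, best_avg

random.seed(0)
policy_a, policy_b = [0.5, 0.5], [0.5, 0.5]
for _ in range(5):                        # the agents adapt in turn, never modeling each other
    policy_a, _ = adapt(policy_a, policy_b)
    policy_b, avg_reward = adapt(policy_b, policy_a)
print("final average reward:", round(avg_reward, 2))
\end{verbatim}

Because each candidate policy is generated from the last successful checkpoint, a change that fails to improve the average reward is simply discarded; this is the sense in which the agents return to successful policies.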