One of the most important issues to consider when designing a multiagent system is whether the different agents will be benevolent or competitive. Even if they have different goals, the agents can be benevolent if they are willing to help each other achieve their respective goals [31]. On the other hand, the agents may be selfish and only consider their own goals when acting. In the extreme, the agents may be involved in a zero-sum situation so that they must actively oppose other agents' goals in order to achieve their own.
Some researchers consider only selfish agents, claiming that they are both more effective for building real systems and more biologically plausible. Of course, agents with identical goals will help each other, but agents are rarely designed to help each other achieve different goals for no apparent reason: when agents cooperate, they usually do so because it is in their own best interest. As we have already seen in the pursuit domain, Korf advocates greedy agents that each minimize their own distance to the prey [46], and similarly, Levy and Rosenschein use game theory to study how the predators can cooperate despite each maximizing its own utility [48]. Some advocates of selfish agents point to nature for justification, claiming that animals are not altruistic but always act in their own self-interest [46]. On the other hand, Ridley provides a detailed chronicle and explanation of apparent altruism in nature (usually explainable as kin selection) and of cooperation in human societies [64].
Whether or not true altruism exists, in some situations it may be in an animal's (or agent's) interest to cooperate with other agents. Mor and Rosenschein illustrate this possibility in the context of the prisoner's dilemma [55]. In the prisoner's dilemma, two agents each try to maximize their own individual reward. They are not actively out to thwart each other, since it is not a zero-sum game, yet they place no inherent value on the other agent receiving reward. The dilemma is constructed so that each agent faces two choices: defect or cooperate. No matter what the other agent does, a given agent receives a higher reward by defecting, so in any single play an agent is better off defecting; yet if both agents cooperate, they are both better off than if both defect. Nevertheless, Mor and Rosenschein show that if the same agents face each other repeatedly (the iterated prisoner's dilemma), cooperative behavior can emerge. In effect, an agent can serve its own self-interest by establishing a reputation for being cooperative. When it then meets another cooperative agent, the two can benefit from mutual trust: they both cooperate rather than both defecting. Only with repeated play can cooperation emerge among the selfish agents of the prisoner's dilemma.
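To make the incentive structure concrete, the short simulation below pits an always-defecting agent against a simple reciprocating strategy (tit-for-tat) under one standard set of payoff values; both the payoffs and the specific strategies are illustrative choices for this sketch and are not taken from [55].

```python
# Payoff matrix for (row, column) players under one standard choice of
# prisoner's dilemma values: mutual cooperation pays 3 each, mutual
# defection 1 each, and a lone defector gets 5 while the cooperator gets 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, their_history):
    # Defection dominates in any single play, so this agent never cooperates.
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror whatever the opponent did last round.
    return "C" if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(always_defect, always_defect))  # (100, 100): mutual defection
print(play(tit_for_tat, tit_for_tat))      # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))    # (99, 104): exploitation gains little
```

Over 100 rounds, two defectors earn 100 points each while two reciprocators earn 300 each; exploiting a reciprocator yields only 104 points, which is why a reputation for cooperation serves a selfish agent's interest when play is repeated.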
In the prisoner's dilemma, the agents are selfish but not inherently competitive: in specific circumstances, they are willing to act benevolently. When the agents are actually competitive (as in zero-sum games), however, cooperation is no longer sensible. For instance, Littman considers a zero-sum game in which two players try to reach opposite ends of a small discrete world, and each player can block the other by trying to move to the same space. Littman introduces a variant of Q-learning called Minimax-Q, which is designed to work on Markov games as opposed to Markov Decision Processes [49]. The competitive agents learn probabilistic (mixed) policies, since any deterministic policy can be completely counteracted by the opponent.
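The sketch below illustrates the core of this idea under our own encoding: the value of a state is computed by a small linear program over mixed policies, and that value is plugged into an otherwise standard temporal-difference update. The function and variable names are ours, Littman's grid-world dynamics are omitted, and the example payoff matrix is a matching-pennies-style illustration rather than data from [49].

```python
import numpy as np
from scipy.optimize import linprog

def minimax_value(Q_s):
    """Minimax value of one state of a two-player zero-sum Markov game.

    Q_s[a, o] is the learned payoff to the agent for taking action a while
    the opponent takes action o.  The agent's mixed policy pi maximizes its
    worst-case expected payoff,
        max_pi  min_o  sum_a pi[a] * Q_s[a, o],
    which is solved here as a linear program over (pi, v).
    """
    n_actions, n_opponent = Q_s.shape
    c = np.zeros(n_actions + 1)
    c[-1] = -1.0                                   # maximize v == minimize -v
    # One constraint per opponent action o:  v - sum_a pi[a] * Q_s[a, o] <= 0
    A_ub = np.hstack([-Q_s.T, np.ones((n_opponent, 1))])
    b_ub = np.zeros(n_opponent)
    A_eq = np.ones((1, n_actions + 1))             # probabilities sum to 1
    A_eq[0, -1] = 0.0
    b_eq = np.ones(1)
    bounds = [(0, 1)] * n_actions + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:-1], res.x[-1]                   # (policy pi, state value v)

def minimax_q_update(Q, V, s, a, o, reward, s_next, alpha=0.1, gamma=0.9):
    """One temporal-difference step in the style of Minimax-Q: the usual
    Q-learning target, but bootstrapping from the minimax value V[s']."""
    Q[s, a, o] = (1 - alpha) * Q[s, a, o] + alpha * (reward + gamma * V[s_next])
    _, V[s] = minimax_value(Q[s])

# A matching-pennies-style payoff matrix: either deterministic choice can be
# perfectly countered, so the minimax policy mixes both actions equally.
Q_s = np.array([[1.0, -1.0],
                [-1.0, 1.0]])
pi, v = minimax_value(Q_s)
print(pi, v)  # approximately [0.5, 0.5] and 0.0
```

The matching-pennies example shows why mixed policies are needed in the competitive case: either deterministic choice can be fully exploited by the opponent, while the equal mixture guarantees a value of zero regardless of what the opponent does.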
The issue of benevolence (willingness to cooperate) vs. competitiveness comes up repeatedly in the systems described below. If a third dimension were added to the categorization of MAS (in addition to degrees of heterogeneity and communication), it would be this one.