When agents communicate, they may decide to cooperate on a given task or for a given amount of time. In so doing, they make commitments to each other. Committing to another agent involves agreeing to pursue a given goal, possibly in a given manner, regardless of how much doing so serves one's own interests. Commitments can make systems run much more smoothly by providing a way for agents to ``trust'' each other, yet it is not obvious how to get self-interested agents to commit to others in a reasonable way. The theory of commitment and decommitment (the conditions under which a commitment terminates or may be abandoned) has consequently drawn considerable attention.
For example, Castelfranchi defines three types of commitment: internal commitment, in which an agent binds itself to do something; social commitment, in which an agent commits to another agent; and collective commitment, in which an agent agrees to fill a certain role [12]. Setting an alarm clock is an example of internal commitment to wake up at a certain time.
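As a rough illustration, the sketch below encodes this taxonomy as a simple record type; the class and field names are ours, not Castelfranchi's. A social commitment carries a creditor, and a collective commitment carries a role.

\begin{verbatim}
from dataclasses import dataclass
from enum import Enum, auto

class CommitmentType(Enum):
    INTERNAL = auto()    # an agent binds itself to do something
    SOCIAL = auto()      # an agent commits to another agent
    COLLECTIVE = auto()  # an agent agrees to fill a certain role

@dataclass
class Commitment:
    kind: CommitmentType
    debtor: str                  # the committing agent
    goal: str                    # what the debtor agrees to pursue
    creditor: str | None = None  # set only for social commitments
    role: str | None = None      # set only for collective commitments

# The alarm-clock example from the text: an internal commitment.
wake_up = Commitment(CommitmentType.INTERNAL, debtor="agent_a",
                     goal="wake at 6:00")
\end{verbatim}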
Haddadi discusses commitment states as planning states: potential cooperation, pre-commitment, and commitment [34]. Agents can then use means-ends analysis to plan for goals in terms of commitment opportunities. This work is conducted within the Belief/Desire/Intention (BDI) model.
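One way to read these three states is as an ordered progression that successful negotiation advances through; the following sketch is our illustration of that reading, not Haddadi's formalism.

\begin{verbatim}
from enum import IntEnum

class CommitmentState(IntEnum):
    # Haddadi's three planning states, ordered by strength.
    POTENTIAL_COOPERATION = 1  # a partner could help achieve the goal
    PRE_COMMITMENT = 2         # terms proposed but not yet agreed
    COMMITMENT = 3             # both parties are bound

def advance(state: CommitmentState) -> CommitmentState:
    """One successful negotiation step moves toward full commitment."""
    return CommitmentState(min(state + 1, CommitmentState.COMMITMENT))
\end{verbatim}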
BDI is a popular technique for modeling other agents. It models other agents' domain knowledge (beliefs) and goals (desires), as well as their ``intentions'': the goals they are currently trying to achieve and the methods by which they are trying to achieve them. Rao and Georgeff use the BDI model to build OASIS, an air-traffic control system that has been implemented for testing (in parallel with human operators who retain full control) at the airport in Sydney, Australia [63]. Each aircraft is represented by a controlling agent, which coordinates with a global sequencing agent. OASIS mixes reactive and deliberative actions in its agents: they can break out of planned sequences when they encounter situations that demand immediate reaction. Since agents cannot control their beliefs or desires, they can only make commitments to each other regarding their intentions.
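In outline, a BDI deliberation loop couples the three stores with the kind of reactive escape hatch OASIS uses; the skeleton below is a generic sketch under our own naming, not the OASIS implementation.

\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: set = field(default_factory=set)       # domain knowledge
    desires: set = field(default_factory=set)       # goals the agent wants met
    intentions: list = field(default_factory=list)  # (goal, plan) pairs in progress

    def step(self, percept: str) -> str:
        self.beliefs.add(percept)          # belief revision
        if percept.startswith("ALERT"):    # reactive layer: immediate response
            return "evasive_action"        # overrides the current plan
        if self.intentions:                # deliberative layer: follow the plan
            goal, plan = self.intentions[0]
            return plan.pop(0) if plan else "replan"
        return "idle"

agent = BDIAgent(desires={"sequence_flight"},
                 intentions=[("sequence_flight",
                              ["descend", "align", "touch_down"])])
print(agent.step("wind_update"))    # deliberative: "descend"
print(agent.step("ALERT traffic"))  # reactive: "evasive_action"
\end{verbatim}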
Finally, groups of agents may decide to commit to each other. Rather than the more usual two-agent or all-agent commitment scenarios, Zlotkin and Rosenschein study situations in which agents may want to form coalitions [97]. Since this work is conducted in a game-theory framework, agents consider the utility of joining a coalition in which they are bound to try to advance the utility of other members in exchange for reciprocal consideration. Shehory and Kraus present a distributed algorithm for task allocation when coalitions are either needed to perform tasks or are more efficient than single agents [78]. Sandholm and Lesser use a vehicle routing domain to illustrate a method by which agents can form valuable coalitions when it is intractable to discover the optimal coalitions [72].
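In the game-theoretic setting, the decision to join reduces to a utility comparison. The sketch below shows the bare form of such a test; the equal split of coalition value is a deliberate simplification, since each of the cited papers defines its own payoff-division scheme.

\begin{verbatim}
def should_join(agent, coalition, value):
    """Join iff the agent's share of coalition value beats acting alone.

    `value` maps a coalition (a frozenset of agents) to its total
    utility; value is split equally here, a simplification.
    """
    alone = value(frozenset({agent}))
    joined = coalition | {agent}
    return value(joined) / len(joined) > alone

# Toy superadditive characteristic function: bigger coalitions pay more.
v = lambda c: len(c) ** 2
print(should_join("a1", frozenset({"a2", "a3"}), v))  # True: 9/3 > 1
\end{verbatim}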