COORDINATING AGENT POPULATION
|How many agents? ||10s||100s||1000s||10s||10s||10s|
|Are human agents part of the population?||yes||no||yes||through surrogate processes||no||through teamcore|
|Heterogeneous beliefs? ||yes||yes||yes||yes||yes||yes|
|Heterogeneous capabilities? ||yes||yes||yes||yes||yes||yes|
|If heterogeneous, overlapping capabilities? ||yes||yes||N/A||yes||usually||yes|
|Heterogeneous preferences? ||yes||yes||yes||yes||yes||yes|
|Necessarily conflicting preferences? ||no||no||no||no||no (but uninteresting if not yes)||no|
|Heterogeneous languages? ||no||no||yes||no||n/a (bids in common)||yes|
|Heterogeneous ontologies? ||yes||N/A||yes||no||n/a (bids in common)||no|
|Heterogeneous architectures? ||yes||yes||yes||no||no||yes|
|Dynamically changing population: Agent arrivals and departures? ||both||both||both||both||both||both|
|Dynamically changing population: During coordination? ||yes||yes||yes||yes||yes||yes|
|Of environment: Static aspects? ||yes||yes||no||yes||possibly||yes|
|Of environment: Dynamic aspects? ||yes||partial||no||partial||possibly||yes|
|Explicit model of self: Capabilities? ||yes||yes||yes||yes||possibly||yes|
|Explicit model of self: Beliefs? ||yes||yes||no||yes||yes||yes|
|Explicit model of self: Preferences? ||yes||yes||no||yes||yes||yes|
|Explicit model of self: Plans? ||yes||yes||no||*yes*||possibly||yes|
|Explicit model of others’: Capabilities? ||yes||yes||no||no||no||yes|
|Explicit model of others’: Beliefs? ||no||n/a||no||not yet||no||yes|
|Explicit model of others’: Preferences? ||no||indirect||no||no||yes||partial|
|Explicit model of others’: Plans? ||partial||no||no||*partial*||no||partial|
|Knowledge of env/self/others learned?||yes||yes||yes||no||yes||N/A|
|Knowledge of env/self/others learned through observation? ||N/A||yes||no||env||yes||partial for env/others|
|Knowledge of env/self/others learned through communication? ||N/A||no||yes (manual intervention)||others||no||others|
|How semantically rich is the communication language (e.g., a number (price) is low, while a plan is high)? ||high||low||low||high||low||high|
|Are the messages of different types? ||yes||no (few)||yes||no||no||yes|
|Is point-to-point communication used? ||yes||yes||yes||yes||yes||yes|
|Is broadcast communication used? ||sometimes||no||yes||no||could be||yes|
|Is multicast communication used? ||yes||no||yes||no||could be||yes|
|Is communication asynchronous?||N/A||N/A||yes||yes||N/A||N/A|
|Mapping of preferences into actions: Is given? ||no||no||application dependent||no||no||no|
|Mapping of preferences into actions: Is based on current beliefs? ||yes||yes||application dependent||yes||yes||yes|
|Mapping of preferences into actions: Requires planning? ||yes||*yes*||application dependent||*yes*||yes||may be|
|Mapping of preferences into actions: Requires learning? ||no||some||application dependent||no||yes||could be, but not supported|
|Number of different kinds of objectives an agent is capable of achieving (“one” means the agent has a very specific role in the network; “few” means the agent can fulfill any of a number of roles (achieve different kinds of tasks) in the network; “many” means that the agent could take on most or all of the roles (tasks) in the network). ||N/A||N/A||many||few||many||few|
|Agent capabilities allow alternative ways of accomplishing an objective? ||yes||*yes*||yes||*yes*||*yes*||yes|
|An agent can respond to domain dynamics by choosing an alternative way of accomplishing an objective unilaterally, at runtime? ||N/A||N/A||yes||*yes*||in some cases||yes|
|An agent can determine whether its current choice of how to accomplish an objective is failing, and can unilaterally change its choice? ||N/A||N/A||yes||*yes*||in some cases||yes|
COORDINATION PROBLEM COMPLEXITY
|Are agents different processes running on different machines? ||yes||yes||yes||yes||could be||yes|
|Can agents fail to accomplish their tasks? ||yes||rarely||yes||yes||yes||yes|
|Is coordination: Episodic? ||yes||no||could be||yes||yes||could be|
|Is coordination: Periodic? ||yes||no||could be||yes||yes||could be|
|Is coordination: Continual? ||yes||*event driven*||could be||not yet||yes||could be|
|Fraction of possible issues each agent is involved in coordinating over at the same time? ||small||small||small||small||can be large||small|
|How many agents are involved in coordinating over a particular issue at the same time? ||few||few||can be large||few||can be large||application dependent|
|Does coordination involve allocating/scheduling sufficient resources/capabilities so as to meet some performance measure(s)? ||yes||yes||yes||yes||yes||N/A|
|Are tasks statically assigned to agents? ||no||no||no||so far||so far||no|
|Are tasks and their needs known at outset? ||no||yes||no||tasks yes, needs no||typically||no|
|Are tasks and their needs discovered over time? ||yes||no||yes||tasks yes, needs no||in some cases||yes|
|Are sources for satisfying needs known at outset? ||no||no||no||no||typically||N/A|
|Are sources for satisfying needs discovered over time? ||yes||yes||yes||yes||in some cases||N/A|
|Can needs or sources for satisfying them arrive and disappear dynamically? ||both||yes||both||both||yes||yes|
|Is there uncertainty in how well particular needs will be satisfied by particular sources? ||yes||yes||application dependent||yes||yes||could be|
|Are there complementarities (how much one thing is needed depends on acquiring other things)? ||yes||yes||application dependent||yes||yes||could be|
|Are there externalities (how much one thing is needed depends on whether others covet/acquire some things)? ||partial||yes||application dependent||seldom||yes||could be|
|Can allocation/scheduling decisions lead to some agents being unable to achieve their goals to meet performance measures? ||yes||sometimes||application dependent||yes||yes||could be|
|Can allocation/scheduling decisions lead to some agents being unable to achieve their goals at all? ||partial||rarely||application dependent||seldom||possibly||could be|
|Is an acceptable solution to the coordination problem: optimal? ||no||nearly||application dependent||no||approximately||no|
|Is an acceptable solution to the coordination problem: satisficing (meets some threshold measure)? ||sometimes||yes||application dependent||sometimes||possibly||application dependent|
|Is an acceptable solution to the coordination problem: satisfactory (minimally satisfies constraints/goals)? ||always||yes||application dependent||always||no||application dependent|
|Is a solution to the coordination problem monitored and repaired/replaced if it is recognized that it becomes suboptimal or fails altogether?||N/A||N/A||yes||not yet (it shouldn’t fail – below)||no (but learning methods deal with this to some extent)||if fails altogether (suboptimality not covered)|
|Is an acceptable solution to the coordination problem robust in the face of changing conditions? ||yes||yes||application dependent||*yes*||not necessarily||N/A|
|Is an acceptable solution to the coordination problem achieved at any cost? ||no||no||application dependent||no||in some of the techniques||no|
|Is an acceptable solution to the coordination problem achieved at a cost that is less than the cost of failure to coordinate? ||partial||yes||application dependent||*yes*||in some of the techniques||some decision theoretic reasoning used|
|Is an acceptable solution to the coordination problem achieved at the lowest possible cost? ||no||no||application dependent||no||in some of the techniques||no|
|How is the cost of solving the coordination problem measured: in elapsed time? ||sometimes||yes||application dependent||yes||yes (sometimes captured as coordination actions costs)||sometimes|
|How is the cost of solving the coordination problem measured: in number of messages exchanged? ||sometimes||yes||application dependent||sometimes||no||sometimes|
|How is the cost of solving the coordination problem measured: in some measure of total effort expended? ||no||yes||application dependent||no||no||Domain experts may provide feedback|
 This is the order of magnitude of the number of agents to which the technique has actually been applied to date.
 These are beliefs about the external world, including other agents, that affect coordination/control. If they know the same things about the world (see the same world), or if they all know everyone’s preferences (a commonly known payoff matrix, for instance), or if they have no beliefs, then the answer to this is no.
 If agents are essentially interchangeable in what they can do (any task can be done equally well by any agent), then they do not have heterogeneous capabilities.
 If capabilities are unique, such that, for all tasks, a task can only be accomplished by one agent, then capabilities do not overlap. When tasks can be assigned to any of a number of agents (possibly at different costs or quality of service levels), the population is more complex.
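To make the distinction concrete, here is a minimal Python sketch (agent names and costs are invented, purely for illustration): when capabilities overlap, a task can be assigned to any of several capable agents, so assignment becomes a choice, e.g. by cost.

```python
# Hypothetical sketch: overlapping capabilities mean a task may be
# assignable to several agents, possibly at different costs.
agent_costs = {    # cost for each agent capable of performing task "t1"
    "a1": 5,       # a1 and a2 both have the capability (overlap)
    "a2": 3,
}

def assign(task, costs):
    """Pick the cheapest capable agent; None if no agent can do it."""
    return min(costs, key=costs.get) if costs else None

assert assign("t1", agent_costs) == "a2"   # choice among capable agents
assert assign("t1", {}) is None            # no capable agent at all
```

With unique capabilities the `costs` dictionary for each task would contain at most one entry, and no such choice ever arises.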
 If all agents would agree on the best outcome of their joint activities (if they have identical preferences over their coordination decisions), then preferences are not heterogeneous.
 If the coordination technique only helps in cases where there is conflict (over resource allocations, for example), then this is yes.
 These are languages in which coordination is being done. So, if they all talk “prices” or “plans,” then this would be “no.” If some talk about prices, others about plans, others about organizational roles, then this would be yes.
 This refers to the semantics of their languages. If there is a shared understanding of what it means to provide resource x or capability y, or what a bid of z means, then this would be “no.” If agents can’t count on a common ontology, then this is “yes.”
 This refers to the basic decision-making techniques of the involved agents as currently implemented. If they all work basically the same (they all compute optimal bids, or generate plans, etc.), then this is “no.”
 Can agents arrive into and depart from the system over time, so that a static model of available capabilities and their allocation in the system is impossible?
 While agents are engaged in making coordination decisions (converging on joint plans, contracting tasks, seeking equilibrium prices), can some of the agents involved depart and can others arrive, without triggering a complete restart of the coordination process?
 These refer to aspects that influence coordination/control decisions, such as statically defined organizational roles, or positions in a hierarchy, or nearby “acquaintances.”
 These refer to aspects that affect coordination/control decisions, such as changes to available resources/capabilities, or running prices, etc.
 Can an agent represent what it can do, so that it can tell whether it can accomplish a particular task, and can it potentially advertise its abilities to others?
 Can an agent access its own belief structure, to know what it believes and what it does not believe, and to potentially communicate its beliefs to others?
 Can an agent access its own preferences, such that it can anticipate what states of the world it would prefer over others, and describe/explain these to other agents?
 Can an agent inspect its own plans to anticipate its sequential actions, and can it potentially tell others about its intended plans?
 Can an agent represent and utilize information it receives/learns about what other agents are capable of doing?
 Can an agent assimilate beliefs conveyed by another agent?
 Does an agent explicitly model the goals/preferences of other agents in order to coordinate better with them?
 Does an agent represent the inferred/communicated planned activities of others and use these to coordinate with them?
 Which of environment/self/others does an agent form models of through observation/experience?
 Which of environment/self/others does an agent form models of through communication?
 As per Katia’s suggestion. I’m a little leery about this, since in some contexts a number can convey a lot of information while a bucket of text conveys very little, depending on the ontology. But let’s see what other people think.
 I assume these are message types of the kind that KQML-like languages would refer to. I’m a little unclear on interpreting this, since I think all would say “yes” to this (market-based systems would have both bids and some kind of clearing/matching message, etc.). Perhaps this should instead ask whether the communication language permits versatile communication plans (I would think, then, that many approaches would say “no” in the sense that most would follow well-defined protocols).
 I interpret this as asking whether point-to-point communication is expected for coordination.
 I interpret this to ask whether broadcast communication is assumed possible for coordination.
 I interpret this to ask whether multicast communication is assumed available for coordination/control.
 Since this refers to coordination actions, it asks whether an agent knows how to coordinate simply by knowing what its goals/preferences are, regardless of its current environment.
 This asks whether an agent considers both its current beliefs and preferences in making decisions. This would be done, for example, by a reactive agent.
 In addition to beliefs and preferences, this asks whether an agent is assumed to construct plans (possibly partially-ordered, conditional, probabilistic) to put actions together in combinations that together achieve preferences in the current/expected circumstances. The actions are assumed to be those that affect coordination.
 If, on top of everything else, an agent does not initially know how its actions might lead to satisfaction of its preferences, and therefore has to learn the effects of actions, then this would be “yes.”
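A minimal illustrative sketch of such learning (class and action names are invented, not from the source): the agent keeps a running success-rate estimate for each action and refines it from experience.

```python
# Illustrative sketch: an agent that does not initially know how its
# actions map to preference satisfaction maintains a per-action estimate
# of success probability, updated from observed outcomes.
from collections import defaultdict

class EffectLearner:
    def __init__(self):
        self.trials = defaultdict(int)
        self.successes = defaultdict(int)

    def observe(self, action, satisfied_preference):
        """Record one trial of an action and whether it satisfied the preference."""
        self.trials[action] += 1
        if satisfied_preference:
            self.successes[action] += 1

    def estimate(self, action):
        """Estimated probability that the action satisfies the preference."""
        n = self.trials[action]
        return self.successes[action] / n if n else 0.5  # uninformed prior

learner = EffectLearner()
for outcome in (True, True, False, True):
    learner.observe("bid_high", outcome)
assert learner.estimate("bid_high") == 0.75
assert learner.estimate("bid_low") == 0.5   # never tried: prior guess
```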
 This gives a sense of how complicated each agent is. Note, this question is not asking whether the same agent architecture can achieve various objectives, but rather whether a particular agent instance can. Typically, coordination is easier if each agent is a specialist (they either match a task or they don’t) or if each agent is a generalist (assignment is based on availability rather than suitability). The in-between is harder, as availability and suitability need to be balanced given the options.
 An agent is generally more complex if, given a task, it could accomplish the task in a number of ways. Since this could require scheduling/allocating alternative combinations of resources/capabilities, selecting the right alternative is important. An agent could even explore multiple alternatives at the same time, leading to complexities in coordination as agents might “test the waters” with many others.
 An agent is more complex if it leaves its options open until runtime, so that its choice of how it will accomplish its goals is made during execution. Control in this case requires either rapid on-line coordination, or prior coordination decisions that will work for any of the choices available to the agent.
 This makes things even more challenging, in that not only might an agent make choices of methods at runtime, but that it might change its mind partway through if things aren’t working out.
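As an illustrative sketch of the two preceding items (method names invented): an agent with alternative methods for one objective can detect at runtime that its current choice is failing and unilaterally switch to another.

```python
# Hypothetical sketch: runtime detection of a failing method and a
# unilateral switch to an alternative way of accomplishing the objective.
def fast_but_fragile(objective):
    return False   # assume this method fails under current conditions

def slow_but_robust(objective):
    return True    # assume this alternative succeeds

def accomplish(objective, methods):
    """Try each alternative in turn; switch when the current one fails."""
    for method in methods:
        if method(objective):       # each method reports success/failure
            return method.__name__  # the choice actually used
    return None                     # every alternative failed

assert accomplish("task", [fast_but_fragile, slow_but_robust]) == "slow_but_robust"
```

Coordination is harder in this case because other agents may have planned around the first choice.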
 I assume that this will be a “yes” across the board, but it could be that some current systems “simulate” the agents rather than actually having separate processes running them.
 If failure is a possibility, then the coordination mechanism needs to be robust in the face of failure.
 Episodic means that there are clear start and end times to each “coordination episode” such that there is no carryover between episodes.
 Periodic means that there are specific points in time (either clock-driven or event-driven) where the coordination process runs, where this could occur many times over an overall episode. For example, in a market-system, it could be that the market clears periodically.
 By continual, I mean that there is no clear starting or ending point for coordination activity, but rather that coordination is ongoing and there might never be a time when the whole system is “coordinated.”
 At one extreme, every possible resource/capability/action could be of concern to all agents. At the other extreme, each agent might only be concerned about one resource/capability/action.
 If all agents are concerned with the same resource/capability/etc., then as the number of agents grows, the coordination problem generally is more complex.
 I think this is a given in the Grid, but just in case…
 Suggested by Katia. If tasks could be assigned dynamically, then an agent might accept a subtask from another only to discover that it has been assigned a new task by the “system” unexpectedly. Such possibilities obviously make coordination more complex.
 This is assumed to refer to externally-given tasks (as opposed to the tasks that agents might form and pass around as they decompose and solve problems). An agent might only discover what resources/capabilities it needs as it pursues tasks (if it has alternative ways of accomplishing tasks).
 See the comment on the previous item.
 When an agent identifies a needed resource/capability/etc., does it know (or does some entity in the system know) all the possible places where the need can be met?
 If agents (presumably with capabilities/resources/etc.) arrive into the system and depart from the system over time, then this would have to be yes.
 Related to previous item.
 This was intended to mean whether an agent would know how satisfied it will be with the services delivered by an agent that claims to provide a desired service. In market terms, is it assumed that all of the goods in the market are substitutable? In a brokering system, is it assumed that agents that advertise the same capability will achieve the same result if given the same task? In a plan-based system, is it assumed that some agents’ actions might be non-deterministic?
 The coordination problem generally gets harder if each agent needs several resources/capabilities/etc. over some period of time, where if it fails to get one, it has little or no use for the others.
 This says that an agent’s preferences for a particular capability/resource/etc. depend on the preferences of others for the same thing. This would imply that an agent’s preferences change dynamically as others’ demands or acquisitions change. For example, if agent A learns that agent B has acquired service x, then the value of service x to agent A changes even if agent A is still pursuing the same task.
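The two notions in the preceding items can be sketched side by side (all resources, values, and discount factors are invented, purely for illustration):

```python
# Complementarity: a resource is only worth something as part of a
# complete bundle; missing any needed resource makes the rest useless.
def bundle_value(acquired, needed, full_value):
    return full_value if needed <= acquired else 0   # subset test on sets

assert bundle_value({"cpu", "disk"}, {"cpu", "disk"}, 10) == 10
assert bundle_value({"cpu"}, {"cpu", "disk"}, 10) == 0   # partial bundle

# Externality: the value of service x to agent A changes once agent B
# acquires x, even though A is still pursuing the same task.
def value_with_externality(base_value, acquired_by_other):
    return base_value * (0.5 if acquired_by_other else 1.0)  # invented discount

assert value_with_externality(10, acquired_by_other=False) == 10
assert value_with_externality(10, acquired_by_other=True) == 5.0
```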
 Basically, is the problem hard enough that some coordination decisions could make it such that some agents cannot perform their tasks well enough to satisfy performance measures, or is the system flush enough with resources (or are performance measures lax enough) that practically any coordination decisions could lead to success?
 Basically, is the problem hard enough that some coordination decisions could make it such that some agents cannot complete their tasks at all, or do bad coordination decisions simply degrade some performance measures?
 Is the coordination mechanism responsible for optimally allocating/scheduling resources/capabilities/etc.?
 Is there some level of aspiration that the coordination decisions (such as allocation/scheduling of resources/capabilities/etc.) need to achieve, but where the mechanism can stop once it reaches that level rather than seeking an optimum?
 Can the coordination mechanism stop as soon as it finds any (combination of) coordination decisions that satisfy the constraints and goals, without seeking an optimum or even a solution above an aspiration level? (Essentially, this is satisficing with the lowest consistent aspiration level.)
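The three acceptability criteria above (optimal, satisficing, satisfactory) differ mainly in when search can stop; a toy sketch with invented candidate scores:

```python
# Toy sketch: candidate coordination solutions with invented scores.
candidates = [3, 7, 5, 9, 4]       # score of each candidate, in search order

def feasible(s):
    return s >= 2                  # "satisfactory": meets constraints/goals

aspiration = 6                     # "satisficing": some threshold measure

optimal = max(candidates)                                      # must examine all
satisficing = next(s for s in candidates if s >= aspiration)   # stop at threshold
satisfactory = next(s for s in candidates if feasible(s))      # stop at first feasible

assert optimal == 9
assert satisficing == 7    # first candidate above the aspiration level
assert satisfactory == 3   # first candidate that merely satisfies constraints
```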
 This asks whether the coordination mechanisms are supposed to find a “solution” that does not need to change even if the environment undergoes change. This is in contrast to mechanisms that detect such changes and generate new solutions. (In planning terms, the difference between a robust plan that will work under a variety of conditions and a plan-repair/replanning methodology that revises the plan when conditions change.)
 Will the coordination mechanism run to completion without monitoring or adjusting its own costs?
 Does the coordination mechanism monitor/predict its own costs and make adjustments to increase the chances that the costs it incurs are less than the costs that failure to coordinate would be expected to incur?
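A toy sketch of that decision-theoretic check (all costs invented): the mechanism keeps coordinating only while its total expected cost stays below the expected cost of failing to coordinate.

```python
# Hypothetical sketch of cost-aware coordination: continue only if the
# cost already incurred plus the expected remaining cost is less than
# the expected cost of failing to coordinate at all.
def keep_coordinating(cost_so_far, expected_remaining, cost_of_failure):
    return cost_so_far + expected_remaining < cost_of_failure

assert keep_coordinating(cost_so_far=2, expected_remaining=3, cost_of_failure=10)
assert not keep_coordinating(cost_so_far=8, expected_remaining=5, cost_of_failure=10)
```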
 In this case, the coordination mechanism is optimizing its own performance.
 The longer it takes to solve the coordination problem, the worse the mechanism’s performance.
 If bandwidth is at a premium or communication delays are significant, this can be important. This is especially important if the resources being coordinated over are communication resources!!
 These are notoriously hard to define.