Presentation of Results

The results we present here compare three different agent algorithms at eight population sizes (1, 2, 4, 7, 10, 25, 50 and 100). All data is calculated from 400 independent runs of each parameter setting with random initial agent placement. A complete table of our data is presented in the appendix; particular results are extracted here for discussion.

Single Agent Results

Our first experiment was conducted with a population of only one agent. There is no opportunity for agent cooperation here; these results simply show the baseline behavior of the two core algorithms, random and conscientious. (In a population of size one, conscientious agents are equivalent to superconscientious agents.) Unsurprisingly, the random agent algorithm performs quite poorly compared to the conscientious one. Random agents take an average of 5300 time steps to map the graph, with a standard deviation of 3500. By contrast, a lone conscientious agent finishes the map in an average of 480 steps, with a standard deviation of 101. A comparison of the knowledge-over-time curves for these two types of agents (figure 2) shows their average relative performance as well as the distribution of independent runs.
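
The paper describes these movement rules only in prose; a minimal sketch of the two policies, assuming the graph is stored as an adjacency map and each agent remembers the set of nodes it has visited first-hand (the function names here are illustrative, not from the original), might look like:

    import random

    def random_step(graph, node):
        # Random agent: move to any adjacent node, using no knowledge at all.
        return random.choice(graph[node])

    def conscientious_step(graph, node, visited_firsthand):
        # Conscientious agent: prefer a neighbor it has not yet visited
        # itself; if every neighbor is already known, fall back to random.
        unexplored = [n for n in graph[node] if n not in visited_firsthand]
        return random.choice(unexplored or graph[node])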

  [Figure 2: Knowledge over time for single-agent systems. Average of 400 runs and 20 representative runs.]

Effect of Collaboration

The single agent results provide a basis for comparing the effects of agent cooperation. When multiple agents work together, coexisting on the graph and sharing knowledge as they meet, the gap between the random and conscientious strategies narrows significantly. As the table in the appendix shows, two random agents interacting and learning from each other do almost twice as well as one alone, finishing on average in 3100 steps instead of 5300. And while two conscientious agents working together outperform two random agents by more than a factor of six, when the population size grows to 100 the conscientious agents finish only 32% more quickly than the random agents.

Collaboration is good. The reason is simple: an agent alone can learn only a limited amount each time step, while agents that share data often acquire a large amount of new information in a single interaction. This effect is clearly visible in figure 3, where the near-vertical jumps in individual knowledge reflect an agent learning from an experienced neighbor. Sharing information is so beneficial that for larger population sizes the positive effect of learning from peers almost compensates for the very poor movement strategy of the random agents.
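
A sketch of the sharing step, under the assumption that an agent's knowledge can be modeled as a set of discovered edges and that co-located agents exchange complete maps (the Agent class here is hypothetical):

    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        node: int
        known_edges: set = field(default_factory=set)

    def share_knowledge(a, b):
        # Both agents leave a meeting holding the union of their maps.
        # One meeting with a well-traveled neighbor can transfer far more
        # than a lone agent could discover in a single step; these are
        # the near-vertical jumps visible in figure 3.
        merged = a.known_edges | b.known_edges
        a.known_edges = set(merged)
        b.known_edges = set(merged)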

  [Figure 3: Knowledge over time for one run of 15 conscientious agents.]

Effect of Population Size

As discussed in section 2, our results ignore issues of resource scarcity: in our model, increasing the number of agents has no cost. While distributed systems such as this tend to scale well, in a more detailed simulation (or a real-world deployment) per-agent consumption of bandwidth and CPU cycles would need to be accounted for. Under such constraints, the point at which diminishing returns no longer justify additional agent resources becomes important. As a graph of finishing time for different populations shows (figure 4), the marginal benefit of increasing population size is far more substantial for small populations than for large ones.
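
The shape of this curve is visible even in the few conscientious-agent averages quoted in this section (1 agent: 480 steps; 10 agents: 271; 100 agents: 58). A quick calculation of steps saved per added agent:

    # Average finishing times for conscientious agents quoted in the
    # text (the complete table is in the appendix).
    times = {1: 480, 10: 271, 100: 58}

    sizes = sorted(times)
    for small, large in zip(sizes, sizes[1:]):
        saved = times[small] - times[large]
        per_agent = saved / (large - small)
        print(f"{small} -> {large} agents: {saved} steps saved,"
              f" {per_agent:.1f} per added agent")

Going from 1 to 10 agents saves roughly 23 steps per agent added; going from 10 to 100 saves only about 2.4 steps per agent added.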

  [Figure 4: Finishing time at different population sizes. Average of 400 runs.]

Superconscientious Agents: A Surprising Result

Our third class of agent is the "superconscientious" agent, which uses both first-hand and peer-obtained information to make movement decisions. We expected these agents to be the most successful of the three types. After all, the more information that is factored into a decision, the better that decision should be.

It turns out that in small populations superconscientious agents do perform best, but only barely. Ten superconscientious agents map the graph in 245 time steps on average, while ten conscientious agents take 271. But the slight edge superconscientious agents hold in small populations disappears as the number of agents increases: 100 superconscientious agents average 74 steps to finish, while 100 conscientious agents take only 58. It should be noted that these differences are subtle; for all but the largest populations the averages are within one standard deviation of each other.

The result that superconscientious agents are no more efficient at mapping than the less-deliberative conscientious agents prompted additional investigation into the behavior of the two algorithms. Upon examination, one difference was immediately obvious: superconscientious agents meet far more often over the course of a run than conscientious agents do. A population of 100 superconscientious agents averages 192 meetings per time step, while 100 conscientious agents average only 42 meetings. A closer look at these meeting-event statistics over time (figure 5) shows that the number of meetings increases over the course of a run in the case of the superconscientious agents, while it remains steady throughout the run for conscientious agents. Similarly, the number of nodes occupied by at least one agent decreases throughout a superconscientious run but stays roughly constant for a conscientious agent trial. At the end of a typical run, 100 conscientious agents are distributed across 82 different nodes on average, while in the superconscientious runs the 100 agents are crowded together on only 36 nodes.
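
These statistics are straightforward to collect. A sketch, assuming a "meeting" is counted for every pair of agents occupying the same node in a time step (agents can be any objects with a node attribute, such as the Agent class sketched earlier):

    from collections import Counter

    def meeting_stats(agents):
        # Count pairwise meetings and distinct occupied nodes for one
        # time step. With 100 agents crowded onto only 36 nodes, the
        # number of k-choose-2 pairs per node grows quickly.
        occupancy = Counter(a.node for a in agents)
        meetings = sum(k * (k - 1) // 2 for k in occupancy.values())
        return meetings, len(occupancy)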

  [Figure 5: Meetings over time for 100 conscientious and superconscientious agents. Average over 400 runs.]

Clearly, these two types of agents behave quite differently. Superconscientious agents tend to cluster together over the run while conscientious agents remain evenly dispersed. Because of this effect, superconscientious agents tend to duplicate one another's exploratory efforts and the system as a whole is less efficient.

Why do superconscientious agents cluster together? When two agents meet, they share all information. Conscientious agents assimilate this data but still make their movement decisions based entirely on their own first-hand experience. In contrast, superconscientious agents use all information they have learned in making their decisions. So two superconscientious agents that meet and share information often find themselves making subsequent choices based on identical sets of data, and therefore choose identical paths. Two such twin agents diverge only when multiple equally-good options present themselves (in which case each chooses randomly).
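
The difference between the two decision rules is small in code but large in effect. A hedged sketch (attribute names are illustrative):

    import random

    def next_node(graph, agent, superconscientious):
        # Conscientious agents consult only first-hand experience;
        # superconscientious agents consult everything they have been told.
        known = (agent.all_known_nodes if superconscientious
                 else agent.visited_firsthand)
        unexplored = [n for n in graph[agent.node] if n not in known]
        # Ties among equally good options are broken randomly; this is
        # the only way two agents with identical knowledge can diverge.
        return random.choice(unexplored or graph[agent.node])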

Our results show that the performance of a multi-agent algorithm depends not only on how efficiently each individual agent acts but also on how the agent population as a whole distributes its effort. Specifically, superconscientious agents are inefficiently distributed when they follow one another around in clusters: they duplicate effort because they are not using their knowledge effectively. More generally, diversity of behavior is important for the effectiveness of cooperating multi-agent systems. If an agent population is too homogeneous, then the benefits of having many agents working together are lost. Effective cooperation requires division of labor.

As a preliminary investigation, we studied what happens if superconscientious agents have a 20% chance of ignoring what they know and moving randomly. We found that a population of 100 such agents completed the task in 57 time steps on average, better than the 74 time steps of ordinary superconscientious agents and about the same as the 58 time steps of 100 ordinary conscientious agents. By contrast, conscientious agents with a 20% random-movement probability completed the task in 62 time steps, no improvement over ordinary conscientious agents. Randomness is an artificial means of inducing behavioral diversity; superconscientious agents benefit from it because they are too homogeneous, while conscientious agents already distribute themselves efficiently and so gain nothing from it.
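
The modified agents can be sketched as a thin wrapper over the decision rule shown earlier (next_node is from the previous sketch; the 20% probability is the value used in the text):

    import random

    def noisy_step(graph, agent, superconscientious, p_random=0.20):
        # With probability 0.20, ignore all knowledge and move randomly;
        # otherwise decide as before.
        if random.random() < p_random:
            return random.choice(graph[agent.node])
        return next_node(graph, agent, superconscientious)

This is essentially epsilon-greedy exploration: the occasional random move breaks up clusters of identically informed superconscientious agents.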

