Computational Approaches to Complex Systems

In the sciences, especially in the study of complex systems, computer programs have come to play an important role as scientific equipment. Computer simulations -- experimental devices built in software -- have taken their place alongside physical experimental apparatus. Computer models offer many advantages over traditional experimental methods, but they also introduce problems of their own. In particular, the actual process of writing software is a complicated technical task with much room for error.

Early in the development of a scientific field, scientists typically construct their own experimental equipment: grinding their own lenses, wiring up their own particle detectors, even building their own computers. Researchers in new fields have to be adept engineers, machinists, and electricians in addition to being scientists. Once a field begins to mature, collaborations between scientists and engineers lead to the development of standardized, reliable equipment (e.g., commercially produced microscopes or centrifuges), allowing scientists to focus on research rather than on tool building. The use of standardized scientific apparatus is not only a convenience: it allows one to "divide through" by the common equipment, thereby aiding the production of repeatable, comparable research results.

In complexity research, at the Santa Fe Institute and elsewhere, we rely heavily on computers in the course of our investigations. We spend a great deal of time constructing our own experimental apparatus in software, the computational equivalent of blowing our own glassware. Unfortunately, computer modeling frequently turns good scientists into bad programmers: most scientists are not trained as software engineers. As a consequence, many home-grown computational experimental tools are, from a software engineering perspective, poorly designed. Results obtained with such tools can be difficult to compare with other research data and difficult for others to reproduce because of the quirks and undocumented design decisions in the specific software apparatus. Furthermore, writing software is typically not a good use of a highly specialized scientist's time, and in many cases the same functional capacities are rebuilt time and time again by different research groups, a tremendous duplication of effort.

A subtler problem with custom-built computer models is that the final software tends to be very specific: a dense tangle of code understandable only to the people who wrote it. Typical simulation software contains a large number of implicit assumptions, accidents of the way the particular code was written that have nothing to do with the actual model. With only low-level source code available, it is very difficult to understand the high-level design and essential components of the model itself. Such software is useful to the people who built it, but it makes it difficult for other scientists to evaluate and reproduce results.

In order for computer modeling to mature, there is a need for a standardized set of well-engineered software tools usable on a wide variety of systems. The Swarm project aims to produce such tools through a collaboration between scientists and software engineers: Swarm is an efficient, reliable, reusable software apparatus for experimentation. If successful, Swarm will help scientists focus on research rather than on tool building by giving them a standardized suite of software tools that provides a well-equipped software laboratory.

