The objectives of the resource manager, a runtime process distributed across all the processor nodes in a MagicEight system, are the following:
The resource manager is responsible for deciding at runtime which processing unit should execute a given task. This decision is based on how efficiently a particular processing unit will perform the task (taking into account any code or data already local to that processor), the amount of local storage required for efficient pipelining, and the complement and current load of processing units in the system. In addition to balancing an application's processing load across a MagicEight system, this scheduling provides tolerance of faults in individual processing units.
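The C sketch below illustrates one way such a placement decision could be scored. The unit and task structures, the weighting of locality and load, and the cost model itself are assumptions made for this example, not MagicEight interfaces.

\begin{verbatim}
/* Illustrative sketch: scoring candidate processing units for a task.
 * The structures and weights are hypothetical; a real resource manager
 * would derive them from measured capabilities and current load. */
#include <float.h>
#include <stddef.h>

typedef struct {
    double cycles_estimate;   /* estimated cycles for this task on this unit   */
    double locality_bonus;    /* fraction of code/data already resident (0..1) */
    double free_local_memory; /* bytes available for pipeline buffers          */
    double load;              /* current utilization (0 = idle, 1 = saturated) */
} unit_state_t;

typedef struct {
    double buffer_bytes;      /* local storage needed for efficient pipelining */
} task_desc_t;

/* Lower cost is better; DBL_MAX marks an infeasible placement. */
double placement_cost(const unit_state_t *u, const task_desc_t *t)
{
    if (u->free_local_memory < t->buffer_bytes)
        return DBL_MAX;                        /* cannot pipeline efficiently */
    double cost = u->cycles_estimate * (1.0 + u->load);
    cost *= (1.0 - 0.5 * u->locality_bonus);   /* reward resident code/data   */
    return cost;
}

/* Pick the cheapest feasible unit; returns -1 if none can host the task. */
int choose_unit(const unit_state_t *units, size_t n, const task_desc_t *t)
{
    int best = -1;
    double best_cost = DBL_MAX;
    for (size_t i = 0; i < n; i++) {
        double c = placement_cost(&units[i], t);
        if (c < best_cost) { best_cost = c; best = (int)i; }
    }
    return best;
}
\end{verbatim}

In a scheme of this kind, a faulty processing unit is simply excluded from the candidate set, so load balancing and fault tolerance follow from the same placement mechanism.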
Long-term (longer than the execution of a single task) memory resources associated with a process are allocated through the resource manager. Streams are a special case of these resources that may grow dynamically and be stored in a distributed manner. This support allows an algorithm to operate independently of a particular memory architecture: the resource manager allocates storage in the location it deems optimal for that algorithm and architecture. In addition, automatic deallocation (``garbage collection'') of memory objects greatly simplifies the application programmer's task.
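As a rough illustration, a stream handle of the kind described might resemble the following C sketch. All of the types and routines are hypothetical; the placement policy is a placeholder, and the reference count stands in for whatever reclamation scheme the actual system uses.

\begin{verbatim}
/* Illustrative sketch of a distributed, dynamically growing stream.
 * Everything here is a placeholder mirroring the properties in the
 * text: placement chosen by the manager, chunked distributed storage,
 * automatic reclamation of memory objects. */
#include <stdlib.h>

typedef struct {
    int    node;      /* processor node chosen to hold this chunk          */
    void  *base;      /* storage for the chunk (local malloc in this sketch) */
    size_t bytes;
} stream_chunk_t;

typedef struct {
    stream_chunk_t *chunks;
    size_t nchunks;
    size_t refcount;  /* storage reclaimed when this reaches zero */
} stream_t;

/* Placeholder placement policy; a real manager would weigh the memory
 * architecture and the locality of producing and consuming tasks. */
static int choose_node(size_t chunk_index) { return (int)(chunk_index % 4); }

stream_t *stream_create(void)
{
    stream_t *s = calloc(1, sizeof *s);
    if (s) s->refcount = 1;
    return s;
}

/* Streams grow dynamically, one chunk at a time. */
int stream_append_chunk(stream_t *s, size_t bytes)
{
    stream_chunk_t *c = realloc(s->chunks, (s->nchunks + 1) * sizeof *c);
    if (!c) return -1;
    s->chunks = c;
    c[s->nchunks].node  = choose_node(s->nchunks);
    c[s->nchunks].base  = malloc(bytes);
    c[s->nchunks].bytes = bytes;
    if (!c[s->nchunks].base) return -1;
    s->nchunks++;
    return 0;
}

void stream_retain(stream_t *s) { s->refcount++; }

void stream_release(stream_t *s)  /* automatic deallocation by refcount */
{
    if (--s->refcount) return;
    for (size_t i = 0; i < s->nchunks; i++) free(s->chunks[i].base);
    free(s->chunks);
    free(s);
}
\end{verbatim}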
The manner in which communication resources are managed varies with machine architecture. Some architectures use communication resources such as shared buses or packet-switched networks, which rely mainly on fast hardware arbitration. Others use semi-static routing, such as a circuit-switched crossbar, or DMA channels, which must be scheduled. The resource manager both takes the availability of communication resources into consideration when scheduling algorithm segments and performs any required initialization of communication channels (e.g. configuring the crossbar or the DMA controller).
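A minimal sketch of how a semi-static communication resource might be reserved and configured before a task segment is dispatched is shown below. The channel structure and routines are placeholders introduced for this example and are not actual MagicEight interfaces.

\begin{verbatim}
/* Illustrative sketch: reserving and configuring a semi-static
 * communication resource (e.g. one crossbar path or DMA channel)
 * before a task segment is dispatched. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    int  src_node, dst_node;
    bool in_use;              /* availability, consulted by the scheduler */
} comm_channel_t;

/* Find a free channel connecting src to dst, or NULL if none is free;
 * the scheduler would then either wait or place the segment elsewhere. */
comm_channel_t *channel_reserve(comm_channel_t *chans, int n, int src, int dst)
{
    for (int i = 0; i < n; i++) {
        if (!chans[i].in_use &&
            chans[i].src_node == src && chans[i].dst_node == dst) {
            chans[i].in_use = true;
            return &chans[i];
        }
    }
    return NULL;
}

/* Program the underlying hardware (crossbar setting, DMA descriptors).
 * Left as a stub since the details are architecture-specific. */
void channel_configure(comm_channel_t *ch) { (void)ch; }

void channel_release(comm_channel_t *ch) { ch->in_use = false; }
\end{verbatim}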
While the above functions of a resource manager are identical to those of a modern operating system, additional functionality is required. The resource manager is responsible for performing the demand-driven evaluation, or eduction, of the dependency graphs that represent an application. As part of this evaluation, the manager must also determine the granularity of scheduling appropriate for the algorithm and architecture being used. It then partitions streams to match the selected granularity and pipelines stream access to eliminate as much sample overlap as possible. This pipelining must be done at run time, after processing unit capabilities and other concurrently executing tasks are known.
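The demand-driven evaluation described here can be pictured with a small C sketch: a node's value is computed only when some consumer demands it, and its inputs are demanded recursively first. The node layout and compute hook are assumptions made for this example; in the real manager each computation would be dispatched according to the placement and granularity decisions discussed above.

\begin{verbatim}
/* Illustrative sketch of demand-driven ("educed") evaluation of a
 * dependency graph.  Each node is evaluated at most once, and only
 * when its result is actually demanded by a consumer. */
#include <stdbool.h>
#include <stddef.h>

typedef struct node node_t;
struct node {
    node_t **inputs;      /* nodes this node depends on         */
    size_t   ninputs;
    bool     valid;       /* result already produced?           */
    double   value;       /* stands in for a stream partition   */
    double (*compute)(node_t *self);   /* task body             */
};

/* Demand a node's value; inputs are evaluated first, each exactly once.
 * In the real manager each compute() would run on the processing unit
 * chosen by the placement policy sketched earlier. */
double demand(node_t *n)
{
    if (n->valid) return n->value;
    for (size_t i = 0; i < n->ninputs; i++)
        demand(n->inputs[i]);
    n->value = n->compute(n);
    n->valid = true;
    return n->value;
}
\end{verbatim}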