Parallel and Distributed Discrete Event Simulation

Carl Tropper

Abstract

Discrete-event simulation has long been an integral part of the design process of complex engineering systems and the modelling of natural phenomena. Many of the systems which we seek to understand or control can be modelled as digital systems. In a digital model, we view the system at discrete instants of time, in effect taking snapshots of the system at these instants. For example, in a computer network simulation an event can be the sending of a message from one node to another, while in a VLSI logic simulation the arrival of a signal at a gate may be viewed as an event.

Each event in a discrete-event simulation has a timestamp associated with it. When an event is processed, new events may be generated as a consequence of this processing. These new events carry larger timestamps, obtained by adding a simulation time advance to the timestamp of the event being processed. The events in the simulation are stored in a heap and are processed in order of smallest timestamp first.
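
As a concrete illustration, here is a minimal sketch of such an event loop in Python; the names (schedule, run, handler) and the payload structure are illustrative assumptions rather than the interface of any particular simulator.

    import heapq

    # A minimal sequential discrete-event loop. Events live in a heap and
    # are processed in order of smallest timestamp first; processing an
    # event may schedule new events at strictly larger timestamps.
    event_heap = []  # entries are (timestamp, sequence_number, payload)
    sequence = 0     # tie-breaker so equal timestamps never compare payloads

    def schedule(timestamp, payload):
        global sequence
        heapq.heappush(event_heap, (timestamp, sequence, payload))
        sequence += 1

    def run(handler, end_time):
        while event_heap:
            timestamp, _, payload = heapq.heappop(event_heap)
            if timestamp > end_time:
                break
            # The handler returns (time_advance, new_payload) pairs; each
            # new event's timestamp is the current timestamp plus a
            # positive time advance.
            for advance, new_payload in handler(timestamp, payload):
                schedule(timestamp + advance, new_payload)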

Digital systems such as computer systems lend themselves naturally to this approach. However, a variety of other systems may also be modelled this way. These include transportation systems such as air-traffic control systems, epidemiological models such as the spread of a virus, and military war-gaming models.

As the systems and phenomena we want to model increase in size and complexity, the memory demands of these simulations increase and it becomes increasingly difficult to obtain acceptable execution times. The circuits which we want to simulate now contain hundreds of millions of gates. A detailed simulation of the Internet would make inordinate demands on a workstation. In order to accommodate the growing need for the simulation of larger models, it became necessary to make use of distributed and parallel computer systems to execute the simulations. In the early 1990s parallel machines were employed, while more recently clusters of workstations (Beowulf, Myrinet) have been utilized because they represent a more cost-effective approach than parallel machines. In addition, shared memory multiprocessors are now much more cost-effective.

Like any other distributed program, a distributed simulation is composed of processes which communicate with one another. The communication may occur via shared memory or via message passing. The processes in a distributed simulation are each intended to simulate a portion of the system being modelled and are referred to as Logical Processes (LPs). An LP creates events, sends events to other LPs and receives events from other LPs in the course of a simulation. Associated with each LP are input queues used to store messages from other LPs.

The advance of time in a distributed simulation poses an intriguing problem because of the inherent lack of global memory. The events of a distributed simulation are spread among the processors, and consequently time is advanced in each process independently of the other processes. This is accomplished via the notion of Local Simulation Time (LST), which is maintained by each LP. When an event is processed at an LP, the LST takes on the value of its timestamp. The LP processes events from its input queues by selecting the event bearing the smallest timestamp from all of its queues.
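
The following sketch shows this selection rule at a single LP, under the usual assumption that each neighbor sends its messages in timestamp order; the class and all names are illustrative.

    from collections import deque

    class LogicalProcess:
        """Sketch of an LP advancing its Local Simulation Time (LST)."""

        def __init__(self, num_inputs):
            self.input_queues = [deque() for _ in range(num_inputs)]
            self.lst = 0.0  # Local Simulation Time

        def next_event(self):
            # Only the queue heads need to be examined, since each
            # neighbor sends its (timestamp, payload) messages in
            # timestamp order.
            heads = [(q[0][0], i) for i, q in enumerate(self.input_queues) if q]
            if len(heads) < len(self.input_queues):
                return None  # an empty queue: see the blocking discussion below
            timestamp, i = min(heads)
            self.lst = timestamp  # time advances to the chosen event's stamp
            return self.input_queues[i].popleft()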

The treatment of time in a distributed system is of fundamental importance to the understanding and building of distributed systems; the interested reader will find a discussion of the nature of time in a distributed system in [16]. It was necessary to develop synchronization strategies for the distributed and parallel programs which execute distributed simulations. As mentioned before, on a uniprocessor events are stored in a heap and simulated in the order of smallest timestamp first. Since the events of a distributed simulation are spread among the processors, it is possible to execute events out of their correct order. This happens if an event with a smaller timestamp than an event which has already been processed arrives at a processor from another processor. In a military simulation, it matters in which order the events aim the gun and fire the gun are executed. The central problem of distributed simulation is the development of synchronization algorithms which are capable of maintaining causality and which do so with minimal overhead. The interested reader should consult the proceedings of the IEEE Workshop on Parallel and Distributed Simulation (PADS) and a recent book which describes the field [10].

Two primary classes of synchronization algorithms have been developed: conservative and optimistic. A conservative algorithm is characterized by its blocking behavior: in a conservative simulation, if one of the input queues at an LP is empty, the LP blocks awaiting a message from another LP. This behavior exacts a price, however, in the form of increased execution time and the possible formation of deadlocks. A deadlock forms if a group of LPs is arranged in a cycle such that each of the LPs is awaiting a message from its predecessor in the cycle. More generally, a deadlock occurs if the LPs are arranged in the form of a knot. Hence it becomes necessary either to find means to avoid deadlocks or to detect and break them. A plethora of algorithms exists for each approach; we describe several of them.

In the null message approach [9], each time an LP sends a message to a neighboring LP, it also sends a message (the null message) containing the timestamp of that message to its other neighbors, thereby causing the neighbors to advance their LSTs. As a consequence, all of the LPs are aware of the earliest simulation time at which a message can arrive at an empty input queue. An LP can use this information to avoid blocking by comparing the smallest timestamp in each of its input queues to this value. If the smallest timestamp in all of the input queues is smaller than this value, it can process the associated event; otherwise it must block. The drawback of this approach is clearly the large number of messages which must be sent, and as a consequence much work has been done to avoid sending large numbers of null messages; [13] and [14] are efforts in this direction. Another form of deadlock avoidance is a time window approach, in which time windows are successively generated with the property that the events contained in these windows are safe to process [1].
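
A hedged sketch of this safety test follows; the explicit lookahead parameter, the InChannel structure and the send_null callback are assumptions of the sketch, not details prescribed by [9].

    from collections import deque

    class InChannel:
        """One input link of an LP in a null-message simulation."""

        def __init__(self):
            self.queue = deque()  # real (timestamp, payload) messages
            self.clock = 0.0      # largest timestamp seen, real or null

    def receive(channel, timestamp, payload=None):
        # Null messages carry only a timestamp; both kinds advance the
        # channel clock, a lower bound on all future arrivals there.
        channel.clock = max(channel.clock, timestamp)
        if payload is not None:
            channel.queue.append((timestamp, payload))

    def next_safe_event(channels, lookahead, send_null):
        # An event is safe only if no channel can still deliver
        # anything with a smaller timestamp.
        bound = min(ch.clock for ch in channels)
        nonempty = [ch for ch in channels if ch.queue]
        if nonempty:
            best = min(nonempty, key=lambda ch: ch.queue[0][0])
            timestamp, payload = best.queue[0]
            if timestamp <= bound:
                best.queue.popleft()
                # Promise all neighbors that nothing earlier than
                # timestamp + lookahead will follow.
                send_null(timestamp + lookahead)
                return timestamp, payload
        return None  # must block and wait for more (null) messages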

The deadlock detection and breaking approach relies upon algorithms to detect deadlocks. For example, in [19], a knot detection algorithm is used to detect a deadlock, and the event bearing the minimal timestamp in the knot is identified as well. This event is safe to execute.

The other major class of synchronization algorithms is known as optimistic; Time Warp [4] is the prime example in this category. In an optimistic algorithm, each LP maintains a single input queue in which all of the events arriving from other LPs are stored. The events are processed without any concern for the possible arrival of events with smaller timestamps. If such a straggler event arrives at an LP, the LP restores its state to the one saved just prior to the straggler's timestamp and continues with the simulation from that point, a process known as rolling back. The LP must maintain checkpoints of previous states in order to roll back. In addition, it is necessary to cancel messages which were produced subsequent to the straggler, as they may well be incorrect. In order to do this, each LP maintains an output queue in which it stores copies of messages which it has already sent. Upon the arrival of a straggler, the LP sends these copies to the same LPs which received the original messages. If a message and its copy meet in an input queue, the two messages cancel (annihilate) one another. For this reason, the copy is known as an anti-message. If the anti-message arrives after the original message has been processed, the destination LP rolls back to the time of the anti-message and sends out its own anti-messages.

Clearly, the memory demands imposed by storing copies of LPs' states and maintaining anti-messages in the output queue are onerous. Hence techniques to reduce the amount of memory used by Time Warp are fundamental to its success. One important technique involves the computation of the smallest simulation time to which any LP in the simulation may roll back, known as the Global Virtual Time (GVT). If we define the Local Virtual Time (LVT) of an LP as the timestamp of the last message it processed, then the GVT is the minimum of (1) the LVT values of all of the LPs in the simulation and (2) the smallest timestamp of all of the events which have been sent but not yet processed at a given point in real time. Since no LP can roll back prior to the GVT, all of the memory allocated prior to the GVT may be released. Many algorithms for computing the GVT have been developed [7]. Another technique for reducing memory consumption is to checkpoint the state of an LP periodically instead of after every event.
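
The bookkeeping this requires at a single LP can be sketched as follows; the TimeWarpLP class, its methods and the send callback are illustrative assumptions, not the interface of an actual Time Warp kernel.

    class TimeWarpLP:
        """Sketch of Time Warp checkpointing, rollback and fossil collection."""

        def __init__(self, send):
            self.lvt = 0.0
            self.state_log = [(0.0, {})]  # (timestamp, checkpointed state)
            self.output_queue = []        # (send_time, destination, message)
            self.send = send              # transport callback (an assumption)

        def checkpoint(self, state):
            self.state_log.append((self.lvt, dict(state)))

        def on_straggler(self, straggler_time):
            # Roll back: restore the last state saved before the straggler.
            while len(self.state_log) > 1 and self.state_log[-1][0] >= straggler_time:
                self.state_log.pop()
            self.lvt = self.state_log[-1][0]
            # Cancel optimistically sent messages with anti-messages,
            # which annihilate their originals at the destination.
            kept = []
            for send_time, destination, message in self.output_queue:
                if send_time >= straggler_time:
                    self.send(destination, message, anti=True)
                else:
                    kept.append((send_time, destination, message))
            self.output_queue = kept

        def fossil_collect(self, gvt):
            # No LP can roll back earlier than GVT, so checkpoints and
            # message copies older than GVT may be reclaimed (one
            # checkpoint at or before GVT is kept).
            while len(self.state_log) > 1 and self.state_log[1][0] <= gvt:
                self.state_log.pop(0)
            self.output_queue = [m for m in self.output_queue if m[0] >= gvt]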

In a shared memory multiprocessor, direct cancellation [12] is used to control the use of memory. Here pointers are used to link an event to its descendants, thereby eliminating the need for anti-messages. Recently, a direct cancellation technique for distributed memory environments has been developed [17]. Memory management techniques are also used to reclaim space from LPs so that a stalled simulation may be allowed to continue [5,6,12].
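
A sketch of the pointer structure underlying direct cancellation appears below; the Event class and the recursive cancel routine are illustrative assumptions.

    class Event:
        """Sketch of direct cancellation: events point at their descendants."""

        def __init__(self, timestamp):
            self.timestamp = timestamp
            self.children = []     # events scheduled while processing this one
            self.cancelled = False

    def cancel(event):
        # Follow the pointers instead of sending anti-messages through
        # the message system; all descendants are reached directly.
        event.cancelled = True
        for child in event.children:
            if not child.cancelled:
                cancel(child)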

In recent years the emphasis in the field has shifted from developing efficient synchronization techniques to applying these techniques to real world problems. In the area of computer networks, an ongoing effort is the simulation of the Internet [8]. The intention of the project is to provide a realistic test-bed for the development of new Internet protocols and to locate performance problems. A detailed distributed simulation of the Internet provides an environment in which experimental conditions can be controlled and in which it is possible to replicate experiments. To date, it has been necessary to experiment on the actual network itself.

The simulation of VLSI circuitry provides another fruitful area for the application of distributed simulation. A simulation environment for VHDL circuitry is described in [15]. A similar project for Verilog is underway.

Military simulations are benefiting from advances in distributed simulation. An important application is the uniting of existing simulations of different aspects of combat, e.g. tank warfare, close air support and infantry warfare. The inclusion of live exercises in this environment is also being pursued. The field of distributed interactive simulation (DIS) is devoted to these advances. A number of conferences are devoted to this area, among them the IEEE Real Time and Distributed Interactive Simulation conference.

Yet another area is the simulation of large manufacturing environments, such as the production of VLSI wafers.

The articles in this issue are representative of the advances in these fields. In The Development of Conservative Superstep Protocols for Shared Memory Systems, Gan et al. provide strong evidence for the utility of (conservative) distributed simulation of a wafer fabrication process. The performance of several different synchronous conservative protocols is compared on a number of models based on data sets supplied by Sematech, a consortium of semiconductor manufacturing companies that does research in semiconductor manufacturing for its members. In The Implementation of a Virtual Time Synchronizer for Distributed Databases on a Cluster of Workstations, Boukerche et al. study the performance of a distributed database system implemented on a network of workstations and synchronized by an optimistic protocol. In Applying Multilevel Partitioning to Parallel Logic Simulation, Subramanian et al. describe a partitioning algorithm for logic simulation and examine its performance using the optimistic simulation kernel which lies at the heart of the VHDL simulator referred to above. In Self-Organizing Criticality in Optimistic Simulation of Correlated Systems, Overeinder et al. examine the relationship between the rollback behavior of Time Warp and the physical complexity of Ising spin systems; Ising spin models are used to simulate the magnetization of ferromagnetic metals. Finally, Moradi et al. study the DoD's High Level Architecture (HLA) in the context of a simulation of an air-traffic control system.

Guest Editor
Carl Tropper

References

[1] S. Turner and M. Xu, Performance Evaluation of the Bounded Time Warp Algorithm, Proceedings of the SCS Multiconference on Parallel and Distributed Simulation (PADS), vol. 22, pp. 117-126, 1992.

[2] S. Das and R. Fujimoto, An Adaptive Memory Management Protocol for Time Warp Parallel Simulation, Proceedings of the ACM SIGMETRICS Conference on Measurement and Modeling of Computer Systems, vol. 22, pp. 201-210, 1994.

[3] R. Fujimoto, Time Warp on a Shared Memory Multiprocessor, Transactions of the Society for Computer Simulation, vol. 6, no. 3, pp. 211-239, July 1989.

[4] D. A. Jefferson, Virtual Time, ACM Transactions on Programming Languages and Systems, vol. 7, no. 3, pp. 404-425, July 1985.

[5] D. A. Jefferson, Virtual Time II: The Cancelback Protocol for Storage Management in Time Warp, Proceedings of the 9th Annual ACM Symposium on Principles of Distributed Computing, pp. 75-90, 1990.

[6] Y.-B. Lin and E. Lazowska, Reducing the State Saving Overhead for Time Warp Parallel Simulation, Technical Report 90-02-03, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 1990.

[7] F. Mattern, Efficient Algorithms for Distributed Snapshots and Global Virtual Time Approximation, Journal of Parallel and Distributed Computing, vol. 18, pp. 423-434, 1993.

[8] D. Nicol, J. Crowe, and A. Ogielski, Modelling of the Global Internet, Computing in Science and Engineering, vol. 1, no. 1, pp. 42-50, Jan.-Feb. 1999.

[9] K. Chandy and J. Misra, Distributed Simulation: A Case Study in the Design and Verification of Distributed Programs, IEEE Transactions on Software Engineering, vol. SE-5, no. 5, pp. 440-452, Sept. 1979.

[10] R. Fujimoto, Parallel and Distributed Simulation Systems, Wiley Interscience, 2000.

[11] H. Avril and C. Tropper, Clustered Time Warp and Logic Simulation, Proceedings of the 9th Workshop on Parallel and Distributed Simulation (PADS95), pp. 112-119, 1995.

[12] S. Das, R. Fujimoto, K. Panesar, D. Allison, and M. Hybinette, GTW: A Time Warp System for Shared Memory Multiprocessors, Proceedings of the 1994 Winter Simulation Conference, 1994.

[13] J. Misra, Distributed Discrete Event Simulation, ACM Computing Surveys, vol. 18, no. 1, pp. 39-65, March 1986.

[14] W. Cai, E. Letertre, and S. J. Turner, Dag Consistent Parallel Simulation: A Predictable and Robust Conservative Algorithm, Proceedings of the 11th Workshop on Parallel and Distributed Simulation (PADS97), pp. 178-181, Lockenhaus, Austria, June 1997.

[15] P. Wilsey et al., Analysis and Simulation of Mixed Technology VLSI Systems, Journal of Parallel and Distributed Computing, special issue, to appear, April 2002.

[16] L. Lamport, Time, Clocks, and the Ordering of Events in a Distributed System, Communications of the ACM, vol. 21, no. 7, pp. 558-565, July 1978.

[17] J.-L. Zhao and C. Tropper, The Dependence List in Time Warp, Proceedings of the Workshop on Parallel and Distributed Simulation (PADS01), to appear, Los Angeles, California, May 2001.

[18] D. E. Martin, R. Radhakrishnan, D. M. Rao, M. Chetlur, K. Subramani, and P. A. Wilsey, Analysis and Simulation of Mixed-Technology VLSI Systems, Journal of Parallel and Distributed Computing, in press.

[19] A. Boukerche and C. Tropper, Parallel Simulation on the Hypercube Multiprocessor, Distributed Computing, vol. 8, no. 4, pp. 181-191.
