
Evolving Algebras

A BRICS Mini-Course
August 7-10, 1995

Lectures by
Yuri Gurevich
University of Michigan

Egon Börger
University of Pisa


Course Contents

Evolving Algebras From First Principles, Yuri Gurevich

Computation models and commercial specification methods seem to be worlds apart. The evolving algebra project started as an attempt to bridge the gap by improving on Turing's thesis. We sought more versatile machines able to simulate arbitrary algorithms in a direct and coding-free way on their natural abstraction level (rather than implementing algorithms on lower abstraction levels).

The EA thesis asserts that evolving algebras are such versatile machines and suggests an approach to the correctness problem in mathematical modeling of discrete dynamic systems: Construct an evolving algebra that reflects the given system so closely that the correctness can be established by observation and experimentation. (There are tools for running evolving algebras.) The use of the successive refinement method is facilitated by the ability of evolving algebras to reflect arbitrary abstraction levels.
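
As a rough illustration of the computation model (a minimal Python sketch of our own, not part of the course material; the encoding of a state as a dictionary of nullary functions and the gcd example are assumptions made for illustration), an evolving algebra program is a finite set of guarded updates, and at each step all enabled updates are fired simultaneously on the current state:

    def step(state, rules):
        """Fire every rule whose guard holds; updates are collected first so that
        all guards and right-hand sides read the same (old) state, then applied at once."""
        updates = {}
        for guard, update in rules:
            if guard(state):
                updates.update(update(state))
        state.update(updates)
        return state

    # Euclid's gcd algorithm as two guarded update rules over nullary functions a, b
    # (an illustrative example, not taken from the lectures).
    gcd_rules = [
        (lambda s: s["a"] > s["b"], lambda s: {"a": s["a"] - s["b"]}),
        (lambda s: s["b"] > s["a"], lambda s: {"b": s["b"] - s["a"]}),
    ]

    state = {"a": 12, "b": 18}
    while state["a"] != state["b"]:   # run until no rule is enabled
        step(state, gcd_rules)
    print(state["a"])                 # prints 6, the gcd of 12 and 18

The rules fire alternately until neither guard holds, at which point a and b both equal the gcd; richer evolving algebras differ mainly in allowing dynamic functions of higher arity and more general guards.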

There is an active and growing evolving algebra community. Evolving algebras have been used to specify languages (e.g. C++, Prolog and VHDL), to specify real and virtual architectures (e.g. APE, PVM and the Transputer), to validate standard language implementations (e.g. of Prolog and Occam), to verify numerous distributed and/or real-time protocols, to prove complexity results, and so on. Some examples may be found in the proceedings of the IFIP 1994 World Computer Congress.

In the lecture series, we define evolving algebras from scratch, argue for the EA thesis and survey EA applications, but most of the time will be devoted to case studies of distributed algorithms.

Literature (roughly in the order in which it will be used during the course, except that the guide [4] will be used in all four lectures): [1,2,3,4,5,6,7,8].

Applications of Evolving Algebras, Egon Börger

The Logical Structure of Pipelining in RISC Architectures

We provide a logical specification of Hennessy and Patterson's RISC processor DLX and prove the correctness of its pipelined model with respect to the sequential ground model. In the first part we concentrate on the provably correct refinement of the sequential ground model DLX to the pipelined parallel version DLXp, in which structural hazards are eliminated. We then extend the result to the full pipelined model DLXpipe, in which data and control hazards are also treated.

Our specification proceeds by stepwise refinement; the proof method is modular. The specification and verification method is general and can be applied to other microprocessors and pipelining techniques as well.

(Joint work with S. Mazzanti)
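
To convey the flavour of the correctness statement, the following minimal Python sketch (our own illustration, not the DLX, DLXp or DLXpipe models of the lecture; the three-stage pipeline, the toy instruction format and the register names are assumptions) checks a pipelined interpreter that stalls on read-after-write hazards against its sequential ground model:

    PROG = [("add", "r1", "r0", "r0"),   # r1 := r0 + r0
            ("add", "r2", "r1", "r1")]   # r2 := r1 + r1  (reads the result of the previous add)

    def run_sequential(regs):
        """Sequential ground model: one instruction per step, no overlap."""
        for _, dst, a, b in PROG:
            regs[dst] = regs[a] + regs[b]
        return regs

    def run_pipelined(regs):
        """Three-stage pipeline (decode / execute / writeback): decoding is delayed
        while an instruction writing one of the source registers has not yet
        reached writeback, i.e. a bubble is inserted on a data hazard."""
        pc, decode, execute = 0, None, None
        while pc < len(PROG) or decode or execute:
            if execute:                                   # writeback stage
                dst, val = execute
                regs[dst] = val
            execute = None
            if decode:                                    # execute stage
                dst, a_val, b_val = decode
                execute = (dst, a_val + b_val)
            decode = None
            if pc < len(PROG):                            # decode stage
                _, dst, a, b = PROG[pc]
                hazard = execute is not None and execute[0] in (a, b)
                if not hazard:                            # otherwise stall: insert a bubble
                    decode = (dst, regs[a], regs[b])      # read operands and issue
                    pc += 1
        return regs

    init = {"r0": 1, "r1": 0, "r2": 0}
    assert run_sequential(dict(init)) == run_pipelined(dict(init))   # both yield r1 = 2, r2 = 4

The assertion expresses, in miniature, the refinement claim: with the stall rule in place, the pipelined run leaves the registers exactly as the sequential run does.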

Correctness of Compiling Occam Programs to Transputer Code

We present a mathematical analysis of the Transputer instruction set architecture for executing Occam, together with a correctness proof for a general scheme for compiling Occam programs into Transputer code. We start from a formal Occam model of an abstract processor running a high and a low priority queue of Occam processes, which formalizes the semantics of Occam at the abstraction level of atomic Occam instructions. We then present increasingly refined levels of Transputer semantics, proving correctness (and, where possible, completeness) for each refinement step. The proof assumptions collected along the way turn out to be natural conditions for a compiler to be correct.

The case of Occam and the Transputer is paradigmatic: the proposed mathematical framework for the study of compilation techniques can also be used for other programming languages with nondeterminism and concurrency, and it provides a rigorous technique for the design and analysis of instruction set architectures.

(Joint work with I. Durdanovic, University of Paderborn)
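
The starting point, an abstract processor running a high and a low priority queue of processes, can be pictured with the following minimal Python sketch (our own illustration, not the Occam or Transputer model of the lecture; the generator-based processes and the simple requeue-at-the-back policy are assumptions):

    from collections import deque

    def run(high, low):
        """high, low: deques of generator-based processes, each yield being one
        atomic instruction; the low-priority queue runs only when `high` is empty."""
        trace = []
        while high or low:
            queue = high if high else low
            p = queue.popleft()
            try:
                trace.append(next(p))   # perform one atomic step of the process
                queue.append(p)         # not finished: requeue at the back
            except StopIteration:
                pass                    # process terminated: drop it
        return trace

    def proc(name, steps):
        for i in range(steps):
            yield f"{name}.{i}"

    print(run(deque([proc("hi", 2)]), deque([proc("lo", 2)])))
    # -> ['hi.0', 'hi.1', 'lo.0', 'lo.1']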

A Mathematical Specification of the APE100 Parallel Architecture

The APE100 parallel processor has been developed by a group of physicists in Pisa and Rome as a dedicated machine for floating-point intensive scientific applications, in particular numerical simulation in lattice gauge theory. It constitutes an interesting example of a lock-step parallel architecture which, assisted by an appropriate high-level language, provides a very smooth transition from sequential to parallel programming and computing.

We give a full mathematical description of the APE100 architecture. The description consists of several models, at different levels of abstraction, corresponding to views of the architecture provided by different languages used within the APE compilation chain (a crucial part of the software environment of APE).

The primary model is based on the relevant subset of APESE, a high-level language specially designed for APE that constitutes the source language of the compilation chain: APESE closely reflects the APE model of parallel execution and is therefore adequate for an abstract description of its main features. By stepwise refinement we descend from this APESE model to the hardware level of APE, going through the macro assembler and micro code levels down to optimized loadable executable code.

Using evolving algebras as the specification method makes these mathematical models of the APE100 architecture understandable to the user of the machine: there is no combinatorial explosion, and only basic facts from programming and machine architecture are presupposed.

(Joint work with G. Del Castillo)
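
The lock-step execution model underlying APE100 can be pictured with the following minimal Python sketch (our own illustration, not one of the lecture's models; the number of nodes and the tiny instruction set are assumptions): in each global step every node executes the same instruction on its own local data.

    NODES = 4
    local = [{"x": float(i), "y": 0.0} for i in range(NODES)]   # local memory per node

    program = [("mul", "y", "x", "x"),    # y := x * x on every node
               ("add", "y", "y", "x")]    # y := y + x on every node

    for op, dst, a, b in program:         # one global step per instruction
        for mem in local:                 # all nodes fire in lock step
            mem[dst] = mem[a] * mem[b] if op == "mul" else mem[a] + mem[b]

    print([mem["y"] for mem in local])    # -> [0.0, 2.0, 6.0, 12.0]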

Programme

Monday August 7, 1995, 10:15-12:00

The EA computation model

Tuesday August 8, 1995, 10:15-12:00

Sequential Ealgebras

Tuesday August 8, 1995, 14:15-16:00

The logical structure of pipelining in RISC architectures

Wednesday August 9, 1995, 10:15-12:00

Parallel and distributed Ealgebras

Wednesday August 9, 1995, 14:15-16:00

Correctness of Compiling Occam Programs to Transputer Code

Thursday August 10, 1995, 10:15-12:00

EA Proofs

Thursday August 10, 1995, 14:15-16:00

A Mathematical Specification of the APE100 Parallel Architecture

References

  1. Yuri Gurevich. Evolving Algebras: An Attempt to Discover Semantics. In EATCS Bulletin, volume 43, pages 264-284. European Assoc. for Theor. Computer Science, February 1991. (The Tutorial).
  2. Andreas Blass and Yuri Gurevich. Evolving Algebras and Linear Time Hierarchy. In B. Pehrson and I. Simon, editors, IFIP 1994 World Computer Congress, volume I: Technology and Foundations, pages 383-390. Elsevier, Amsterdam, 1994.
  3. Yuri Gurevich. Evolving Algebras. In B. Pehrson and I. Simon, editors, IFIP 1994 World Computer Congress, volume I: Technology and Foundations, pages 423-427. Elsevier, Amsterdam, 1994.
  4. Yuri Gurevich. Evolving Algebra 1993: Lipari Guide. In E. Börger, editor, Specification and Validation Methods. Oxford University Press, 1995.
  5. Yuri Gurevich and James K. Huggins. The Semantics of the C Programming Language. In CSL '92 (Computer Science Logic), number 702 in Lecture Notes in Computer Science, pages 274-308. Springer-Verlag, Berlin, 1993.
  6. Yuri Gurevich and James K. Huggins. Evolving Algebras and Partial Evaluation. In B. Pehrson and I. Simon, editors, IFIP 1994 World Computer Congress, volume I: Technology and Foundations, pages 587-592. Elsevier, Amsterdam, 1994.
  7. Yuri Gurevich and James K. Huggins. Equivalence is in the Eye of the Beholder. University of Michigan EECS Department Technical Report, forthcoming; to be submitted to Theoretical Computer Science.
  8. Egon Börger, Yuri Gurevich, and Dean Rosenzweig. The Bakery Algorithm: Yet Another Specification and Verification. In E. Börger, editor, Specification and Validation Methods. Oxford University Press, 1995.

Lecture Notes

BRICS-NS-95-4

Registration

The course and lectures are open to all. BRICS can assist in arranging accommodation, and possibly with local expenses in a few needy cases. For further information and to register your attendance, please contact BRICS@brics.dk.