Computer Science Seminars, 1996-97


Semester 1 External Seminars

Extended ML: a framework for specification and development of modular Standard ML programs.

Don Sannella (LFCS, Edinburgh University)

Extended ML (EML) is a framework for the formal development of programs in the Standard ML (SML) functional programming language from high-level specifications of their required input/output behaviour. EML is a completely formal framework with an extensively developed mathematical basis in the theory of algebraic specifications. It strongly supports "development in the large", producing modular programs consisting of an interconnected collection of generic units. The EML framework includes a methodology for formal program development which establishes a number of ways of proceeding from a given specification of a programming task towards a program. Each such step (modular decomposition, etc.) gives rise to one or more proof obligations which must be discharged in order to establish the correctness of that step.

The EML language is a wide-spectrum language which encompasses both specifications and executable programs in a single unified framework. It is a simple extension of the SML programming language in which axioms are permitted in module interfaces and in place of code in module bodies. This allows all stages in the development of a program to be expressed in the EML language, from the initial high-level specification, through intermediate stages in which specification and program are intermingled, to the final program itself.
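As a rough illustration (a hypothetical STACK example, with the EML concrete syntax approximated), an EML signature can state axioms alongside the usual SML value specifications:

signature STACK =
sig
  type 'a stack
  val empty : 'a stack
  val push  : 'a * 'a stack -> 'a stack
  val pop   : 'a stack -> 'a stack
  val top   : 'a stack -> 'a
  (* EML extension: properties stated in the interface *)
  axiom forall (x, s) => top (push (x, s)) = x
  axiom forall (x, s) => pop (push (x, s)) = s
end

A structure matching STACK may likewise begin life with axioms, or a placeholder, in place of code for pop and top, to be replaced during development by SML code proven to satisfy the stated properties.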

Formally developing a program in EML means writing a high-level specification of a generic SML module and then refining this specification top-down by means of a sequence (actually, a tree) of development steps until an executable SML program is obtained. The development has a tree-like structure since one of the ways to proceed from a specification is to decompose it into a number of smaller specifications which can then be independently refined further. In programming terms, this corresponds to implementing a program module by decomposing it into a number of independent sub-modules. The end-product is an interconnected collection of generic SML modules, each with a complete and accurate specification of its interface with the rest of the system. The explicit interfaces enable correct reuse of the individual modules in other systems, and facilitate maintainability by making it possible to localize the effect on the system of subsequent changes in the requirements specification.

I will give an overview of the Extended ML project. Enough background will be given about Standard ML to make the talk self-contained.

Recursion and Petri nets

Maciej Koutny (Newcastle University)

This talk describes a general theory underpinning the denotational Petri net semantics of a process algebra similar to CCS. The main part of the talk describes a denotational approach to the Petri net semantics of recursive expressions. A domain of nets is identified such that the solution of a given recursive equation can be found by fixpoint approximation from some suitable starting point. In turn, a suitable starting point can be found by fixpoint approximation on a derived domain of sets of trees.
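In standard domain-theoretic terms (a sketch of the general shape, not of the talk's specific construction), a recursive equation X = F(X), with F continuous on the identified domain of nets, is solved by fixpoint approximation from a starting point N_0:

  N_0 \sqsubseteq F(N_0) \sqsubseteq F^2(N_0) \sqsubseteq \cdots, \qquad \mathrm{fix}(F) = \bigsqcup_{n \geq 0} F^n(N_0)

with N_0 itself obtained by a similar approximation on the derived domain of sets of trees.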

Preservation theorems in finite model theory

Natasha Alechina (Birmingham University)

I am going to talk about joint work with Yuri Gurevich. We look at various preservation theorems of classical logic (principally the Łoś-Tarski theorem) which fail in the context of finite models, and examine the extent of their failure. In some cases, even if the old correspondence between syntactical and semantical properties does not hold, one can find a new semantical counterpart for the syntactical property, and vice versa - a new syntactical counterpart for the semantical property.
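For reference, the Łoś-Tarski theorem states that a first-order sentence is preserved under substructures if and only if it is logically equivalent to a universal sentence:

  \varphi \text{ preserved under substructures} \iff \varphi \equiv \forall x_1 \cdots \forall x_n\, \psi, \quad \psi \text{ quantifier-free}

The right-to-left direction is immediate and survives in the finite, but the left-to-right direction fails when models, preservation and equivalence are all relativised to finite structures (a failure usually credited to Tait).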

Models for Explicit Substitutions in Linear Logic

Eike Ritter (Birmingham University)

We present a novel categorical model of intuitionistic linear logic that is geared towards the construction of categorical abstract machines. Standard categorical models, like Bierman's and Seely's, use the tensor product to model both contexts and the tensor products of the logic. We will motivate why the separation of these concerns is important for the design and for the correctness proof of categorical abstract machines. In the case of the simply-typed lambda-calculus, indexed categories provide the right kind of framework for this separation and this corresponds to the notion of explicit substitutions in the calculus. But because indexed categories are inherently non-linear, they cannot be used directly for modelling linear logic. We present versions of Lambek's multicategories, to deal (in parallel) with intuitionistic and linear logic contexts. Then we modularly add the logical connectives tensor and linear implication (or intuitionistic conjunction and implication) and exponentials to obtain a categorical structure that does distinguish contexts and tensor (or categorical) products. This is joint work with Valeria de Paiva.
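To fix intuitions (a generic sketch in the style of calculi of explicit substitutions, not the talk's linear calculus): beta-reduction no longer performs substitution in one step but records it as part of the term, with separate rules pushing it inwards,

  (\lambda x.\,t)\,u \to t\langle x := u \rangle, \qquad (t\,s)\langle x := u \rangle \to t\langle x := u \rangle\; s\langle x := u \rangle, \qquad x\langle x := u \rangle \to u

where t<x := u> is a term constructor rather than a meta-level operation; it is this constructor that indexed categories, and the linear replacements for them discussed in the talk, are designed to model.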

Specifying and Verifying AMULET in CCS

Graham Birtwistle (Leeds University)

Asynchronous hardware design is attracting renewed research and industrial interest because of its compositional properties and potential for low power consumption (much of today's hardware is battery-driven, and asynchronous circuits only do work when there is work to do).

AMULET is an (already working) asynchronous version of the ARM micro designed by Steve Furber's team at Manchester University. ARM is now a standard TI cell, used by IBM to control disks, and in the Apple Newton. Its outstanding characteristic is perhaps its very low power consumption (132 MIPS per watt). The medium-term expectation is that AMULET will fare even better.

The talk will present completed work on specifying the major subsystems of AMULET (address interface, register bank, execution pipeline, data interface) in CCS and verifying them in the modal mu-calculus, at the register transfer level. CCS does not handle data well, so we concentrated our efforts on the way blocks (a)synchronise. Once the major subsystems were specified, they were checked for deadlock and livelock, bus contention, etc. using the modal mu-calculus.
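As a flavour of the style (a minimal sketch, not taken from the AMULET models), a request/acknowledge handshake between two blocks can be written in CCS as

  Sender   = 'req . ack . Sender
  Receiver = req . 'ack . Receiver
  System   = (Sender | Receiver) \ {req, ack}

where 'a denotes output on port a; properties such as freedom from deadlock are then stated of System in the modal mu-calculus and checked mechanically.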

The work was carried out with Ms Ying Liu (now at BNR) and with the full cooperation of the Manchester AMULET team.

Using CSP for authentication protocol analysis: the Needham-Schroeder Public Key Protocol.

Steve Schneider (Royal Holloway)

Authentication protocols are notoriously difficult to understand and reason about, as shown by the number of flaws in published protocols. This talk discusses recent work on the application of CSP to the modelling, analysis, and verification of such protocols. Work in this area until now has focussed (successfully) on the search for attacks. The work presented here is concerned with the construction of a theory for the CSP framework, within which protocols may be proven correct. Such a proof also provides insight into why a protocol is correct, and makes assumptions explicit. The material is illustrated with the emerging benchmark protocol: the Needham Schroeder Public Key Protocol.
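For reference, the core of the Needham-Schroeder Public Key Protocol is a three-message nonce handshake (Ka and Kb are the public keys of A and B; Na and Nb are nonces):

  Message 1.  A -> B : { Na, A }_Kb
  Message 2.  B -> A : { Na, Nb }_Ka
  Message 3.  A -> B : { Nb }_Kb

Lowe's man-in-the-middle attack on this exchange, and his corrected version (Message 2 becomes { Na, Nb, B }_Ka), are by now the standard test cases for this kind of CSP analysis.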

Reconfigurable mesh architectures.

Heiko Schroder (Loughborough University)

The diameter is an important parameter in the evaluation of computer architectures. The new concept of dynamically reconfigurable meshes leads to a diameter of 1: each processor can locally connect its bus segments to its neighbours', fusing them into longer buses, so any processor can reach any other in a single step. This is better than the hypercube and as good as the PRAM model.

In this talk, sorting in constant time is presented to demonstrate that the reconfigurable mesh model is more powerful than the hypercube and the PRAM. Such algorithms are of little practical value due to the high number of processing elements they need. Provably optimal algorithms for the reconfigurable mesh that are relevant for practical use are given for sorting, routing and sparse matrix multiplication.

Semester 2 External Seminars


Duration Calculus

Anders Ravn (Technical University of Denmark)

The duration calculus is a dense-time, interval temporal logic. It was developed by Zhou, Hoare and Ravn to reason about properties of real-time systems at a requirements level. A distinctive feature of the logic is that atomic formulas are formed by relations over real numbers and durations of states in an interval. A state is here interpreted as a Boolean function of time, and its duration within an interval is the measure of the set of points in the interval at which the state is true. Properties of subintervals are expressed using a dense-time version of the binary `chop' connective of Moszkowski's ITL. With small extensions, the logic can also express properties of hybrid systems where differential equations govern evolutions of continuous states.
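The standard illustration from the duration calculus literature is the gas burner requirement: in any observation interval of length at least 60 seconds, gas may leak for at most one twentieth of the time,

  \ell \geq 60 \Rightarrow 20 \int \mathrm{Leak} \leq \ell

where \ell is the length of the interval and \int Leak the duration of the Leak state within it.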

The talk covers the underlying mathematical structures, the syntax, semantics and proof system for the language. The talk will also summarise completeness and decidability results for the language.

Time permitting, its application in a specialised design calculus will be outlined. Finally, some experience with mechanising the logic will be summarised.

Event and cycle semantics of hardware description languages

Mike Gordon (Cambridge University)

Modern hardware description languages (HDLs) are interpreted in different ways for different purposes. Classical simulation uses an event semantics to model detailed asynchronous behaviour. However, current compilers from HDLs to circuits generally target clocked synchronous implementations and so use a synchronous cycle-based semantics, as do high-speed cycle simulators. The talk will start with a tutorial on event and cycle semantics (aimed at computer scientists, not electrical engineers) and then outline current work at Cambridge on developing semantics for Verilog, the most widely used HDL (62 percent market share).

Formal Specification and Verification using Higher-Order Algebraic Methods

Jason Steggles (Newcastle University)

Higher-order algebra provides a natural framework in which to formally develop computing systems and has been shown to be substantially more expressive than first-order algebraic methods. In this talk we present a brief introduction to higher-order algebra and demonstrate the application of higher-order algebraic methods by considering a detailed case study of the specification and verification of a dataflow algorithm for computing the Hamming stream. This case study illustrates the expressive power of higher-order algebraic methods by making use of a non-constructive equational specification in which a discontinuous second-order search functional is used to model first-order existential quantification.
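The Hamming stream (all numbers of the form 2^i 3^j 5^k in ascending order) is the classic dataflow example: the output is 1 followed by the ordered merge of three rescaled copies of the output itself. Purely as an executable illustration of the algorithm being specified (the talk works with equational specifications, not SML), a lazy-stream version in Standard ML:

(* Lazy streams via thunks. *)
datatype 'a stream = Cons of 'a * (unit -> 'a stream)

(* Multiply every element of a stream by k. *)
fun scale k (Cons (x, xs)) = Cons (k * x, fn () => scale k (xs ()))

(* Ordered merge, dropping duplicates. *)
fun merge (s as Cons (x, xs)) (t as Cons (y, ys)) =
      if x < y then Cons (x, fn () => merge (xs ()) t)
      else if y < x then Cons (y, fn () => merge s (ys ()))
      else Cons (x, fn () => merge (xs ()) (ys ()))

(* The stream is 1 followed by merged scaled copies of itself.
   (Recomputes its own prefix; fine for illustration.) *)
fun hamming () =
      Cons (1, fn () =>
        merge (scale 2 (hamming ()))
              (merge (scale 3 (hamming ())) (scale 5 (hamming ()))))

fun take 0 _ = []
  | take n (Cons (x, xs)) = x :: take (n - 1) (xs ())

(* take 10 (hamming ()) = [1, 2, 3, 4, 5, 6, 8, 9, 10, 12] *)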

A Calculus for Cryptographic Protocols: The Spi Calculus

Andrew Gordon (Cambridge University)

We introduce the spi calculus, an extension of the pi calculus designed for the description and analysis of cryptographic protocols. We show how to use the spi calculus, particularly for studying authentication protocols. The pi calculus (without extension) suffices for some abstract protocols; the spi calculus enables us to consider cryptographic issues in more detail. We represent protocols as processes in the spi calculus and state their security properties in terms of coarse-grained notions of protocol equivalence. This is joint work with M. Abadi.
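As a flavour (notation approximated from the Abadi-Gordon papers), a one-message protocol in which A sends datum M to B under a shared key K, created fresh by the restriction operator, can be written

  A(M)    = c<{M}_K>.0
  B       = c(x). case x of {y}_K in F(y)
  Inst(M) = (new K)(A(M) | B)

where {M}_K is M encrypted under K, and the case construct attempts decryption with K, binding the plaintext to y. Secrecy and authenticity of M are then phrased as equivalences between Inst(M) and suitably modified instances.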

A Fully Abstract Game Semantics for Idealized Algol

Guy McCusker (Oxford University) and Samson Abramsky (Edinburgh University)

The manipulation of objects with state which changes over time is all-pervasive in computing. Perhaps the simplest examples of such objects are the program variables of classical imperative languages. An important strand of work within the study of such languages, pioneered by John Reynolds, focusses on the study of "Idealized Algol", an elegant synthesis of imperative and functional features.

This talk presents a novel semantics for Idealized Algol using games, which is quite unlike traditional denotational models of state. The model takes into account the irreversibility of changes in state, and makes explicit the difference between copying and sharing of entities. As a formal measure of the accuracy of our model, we obtain a full abstraction theorem for Idealized Algol.

A brief history of selection

Mike Paterson (Warwick University)

The selection problem, determining the k-th largest of a set of n elements, is less fundamental than sorting but has still been studied extensively over several decades.

Our focus will be on the worst-case complexity of selection and median-finding - measured by the number of comparisons required. In contrast to sorting, there is a considerable gap between the upper and lower bounds known for this problem. Although the comparison count is not of prime importance for overall efficiency in most applications, some of the lower bound methods and algorithms derived over the last twenty-five years are very pretty and of combinatorial interest.

There has been recent progress in the selection problem, and in median-finding in particular, after a lull of ten years. In this talk I shall review some ancient and modern results on this problem, and suggest possibilities for future research.
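To give a sense of the numbers (a summary from the literature, not from the talk): a classical algorithm of Schönhage, Paterson and Pippenger finds the median in about 3n comparisons, against a lower bound of roughly 2n due to Bent and John; the recent progress referred to above is presumably the mid-1990s work of Dor and Zwick, which narrowed the gap from both sides (an upper bound of about 2.95n and a lower bound of (2 + epsilon)n for some small epsilon > 0).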

Extended RTL for Hybrid Systems Controller Design

Rogerio de Lemos (Newcastle University)

Extended Real Time Logic (ERTL) is proposed for the modelling and analysis of hybrid systems, taking as its basis Real Time Logic (RTL). RTL is a first-order logic with uninterpreted predicates which relate events of a system to the times of their occurrence, thereby providing the means for reasoning about absolute timing properties of real-time systems. The extensions provided by ERTL allow reasoning about system behaviour in both value and time domains through predicates defined in terms of system variables. We illustrate the use of ERTL through the modelling and analysis of an industrial press.
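For flavour (RTL in the style of Jahanian and Mok; the press model itself is not reproduced here), RTL's occurrence relation @(e, i) denotes the time of the i-th occurrence of event e, so a 30-time-unit deadline from each request to the corresponding response is written

  \forall i\; @(\mathit{response}, i) \leq @(\mathit{request}, i) + 30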

How Hard is Constraint Satisfaction?

Peter Jeavons (Royal Holloway)

One way to unify a great many traditional computational problems is to view them as examples of a general form of problem known as a `constraint satisfaction problem'. Examples include: timetabling and scheduling problems, electronic circuit design, industrial optimisation problems, machine vision, and database retrieval.

In this talk we examine the insights which arise from seeing problems like these in a common abstract framework, consisting of variables, values, and `constraints'. In particular, we will look at new techniques for studying the computational complexity of combinatorial problems by examining structural and algebraic properties of the constraints. Using these techniques we identify tractable cases which arise when the constraints are restricted in various ways.
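Concretely, an instance of the constraint satisfaction problem may be taken to be a triple (V, D, C): a set V of variables, a domain D of values, and a set C of constraints, each constraint pairing a tuple of variables with a relation listing its allowed combinations of values; the question is whether some assignment of values to variables satisfies every constraint. Graph 3-colourability, for example, is the special case in which D has three elements and each edge contributes a disequality constraint; it is already NP-complete, while the analogous problem with two colours is tractable.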

This research provides a new way of understanding the distinction between polynomial problems and NP-complete problems, and so throws new light on the boundary between tractability and intractability.

Reasoning about mobile processes

David Walker (Warwick University)

The pi-calculus, a descendant of the Calculus of Communicating Systems (CCS), is a theory in which one may naturally express systems whose structure changes during computation. This talk will describe the calculus and some techniques for reasoning about systems expressed in it. The techniques will be illustrated by considering the correctness of transformation rules for programs of a concurrent object-oriented language.
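The distinctive feature of the pi-calculus is that channel names are themselves the transmissible data. In the reduction

  x<z>.P | x(y).Q  -->  P | Q{z/y}

(output on x conventionally written with an overbar), the name z is passed along channel x, and the receiver may then use z as a channel in Q{z/y}; thus the communication topology of a system can change as it runs, which is what makes the calculus suitable for mobile processes and for objects whose references are passed around.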

On the intuitionistic force of classical search

Lincoln Wallen (Oxford University)

The combinatorics of proof-search in classical propositional logic lies at the heart of most efficient proof procedures because the logic admits least-commitment search. The key to extending such methods to quantifiers and non-classical connectives is the problem of recovering this least-commitment principle in the context of the non-classical/non-propositional logic; i.e., characterizing when a least-commitment classical search yields sufficient evidence for provability in the non-classical/non-propositional logic.

I shall present such a characterization for the (-->, /\)-fragment of intuitionistic logic using the lambda-mu-calculus: a system of realizers for classical free deduction due to Parigot.

I will show how this characterization can be used to define a notion of uniform proof, and a corresponding proof procedure, which extends that of Miller et al. to multiple-conclusioned sequent systems. The procedure is sound and complete for the fragment of intuitionistic logic considered and enjoys the combinatorial advantages of search in classical logic.

As an application, I shall briefly indicate how (partial) intuitionistic realizers may be computed for classical resolution-based search, yielding a measure of the intuitionistic force of classical resolution.

This lecture is a report of joint work with Eike Ritter and David Pym.

Probabilistic Data Refinement

Annabelle McIver (Oxford University)

We consider programs that contain a probabilistic choice operator, selecting one of its operands for execution based on an explicitly given distribution. The usual demonic (non-deterministic) choice is retained. For example, the probabilistic choice

prog [p] prog'

executes prog with probability p and prog' with probability (1-p), whereas the demonic choice

prog [] prog'

also executes either prog or prog', but the choice between them is completely unpredictable. The interaction between the two forms of choice makes data refinement more challenging, but also more interesting: the issue of 'who can see what' in a program becomes especially prominent.

I will explain the probabilistic/demonic model, illustrate data refinement between probabilistic data types and demonstrate some interesting problems specific to this context.
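One law that illustrates the interaction (standard in the probabilistic/demonic refinement calculus of McIver, Morgan and colleagues): demonic choice is refined by every probabilistic choice between the same operands,

  prog [] prog'   is refined by   prog [p] prog'   (for any p in [0,1])

since the demon is free to resolve its choice by flipping a p-biased coin; the converse refinement fails in general, and it is this asymmetry that data refinement must respect.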

Proof Systems for the pi-Calculus

Huimin Lin (Sussex University)

We examine proof systems for various bisimulation equivalences (late/early, strong/weak) in several fragments of the pi-calculus (with/without recursion, with/without mismatch), and discuss their soundness and completeness. The emphasis will be on the Unique Fixpoint Induction rule and the axioms for removing unguarded recursions.
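For reference, Unique Fixpoint Induction in the form familiar from CCS-style axiomatisations (the pi-calculus versions in the talk refine this): if the variable X is guarded in the expression E, then

  from  P = E{P/X}  infer  P = fix(X = E)

i.e. a guarded equation has a unique solution up to the equivalence being axiomatised; removing unguarded recursions is precisely what makes the rule applicable.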

Details of the Internal Seminars, Lecture Courses, and Reading Groups are available separately.

