Computer Science Seminars, 98-99


Details are available

Seminars

Semester 1

The default time and place for seminars is Friday, 2:30pm in room G3 of the MCS Department (which is in the Mathematics Building). Additional seminars given by members of the Department often take place on Thursdays, 1:30pm in room G3.


Abstracts

Generalising Universal Algebra

Neil Ghani (University of Leicester, MCS)

Universal algebra concerns itself with the study of sets carrying extra structure: this structure takes the form of operations and equations. However, in computer science we are often interested in defining structure over other kinds of mathematical object. For instance, domain theory typically deals with order structures carrying extra operations, while models of (typed) lambda calculi tend to consist of categories with extra structure. I shall explain how we can generalise the notion of universal algebra to cover these examples.
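
As a small, hypothetical illustration of what "operations and equations" over a carrier set look like (my own sketch in Haskell, not material from the talk), a monoid can be presented as a type with two operations whose defining equations are recorded as comments, since the compiler does not check them:

    -- A monoid: a carrier type m with a binary operation and a unit.
    class MyMonoid m where
      unit :: m
      op   :: m -> m -> m
      -- Equations (laws), stated but not enforced by the compiler:
      --   op unit x     == x
      --   op x unit     == x
      --   op x (op y z) == op (op x y) z

    -- One instance: the integers under addition.
    instance MyMonoid Integer where
      unit = 0
      op   = (+)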

Bisimulation in classical lambda calculus: Bohm trees and bisimulation up to context

Soeren Lassen (University of Cambridge)

The talk is about operational methods for reasoning about program equivalences in the pure untyped lambda calculus. Notions of bisimulation and trees characterising operational equivalences are surveyed, covering classical and lazy call-by-name theories. The classical theory is based on reduction to head normal form, including reduction under lambda abstraction. It is treated in depth in Barendregt's book. Classical operational equivalence is characterised by Bohm trees (modulo eta conversion). The lazy theory is based on reduction to weak head normal form, where reduction stops at lambda abstraction. It was introduced and studied by Abramsky, Ong and Howe. The principal operationally-based method for reasoning about lazy operational equivalence is the co-induction principle of applicative bisimulation.
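
A small example, not taken from the abstract, may help fix the distinction between the two notions of normal form: the term \(\lambda x.\, (\lambda y.\, y)\, x\) is already in weak head normal form, because it is a lambda abstraction and lazy reduction stops there; it is not in head normal form, because classical reduction continues under the abstraction and yields \(\lambda x.\, x\).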

On the basis of an operational bisimulation account of Bohm tree equivalence, I will present a new operationally-based development of the classical Bohm tree theory, akin to Howe's development of the lazy theory of applicative bisimulation. I give a new, elementary congruence proof for Bohm tree equivalence; hence Bohm tree equivalence implies operational equivalence. The proof is based on a syntactic proof principle, called bisimulation up to context, which also yields a simple syntactic proof of the least fixed point property of fixed point combinators.

Proof Theory and Complexity

Stan Wainer (University of Leeds)

This talk will describe joint work with my students G. Ostrin and N. Cagman on the proof theory of low subrecursive classes. The basis is "A new recursion theoretic characterization of the polytime functions" by Bellantoni and Cook (1992), in which it is shown that a natural two-sorted re-interpretation of the usual primitive recursion schemes characterizes polynomially bounded computation. We show that if Peano Arithmetic is instead formulated in this two-sorted fashion, with quantification only allowed over one sort (safe variables) and induction allowed only over the other (normal variables), then the provably terminating functions are exactly the Kalmar elementary ones E(3). The provably terminating functions of the n-quantifier inductive fragments of the theory turn out to be closely related to the levels of Ritchie's hierarchy of "Predictably computable functions" (1963), with the \Sigma_1 Inductive fragment corresponding to Grzegorczyk's class E(2), or LinearSpace. This work is related to results of Buss, Leivant, Bellantoni and others, but in addition it clearly illustrates the use of classical proof-theoretic techniques at this low level. The difference from classical Peano Arithmetic is that it is now the Slow Growing Hierarchy which supplies the bounding functions, rather than the Fast Growing Hierarchy which arises in the classical case.
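
For orientation, the Bellantoni-Cook scheme separates the arguments of a function into normal ones (written before a semicolon) and safe ones (written after it); their predicative recursion on notation then has roughly the following shape, with the recursive call confined to a safe argument position:
\[
f(0, \vec{x}; \vec{a}) = g(\vec{x}; \vec{a}), \qquad
f(s_i(y), \vec{x}; \vec{a}) = h_i(y, \vec{x}; \vec{a}, f(y, \vec{x}; \vec{a})),
\]
where \(s_0, s_1\) are the binary successor functions. Recursion is driven only by normal arguments, while its result may only be used safely.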

Efficient Approximation Algorithms for the Hamming Center Problem

Leszek Gasieniec (University of Liverpool)

The Hamming center problem for a set S of k binary strings, each of length n, is to find a binary string B of length n that minimizes the maximum Hamming distance between B and any string in S. The decision version of this problem is known to be NP-complete. We provide several approximation algorithms for the Hamming center problem. Our main result is a (4/3 + epsilon)-approximation algorithm running in polynomial time if the Hamming radius of S is at least superlogarithmic in k. Furthermore, we show how to find in polynomial time a set B of O(log k) strings of length n such that for each string in S there is at least one string in B within Hamming distance not exceeding the radius of S.

(This is joint work with Jesper Jansson and Andrzej Lingas, Lund University, Sweden)
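
As a concrete illustration of the objects involved (my own sketch, not the authors' approximation algorithm), the following Haskell computes Hamming distances and the radius achieved by a candidate centre over a set of strings:

    -- Hamming distance between two equal-length binary strings.
    hamming :: String -> String -> Int
    hamming xs ys = length (filter id (zipWith (/=) xs ys))

    -- The maximum distance from a candidate centre b to any string in s;
    -- the Hamming radius of s is the minimum of this over all candidates b.
    radiusOf :: String -> [String] -> Int
    radiusOf b s = maximum (map (hamming b) s)

    -- Example: radiusOf "0101" ["0111", "0001", "1101"] == 1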

Towards a completeness result for model checking of security protocols

Gavin Lowe (University of Leicester)

A security protocol is an exchange of messages between two or more agents, with a goal such as establishing a cryptographic key, or achieving authentication; these protocols are intended to succeed, even in the presence of a hostile intruder, who is able to overhear messages, prevent messages reaching their intended recipient, or introduce fake messages of his own.

Over the past few years, a number of groups have analyzed security protocols using model checking. The basic approach is to model a small system running the protocol, together with a hostile intruder, and then to use the model checker to search the state space, looking for insecure states.

This approach has proved very successful at discovering attacks upon protocols; however, it has proved less successful at proving protocols secure: if no attack is found upon the small system analyzed, this does not, in general, imply that there is no attack upon some larger system. This is the question I will address in the talk: I will present a result showing that, for a restricted class of protocols, if there is no attack upon a particular small system, then there is no attack upon any larger system; thus model checking of this small system is enough to verify the security of the protocol.

Structural Operational Semantics with Orderings

Irek Ulidowski (University of Leicester)

We present a general method for defining structural operational semantics (SOS) of process operators by traditional Plotkin-style transition rules equipped with "orderings". This new feature allows one to control the order of application of rules when deriving transitions of process terms.

Our method is powerful enough to deal with rules with negative premises and copying. We show that rules with orderings, called "Ordered SOS" rules, have the same expressive power as GSOS rules.
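
To give the flavour of rules with orderings (a schematic illustration of my own, not an example from the talk), consider a unary operator \(\theta\) defined by two rules, with \(r_1\) ordered above \(r_2\):
\[
r_1:\ \frac{x \xrightarrow{a} x'}{\theta(x) \xrightarrow{a} \theta(x')}
\qquad
r_2:\ \frac{x \xrightarrow{b} x'}{\theta(x) \xrightarrow{b} \theta(x')}
\qquad r_1 > r_2 .
\]
The ordering means that \(r_2\) may only be applied when \(r_1\) is not applicable, so the b-transition is derived only when \(x\) has no a-transition; this plays the role that a negative premise would play in a GSOS rule.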

We discuss a class of process languages where operators are defined by ordered rules in the setting with silent actions and divergence. We prove that the eager bisimulation relation is preserved by all operators in this class.

Mixed Metaphors - The Meaning of Authentication

Dieter Gollmann (Microsoft Research, Cambridge)

Authentication is regarded as a notoriously difficult problem. Suggestions for avoiding the pitfalls in designing authentication protocols range from robust engineering techniques to the application of formal methods. This talk will argue that the problem is not inherently difficult; rather, it is poorly understood. I will try to explain why this is the case by showing that the term authentication is used in different areas with different meanings. This discussion also has a bearing on authorisation structures for distributed security environments like the world wide web.

Simplifying Transformations for Security Protocols

Mei Lin Hui (University of Leicester)

A typical security protocol is made up of 2 to 7 messages, each containing between 2 and 10 fields. These protocols have been analyzed for several years with much success. A problem arises when we analyse large commercial protocols, which typically contain several layers of encryption and around 30 fields in each message. With the model checking approach this leads to an explosion in the state space and message space.

We have defined some simplifying transformations which can be applied to a large protocol to reduce it to a size that can be analyzed with a model checker. I will present a result showing that if a simplifying transformation satisfies a couple of properties then it is safe, in the sense that if the original protocol is insecure then so is the simplified version. I will illustrate the use of these safe simplifying transformations in the analysis of a large commercial protocol.
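
In symbols, with T a simplifying transformation and P a protocol, the safety property just described is
\[
\mathit{insecure}(P) \implies \mathit{insecure}(T(P)),
\quad\text{or equivalently}\quad
\mathit{secure}(T(P)) \implies \mathit{secure}(P),
\]
so a successful analysis of the simplified protocol carries back to the original.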

Real functions computable by finite automata

Achim Jung (University of Birmingham)

Based on the work of a number of researchers, Peter Potts and Abbas Edalat have developed a framework for real-number computation which is both expressive and elegant. In his thesis, Peter Potts shows how to compute all standard mathematical functions in this framework. His algorithms are derived from classical formulas for continued fractions. In his presentation, a calculation always corresponds to a (potentially infinite) expression tree, which consists of tensors and matrices at its nodes, into which the input stream(s) are fed at (potentially infinitely many) leaves, and which creates an output stream from its root.
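
For background (standard material, not a claim about Potts's specific algorithms), a matrix node acts on a real argument as a linear fractional transformation,
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot x \;=\; \frac{a x + b}{c x + d},
\]
while a tensor node acts analogously on two arguments as a bilinear fractional transformation; composing such actions along the expression tree is what produces the output stream.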

Reinhold Heckmann showed that in the simple situation where the tree consists of a single matrix, this matrix must contain larger and larger entries as the computation proceeds. He gives necessary and sufficient conditions for when the entries can be bounded by a constant.

In this talk, I report on work done by Michal Konecny at Birmingham, who has extended Heckmann's work in two directions: He considers general finite automata rather than expression trees, and he allows arbitrary digit systems to be used. His main result is that a differentiable function, which is computed by a finite automaton, must be equal to a linear fractional transformation. (By Heckmann's results, this linear fractional transformation must furthermore be of a very special kind.)

(Asynchronous) (BI)Automatic (Semi)Groups

Michael Hoffman (University of Leicester)

There has been a great deal of interest in recent years in the theory of automatic and biautomatic groups. The definitions extend naturally from groups to semigroups but new proof techniques are often required. A major open question in group theory is whether an automatic group is necessarily biautomatic. In this talk we will give an example of an automatic semigroup that is not biautomatic. We will also describe how this sheds light on the theory of asynchronous automatic groups.

Computing in Industry: A Small-Company View

John Botham (Tarragon Ltd, Cambridge UK)

This talk will be about life at a small software company in the automotive industry. In describing our work I will cover a number of topics that may interest teaching members of a university computing department. These will include the extent to which we find useful the computing theory taught in universities and how well prepared we find interviewees for junior technical posts. Job-hunting undergraduates and postgraduates can also gain from an insight into the activities of a fairly typical software company.

Semester 2

The default time and place for seminars is Friday, 2:30pm in room G3 of the MCS Department (which is in the Mathematics Building). Additional seminars given by members of the Department take place on Thursdays, 10:30pm in room G3.

Abstracts

Verification of data independent systems by model checking

Ranko Lazic (Oxford University Computing Laboratory)

A concurrent system is said to be data independent with respect to a type X if the only things it can do with values of type X are to input them, store them, output them, and perform equality tests between them. We present a number of theorems which allow such a system to be verified (for all instantiations of X) by a single run of a model checker. The techniques used to prove the theorems include symbolic operational semantics, logical relations and symmetry elimination.
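
As a toy illustration of data independence (my own sketch, using a type parameter in Haskell rather than the process-algebraic setting of the talk), the cell below only inputs, stores, outputs and equality-tests values of its parameter type, so its behaviour cannot depend on which concrete type instantiates X:

    -- A one-place cell, data independent in the value type a: it can store
    -- an input, answer membership queries by equality, and output its contents.
    data Request a  = Put a | Query a | Get
    data Response a = Ack | Answer Bool | Value a

    cell :: Eq a => a -> Request a -> (a, Response a)
    cell stored (Put x)   = (x, Ack)                        -- store the input
    cell stored (Query x) = (stored, Answer (x == stored))  -- equality test only
    cell stored Get       = (stored, Value stored)          -- output stored value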

The research has been taken further by applying it to the verification of security protocols, of security as noninterference, and of parallel networks of arbitrary size. This has involved weakenings of the concept of data independence to allow language constructs such as arrays.

We shall also indicate a number of directions for future research, which will be tackled in the course of our forthcoming EPSRC project.

(The seminar is about joint work with Bill Roscoe, Gavin Lowe, Philippa Broadfoot and Sadie Creese.)

Some subclasses of the context-free groups

Duncan Parkes (Leicester)

Interesting subclasses of the context-free groups may be defined by formal language properties of the word problem, or certain subsets of it, by the kind of string-rewriting system required to present the group, or by algebraic properties. We investigate the relationships between some of these classes.

A semantics for imprecise exceptions

Simon Peyton-Jones (Microsoft Research Labs, Cambridge, UK)

Some modern superscalar microprocessors provide only imprecise exceptions. That is, they do not guarantee to report the same exception that would be encountered by a straightforward sequential execution of the program. In exchange, they offer increased performance or decreased area (which amount to much the same thing).

This performance/precision tradeoff has not so far been much explored at the programming language level. In this paper we propose a design for imprecise exceptions in the lazy functional programming language Haskell. We discuss various simpler designs, and conclude that imprecision is essential if the language is still to enjoy its current rich algebra of transformations. We sketch a precise semantics for the language extended with exceptions.
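
A small illustration in terms of the exception facilities of present-day GHC, rather than the paper's own notation: which of the two exceptions below is observed is deliberately left unspecified, since the compiler is free to evaluate either operand of (+) first.

    import Control.Exception (ErrorCall, evaluate, try)

    -- Either "left" or "right" may be reported; an imprecise semantics only
    -- guarantees that some exception from the set of possible ones is delivered.
    example :: IO (Either ErrorCall Int)
    example = try (evaluate (error "left" + (error "right" :: Int)))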

From the functional programming point of view, the paper shows how to extend Haskell with exceptions without crippling the language or its compilers. From the point of view of the wider programming language community, we pose the question of whether precision and performance can be traded off in other languages too.

Embeddings among finitely generated virtually free groups

Volodya Shavrukov (Leicester)

When is a finitely generated virtually free (fgvf) group F1 embeddable in a fgvf group F2? Word problems for which groups lie in a given subray of context-free languages? How is this related to graphs of groups and to Culler realizations of fgvf groups?

Those attending the talk (including, hopefully, the speaker) may have a better understanding of these and similar questions after the talk than they had before it.

Insertions, deletions and syntactic monoids

Rick Thomas (Leicester)

For any language L, we can define a new set of words which are those with the property that, whenever they are inserted in any position of any word in L, the resulting word always lies in L. We can make a similar definition with "insertion" replaced by "deletion". The syntactic congruence of a language L is the coarsest congruence on the set of all strings such that L is a union of congruence classes; the syntactic monoid is what results when we take the set of all strings and factor out the syntactic congruence.
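
Written out, with Ins(L) my own notation for the insertion-invariant words over an alphabet \(\Sigma\), these definitions read
\[
\mathrm{Ins}(L) = \{\, w \in \Sigma^* : \forall u, v \in \Sigma^*\ (uv \in L \implies uwv \in L) \,\},
\qquad
u \sim_L v \iff \forall x, y \in \Sigma^*\ (xuy \in L \iff xvy \in L),
\]
and the syntactic monoid of \(L\) is the quotient \(\Sigma^* / \sim_L\).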

The idea of this talk will be to discuss these concepts briefly and then try and tie them up with the idea of the word problem (particularly in the case when the syntactic monoid is a group). I shall discuss some problems Duncan Parkes and I have been looking at (though this is very much work in progress).

Reliable Computer Mathematics?

Simon Thompson (University of Kent)

Symbolic mathematical systems, like Axiom, Maple and Mathematica, are in widespread daily use by practising scientists and engineers. These systems are large and complex, and their behaviour is often flawed.

This talk will first explore some of the background to this problem. It then addresses ways of adding reasoning to symbolic systems, and in particular work going on in this area at the University of Kent.

Within ARM's Reach: Compilation of Rewrite Systems

Wan Fokkink (University of Wales, Swansea)

A new compilation technique for left-linear term rewriting systems is presented. It assumes the rightmost innermost rewriting strategy, and adopts a rule order in which a rewrite rule has higher priority if it has more syntactic structure. First, the rewrite rules are transformed into so-called minimal rewrite rules, using the pattern-match automaton of Hoffmann and O'Donnell. These minimal rules have such a simple form that they can be viewed as instructions for an abstract rewriting machine (ARM). The equational programming language Epic has been implemented using ARM. This is joint work with Jasper Kamperman and Pum Walters.
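
To make the evaluation strategy concrete, here is a toy sketch of innermost rewriting in Haskell with a made-up rule set for Peano addition; the minimal-rule transformation, the rule ordering and ARM itself are elided:

    -- First-order terms: a function symbol applied to arguments.
    data Term = App String [Term] deriving (Eq, Show)

    -- A hypothetical rule set: addition on Peano numerals.
    rule :: Term -> Maybe Term
    rule (App "add" [App "zero" [],  y]) = Just y
    rule (App "add" [App "succ" [x], y]) = Just (App "succ" [App "add" [x, y]])
    rule _                               = Nothing

    -- Innermost normalisation: reduce the arguments first, then try the root.
    normalise :: Term -> Term
    normalise (App f ts) =
      let t = App f (map normalise ts)
      in maybe t normalise (rule t)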

Fold and unfold for program semantics

Graham Hutton (University of Nottingham)

In this talk I will explain how recursion operators can be used to structure and reason about program semantics within a functional language. In particular, I will show how the recursion operator fold can be used to structure denotational semantics, how the dual recursion operator unfold can be used to structure operational semantics, and how algebraic properties of these operators can be used to reason about program semantics. The techniques will be illustrated using a simple language of concurrent processes. This talk will be aimed at a general audience, and aims to give new insights into recursion operators, program semantics, and the relationships between them.
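
A minimal sketch of the "fold for denotational semantics" half of the story, for a tiny expression language of my own choosing (the talk itself uses a language of concurrent processes):

    -- A tiny expression language and its denotational semantics as a fold.
    data Expr = Lit Int | Add Expr Expr

    -- The fold (catamorphism) for Expr: one argument per constructor.
    foldExpr :: (Int -> r) -> (r -> r -> r) -> Expr -> r
    foldExpr lit add (Lit n)     = lit n
    foldExpr lit add (Add e1 e2) = add (foldExpr lit add e1) (foldExpr lit add e2)

    -- The semantics is structured by the fold, so properties of eval follow
    -- from the universal property of foldExpr rather than explicit induction.
    eval :: Expr -> Int
    eval = foldExpr id (+)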

Automating Diagrammatic Reasoning

Mateja Jamnik (University of Birmingham)

Theorems in automated theorem proving are usually proved by formal logical proofs. However, there is a subset of problems which humans can prove by the use of geometric operations on diagrams, so-called diagrammatic proofs. Insight is often more clearly perceived in these than in the corresponding algebraic proofs; they capture an intuitive notion of truthfulness that humans find easy to see and understand. We are investigating and automating such diagrammatic reasoning about mathematical theorems. The user gives the system, called DIAMOND, a theorem and then interactively proves particular concrete instances of it by the use of geometric operations on the diagram. These operations are the "inference steps" of the proof. DIAMOND then automatically induces from these proof instances a general schematic proof. The constructive omega-rule is used as a mathematical basis for such general schematic proofs. This approach allows us to embed the generality in the schematic proof so that operations need only be applied to concrete diagrams. The final step is to confirm that the abstraction of the schematic proof from the proof instances is sound. We define a theory of diagrams where we prove the correctness of a schematic proof.

An evolution among actions

Anders P. Ravn (Aalborg University, Denmark)

How can we add continuous variables with evolutions governed by differential equations to a conventional action system with a Dijkstra style weakest-(liberal)-precondition semantics? This talk gives one specific answer to this question:
\[
wlp[e: \dot{X} = F(X)](Q) \;=\;
\exists \Phi .\ \Phi(0) = 0 \,\wedge\, \dot{\Phi} = F(\Phi) \,\wedge\,
\forall \tau \geq 0 :\ (e \vee Q)[\Phi(\tau)/X] \,\vee\, \exists\, 0 \leq \delta \ldots
\]

Reading Groups

Lecture Courses


