Computer Science Seminars, 97-98


Seminars

Semester 1

Seminars are held by default on Fridays at 2:30pm in room G3 of the MCS Department (which is in the Mathematics Building).

Abstracts

Capturing Complexity Classes Using Generalized Quantifiers

Argimiro Arratia-Quesada (Universidad Simon Bolivar, Venezuela)

I will describe a technique, first developed by Immerman in the 1980s and later exploited by Stewart, and by Stewart and myself, for syntactically capturing the computational complexity classes log-space, nondeterministic log-space, P, NP and polynomial space via extensions of first-order logic (FO) by generalized quantifiers (Lindström quantifiers) corresponding to complete problems in the respective complexity classes. I will explain a collapsing property of the logics thus formed, which holds with respect to ordered finite structures, and then explain, in the case of PSPACE, what happens when the order is not present. This is all Finite Model Theory.
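
For orientation, two landmark capture results of this kind (standard results of the programme, quoted here for background rather than taken from the talk) are

    \mathrm{FO}(\mathbf{TC}) = \mathrm{NL} \qquad \text{and} \qquad \mathrm{FO}(\mathbf{LFP}) = \mathrm{P} \qquad \text{(on ordered finite structures)},

that is, first-order logic extended with a transitive closure operator captures nondeterministic log-space, and extended with least fixed points captures P, provided a linear order on the structure is available.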

Computability Theory and Continuous Datatypes

John Tucker (University of Wales Swansea)

Continuous data includes real numbers, streams of bits and reals in discrete and continuous time, waveforms, and operators. Continuous data types arise almost everywhere in computing, from scientific modelling to multimedia. How do we compute with such data "exactly"?

In the algebraic theory of data, continuous data types are modelled by topological algebras. In this lecture I will survey recent work on models of computability for continuous data types. Two kinds of model are examined and compared: concrete models based on computing with a chosen representation, and abstract models that are based on computing with any data. I will report recent results by J. I. Zucker (McMaster), Viggo Stoltenberg-Hansen (Uppsala) and myself.
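
To give a deliberately naive flavour of what a "concrete model" looks like, here is a minimal Haskell sketch of my own, not the speaker's: a real in the unit interval represented as an infinite stream of binary digits, computed with via finite approximations. (Serious implementations use signed-digit or interval representations instead, since plain binary streams make even addition uncomputable.)

    -- A real in [0,1] as an infinite stream of binary digits (0 or 1).
    type Real01 = [Int]

    -- Rational approximation obtained by inspecting the first n digits;
    -- the error is at most 2^(-n).
    approx :: Int -> Real01 -> Rational
    approx n ds = sum [ fromIntegral d / 2 ^ i | (d, i) <- zip (take n ds) [(1 :: Int) ..] ]

    -- Example: the stream 1,0,1,0,... represents 0.101010...(2) = 2/3.
    twoThirds :: Real01
    twoThirds = cycle [1, 0]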

Combing groups up the Chomsky hierarchy

Sarah Rees (University of Newcastle Upon Tyne)

A group is automatic if it can be represented by a regular language of words which satisfy a fellow traveller property (that is, words which correspond to closely related group elements are also close as strings). The property is possessed by many mathematically interesting groups, but others, including almost all nilpotent groups, fall through the net. Luckily, much is recovered by generalising the idea from the regular languages to other formal language classes.
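
Concretely (this is the standard definition, not specific to the talk), the fellow traveller property asks for a constant $k$ such that whenever accepted words $u$ and $v$ represent group elements at distance at most 1 in the Cayley graph,

    d\bigl(u(t), v(t)\bigr) \le k \quad \text{for all } t \ge 0,

where $u(t)$ is the group element represented by the length-$t$ prefix of $u$ and $d$ is the word metric.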

The project xSLAM so far: a linear lambda-calculus with explicit substitutions

Valeria de Paiva (University of Birmingham)

I will describe the EPSRC project xSLAM, which started in Birmingham in Jan 97, and explain some of the main challenges we face. Then I will describe our solution to one of these challenges, namely the integration of linearity with explicit substitutions.

Explicit substitutions are a theoretical tool for the design of abstract machines; they make the implementations of such machines easier to prove correct. The implementation of an efficient linear functional programming language has been the goal of several recent experiments in functional programming, and it is the main goal of the project xSLAM.

But integrating linearity features into traditional calculi with explicit substitutions causes major difficulties, as some of the rules of the standard calculus of explicit substitutions are inherently non-linear. We solve these problems first by using an idea due to Plotkin and Barber, namely a linear lambda-calculus which distinguishes linear and intuitionistic variables, and then by taking the categorical semantics as a guide in defining the calculus. In this way we obtain a confluent and normalising calculus whose normal forms are the normal terms of Barber's linear lambda-calculus, and which is sound with respect to the categorical semantics.
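
To illustrate the basic idea of explicit substitutions (a schematic Haskell sketch of my own; the xSLAM calculus itself is considerably more refined), substitution becomes part of the term syntax, with its own reduction rules, instead of a meta-level operation:

    -- Lambda-terms with an explicit substitution constructor.  Variables
    -- are named, and we assume all bound variables are distinct, so no
    -- capture can occur in this sketch.
    data Term = Var String
              | Lam String Term
              | App Term Term
              | Sub Term String Term        -- Sub t x u  ~  t[x := u], suspended
      deriving Show

    -- Beta reduction merely records a substitution ...
    beta :: Term -> Term
    beta (App (Lam x t) u) = Sub t x u
    beta t                 = t

    -- ... which is then pushed through the term one constructor at a time.
    step :: Term -> Term
    step (Sub (Var y)   x u) = if y == x then u else Var y
    step (Sub (App a b) x u) = App (Sub a x u) (Sub b x u)
    step (Sub (Lam y t) x u) = Lam y (Sub t x u)
    step t                   = t

Notice that the application case copies u into both branches; it is exactly this kind of inherently non-linear rule that the abstract alludes to, and that a linear calculus of explicit substitutions must reorganise.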

This is joint work with Eike Ritter and Neil Ghani.

Type-Checking Security Protocols in the Spi-Calculus

Simon Gay (Royal Holloway, University of London)

The spi-calculus, introduced recently by Abadi and Gordon, provides a framework for defining and reasoning about security protocols. Abadi has proposed a type system for the spi-calculus, capturing a number of principles of secrecy in a collection of typing rules, such that every correctly typed spi-calculus process satisfies a certain secrecy property.

We will describe and demonstrate an implementation of a type-checker for Abadi's type system, and illustrate how it may be used to facilitate verification of secrecy properties. By considering several protocols we will highlight the strengths and limitations of the type-checking approach to security analysis.
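
The following toy Haskell fragment is not Abadi's system, only a hint of its flavour under a crude two-level classification: data and channels carry secrecy levels, and the check rejects any process that outputs secret data on a public channel.

    -- Toy secrecy check (an illustrative sketch, not Abadi's type system).
    data Level = Public | Secret deriving (Eq, Show)

    -- A process is a finite sequence of outputs: 'Output chan msg rest'
    -- sends a datum of level 'msg' on a channel of level 'chan'.
    data Process = Output Level Level Process
                 | Nil

    -- The one principle enforced here: no secret may travel on a
    -- public channel.
    wellTyped :: Process -> Bool
    wellTyped Nil                 = True
    wellTyped (Output chan msg k) = not (chan == Public && msg == Secret)
                                    && wellTyped k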

Software Quality: What is it and how can we measure it?

Paul Krause (Philips Research Laboratories)

In contrast to most manufactured products, the precise factors which contribute to the "quality" of a software product are hard to identify and assess. Nevertheless, there is increasing awareness that software quality issues must be addressed in order to guarantee customer satisfaction and hence obtain and maintain a competitive advantage.

This talk will provide an overview of some of the issues that are raised by attempts to assess, control and improve the quality of software. I will argue that quality assessment is a total life-cycle activity, in particular identifying some of the biases a quality judgement may be subject to if made in the presence of incomplete and inadequate data.

Because of its intangible nature, direct measurement of software quality is not possible. Instead, the factors which contribute to software quality must be identified, together with measures for assessing the factors themselves. In general, these factors will cover aspects of the software process as well as the software products. Overall, this is a model-building activity: the aim is to enable us to predict the quality of a software product that results from a specific development environment. The reason for emphasising this is that, for the model to be usable, it must (as with all models) be validated against the real world.

The production of validated models and measures will enable us to place software improvement recommendations on a scientific basis. Nevertheless, actually effecting changes in the development process of a specific team can be problematic if due account is not taken of the various cultural and sociological factors that are always present. The talk will conclude with a discussion of some of the issues that need to be raised in order to initiate an effective process improvement action.

Patterns for Technical Software Design: A provisional catalogue and three case studies

Michael Pont (Leicester University, Dept of Engineering)

This talk will consider whether the success rate of technical software projects could be enhanced through the use of appropriate 'design patterns'. The origins of design patterns, and how they may be used to support reuse, will be discussed. A provisional catalogue of patterns suitable for use in technical software projects will then be presented. The use of these patterns will be illustrated through three case studies.

Semester 2

Abstracts

A Computational Interpretation of the lambda-mu-calculus

Gavin Bierman (Computer Laboratory, Cambridge University)

Recently Parigot introduced the $\lambda\mu$-calculus, which is a simple extension of the $\lambda$-calculus. The set of types of all closed $\lambda\mu$-terms enumerates all classical tautologies. In this talk I shall present a simple computational interpretation of this calculus. I shall demonstrate the richness of this interpretation by considering translations of various control operators. I shall show how one can present this interpretation as a simple transition system which leads to both a neat implementation and a simple but powerful operational theory.
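
For example (a standard illustration from the $\lambda\mu$ literature, not taken from the talk), Peirce's law, which is classically but not intuitionistically valid, is the type of the closed term

    \lambda f.\, \mu\alpha.\, [\alpha]\, f\, (\lambda x.\, \mu\beta.\, [\alpha]\, x) \;:\; ((A \to B) \to A) \to A,

a term which behaves operationally like the control operator call/cc.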

DNA Computation

Martyn Amos (Liverpool University)

The idea that living cells and molecular complexes can be viewed as potential machine components dates back to the late 1950s, when Richard Feynman delivered his famous paper describing sub-microscopic computers. Recently, several papers have advocated the realisation of massively parallel computation using the techniques and chemistry of molecular biology. Algorithms are not executed on a traditional, silicon-based computer, but instead employ the test-tube technology of genetic engineering. By representing information as sequences of bases in DNA molecules, existing DNA-manipulation techniques may be used to quickly detect and amplify desirable solutions to a given problem.

In this talk we present an introduction to molecular computing, describing initial abstract models and laboratory implementations. We then review more recent work, concentrating on our DNA-based simulation of Boolean circuits.
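
As a trivial illustration of the encoding idea only (the laboratory techniques described in the talk go far beyond this Haskell sketch), information is carried by sequences of the four bases, and two strands hybridise when one is the reversed Watson-Crick complement of the other:

    -- The four DNA bases.
    data Base = A | C | G | T deriving (Eq, Show)

    -- Watson-Crick complementation: A pairs with T, C pairs with G.
    complement :: Base -> Base
    complement A = T
    complement T = A
    complement C = G
    complement G = C

    -- In this idealised model, a strand anneals to another exactly when
    -- the second is the reverse of the first's complement.
    anneals :: [Base] -> [Base] -> Bool
    anneals xs ys = ys == reverse (map complement xs)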

A categorical semantics for P-time

Robin Cockett (DPMMS, Cambridge University)

The universal property of a (strong) natural number object in any category with products delivers into the setting all the primitive recursive functions. It has proven difficult to obtain - in a reasonable categorical fashion - settings which have arithmetic power below primitive recursive. However, such settings are emerging: notably Jim Otto in his thesis provided a number of examples.
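
For reference, the universal property in question is the standard one: a natural number object $1 \xrightarrow{\,0\,} N \xrightarrow{\,s\,} N$ is such that for every pair of maps $z : 1 \to A$ and $f : A \to A$ there is a unique $h : N \to A$ with

    h \circ 0 = z \qquad \text{and} \qquad h \circ s = f \circ h,

which is precisely iteration; in the presence of products this yields all primitive recursive functions.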

The purpose of this talk is to argue that finitistic universes, that is, extensive categories with a full internal subcategory of finite sets, seem to provide a natural setting in which to express subrecursive arithmetics. The talk will explore these settings and describe how one may add arithmetic ... without adding too much arithmetic!

Verified Run-Time Hardware Reconfiguration

Tom Melham (Glasgow University)

This talk will sketch the main ideas behind a project at the University of Glasgow to develop new methods for realising circuits as hardware which is modified at execution time and whose correctness is formally assured. The project is based on a novel combination of the technology of Field Programmable Gate Arrays and the functional programming technique of Partial Evaluation. I shall aim to be fairly non-technical, concentrating on the overall vision of the project, but I will also try to give an idea of the combination of hardware specification/verification and reasoning about algorithms we propose to use.
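
Partial evaluation itself can be illustrated independently of hardware (a textbook Haskell example, not the project's code): when one argument of a general function is known statically, the recursion can be unfolded at specialisation time, leaving a straight-line residual program.

    -- A general power function.
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x = x * power (n - 1) x

    -- Specialising 'power' to the static exponent 3 unfolds the
    -- recursion completely; the residual program contains no control:
    cube :: Int -> Int
    cube x = x * (x * (x * 1))

In the project described above, the analogous specialisation is applied to circuits, with the reconfigured FPGA playing the role of the residual program.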

Swans Reflecting Elephants: Principles of Program Analysis

Chris Hankin (Imperial College, London University)

The ability to analyse a program to determine some aspect of its behaviour has several applications. The classic one is in optimising compilers, where knowledge about a program's run-time behaviour allows the compiler to generate faster code. Another application is in program verification. Program analysis allows us to verify safety (nothing bad ever happens) and liveness (something good will happen) properties. Most recently, program analysis has been used to help verify the ARIANE V software before the (successful) second launch. Yet another application is as a basis for formal debugging and testing.

It is important that we should have confidence in the results of program analysis. One way of gaining this confidence is to construct the analysis in such a way that it is verifiable. Our approach, in common with many others, is to base analyses on a formal model of the program's behaviour. Unfortunately, most program properties are undecidable, so we cannot construct an analyser which gives accurate results for all programs. As a consequence, program analysis uses approximate models of programs; in this way the information from the analysis can be guaranteed to be safe, but it may not be as informative as we would wish.
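
A standard toy example of such a safe but approximate model (mine, not the lecturer's) is the rule of signs: each integer is abstracted to its sign, signs are computed with directly, and precision is lost exactly where the abstraction cannot decide.

    -- Rule-of-signs abstraction: a safe but approximate model of integers.
    data Sign = Neg | Zero | Pos | AnySign deriving (Eq, Show)

    absInt :: Int -> Sign
    absInt n | n < 0     = Neg
             | n == 0    = Zero
             | otherwise = Pos

    -- Multiplication is exact at this level of abstraction ...
    mulS :: Sign -> Sign -> Sign
    mulS Zero _    = Zero
    mulS _ Zero    = Zero
    mulS AnySign _ = AnySign
    mulS _ AnySign = AnySign
    mulS x y       = if x == y then Pos else Neg

    -- ... but addition must sometimes answer "don't know" (AnySign):
    addS :: Sign -> Sign -> Sign
    addS Zero s  = s
    addS s Zero  = s
    addS Pos Pos = Pos
    addS Neg Neg = Neg
    addS _   _   = AnySign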

In this lecture we will identify the main approaches to program analysis. We will present some highlights in the development of these techniques. The lecture will include an introduction to a new approach that we have been investigating.

Index for subsemigroups: combinatorial and computational issues

Nik Ruskuc (University of St. Andrews)

The main aim of this lecture is to discuss the question of how to define the index of a subsemigroup in a semigroup. The underlying idea is that taking subsemigroups and extensions of `finite index' should preserve `reasonable' finiteness conditions (such as finite generation and finite presentability). I will consider various possible definitions, and discuss their properties and relationships. I will also discuss various related decidability results and computational methods.
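
One candidate definition discussed in this area is the Rees index, which simply counts the complement:

    [S : T] \;=\; |S \setminus T|.

Finite generation and finite presentability are known to be preserved in both directions between a semigroup and a subsemigroup of finite Rees index; note, though, that the analogy with group index is imperfect, since a proper subgroup of an infinite group always has infinite Rees index, even when its group-theoretic index is finite.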

On the Propagation of Gossip in Distributed Systems

Isi Mitrani (University of Newcastle upon Tyne)

A number of sites in different geographical locations send state information to each other by means of random 'gossip' messages. The recipient of each message is chosen according to some probability distribution which may depend on the sender. Of interest are certain average intervals describing the propagation of information among the sites. Some analysis and results concerning those intervals are presented. The motivation for this work comes from consistency-maintaining protocols in replicated systems.
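
For orientation, a textbook special case (not necessarily the model of the talk): if each message is addressed to one of the other $n-1$ sites chosen uniformly at random, then by the coupon-collector argument the expected number of messages a site must send before every other site has directly received its information is

    (n-1)\, H_{n-1} \;=\; (n-1) \sum_{k=1}^{n-1} \frac{1}{k} \;\approx\; n \ln n.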

Formulas, Circuits and Sets

Anuj Dawar (University of Wales Swansea)

A simple model of computation is obtained by considering families of Boolean circuits, one for each possible input size. In order to make this into a sensible model, we need to restrict the complexity of the family itself. This introduces a new resource, namely uniformity, as a measure of computational complexity that can be controlled independently of other resources such as time and space. Uniformity turns out to be related to what "numerical predicates" we allow in formulas defining classes of structures. In the talk, I will explain these connections, and finally show how some unexpected results can be obtained by looking at set membership as a numerical predicate. The talk will be of a survey nature, and not assume any specialist knowledge.
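
One landmark identity in this area, due to Barrington, Immerman and Straubing, ties the most natural uniformity condition to first-order definability with arithmetic numerical predicates:

    \mathrm{FO}[+, \times] \;=\; \mathrm{DLOGTIME}\text{-uniform } \mathrm{AC}^0.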

The Constrainedness of Search

Ian Gent (University of Strathclyde)

The word "constrained" is used informally in Artificial Intelligence in two senses. The first refers to the status of individual problem instances. "Underconstrained" instances are easy to solve, "overconstrained" ones are easy to show have no solution, while "critically constrained" problems may or may not have solutions and it is hard to determine whether they do or not. The second meaning is in the study of heuristics, where it is generally believed that the `most constrained variable' should be picked next. Members of the APES research group at Strathclyde University have proposed a formal definition of constrainedness which seems to capture both informal uses of the word. We have used it in many experimental studies of search problems, and have also used it to design new heuristics and understand old ones.
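
The definition in question (from the APES group's papers; quoted here from memory) measures the constrainedness of an ensemble of problems as

    \kappa \;=\; 1 - \frac{\log_2 \langle \mathrm{Sol} \rangle}{N},

where $\langle \mathrm{Sol} \rangle$ is the expected number of solutions and $2^N$ is the size of the state space: $\kappa \ll 1$ corresponds to underconstrained instances, $\kappa \gg 1$ to overconstrained ones, and $\kappa \approx 1$ to the hard, critically constrained region.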


Reading Groups


