Computer Science Seminars and Short Courses — 2002/03

The Computer Science Group holds regular advanced and research-level meetings, mainly during the two University teaching semesters. These meetings take the form of Research Seminars, which consist of one-hour talks given by invited speakers or members of the Department on current research; Reading Groups, in which a small number of staff and research students study an advanced book or research paper; and Lecture Courses, which are usually delivered by a member of staff and pitched at advanced postgraduate level. Such lecture courses are also taken by staff and research students.

Semester 2

Seminar programme
Seminar details

Software Components: Going beyond Objects?
Dr Kung-Kiu Lau (The University of Manchester, Computer Science Department)

Component-based Software Development (CBD) has been hailed as the "Industrial Revolution for IT", aimed at taking software engineering from a cottage industry to a "mechanised" manufacturing industry. However, current component technology is by and large a distributed object technology (for integrating objects over networks). Is this enough for CBD to achieve its ultimate goal of third-party assembly of independently produced software components? In this talk, I will share some thoughts on this.

Finding bugs in mobile phone specifications
Dr Bill Mitchell (Motorola UK Research Labs)

The reality of requirements specifications makes verification next to impossible. In this talk I will explain how Motorola UK Research Labs have valiantly attempted to do the impossible. The talk will look at reasoning about requirements scenarios that describe how individual features should work in isolation. Motorola have developed a tool called FATCAT that can sometimes piece together the concurrent behaviour of such scenarios and analyse it to detect defects in the specifications. So far the prototype tool has been used successfully on 3G designs, and is scheduled to be deployed on a new Linux-based phone platform next year.

Prof. Ralph Kopperman (City College of New York, CUNY, Department of Mathematics)
"...it would be a description of several
times when I've felt the need to get help to solve problems in
category theory that came up in my work. I don't yet know the
solutions to these problems, and they suggest the possibility of
joint work with category theorists."

Towards Modelling and Verifying Reconfigurations of Access Control Systems
Dr Mark Ryan (School of Computer Science, University of Birmingham)

Access control systems (such as file access control in operating systems, firewalls, and distributed databases) often need to be reconfigured, because the requirements of the organisation change when new people arrive or are deployed in different ways, or when new services are added. It is undesirable to have a single superuser who makes all the changes; this would create a bottleneck, and a security vulnerability if the superuser is compromised. We are investigating access control systems in which permissions are themselves data subject to access permissions, in order to model delegated or devolved authority and to avoid the superuser. Such a system can have indirect paths; for example, an agent might not have a certain permission, but may have sufficient permissions about permissions to execute a sequence of actions which gives him the desired permission. (The Unix file system has such an example: if you do not have write access to a file f but you have write access to its directory, you can, by a sequence of actions, gain ownership of and write access to f.) This work is at an early stage. I will describe two parts: 1. A concrete model for permissions, which outputs SMV code so that the existence of indirect paths may be verified. 2. A logic which axiomatises permissions and provides two kinds of models (abstract and concrete). We aim to show correspondences between the models and completeness of the axioms, though this is not yet done.

Induction and Coinduction in Sequent Calculus
Dr Alberto Momigliano (Department of Mathematics and Computer Science, University of Leicester)

Joint work with Dale Miller and Alwen Tiu. Proof search has been used to specify a wide range of computation systems.
In order to build a framework for reasoning about such specifications, we make use of a sequent calculus involving induction and coinduction. These proof principles are based on a proof-theoretic notion of definition, following work by Hallnäs and Schroeder-Heister, Girard, and McDowell and Miller. Definitions are essentially logic programs which are stratified so as to avoid cyclic calling through negations. The introduction rules for defined atoms treat the definitions as defining fixed points. The use of definitions also makes it possible to reason intensionally about syntax, since the inference rules that introduce defined atoms make use of unification and matching, and both of these operations can be used on first-order terms or higher-order lambda-terms. The full system thus allows inductive and coinductive proofs involving higher-order abstract syntax. We extend the earlier work by allowing induction on general definitions and show that cut-elimination holds for this extension. For example, proofs about the operational semantics of mini-ML are quite natural using such induction. We extend the logic further by adding coinduction rules. To maintain consistency, mutually recursive definitions involving induction and coinduction need to be avoided. Cut-elimination holds thanks to a (fairly complex) co-reducibility argument. A prototype implementation of much of this logic is available in the Hybrid system, implemented on top of Isabelle/HOL, and some examples concerning applicative bisimulation will be demonstrated. We consider two additional extensions to this proof system. One is the use of circular proofs as a way to lazily determine the post-fixpoint to use in coinductive proofs. Another is to integrate the nabla quantifier, a recently proposed quantifier that provides a direct way to deal internally with the notion of freshness, thereby allowing proofs involving higher-order abstract syntax to be carried out in an even more direct way.
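The inductive versus coinductive reading of definitions as fixed points can be illustrated, far from the sequent calculus itself, on a finite transition system. The following Python sketch is my own toy analogue (it is not the Hybrid/Isabelle implementation mentioned in the abstract): reachability of a goal is a least fixed point of a monotone operator on sets of states, while the possibility of an infinite run is a greatest fixed point of the same operator with an empty goal.

```python
# Toy illustration: defined predicates as least/greatest fixed points of a
# monotone operator over subsets of a finite state space.

succ = {0: {1}, 1: {2}, 2: {0}, 3: {4}, 4: set()}  # 0->1->2->0 cycle; 3->4 dead end
states = set(succ)

def lfp(f):
    """Least fixed point: iterate from the empty set (inductive definition)."""
    s = frozenset()
    while f(s) != s:
        s = f(s)
    return s

def gfp(f):
    """Greatest fixed point: iterate from the full set (coinductive definition)."""
    s = frozenset(states)
    while f(s) != s:
        s = f(s)
    return s

def step(goal):
    # F(S) = goal  U  { u : some successor of u lies in S }
    return lambda S: frozenset(goal) | frozenset(u for u in states if succ[u] & S)

reach = lfp(step({4}))      # states that can reach state 4 (inductive)
diverge = gfp(step(set()))  # states admitting an infinite run (coinductive)
print(sorted(reach), sorted(diverge))  # [3, 4] [0, 1, 2]
```

Note that the least fixed point of the goal-free operator is empty (nothing is inductively derivable without a base case), while its greatest fixed point picks out exactly the cycle: the same clauses, read inductively or coinductively, define different predicates.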
Architectural Primitives for Distribution and Mobility
Dr Antonia Lopes (Department of Informatics, Faculty of Sciences, University of Lisbon)

The purpose of this talk is to present current work within the FET/IST project AGILE (Architectures for Mobility), namely an extension of CommUnity, a language for parallel program design, and its semantics, developed in order to represent distribution and mobility explicitly in system architectures that are structured in terms of components and coordination connectors. In such architectures, components model the computational entities of the system, and coordination connectors capture the mechanisms that coordinate their behaviour, so that the properties required of the global behaviour of the system emerge from the computations performed locally by the components. In the extension of CommUnity, patterns of distribution and mobility of components (or groups of components) can be explicitly represented in architectures through a primitive that we call distribution connectors. Furthermore, new abstractions for component interaction that have been proposed to facilitate the design of mobile systems can be modelled as coordination connectors and used in system architectures together with the connectors that model traditional coordination primitives. These new abstractions include coordination patterns that are location-dependent or that even involve the management of the location of the coordinated parties.

Ordered Linear Logic and Applications
Dr Jeff Polakow (Carnegie Mellon)

Ordered Linear Logic conservatively extends Intuitionistic Linear Logic with ordered hypotheses, which are both linear and ordered, so that neither weakening, contraction, nor exchange holds for them. The addition of ordered hypotheses allows concise logical specifications of various systems involving simple data structures such as stacks and queues.
This talk will introduce and motivate Ordered Linear Logic as a natural progression from Intuitionistic Logic to Intuitionistic Linear Logic to Ordered Linear Logic. After introducing the basic system, I will discuss logic programming and show a few example programs. Finally, I will show how the logic can be used as the basis for an LF-style logical framework.

A lexically scoped distributed pi-calculus
Prof. Vasco Thudichum Vasconcelos (University of Lisbon / COGS, Sussex)

We define the syntax, the operational semantics, and a type system for lsdpi, an asynchronous and distributed pi-calculus with local communication and process migration. The calculus follows a simple model of distribution for mobile calculi, with a lexical scoping mechanism that provides for both remote communication and process migration, and makes explicit migration primitives superfluous.

Beyond the logic of domains: infinite intersection types
Dr Marcello Bonsangue (Leiden University)

A type theory with infinitary intersection and union types for an extension of the lambda-calculus is introduced. Types are viewed as upper-closed subsets of a Scott domain, and intersection and union type constructors are interpreted as set-theoretic intersection and union, respectively, even when they are not finite. The assignment of types to lambda-terms naturally extends the basic type assignment system. We prove soundness and completeness using a generalisation of Abramsky's finitary logic of domains. Finally, we show some applications of the framework to applicative transition systems, parametric polymorphism with subtypes, and recursive subtypes.

Dr Ondrej Sykora (Loughborough University)

During WWII, in a forced labour camp, Paul Turán introduced the crossing number problem, in particular the Brick Factory Problem, which asks for the crossing number of complete bipartite graphs.
The present work surveys the few known results and proposes open problems on a variant of the crossing number, the biplanar crossing number, and solves the biplanar version of the Brick Factory Problem exactly for K_{5,q}. Motivated by printed circuit boards, Owens (1971) introduced the biplanar crossing number of a graph G: the minimum number of crossings, taken over all pairs of subgraphs whose union is G, when the two subgraphs are drawn in two separate planes. He described a biplanar drawing of the complete graph on n vertices which provides a quartic upper bound on the biplanar crossing number, with constant 7/1536 at the main term. One can similarly define the k-planar crossing number of a graph G, making G a union of k subgraphs; this has applications in the design of multilayer VLSI circuits, but perhaps the case k=2 is the most interesting, and it is little explored so far.

Groebner bases and the structure of cyclic codes over Galois rings
Dr Ana Salagean (Loughborough University)

A cyclic error-correcting code over a Galois ring R = GR(p^a, r) is an ideal in R[x]/(x^n - 1). We generalise Groebner basis theory to Galois rings (and further to principal ideal rings) and describe a set of generators for a cyclic code which is also a Groebner basis. This result unifies and extends previous results on the structure of such codes. A cyclic code is called multiple-root or simple-root depending on whether or not n is divisible by p. It was known that simple-root cyclic codes are always principal ideals. We show that multiple-root cyclic codes are not principal in general.

Dr Peter Wood (King's College London)

Hybrid Aspect-Oriented Programming
Dr Awais Rashid (Lancaster University)

In recent years, aspect-oriented programming (AOP) has attracted increasing interest among researchers in software engineering. It aims at easing software development by providing further support for modularisation.
Aspects are abstractions which serve to localise crosscutting concerns, i.e. code which cannot be encapsulated within one class but is tangled across many classes. A number of AOP approaches have been proposed. Although all of these approaches are suitable candidates for separating crosscutting concerns in a system, one approach can be more suitable than the others for implementing certain types of concerns. In this talk I will discuss a hybrid approach to AOP, based on using the most suitable technique for implementing each crosscutting concern in a system. The talk will start with an overview of AOP and the main AOP approaches. This will be followed by a suitability matrix defining the effectiveness of the individual approaches in handling certain types of concerns. Some models of using multiple techniques will be discussed. These will be complemented by examples employing three different AOP approaches (composition filters, adaptive programming, and aspect-language-based mechanisms) to implement crosscutting concerns in SADES, a customisable and extensible object database evolution system developed at Lancaster.

Semester 1

Seminar programme
Seminar details

Dr Alexandru Baltag (Oxford Computer Lab)

Coalgebra and (Completely) Iterative Theories
Stefan Milius (Institut für Theoretische Informatik, Technical University of Braunschweig)

One approach to capturing (potentially infinite) computation algebraically is the (completely) iterative theories studied in the 1970s by Elgot et al. Infinite trees over a given signature form a completely iterative theory, i.e. any guarded system of recursive equations of the form x_i = t_i(x_1, x_2, ..., y_1, y_2, ...) (i in I), where the t_i are (possibly infinite) trees with variables x_i and parameters y_j, has a unique solution. It was proved by Bloom, Elgot and Tindell that infinite trees form the free completely iterative theory on a given signature. Recently, it has been discovered that by using methods from coalgebra it is not only possible to generalise these results, but also to prove them in a much nicer and conceptually easier categorical setting. Whenever an endofunctor H on a category has final coalgebras for all functors H(-)+X, then those coalgebras, TX, form a monad. The above notion of a system of recursive equations can be generalised, and T becomes the free completely iterative monad on H. In my talk I shall first recall these categorical results and then show how the coalgebraic treatment can also be extended to rational trees. These are the solutions of finitary guarded systems of recursive equations, and they form the free iterative theory on a signature. Recent results of our group show how to obtain a free iterative monad on any finitary endofunctor of a locally presentable category. Time permitting, I shall finally also indicate how solutions of (uninterpreted) recursive program schemes can be treated with the help of the presented coalgebraic methods.
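As a down-to-earth illustration of guardedness and unique solutions (my own example, using streams, i.e. infinite trees over a signature with one unary symbol per output letter, rather than general trees): because every right-hand side of a guarded equation emits a symbol before recursing, unfolding the system computes the unique solution to any finite depth.

```python
# Sketch: a finitary guarded system of stream equations x_i = a : x_j has a
# unique solution, and any finite prefix of it can be computed by unfolding.

def prefix(eqs, var, n):
    """First n letters of the unique solution for `var`.

    eqs maps each variable to (head, next_variable); guardedness means each
    equation produces a head before recursing, so every step makes progress.
    """
    out = []
    for _ in range(n):
        head, var = eqs[var]
        out.append(head)
    return out

# x = 0 : y,  y = 1 : x  -- the unique solution for x is 0, 1, 0, 1, ...
eqs = {'x': (0, 'y'), 'y': (1, 'x')}
print(prefix(eqs, 'x', 6))  # [0, 1, 0, 1, 0, 1]
```

Because the system is finitary, the resulting streams have only finitely many distinct tails: these are precisely rational trees in the sense of the abstract.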
On Reduction Semantics for the Push and Pull Ambient Calculus
Maria Vigliotti (Imperial College)

Honda and Yoshida showed how to obtain a meaningful equivalence on processes in the asynchronous $\pi$-calculus using equational theories identifying insensitive processes. We apply their approach to a dialect of Cardelli and Gordon's Mobile Ambients. The version we propose here is the Push and Pull Ambient Calculus, where the operational semantics is no longer defined in terms of ambients entering and exiting other ambients, but in terms of ambients being pulled in and pushed away by another ambient. Like the standard ambient calculus, this dialect has the computational power of the asynchronous $\pi$-calculus. We contend that the new calculus allows a better modelling of certain security issues.

Parametric Polymorphism and .NET
Dr Andrew Kennedy (Microsoft Cambridge)

Parametric polymorphism is a well-studied feature of programming languages such as ML and Haskell and of foundational calculi such as System F, but it is not present in languages widely used in industry, with the exception of the flawed template mechanism of C++. Two weeks ago at OOPSLA'02, Microsoft announced forthcoming support for parametric polymorphism (also known as "generics") in C#, Visual Basic and Visual C++. In this talk I will discuss the work of Microsoft Research Cambridge on the design and implementation of this feature. In particular, I will describe how parametric polymorphism will be built into the .NET Common Language Runtime, with support for parameterized types, polymorphic methods, F-bounded type parameters, dynamic types, and polymorphic recursion. The expressivity of the design is illustrated by a type-preserving translation from System F into C#.

Session Types for Inter-Process Communication
Dr Simon Gay (The University of Glasgow)

We define a language whose type system, incorporating session types, allows complex protocols to be specified by types and verified by static typechecking.
Although session types are well understood in the context of the pi-calculus, our formulation is based on an imperative language and differs in significant ways. Our typing judgements statically describe dynamic changes in the types of channels, and our channel types statically track aliasing. After formalising the syntax, semantics and typing rules of our language, and proving a subject reduction theorem, we outline some possibilities for adding references, functions, objects and concurrency. This is joint work with Vasco Vasconcelos and Antonio Ravara.

Finite Automata and Structures
Dr Bakhadyr Khoussainov (The University of Heidelberg)

In this talk, we introduce the notion of an automatic structure and discuss several examples and results. The theory of automatic structures is relatively new, and has recently attracted the attention of experts in theoretical computer science and logic. Informally, a structure is automatic if its atomic diagram is recognised by a finite automaton. A fundamental property of any automatic structure is that its first-order theory is decidable. We consider several natural questions in the study of automatic structures, e.g. which structures have or do not have automatic presentations; what is the relationship between definability and computability by automata; and, apart from decidability, what else can be extracted from an automata-presentable structure. As examples, time permitting, we characterise the automatic ordinals and show that the random graph does not have an automatic presentation. Most of the results of the talk were obtained jointly with A. Nerode (Cornell, USA), S. Rubin (Auckland, New Zealand), and F. Stephan (Heidelberg, Germany).

Lambda-calculus with an end-of-scope operator
Dr Vincent van Oostrom (CWI and Universiteit Utrecht)
We extend the lambda-calculus with an end-of-scope operator. It allows for the encoding of negative information, most notably side-conditions expressing that "a variable does not occur free in M", at the object level. The following preliminary results seem to indicate that the resulting calculus is promising:
- The corresponding notion of beta-reduction is confluent, even without the presence of alpha-conversion.
- Beta-reduction is more efficient than usual, since one does not need to substitute for x in a subterm below the end-of-scope of that x.
- The calculus can be implemented directly using stacks of names, without having to resort to De Bruijn indices. This seems useful for interaction with humans (debugging).
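The efficiency point can be sketched in a few lines of Python. This is my own toy reconstruction, not van Oostrom's actual calculus: the term encoding, the name `eos`, and the assumption that all bound names are distinct (so capture-avoidance can be ignored) are all simplifications for illustration. The key idea survives: substitution stops, and shares the subterm unchanged, as soon as it meets the end-of-scope operator for the variable being substituted.

```python
# Toy terms: ('var', x), ('lam', x, body), ('app', f, a), and
# ('eos', x, body) marking the end of x's scope: x cannot occur free in body.
# Assumes all bound names are distinct, so capture-avoidance is omitted.

def subst(t, x, s):
    """Substitute s for free occurrences of x in t, pruning at end-of-scope."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    if tag in ('lam', 'eos'):
        if t[1] == x:      # x is rebound or out of scope: stop, share t as-is
            return t
        return (tag, t[1], subst(t[2], x, s))

def beta(t):
    """One beta step at the root: (lam x. b) a  ->  b[x := a]."""
    assert t[0] == 'app' and t[1][0] == 'lam'
    _, (_, x, body), arg = t
    return subst(body, x, arg)

big = ('eos', 'x', ('var', 'y'))           # a subterm below the end of x's scope
t = ('app', ('lam', 'x', ('app', ('var', 'x'), big)), ('var', 'z'))
r = beta(t)
print(r)            # ('app', ('var', 'z'), ('eos', 'x', ('var', 'y')))
assert r[2] is big  # the pruned subterm is shared, never traversed or copied
```

The `is`-identity in the last line is the efficiency claim from the abstract in miniature: no work is done below the end-of-scope marker for x.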
Complexity gaps in propositional proof complexity
Dr Soren Riis (Queen Mary, University of London)
Florent Madelaine (Université de la Réunion)

For a given propositional proof system P, one would ideally like a complete classification of those combinatorial principles which are easy (e.g. have polynomial-size P-proofs) and those which are difficult (e.g. require fully exponential-size P-proofs). Fortunately, such a characterisation is possible for certain "weak" but important proof systems. In this talk I will focus on the resolution proof system (in tree-like, Gentzen-style representation). The talk will focus on a Dichotomy Theorem stating that a combinatorial principle is hard (requires exponential size) for resolution in tree form if and only if the principle fails in some infinite model. This explains why the propositional version of the pigeonhole principle is hard, since the pigeonhole principle fails in (some) infinite models. The theorem also states that any combinatorial principle is either easy (i.e. has polynomial-size proofs) or difficult (i.e. requires fully exponential-size proofs). This jump in proof complexity (from polynomial to fully exponential) is not unique to tree resolution but, so it seems, also holds for other "weak" proof systems. The talk is aimed at a mathematically minded audience, with little or no prior knowledge of the field assumed.

Dr Conor McBride (The University of Durham)

I shall present a universe construction for the regular datatypes (those generated by polynomial expressions and least fixed point) in type theory, giving us the means to develop their operations and properties generically. Because the `names of types' are just data, we can compute with them in the ordinary way, giving us a great deal more power than `generic programming' extensions of Haskell. A substantial example follows...
Gerard Huet's classic paper `The Zipper' shows us informally how to equip a recursive datatype with its associated type of one-hole contexts, with applications to structure editing. In this talk, I shall give a formal presentation of this calculation for the regular datatypes: the rules will seem familiar, for Leibniz discovered them several centuries ago. The operation which plugs a term into a one-hole context is developed generically for every regular type. If time permits, I shall discuss a number of variations on the theme, and sketch a proof of Taylor's theorem for regular datatypes.

Comparing Control Constructs by Double-barrelled CPS
Dr Hayo Thielecke (The University of Birmingham)

We investigate call-by-value continuation-passing style transforms that pass two continuations.
Altering a single variable in the translation of lambda-abstraction gives rise to different control operators: first-class continuations; dynamic control; and (depending on a further choice of a variable) either the return statement of C or Landin's J-operator. In each case there is an associated simple typing. For those constructs that allow upward continuations, the typing is classical; for the others it remains intuitionistic, giving a clean distinction independent of syntactic details. Moreover, those constructs that make the typing classical in the source of the CPS transform break the linearity of continuation use in the target. Paper to appear in Higher-Order and Symbolic Computation; see http://www.cs.bham.ac.uk/~hxt/research/HOSCdoublebarrel.pdf

Locally compact spaces in abstract Stone duality
Dr Paul Taylor

Abstract Stone Duality is a re-axiomatisation of general topology intended to make it recursive. By turning the idea of the Scott topology on its head, notions that involve directed (infinitary) joins are reformulated using functions of higher type instead. Here we consider compactness and the \emph{way below} relation $\ll$ used for continuous lattices. A new characterisation of \emph{local compactness} is formulated in terms of an \emph{effective basis}, \emph{i.e.}~one that comes with a \emph{dual basis}. Any object that is definable in the monadic $\lambda$-calculus for Abstract Stone Duality (including the lattice structure and Scott principle) has such a basis.

Embeddings for Comparing Large Text Streams
Dr Graham Cormode (The University of Warwick)

Texts are ubiquitous in daily life, varying in size from small (SMS and email) to potentially immense (automatically generated reports, biological sequences). When scanning for similarities between new data and previously stored examples, we need a model that takes account of how texts are changed: pieces are inserted or deleted, and sections are moved around.
With a large number of large texts, comparing all pairs is not feasible, and for the largest texts we may not even be able to hold the whole of any one text in memory at once. We describe an approach to approximately comparing large texts quickly and in sublinear space. It relies on finding combinatorial structures present in any string of characters, and generating a vector representation. This allows rapid comparison of sequences based on a succinct representation, and the application of clustering, nearest-neighbour searching, and other data mining techniques.

Efficient Parameterized Algorithms for the Undirected Feedback Set Problem
Dr Venkatesh Raman (The Institute of Mathematical Sciences, Chennai, India)

Given an undirected graph on n vertices and an integer k, the feedback vertex set problem asks whether there are k vertices in the graph whose removal makes the graph acyclic. This is a classical NP-complete problem, and efficient approximation algorithms are known. In this talk, we will give efficient exact algorithms for this problem (in the framework of parameterized complexity), resulting in an O((log k)^k n^2) algorithm. The algorithm is obtained through a lemma giving upper bounds on the girth (the length of the shortest cycle) of a graph having minimum degree three and a small (O(sqrt n)-sized) feedback vertex set.
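The basic branching idea underlying such parameterized algorithms can be sketched in Python. This is a naive illustration of my own, not the O((log k)^k n^2) algorithm of the talk, which additionally exploits the girth bound so that it only ever branches on short cycles: every cycle must contain at least one vertex of any feedback vertex set, so find a cycle, try deleting each of its vertices, and recurse with budget k-1.

```python
# Naive FPT-style branching for feedback vertex set (illustration only;
# the talk's algorithm improves on this by branching on provably short cycles).

def find_cycle(adj):
    """Return the vertices of some cycle in a simple undirected graph, or None."""
    colour, parent = {}, {}

    def dfs(u, p):
        colour[u] = 'grey'
        for v in adj[u]:
            if v == p:
                continue
            if colour.get(v) == 'grey':      # back edge: recover the cycle v..u
                cycle, w = [u], u
                while w != v:
                    w = parent[w]
                    cycle.append(w)
                return cycle
            if v not in colour:
                parent[v] = u
                found = dfs(v, u)
                if found:
                    return found
        colour[u] = 'black'
        return None

    for s in adj:
        if s not in colour:
            found = dfs(s, None)
            if found:
                return found
    return None

def fvs(adj, k):
    """A feedback vertex set of size <= k, or None if none exists."""
    cycle = find_cycle(adj)
    if cycle is None:
        return set()                          # already acyclic
    if k == 0:
        return None                           # budget exhausted, cycle remains
    for v in cycle:                           # some cycle vertex must be deleted
        rest = fvs({u: adj[u] - {v} for u in adj if u != v}, k - 1)
        if rest is not None:
            return rest | {v}
    return None

# K4 needs two deletions: removing any single vertex still leaves a triangle.
k4 = {u: {0, 1, 2, 3} - {u} for u in range(4)}
print(fvs(k4, 1), fvs(k4, 2))
```

The branching depth is bounded by k, which is what makes the search a parameterized algorithm; its cost per level, however, depends on the cycle length, which is exactly where the girth lemma of the talk enters.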

Author: Vincent Schmitt (V.Schmitt at mcs.le.ac.uk), T: 0116 252 3813. 