Computer Science Seminars and Short Courses — 2000/01

The Computer Science Group holds regular advanced and research-level meetings, mainly during the two University teaching semesters. These meetings take the form of Research Seminars, which consist of one-hour talks given by invited speakers or members of the Department on current research; Reading Groups, which involve a small number of staff and research students studying an advanced book or research paper; and Lecture Courses, which are usually delivered by a member of staff and pitched at advanced postgraduate level. Such lecture courses are also taken by staff and research students.

Semester 1

Seminar programme
Seminar details

Applied categorical logic: Information systems specification in industry Prof. Michael Johnson (Macquarie University, Division of Information and Communication Sciences)

Category theory is sometimes viewed as even more esoteric than other branches of pure mathematics. Yet many of the basic concepts of category theory are of great value in industrial consultancies. This talk describes some of my consultancy work, which is based on using categorical universal algebra in information system specification. Most of the talk should be comprehensible to people with no knowledge of category theory or of information system specification.

The Relative Complexity of Approximate Counting Prof. Martin Dyer (Leeds University)
Two natural classes of counting problems that are interreducible
under approximation-preserving reductions are:
(i) those that admit a particular kind of efficient approximation algorithm
known as an FPRAS, and
(ii) those that are complete for #P with respect to approximation-preserving
reducibility.
We describe and investigate not only these two classes but also
a third class, of intermediate complexity,
that is not known to be identical to (i) or (ii).
The third class can be characterised as the hardest problems in
a logically defined subclass of #P.
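For readers unfamiliar with the term, the standard definition of an FPRAS (not spelled out in the abstract) runs roughly as follows: a fully polynomial randomised approximation scheme for a counting function $f$ is a randomised algorithm $A$ which, given an instance $x$ and an accuracy parameter $\epsilon > 0$, runs in time polynomial in $|x|$ and $1/\epsilon$ and satisfies
\[ \Pr\bigl[(1-\epsilon)\,f(x) \le A(x,\epsilon) \le (1+\epsilon)\,f(x)\bigr] \ge \tfrac{3}{4}. \]
(The constant $3/4$ is conventional and can be boosted by taking medians of repeated runs.)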
The Hospitals / Residents Problem with Ties Dr. David Manlove (University of Glasgow, Department of Computer Science)
In a number of countries, the annual process of assigning graduating
medical students (also known as residents) to their first hospital
appointments is accomplished by a centralised matching scheme. The
allocation process involves constructing a stable matching,
based on the preferences of residents over hospitals, and vice versa.
In this talk, I will describe the classical algorithms that lie at the
heart of these matching schemes. I will then outline some algorithmic
consequences of permitting hospitals and residents to express ties in
their preference lists. In particular, I will present an algorithm
for matching residents to hospitals when indifference is allowed.
This work has been carried out jointly with Rob Irving and Sandy Scott,
also of the University of Glasgow, as part of the EPSRC-funded
Stable Matching Algorithms research project. The talk should
be accessible to a general CS/Maths audience, including those with
no prior knowledge of algorithmics.
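By way of illustration only (the talk's own algorithms and the treatment of ties are not reproduced here), the classical resident-proposing Gale-Shapley procedure for the basic Hospitals/Residents problem without ties can be sketched in a few lines of Python; all names and data structures below are illustrative:

# Minimal, illustrative sketch of the resident-proposing Gale-Shapley
# procedure for the basic Hospitals/Residents problem (no ties).
def hospitals_residents(res_pref, hosp_pref, capacity):
    # res_pref[r]  : r's acceptable hospitals, most preferred first
    # hosp_pref[h] : h's acceptable residents, most preferred first
    # capacity[h]  : number of posts at hospital h
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in hosp_pref.items()}
    assigned = {h: [] for h in hosp_pref}   # provisional assignees of each hospital
    next_idx = {r: 0 for r in res_pref}     # next entry of r's list to try
    free = list(res_pref)                   # residents not currently assigned
    while free:
        r = free.pop()
        if next_idx[r] >= len(res_pref[r]):
            continue                        # r has exhausted their preference list
        h = res_pref[r][next_idx[r]]
        next_idx[r] += 1
        if r not in rank[h]:
            free.append(r)                  # h does not rank r: r tries the next hospital
            continue
        assigned[h].append(r)               # provisionally assign r to h
        if len(assigned[h]) > capacity[h]:  # h over-subscribed: reject its worst assignee
            worst = max(assigned[h], key=rank[h].get)
            assigned[h].remove(worst)
            free.append(worst)
    return assigned

Once ties are permitted, stability itself admits several competing definitions and the sketch above no longer applies directly; that is precisely the territory covered in the talk.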
Reasoning about Real-Time Programs Using Idle-Invariant Assertions Dr. Ian Hays (Queensland University)
We develop a set of laws for reasoning about real-time programs
using assertions (preconditions and postconditions) in the
style of Hoare. In the real-time context assertions may refer
to the current time and to the value of external inputs, which
are not under the direct control of the program and hence not
guaranteed to be stable with respect to the passage of time
(even if the program does not modify any of the variables
under its control). Hence in order to reason about real-time
programs, we make use of idle-invariant assertions: assertions
that are invariant to just the passage of time.
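One simplified way to read this last notion (a rough gloss, not the talk's formal definition): writing $\tau$ for the current time and considering the time variable alone, an assertion $P$ could be called idle-invariant when
\[ P \wedge \tau \le \tau' \;\Rightarrow\; P[\tau'/\tau], \]
that is, once $P$ holds it cannot be falsified merely by letting time advance while the variables under the program's control remain fixed.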
Abstract Stone Duality - The Computational Side Dr Paul Taylor (Queen Mary and Westfield College)
Abstract Stone Duality is a new axiomatisation for topological spaces
without arbitrary unions, intended to give them a direct computational
basis. This talk will largely be concerned with the translation of
these axioms into a programming language (PROLOG).
There should be something for everybody - some topological ideas, some
elementary category theory, some lambda calculus, some sequent calculus
and some (standard) compiler techniques. The category theory and
topology are needed to understand WHY I'm doing this, not HOW.
Prof. R. Brown (School of Informatics, Mathematics Division, University of Wales, Bangor) (joint work with F. A.-A. Al-Agl and R. Steiner)
No previous knowledge of multiple categories will be assumed. We begin by reviewing the equivalence of 2-categories and double categories with connection (C. B. Spencer, 1977), showing how this gives a context for `2-dimensional algebra'. Then we explain the overall method of the recent extension to all dimensions. This extension allows for the description of a `commutative cube' and for the construction of a monoidal closed structure in the globular case. The methods go back to the deep past of relative homotopy theory. There are current investigations (Goubault, Gaucher) on applications to concurrency.

Dr Steve Vickers (Open University, Milton Keynes, Department of Pure Mathematics)

One possible view of schemas such as those in Z is that they present logical theories: they declare some symbols and then set out some axioms expressed in terms of those symbols. This is not a trivial comparison, since there are features in Z, an obvious one being the type system, that do not translate directly into first-order logic. However, in the so-called "geometric" logic this kind of translation can be made to work despite the fact that the logic is still first-order. The trick is that the logic is infinitary, and this allows some types to be characterized within the logic itself in a way that is impossible in ordinary classical logic. Geometric theories correspond to Grothendieck toposes, their so-called "classifying toposes", and this means that a schema (of the right form) can be viewed as presenting a topos. That might seem unwelcome news if you don't know what a topos is, but it does provide a clean link to a deep body of mathematics, and in fact it illuminates the question of what toposes themselves are, particularly in their mysterious guise as "generalized topological spaces": the topos provides a "space of implementations" of the schema. I shall also describe how the geometric logic can bear an observational interpretation that suggests it can have specificational aptness as well as mathematical elegance.

Programming modulo Alpha-conversion Dr Andrew Pitts (Cambridge University, Computer Laboratory)

This talk will describe work in progress on the design of a programming language for recursively defined functions on user-declared inductive datatypes involving binding constructors. The language has a novel type constructor for abstractions. Recursive definitions can use a form of pattern-matching on bound names of abstractions, while the type system ensures that only those definitions respecting alpha-conversion of bound names typecheck. The language design is `semantically driven', inasmuch as it arises out of contemplating the model of variable binding in terms of the Fraenkel-Mostowski permutation model of set theory given by Gabbay and Pitts in Proc. LICS '99, pp. 214--224.

First-order projections and polynomial-space games Prof. Iain Stewart (Leicester)

First-order projections are very restricted logical reductions between decision problems. They were developed by Neil Immerman in the 1980s, and many problems have since been proven complete for some complexity class via these reductions. In this talk I will show how the existence of a first-order projection from the problem TC (transitive closure) to some other problem $\Omega$ enables us to automatically deduce that a natural game problem, $\mathcal{LG}(\Omega)$, whose instances are labelled instances of $\Omega$, is complete for {\bf PSPACE} (via log-space reductions).
Our analysis is strongly dependent upon the reduction from TC to $\Omega$ being a logical projection, in that it fails should the reduction be, for example, a log-space reduction or a quantifier-free first-order translation. This seminar will be accessible to most in that I will only assume a nodding acquaintance with complexity theory, and I will fully describe first-order projections and the associated games. (This is joint work with Argimiro Arratia, Universidad Simon Bolivar, Caracas.)

What is the Eckmann-Hilton argument? Dr J.M.E. Hyland (Cambridge)

The Eckmann-Hilton argument is generally regarded as the reason why the higher homotopy groups are abelian. I shall explain what it means in the context of algebraic structure on symmetric monoidal categories. This topic provides the foundations for a theory of wiring diagrams in computer science and is also a necessary step in the development of the higher-dimensional linear algebra needed to provide an account of topological field theory. In the talk I shall concentrate on characterising some simple concrete categories of wiring diagrams. (This is an account of work done with John Power.)

The Generic Approximation Lemma Dr Graham Hutton

The approximation lemma is a simplification of the well-known take lemma, and is used to prove properties of programs that produce lists of values. In this talk I will show how the approximation lemma, unlike the take lemma, can naturally be generalised from lists to a large class of datatypes, and present a generic approximation lemma that is parametric in the datatype to which it applies.
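For orientation, the list version of the lemma (the standard formulation from the functional programming literature, not restated in the abstract) takes the following shape: with $approx$ defined by
\[ approx\ (n+1)\ [\,] = [\,], \qquad approx\ (n+1)\ (x:xs) = x : approx\ n\ xs \]
(and $approx\ 0$ left undefined), two possibly infinite lists satisfy $xs = ys$ if and only if $approx\ n\ xs = approx\ n\ ys$ for every natural number $n$.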
Semester 2

Seminar programme

Seminar details

New Tractable Classes From Old Dr Richard Gault (Oxford University - Computing Laboratory)

Many combinatorial problems can be naturally expressed as "Constraint Satisfaction Problems" (CSPs). This class of problems is known to be NP-hard in general, but a number of restrictions of the general problem have been identified which ensure tractability. In this talk, we shall describe a framework, due to Jeavons, Cohen, et al., which enables a uniform description of these tractable subclasses. In addition, we will introduce a general method of combining two or more tractable subclasses so as to generate new tractable subclasses. These new classes are genuinely novel, and have not previously been identified. In addition, our algorithm for solving them can be generalised to an algorithm for solving a much wider variety of CSPs than one might at first suppose.

Resource Semantics, Bunched Logic and Logic Programming Prof. David Pym (Queen Mary and Westfield College, Department of Computer Science)

Starting from a basic model of resources, we derive BI, the logic of bunched implications, in which a multiplicative and an additive implication live side-by-side. Propositional BI may be seen to arise from an analysis of the proof-theoretic relationship between conjunction and implication, and may be viewed as a merging of intuitionistic logic and intuitionistic linear logic. Predicate BI includes, in addition to the usual additive quantifiers, multiplicative (or intensional) quantifiers, which arise from observing restrictions on structural rules at the level of terms. These restrictions allow the distinction between additive and multiplicative predication for each propositional connective. We discuss proof theory, forcing and categorical semantics for BI and describe a range of applications, with particular emphasis on logic programming with BI, which comes along with a natural account of locality (sharing) and globality (non-sharing).

Imperfect Information Prof. Wilfrid Hodges (Queen Mary and Westfield College, School of Mathematics)

The logic of imperfect information has been around for some forty years, and makes regular appearances in the literature on natural language semantics. Its position in computer science is more tenuous, though there are hints that it, or something like it, ought to be useful for analysing imperfect communication of information between systems. My own interest in it is that I recently found a Tarski-style semantics for it (as opposed to the standard game-theoretic semantics given implicitly by Henkin in the 1950s), and this allowed us to prove some surprising facts about the logic. The talk will survey what is known, mentioning contributions of Henkin, Ehrenfeucht, Hintikka, Enderton, Walkoe, Blass, Gurevich, Vaananen, Hodges, Cameron, Caicedo, Krynicki, Sandu and Bradfield.

Periodicity of two-dimensional arrays Dr J K Truss (University of Leeds)

I propose a solution to a problem of Maurice Nivat. A {\em two-dimensional array} is a function $f$ from $Z^2$ to some 'alphabet' $\Sigma$. The array is said to be {\em periodic} if there is a non-zero vector ${\bf a}$ (necessarily with integer co-ordinates) such that $f({\bf x} + {\bf a}) = f({\bf x})$ for all ${\bf x}$ in $Z^2$. Nivat proposed a sufficient condition for the periodicity of $f$ in terms of the 'views' exhibited by $f$. Specifically, for positive integers $m$ and $n$, an $m \times n$ {\em view} is an $m \times n$ matrix which 'appears' somewhere in $f$ (in an obvious sense).
Nivat's condition is that for some $m$ and $n$, there are at most $mn$ $m \times n$-views exhibited by $f$. I remark that this condition is definitely not {\em necessary} for periodicity, and I explain the two main ideas in a proof that it is sufficient. The first is that, to establish periodicity, one tries to show that the whole array is determined by the list of views from a finite subarray; the other is that, to use induction, one should also consider restrictions of $f$ to more complicated subsets of $Z^2$ than just finite rectangles.

Dr Iain Phillips (Imperial College)

It has long been recognised that ordinary process algebra has difficulty dealing with actions of different priority, such as, for instance, an interrupt action of high priority. Various solutions have been proposed. We introduce a new approach, involving the addition of ``priority guards'' to the summation operator of Milner's process calculus CCS. In our approach, priority is \emph{unstratified}, meaning that actions are not assigned fixed levels, so that the same action can have different priority depending on where it appears in a program. An important feature is that, unlike in other unstratified accounts of priority in CCS (such as that of Camilleri and Winskel), we can treat inputs and outputs symmetrically.

Dr David Duke (University of Bath, Department of Mathematical Sciences)
The design of an effective user interface often involves tacit
understanding of users' perceptual and cognitive capabilities.
However, as interface designers explore new technologies such as
speech and gesture, the complexity of interactions can confound
craft-based intuitions. This talk explains the rationale behind
an approach to modelling interaction in which formal methods are
used to represent both the interface and a model of human
cognitive resources. The aim is to use the expressive power of
mathematical models to better understand the interaction between
computational resources and cognitive constraints.
Two recent directions opened up by this work will also be described.
The first generalises the modelling approach to give a framework
for "macro-theory", addressing interaction between agents
across a range of abstraction levels. The second is a specific
application of the modelling approach to understand issues in the
visualisation of large-scale datasets.
Classifying the complexity of temporal constraints Dr Andrei Krokhin (Oxford University - Computing Laboratory)

Reasoning about temporal constraints is an important task in many areas of computer science and elsewhere, such as scheduling, planning, technical diagnosis, molecular biology, and archeology. Several frameworks for formalizing this type of problem have been suggested; for instance, the {\em point algebra} (for expressing relations between time points), the {\em point-interval algebra} (for expressing relations between time points and intervals) and the famous {\em Allen's interval algebra} for expressing relations between time intervals. This algebra has proven to be useful and convenient, and it has also become the kernel of some other formalisms. Due to computational hardness (the basic satisfiability problem in Allen's algebra is NP-complete), it is unlikely that efficient algorithms exist for reasoning in the full algebra. This computational difficulty has motivated the study of the complexity of fragments of the algebra, and the search for effective heuristics based on tractable fragments. In this talk we present a complete classification of complexity in Allen's algebra. This is joint work with Peter Jeavons (Oxford University) and Peter Jonsson (Linkoeping University, Sweden).

An introduction to relation algebra and games Dr Robin Hirsch (University College London, Department of Computer Science)

Boolean algebra can be thought of as an algebraisation of fields of unary relations (sets). Stone's theorem states that every boolean algebra can be represented as a field of sets. Similarly, relation algebra is an algebraic treatment of fields of binary relations, but not every relation algebra can be represented as a field of binary relations. I'll show how to use games to characterise which relation algebras are representable. I will then continue with one of two alternatives: (a) applications of these game-theoretic techniques to temporal constraints and planning, or (b) a connection between a probabilistic graph construction due to Erdos and a representation problem for relation algebras. If the audience is more interested in applications in computer science I will talk about (a), but if there is more interest in the mathematical aspects I will do (b).

Graph properties are often easier than you think Prof. Etienne Grandjean (University of Caen, GREYC)
We are interested in some strong relationships between computational complexity and logical definability. We first recall how the class NLIN (resp. DLIN), that is, the class of problems recognizable in nondeterministic (resp. deterministic) linear time, is computationally and logically (resp. algebraically) robust. Note that NLIN contains most natural combinatorial NP problems.

Secondly, we are interested in problems regarded as sets of first-order finite structures (typically, sets of graphs G=(V,E), where E is a binary relation on the vertex domain V), and we study the class, denoted VertexNLIN, of problems that are recognizable by RAMs in nondeterministic time O(n), where n is the size of the domain (e.g., for a graph, its number of vertices). Note that the computation time of such a (nondeterministic) RAM is too short to read the whole input (graph) structure.

We show that VertexNLIN is a robust class from a logical (resp. computational) point of view. The interest of this class is enhanced by the fact, which we will show, that it contains a number of combinatorial graph properties, namely connectivity, biconnectivity, hamiltonicity, nonplanarity, cubic subgraph, etc.

Noticing that most of those properties are monotone, we will also present logical (resp. computational) characterizations of the subclass of monotone problems in VertexNLIN, which can be compared to similar characterizations of the larger class Monotone NP proved by Iain Stewart.

Finally, we will prove that some graph properties are not in VertexNLIN and will present some structural properties of this class. (Joint work with C. Lautemann, F. Olive and S. Ranaivoson.)
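To give a flavour of why such properties can be decided in nondeterministic time O(n) even though the whole edge relation cannot be read, here is a small illustration constructed for this page (not taken from the talk): for the cubic-subgraph property, a nondeterministic machine can guess a vertex set S together with, for each vertex of S, its three neighbours inside the intended subgraph, and then verify the guess with O(n) unit-cost queries to the edge relation. A Python sketch of such a verifier, with all names illustrative:

# Illustrative verifier for a guessed cubic (3-regular) subgraph.
# edge(u, v) is assumed to be a unit-cost membership query on the input
# edge relation; S is the guessed vertex set and nbrs[v] the guessed list
# of v's three neighbours within the subgraph.  The whole check uses
# O(|S|) = O(n) steps, never reading the full edge relation.
def check_cubic_certificate(edge, S, nbrs):
    S = set(S)
    for v in S:
        if len(set(nbrs[v])) != 3:
            return False              # v must have exactly three distinct neighbours
        for u in nbrs[v]:
            if u == v or u not in S:
                return False          # neighbours must be other vertices of S
            if v not in nbrs[u]:
                return False          # the guessed adjacency must be symmetric
            if not (edge(u, v) or edge(v, u)):
                return False          # each guessed edge must exist in the input graph
    return bool(S)                    # a cubic subgraph must be non-empty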
Dr Paul Krause (Philips Research Laboratories)

The theory of Computing goes back over three hundred years to Leibniz's design for a calculating machine. Since then it has exercised the minds of some of the world's finest mathematicians, and still continues to do so. But how well does the theory support the practice of Computing? If Computing is a science, then we ought to be able to use the theory to make predictions about the behaviour of our computing machines. Yet the modern world is littered with heroic failures where computing machines have shown, and continue to show, behaviour that appears to defy reason. Computing fails as a science because it lacks an experimental method. After three hundred years, we still do not have a clear consensus on how a hypothesis can be tested in the real world of developing complex software. The reasons for this will be discussed during this talk, and a way forward proposed. The latter involves the use of probabilistic "causal" models. I will describe one such model and present results of its validation on a series of large-scale development projects at Philips.

Introduction to Parameterized Complexity Dr Venkatesh Raman (The Institute of Mathematical Sciences, Chennai, India)
Parameterized Complexity (developed by R. Downey and M. Fellows) is a
recent promising approach to deal with computationally
hard (typically NP-hard) problems. The idea is to identify a
parameter k in the input which is likely to be small,
and try to restrict the combinatorial exponential explosion
to the parameter. More specifically, given a problem with input
size n and a parameter k, the goal is to design, if possible, an algorithm
whose running time is O(f(k) n^c) where c is a constant independent
of k, and f is any (typically exponential) function of k.
In this introduction, we will look at such algorithm design techniques
illustrated through many examples. Some of the problems we will look at are:
Given a graph G on n vertices and a parameter k, does it have a vertex cover
(or a feedback vertex set, or a simple path, or a simple cycle) on k vertices?
Given a boolean formula in CNF form, is there an assignment to the variables
satisfying at least k clauses?
We will show that these problems have O(f(k)n^c) type algorithms and
also look at some that are unlikely to have such an efficient
algorithm.
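As a concrete taste of the O(f(k) n^c) pattern described above, the classical bounded search tree algorithm decides k-Vertex Cover in time O(2^k |E|) by branching on an endpoint of an uncovered edge. The Python sketch below is a standard textbook illustration rather than anything specific to the talk:

# Illustrative bounded search tree for parameterized Vertex Cover:
# does the graph given by `edges` have a vertex cover of size at most k?
# Each recursive call removes the edges covered by one chosen endpoint,
# so the recursion depth is at most k and the tree has at most 2^k leaves,
# giving O(2^k * |E|) time: the exponential part depends on k alone.
def has_vertex_cover(edges, k):
    edges = list(edges)
    if not edges:
        return True                   # every edge is covered
    if k == 0:
        return False                  # edges remain but no budget left
    u, v = edges[0]                   # one endpoint of this edge must be in any cover
    rest_u = [e for e in edges if u not in e]   # branch 1: take u into the cover
    rest_v = [e for e in edges if v not in e]   # branch 2: take v into the cover
    return has_vertex_cover(rest_u, k - 1) or has_vertex_cover(rest_v, k - 1)

For example, has_vertex_cover([(1, 2), (2, 3), (3, 4)], 2) returns True, witnessed by the cover {2, 3}.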
Author: Vincent Schmitt (V.Schmitt@mcs.le.ac.uk), T: 0116 252 3813.