A theory of programming is intended to aid the construction of programs that meet their specifications; for such a theory to be useful it should capture the essential aspects of the program's behaviour, that is, only those aspects one wishes to observe. And it should do so in a mathematically elegant way. This talk shows how to derive enriched semantics of programming languages so that the algebraic laws established in the original semantic framework can be preserved.
There is a standard hierarchy of sets of numeric functions
E_0, E_2, E_3, ..., E_alpha, ...
indexed by the ordinals. Ordinal 2 corresponds (roughly) to the polynomially bounded functions (sometimes called the feasible functions), 3 to the Kalmar elementary functions, omega to the primitive recursive functions, and epsilon_0 to the functions that are handled by Peano arithmetic. Larger ordinals correspond to more sophisticated descriptions of arithmetic. I will show how all of these fit into a common framework of descriptions (specifications) of functions.
Gödel's T(erm calculus) lambdaG is an applied lambda-calculus for naming those numeric gadgets which occur before epsilon_0. It includes many interesting subsystems. For instance, Primitive Recursive Arithmetic sits at the bottom of lambdaG. It can be seen how the complexity of a numeric gadget is directly related to its syntactic description in this system. We do not need any reference to a model of computation.
In recent years a technique known as tiering (or ramifying) has been applied to lambda-calculi. When this is done to lambdaG the overall strength drops from epsilon_0 to 3, and thus we get a much finer classification of the Kalmar elementary functions. This tiering collapses Primitive Recursive Arithmetic down to level 2, and thus we get a much finer classification of the feasible functions. It is this result that has prompted the interest of some computer scientists.
I will describe some of the historical background to this topic, some of the more recent views of what is going on, and I will try to indicate what tiering is doing.
This talk is about two recently emerging frameworks for theoretically capturing general computational behaviours. It is aimed at anyone interested in a theoretical understanding of computation, both sequential and concurrent.
The talk first focuses on how games semantics offers a general intensional articulation of computational behaviours, taking basic categories of games for call-by-name and call-by-value computation. These universes are illustrated both at the level of interactional behaviour (which is quite down-to-earth) and at the level of categorical structures (where familiar structures such as CCCs and monads pop up). We in particular emphasise how a simple change in intensional structure results in a drastic change in the induced categorical structures, giving new insight into the way in which these categories are about computational behaviours.
After obtaining a good feel for these games universes, we move on to how they can be "written down" using the syntax of the $\pi$-calculus, and what this relationship gives us. The $\pi$-calculus is a calculus of interaction in which only names --- channels of interaction --- are carried in communication. The link between games and the $\pi$-calculus offers not only a deeper understanding of these two apparently quite different frameworks for describing computational behaviours, but also a new technological basis for understanding types and typed interaction. Underpinning the discussion with our result which formally links games semantics and typed name-passing processes, we explore several avenues of research which arise from such an understanding.
Technical results which will be mentioned in the talk are largely the results of my collaboration with many people, including Samson Abramsky, Marcelo Fiore, Guy McCusker and Nobuko Yoshida.
If G is a group generated by a finite set of elements S, then each element of G can be represented by a word consisting of elements from S and/or inverses of elements of S. Different words can represent the same element of the group, and the "word problem" for the group is the problem of deciding whether or not two such words do indeed represent the same element.
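For a free group (one with no defining relations) the word problem is decidable by free reduction: cancel adjacent inverse pairs until none remain. A minimal sketch, using a hypothetical encoding in which generators are lowercase letters and their inverses the corresponding uppercase letters:

```python
def free_reduce(word):
    """Cancel adjacent inverse pairs (e.g. 'aA' or 'Bb') until none remain.
    Generators are lowercase letters; uppercase letters denote inverses."""
    stack = []
    for ch in word:
        if stack and stack[-1] == ch.swapcase():
            stack.pop()          # a generator followed by its inverse cancels
        else:
            stack.append(ch)
    return "".join(stack)

def same_element_free(u, v):
    """In a free group, u and v represent the same element iff
    u * v^-1 freely reduces to the empty word."""
    v_inv = "".join(ch.swapcase() for ch in reversed(v))
    return free_reduce(u + v_inv) == ""
```

For example, `same_element_free("abB", "a")` holds, since `bB` cancels. The Novikov-Boone result says that no such procedure can exist once arbitrary finite sets of defining relations are allowed.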
A group is said to be "finitely presented" if it has a finite set of generators and a finite set of defining relations. The classic result of Novikov and Boone says that there is no algorithm that solves the word problem for finitely presented groups; indeed, there are specific groups where the word problem is unsolvable. To put this another way, there are finitely presented groups where the set W of words that represent the identity is not a recursive set.
The purpose of this talk will be to introduce these notions and to survey some subsequent work, concentrating on the implications for the structure of a group when its word problem belongs to certain classes of languages which lie below the recursive languages (for example, the class of regular languages or the class of context-free languages). We will try to explain what these terms mean and mention some results, but we will not be presenting any proofs.
Central to the science of computing is the construction of programs that meet their specification. Mathematical methods for doing so are valuable and important because they provide the combination of precision and concision that is vital to mastering the complexity of the task. This talk presents my views on how one should teach the mathematics of program construction. I will argue that the current emphasis on program *verification* is counter-productive and that it is paramount that our teaching should focus on how to *construct* theorems, theories and programs. This theme will be illustrated with a number of simple examples. A short presentation will also be given of the MathSpad system and my aims for the future development of the system and its use in supporting the teaching of the mathematics of program construction.
The current semantics documentation of the Unified Modeling Language (UML) has made a significant step towards providing a precise description of the language. However, at present it mainly provides a description of the language's syntax and well-formedness rules. The meaning of the language, which is mainly described in English, is too informal and unstructured to provide a foundation for developing formal analysis and development techniques. Another problem is the scope of the model, which is both complex and large. This paper describes work currently being undertaken by the precise UML group (pUML), an international group of researchers and practitioners, to address these problems. A formalisation strategy (developed through the experiences of the group) is proposed, which concentrates on making precise the meaning of the core elements of UML. This work currently forms part of the group's contribution to the next major revision of UML, UML 2.0.
Program slicing is a technique for simplifying programs by focusing on a subset of variables. Statements which cannot affect the value of these variables at some chosen point in the program are removed to form a slice. There are many interesting variations on this theme, all of which share the idea of simplifying a program by focusing upon some sub-computation. Slicing has been applied to maintenance, testing, debugging, parallelisation, reuse, restructuring, comprehension, program integration and software measurement. This talk will provide an overview of the field and a brief description of some of the work we have undertaken at Goldsmiths on theoretical foundations of slicing and on extensions to the traditional concept of a slice.
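As an illustration, here is a toy backward slicer for straight-line code, using an invented representation of each statement as the variable it defines plus the set of variables it uses; real slicers work over control-flow and dependence graphs rather than this simplified form:

```python
def backward_slice(stmts, criterion):
    """stmts: list of (defined_var, {used_vars}) in program order.
    Returns indices of the statements in the backward slice with
    respect to the variable `criterion` observed at program end."""
    needed = {criterion}
    slice_ix = []
    for i in range(len(stmts) - 1, -1, -1):
        var, uses = stmts[i]
        if var in needed:
            slice_ix.append(i)
            needed.discard(var)   # this definition satisfies the demand...
            needed |= uses        # ...but its operands are now needed
    return sorted(slice_ix)

# x = 1; y = 2; z = x + y; w = y * 2
prog = [("x", set()), ("y", set()), ("z", {"x", "y"}), ("w", {"y"})]
```

Slicing `prog` on `z` keeps the first three statements and removes the irrelevant `w = y * 2`, which is exactly the simplification the talk describes.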
The talk will outline a Petri net semantics as well as a structural operational semantics for an algebra of process expressions. It will specifically address this problem for the Petri Box Calculus (PBC), a model of concurrent computation which combines Petri nets and standard process algebras. The talk will address a range of issues that arise when process algebras and Petri nets are treated within a single framework; in particular, the compositionality of structure and behaviour, refinement, and equivalences.
Probabilistic analysis can establish that certain properties hold (in some meaningful probabilistic sense) where conventional model checkers fail, either because the property simply is not true in a given state (but holds in that state with some acceptable probability), or because exhaustive search of only a portion of the system is feasible. Models used for such analyses are variants of probabilistic automata (such as labelled Markov chains), in which the usual (boolean) transition relation is replaced with its probabilistic version given in the form of a transition probability matrix. We consider the probabilistic temporal logic PCTL of Hansson & Jonsson, based on CTL, which allows one to express properties such as ``the probability of the message being delivered is at least 0.98''. The model checking procedure for this logic over fully probabilistic systems can be reduced to solving a linear equation system, and over concurrent probabilistic systems to a linear optimization problem. We report on some experimental results with MTBDD-based symbolic probabilistic model checking using the Colorado University Decision Diagram package (CUDD).
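A minimal sketch of the reduction for fully probabilistic systems: the probability of eventually reaching a goal state satisfies the linear system x_s = sum_t P(s,t) * x_t with x_goal = 1, solved here by iterating to its least fixed point. The chain and state names are invented for illustration:

```python
def reach_prob(P, goal, tol=1e-12):
    """P: dict mapping each state to a dict of successor probabilities.
    Returns the probability of eventually reaching `goal` from each
    state, by iterating x = P.x with x[goal] pinned to 1."""
    x = {s: 0.0 for s in P}
    x[goal] = 1.0
    while True:
        delta = 0.0
        for s in P:
            if s == goal:
                continue
            new = sum(p * x[t] for t, p in P[s].items())
            delta = max(delta, abs(new - x[s]))
            x[s] = new
        if delta < tol:
            return x

# A lossy channel: a send is delivered with probability 0.98, else lost.
chain = {
    "send": {"delivered": 0.98, "lost": 0.02},
    "lost": {"lost": 1.0},
    "delivered": {"delivered": 1.0},
}
```

Here `reach_prob(chain, "delivered")["send"]` recovers 0.98, matching the PCTL property quoted above; a model checker like the one described does essentially this (or a direct linear solve) over the states satisfying intermediate subformulas.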
A broad variety of location-dependent services will become feasible in the near future due to the use of the Global Positioning System (GPS), which provides location information (latitude, longitude and height) and global timing to sensors or mobile users. The task of sending a message from a source to a destination in a wireless network is known as the routing problem. In a distributed (or localized) algorithm, each node forwards the message towards the destination solely on the basis of its own location and the locations of its neighbors and the destination. Recently, several distributed GPS-based routing protocols for mobile ad hoc networks were reported in the literature. They are variations of directional (DIR) routing methods, in which node A (the source or an intermediate node) transmits a message m to one or several neighbors whose direction is closest to the direction of the destination D. We also found an older method, MFR (most forward progress within radius). We propose a new location-based GEographic DIstance Routing (GEDIR) algorithm. When node A wants to send m to node D, it forwards m to its neighbor C which is closest to D among all neighbors of A. The same procedure is repeated until D, if possible, is eventually reached. 2-hop, flooding, and multiple-path GEDIR, DIR, and MFR methods are also suggested. We show that the directional routing methods are not loop-free, and prove that the GEDIR and MFR methods are inherently loop-free. The power needed to route a message from source to destination is, perhaps, a more important performance measure than the hop count between them. Based on any formula that gives the power needed to transmit or receive a message between two nodes on the basis of the distance between them, we propose efficient power, cost, and power-cost distributed routing algorithms, and prove that they are loop-free. Wireless networks are best modeled by unit graphs, in which nodes are connected by an edge iff the distance between them is at most R (the radius of the graph).
We show how to eliminate some edges from a connected unit graph and obtain a planar connected subgraph. A well-known face tracing (FACE) algorithm that always finds a path between two nodes in a planar connected graph is then modified to reduce the path length and still guarantee delivery (without memorizing past traffic at any node, or making multiple copies of m in the network). An efficient routing algorithm that guarantees delivery (called GFG) is then obtained by switching back and forth between the GEDIR and FACE routing schemes. Routing starts with the GEDIR algorithm until delivery or failure. In case of failure at a node X, it switches to the FACE algorithm until a node closer to the destination than X is detected, and returns to GEDIR until delivery (or the next failure). The performance of the GFG algorithm can be improved if the network is reduced to a set of internal nodes, so that each node is either internal or directly connected to an internal node. This reduces the frequency of invoking the FACE scheme. Power, cost, and power-cost routing algorithms that guarantee delivery can be similarly obtained by combining them with the FACE algorithm, which is invoked in case of failure of a basic method and called off when a chance of recovery emerges, and by running them on internal nodes only until the destination is reached in the last hop. The performance of all the mentioned routing algorithms is being measured on static networks, with plans to test the best of these methods for the case of moving nodes by combining them with a novel scheme for periodic location updates of nodes. This is joint research with several students and colleagues, (to be) published in several papers.
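A sketch of the greedy GEDIR step on a unit graph, in a common variant where failure is declared at a local minimum (no neighbour closer to the destination) rather than by detecting a returned message; this failure point is where GFG would switch over to FACE routing. Node names and the neighbourhood helper are invented for illustration:

```python
import math

def unit_graph_neighbours(nodes, u, R):
    """Neighbours of u in the unit graph of radius R:
    all nodes within Euclidean distance R of u."""
    return [v for v in nodes if v != u and math.dist(nodes[u], nodes[v]) <= R]

def gedir(nodes, src, dst, R):
    """Greedy GEDIR sketch: repeatedly forward to the neighbour closest
    to the destination.  Returns (path, delivered).  Fails at a local
    minimum, where a GFG-style scheme would invoke FACE routing."""
    path, cur = [src], src
    while cur != dst:
        nbrs = unit_graph_neighbours(nodes, cur, R)
        if not nbrs:
            return path, False
        nxt = min(nbrs, key=lambda v: math.dist(nodes[v], nodes[dst]))
        if math.dist(nodes[nxt], nodes[dst]) >= math.dist(nodes[cur], nodes[dst]):
            return path, False        # local minimum: greedy step fails
        path.append(nxt)
        cur = nxt
    return path, True
```

Because every hop strictly decreases the distance to the destination, no node can be revisited, which is the essence of the loop-freedom argument for GEDIR sketched in the abstract.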
Parallel computing is one of the most important advances in the development of contemporary computer science and technology. It has changed our ways of thinking and problem solving, created new research areas and also brought many challenges to us. In this talk, I will first summarize my main contributions to parallel computing in its research areas of parallel algorithms, interconnection networks, parallel programming environments, communication protocols, data-intensive applications, and advanced computing models and techniques. I will then present challenges at the forefront of these areas and promising directions based on my work. Finally I will give a list of possible future research projects which I believe can attract funding and are of common interest to me and colleagues in the department.
We show how linear typing can be used to obtain functional programs which modify heap-allocated data structures in place. We present this both as a ``design pattern'' for writing C-code in a functional style and as a compilation process from linearly typed first-order functional programs into malloc()-free C code. The main technical result is the correctness of this compilation. The crucial innovation over previous linear typing schemes consists of the introduction of a resource type <> which controls the number of constructor symbols such as "cons" in recursive definitions and ensures linear space while restricting expressive power surprisingly little. While the space efficiency brought about by the new typing scheme and the compilation into C can also be realised with state-of-the-art optimising compilers for functional languages such as Ocaml, the present method provides guaranteed bounds on heap space which will be of use for applications such as languages for embedded systems or `proof carrying code'.
The logic programming language Prolog has been used to provide an animator/simulator for the VERILOG Hardware Description Language (HDL). The simulator is based on an operational semantics of a significant subset of the language. Using this approach allows the exploration of sometimes subtle behaviours of parallel programs and the possibility of rapid changes or additions to the semantics of the language covered. It also acts as a check on the validity of the original operational semantics. A literate programming style has been used, allowing the associated documentation to be maintained in step with the Prolog program in a convenient manner.
The use of software components from untrusted or partially-trusted sources is becoming common. A user would like to know, for example, that they will not leak personal data to the net, but it is often infeasible to verify that such components are well-behaved. Instead, they must be executed in a secure environment, or _wrapper_, that provides fine-grain control of the allowable interactions. In [1] we introduced a process calculus with constrained interaction to express wrappers and discussed their security properties. In this work we study information flow properties of wrappers. We present a causal type system that statically captures the allowed flows between wrapped potentially badly-typed components; we use the type system to prove that a non-trivial wrapper has the desired property.
[1] Secure Composition of Insecure Components, Peter Sewell and Jan Vitek. Computer Security Foundations Workshop, CSFW-12, June 1999.
(joint work with Paul Levy and Hayo Thielecke)
I shall present a version of Eugenio Moggi's computational lambda calculus, with an explanation of why I think this formulation is worth consideration. I shall then describe two sound and complete classes of models for it, explain the definitions, and outline why one might be interested in them. The first class is that of closed Freyd-categories, a natural and direct generalisation of the notion of cartesian closed category. The second is that of closed kappa-categories, which are based upon indexed categories and which may be related to modern compiling technology.
A feature is a piece of functionality. For example, possible features of a printer include paper-out detection, duplex printing, and colour printing. Often, new systems are developed by adding features to old ones, a practice which we call "feature integration". When several features are integrated into the same system, they may interfere with each other in undesirable ways; this problem has been called the "feature interaction problem" in telephony systems, where it has received a lot of attention. I will start by reviewing some known examples of feature interaction which have arisen in telecommunications, and some of the approaches to detecting and resolving it. Then I will describe the approach we have taken, which consists of defining a new language construct for defining features, and the case studies we have examined. The feature construct allows the programmer to override behaviour of the base system. I will describe some of the properties of the non-monotonic inference relation that this induces, and perhaps examine feature integration in a more abstract setting.
Simplification orderings are special kinds of orderings on term algebras. They are used in rewriting, computer algebra and automated deduction. Of particular interest in automated deduction is the problem of solving ordering constraints for the known simplification orderings. The problem of solvability of ordering constraints is equivalent to the problem of decidability of the existential first-order theory of term algebras with the corresponding ordering. Two kinds of simplification orderings have been introduced so far: Knuth-Bendix orderings and recursive path orderings. The problem of decidability of Knuth-Bendix ordering constraints was open until recently. In this talk we describe a decision procedure for Knuth-Bendix ordering constraints. The talk is structured as follows: (i) Simplification orderings; (ii) Use of ordering constraints in automated deduction; (iii) Knuth-Bendix orderings; (iv) Decision procedure for the Knuth-Bendix ordering constraints.
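For ground terms (no variables), the Knuth-Bendix ordering can be sketched as: compare total symbol weight first, then head-symbol precedence, then arguments lexicographically. A minimal sketch that ignores the admissibility conditions on weight functions, which only matter in the presence of variables; the term representation and parameter names are invented:

```python
def weight(t, w):
    """Total weight of ground term t = (f, args): sum of the weights
    of all symbol occurrences in t."""
    f, args = t
    return w[f] + sum(weight(a, w) for a in args)

def kbo_greater(s, t, w, prec):
    """Ground Knuth-Bendix ordering sketch: s > t iff s has greater
    weight; or equal weight and a head symbol of higher precedence
    (prec: symbol -> rank); or the same head and lexicographically
    greater arguments."""
    ws, wt = weight(s, w), weight(t, w)
    if ws != wt:
        return ws > wt
    f, sargs = s
    g, targs = t
    if prec[f] != prec[g]:
        return prec[f] > prec[g]
    for a, b in zip(sargs, targs):
        if a != b:
            return kbo_greater(a, b, w, prec)
    return False
```

An ordering constraint such as f(a) > b asks whether the comparison holds for all instantiations; for ground terms, as here, it is a direct computation, and the difficulty the talk addresses arises precisely when variables are present.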
By a classical result of Barwise and Moschovakis, a least fixed point on the basis of a first-order formula is itself first-order definable if and only if its iteration closes within a uniformly bounded finite number of steps. We consider the corresponding decision problem: to decide whether a given fixed point iteration is bounded. This problem, which was first considered in the context of database query optimisation, turns out to be largely undecidable. We review some old and some new undecidability results for some natural fragments, contrasting them with decidability results and some open issues for other interesting fragments.
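The phenomenon can be seen on the textbook example of transitive closure: a least fixed point whose iteration does not close within any uniformly bounded number of steps (the stage count grows with the length of the longest path), and which is accordingly not first-order definable. A small illustration, with an invented helper that iterates a monotone operator and counts the stages:

```python
def lfp_stages(step, bottom):
    """Iterate a monotone operator `step` from `bottom` until it
    closes; return the fixed point and the number of stages taken."""
    x, n = bottom, 0
    while True:
        y = step(x)
        if y == x:
            return x, n
        x, n = y, n + 1

# Transitive closure of an edge relation E as a least fixed point:
#   TC(X) = E  union  { (a, c) : (a, b) in X and (b, c) in E }
E = {(1, 2), (2, 3), (3, 4)}
step = lambda X: E | {(a, c) for (a, b) in X for (b2, c) in E if b == b2}
```

On this 4-node path the iteration takes 3 stages; on a path of length k it takes on the order of k stages, so no uniform bound exists across all finite structures.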
We address the question of how we can understand reasoning in general and mathematical proofs in particular. We argue the need for a high-level understanding of proofs to complement the low-level understanding provided by logic. We propose a role for computation in providing this high-level understanding, namely by the association of proof plans with proofs. Proof plans are defined and examples are given for two families of proofs. Criteria are given for assessing the association of a proof plan with a proof.
The point-free relational calculus has been very successful as a language for discussing general programming principles. However, when it comes to specific applications, the calculus can be rather awkward to use: some things are more clearly and simply expressed using variables. The combination of variables and relational combinators such as converse and choice yields a kind of nondeterministic functional programming language. We give a semantics for such a language, and illustrate with an example application.
We propose an extension of the asynchronous pi-calculus in which a variety of security properties may be captured using types. These are an extension of the Input/Output types for the pi-calculus in which I/O capabilities are assigned specific security levels. We define a typing system which ensures that processes running at security level sigma cannot access resources with a security level higher than sigma. The notion of access control guaranteed by this system is formalized in terms of a Type Safety theorem. We then show that, for a certain class of processes, our system prohibits implicit information flow from high-level to low-level processes. We prove that low-level behaviour cannot be influenced by changes to high-level behaviour. This is formalized as a Non-Interference Theorem with respect to may testing.
A quasiorder is a reflexive, transitive relation. A quasiorder is a wellquasiorder if the quotient partial order is wellfounded and has no infinite antichains. The Egli-Milner ordering on the power set of a wellquasiorder cannot be relied upon to be a wellquasiorder. The largest class of wellquasiorders closed under this operation turns out to be a natural class of structures in other ways, and constitutes the class of better-quasiorders --- the BQOs of the title.
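The notions involved can be stated compactly (a sketch in standard notation):

```latex
% (Q, \le) is a quasiorder: \le is reflexive and transitive.
% (Q, \le) is a wellquasiorder (wqo) iff every infinite sequence
% contains an increasing pair:
\forall (q_n)_{n<\omega}\ \exists i<j \,.\; q_i \le q_j
% The Egli--Milner ordering on subsets A, B \subseteq Q:
A \le_{EM} B \iff
  (\forall a\in A\ \exists b\in B.\ a\le b)
  \wedge
  (\forall b\in B\ \exists a\in A.\ a\le b)
```

The wqo formulation above is equivalent to the quotient order being wellfounded with no infinite antichains, by Ramsey-type arguments.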
We will start by following "Java Security" by Scott Oaks, but will aim to progress onto more technical articles later.
Recently we have begun reading Francis Borceux's book Handbook of Categorical Algebra 2: Categories and Structures.
Dr Argimiro Arratia, Wednesdays 10.30 in G4 (Semester 1).
Description: This is a reasonably informal series of lectures/seminars given by Dr Arratia and based around some of his recent research. The series will start by covering the necessary background material and will then consider the research results achieved so far and some possible further questions and directions.