University of Leicester


Computer Science Seminars and Short Courses — 2004/05

You may need this to find your way...

  • The room is likely to be BEN LT5, which is in the Bennett Building and quite straightforward to find; however...
  • If the room is "Ken Edwards LT3", be aware that the entrance is hidden on the left-hand side as you walk towards the main entrance. Don't go to the main entrance: there is no indication there of where LT3 is...
  • If the room is FJ1119, follow the signs to AVS - Graphics and Photography.

Semester 2

Seminar details

New approximation algorithms for the set cover problem

Dr Asaf Levin (The Hebrew University of Jerusalem)
Friday 22/07, 14:30 in Bennett LT5

In the weighted set-cover problem we are given a set of elements $E=\{ e_1,e_2, \ldots ,e_n \}$ and a collection $\cal F$ of subsets of $E$, where each $S \in \cal F$ has a positive cost $c_{S}$. The problem is to compute a sub-collection $SOL$ such that $\bigcup_{S\in SOL}S=E$ and its cost $\sum_{S\in SOL}c_S$ is minimized. When $|S|\le k\ \forall S\in\cal F$ we obtain the weighted $k$-set cover problem. The special case of the weighted $k$-set cover problem in which each set has unit cost is called the unweighted $k$-set cover problem. It is well known that the greedy algorithm is an $H_k$-approximation algorithm for the weighted (and unweighted) $k$-set cover problem, where $H_k=\sum_{i=1}^k {1 \over i}$ is the $k$-th harmonic number, and that this bound is exact for the greedy algorithm for all constant values of $k$.
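For illustration, here is a minimal sketch of the greedy algorithm in Python (mine, not part of the talk; the function name and the example instance are invented):

    # A minimal sketch of the greedy algorithm for weighted set cover.
    # At each step it picks the set minimising cost per newly covered
    # element, the strategy whose approximation ratio is H_k.
    # Assumes the family actually covers all the elements.
    def greedy_set_cover(elements, family, cost):
        uncovered = set(elements)
        solution = []
        while uncovered:
            best = min((S for S in family if family[S] & uncovered),
                       key=lambda S: cost[S] / len(family[S] & uncovered))
            solution.append(best)
            uncovered -= family[best]
        return solution

    E = {1, 2, 3, 4, 5}
    F = {"S1": {1, 2, 3}, "S2": {3, 4}, "S3": {4, 5}}
    c = {"S1": 1.0, "S2": 1.0, "S3": 1.0}
    print(greedy_set_cover(E, F, c))  # -> ['S1', 'S3']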

The previous best improvement of the greedy algorithm for the unweighted $k$-set cover problem is an $\left( H_k-{1\over 2}\right)$-approximation algorithm. In the first part of the talk I will present a new $\left( H_k-{196\over 390}\right)$-approximation algorithm that improves the previous best approximation ratio for all values of $k\geq 4$. This algorithm is based on applying local search during various stages of the greedy algorithm.

In the second part of the talk I will present the first improvement on the approximation ratio of the greedy algorithm for the weighted $k$-set cover problem for all constant values of $k$. This result shows that the greedy algorithm is not the best possible for approximating the weighted set cover problem. Our method is a modification of the greedy algorithm that allows the algorithm to regret.

Part of this talk is based on joint work with Rafi Hassin.

Online Bin Packing with Cardinality Constraints

Dr Leah Epstein (The University of Haifa)
Thursday 21/07, 10:30 in Bennett LT5

We consider a one-dimensional storage system where each container can store items of bounded total size as well as a bounded number of items k>1. This defines the (standard) bin packing problem with cardinality constraints, which is an important version of bin packing, introduced by Krause, Shen and Schwetman already in 1975. Following previous work on the unbounded space online problem, we establish the exact best competitive ratio for bounded space online algorithms for every value of k. This competitive ratio is a strictly increasing function of k which tends to approximately 2.69103 for large k. Lee and Lee showed in 1985 that the best possible competitive ratio for online bounded space algorithms for the classical bin packing problem is the sum of a series, and tends to one less than the above value as the allowed space (number of open bins) tends to infinity. We further design optimal online bounded space algorithms for variable sized bin packing, where each allowed bin size may have a distinct cardinality constraint, and for the resource augmentation model. All algorithms achieve the exact best possible competitive ratio for the given problem, and use constant numbers of open bins. Finally, we introduce unbounded space online algorithms with smaller competitive ratios than the previously known best algorithms for small values of k, for the standard cardinality constrained problem. These are the first algorithms with competitive ratio below 2 for k=4,5,6.
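For intuition, here is a sketch (mine, not one of the algorithms of the talk) of the simplest online strategy for this problem, first-fit with a cardinality cap of k items per bin; the bounded space algorithms analysed in the talk are more refined, keeping only a constant number of bins open:

    # First-fit bin packing with a cardinality constraint: each bin may
    # hold items of total size at most 1 and at most k items. A naive
    # baseline only; it keeps all bins open, so it is not bounded space.
    def first_fit_cardinality(items, k):
        bins = []  # each bin is a list of item sizes
        for x in items:
            for b in bins:
                if len(b) < k and sum(b) + x <= 1.0:
                    b.append(x)
                    break
            else:
                bins.append([x])  # no open bin fits: open a new one
        return bins

    print(first_fit_cardinality([0.5, 0.5, 0.1, 0.1, 0.1], k=2))
    # -> [[0.5, 0.5], [0.1, 0.1], [0.1]]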

Aspect-oriented Middleware

Dr Hans-Arno Jacobsen (The University of Toronto)
Friday 08/07, 14:30 in Bennett LT5

Over the past decade, middleware platforms such as CORBA, J2EE, and Web Services have become increasingly popular, addressing software engineering problems for distributed application development in the most diverse application domains and under greatly varying requirements. For many of these platforms, this proliferation has led to complicated architectures, complex abstractions, and bloated implementations. We think this is mainly due to the interaction of too many (crosscutting) concerns, imposed by a wide range of requirements, and to the inability to adequately modularize these concerns. For example, it is not possible to easily customize a given middleware implementation or flexibly adapt it to address ever-changing domain requirements.

In this talk we will present results drawn from our on-going Aspect Oriented Middleware research project. We will first present examples illustrating the phenomenon of logic tangling and implementation convolution, which we believe to be central to the problems outlined above. This phenomenon is not due to bad design but rather due to the limitations of conventional architectural decomposition methodologies. We will show how aspect-oriented programming (AOP) techniques can remedy this problem. We will present results on quantifying logic tangling in several existing middleware implementations through aspect mining. We will briefly review our mining methodology and, time permitting, describe our Eclipse-based Prism tool chain, which we have developed to aid in the discovery of aspects in large code bases. Our aspect mining results clearly indicate the inherent presence of aspects in (legacy) platform implementations. Moreover, our extensive aspect-oriented refactoring efforts strongly suggest that AOP can improve the customization, configurability, modularity, and computational efficiency of an aspect-oriented middleware implementation.

This research is joint work with Charles Zhang.

Syntactic Characterisations of Polynomial-Time Optimisation Classes

Dr Prabhu Manyem (School of Information Technology and Mathematical Sciences - University of Ballarat)
Thursday 07/07, 10:30 in Bennett LT5

In Descriptive Complexity, there is a vast amount of literature on decision problems and their classes, such as P, NP, L and NL. However, research on optimisation problems has been limited. Optimisation problems corresponding to the NP class have been characterised in terms of logic expressions by Kolaitis and Thakur, Panconesi and Ranjan, and Zimand. Here, we attempt to characterise the optimisation versions of subclasses of P via expressions in second-order logic, many of them using universal Horn formulae with successor relations.

Proof Preserving Transformations

Dr Piotr Kosiuczenko (The University of Leicester)
Monday 27/06, 15:00 in Ben LT5

Redesign of a structure requires transformation of the corresponding specification. In this talk we will discuss the way proofs are transformed via what we call interpretation functions. We provide a sufficient condition for such functions to preserve proofs in equational logic, as well as proofs in other logical systems such as propositional logic with modus ponens, proofs using the resolution rule, and induction. These results allow us to identify a class of specification-preserving system redesigns.

A Formal Approach to Service Specification and Matching based on Graph Transformation

Dr Reiko Heckel (The University of Leicester)
Monday 27/06, 15:00 in Ben LT5

When Web services are composed by linking service providers and requestors, the requestor's requirements for a "useful" service have to be matched against the service description offered by the provider. Among other things, service specifications (requirements or descriptions) may contain operation contracts specifying pre-conditions and effects of (required or provided) operations.

In this presentation we provide a semi-formal, UML-based notation for contracts and contract matching, as well as a formalization of these notions in terms of graph transformation. We establish the desired semantic relation between requestor and provider specifications and prove the soundness of our syntactic notion of matching w.r.t. this relation.

Reversing CCS with One-time Communication Keys

Dr Irek Ulidowski (The University of Leicester)
Monday 27/06, 15:00 in Ben LT5

Reversibility of Milner's CCS has been studied by Danos and Krivine in their language RCCS, which uses separate memories and labels transitions with information about threads. We introduce a different approach, where processes maintain their structure as they evolve. We keep track of communications by binding together the two partner processes with ``communication keys'', which are generated freshly for each new communication. Just as with RCCS, we can use the calculus to model reversible biological reactions.

FireWorks: A Formal Transformational Approach to Features (joint work with Mark Ryan)

Dr Pierre-Yves Schobbens (Université de Namur)
Monday 27/06, 14:00 in Ben LT5

The FireWorks approach to features is based on development by refinements. We assume that the base product is refined through domain-defined levels. Additions to this base product are defined as transformations at each level. Transformations can initially be as simple as a replay of the editing operations used to obtain the enhanced product, but are then improved to work on a larger class of base products. We are especially interested in applying a transformation to the result of a previous transformation (feature); while the transformation can be defined by cases for this purpose, it is better to obtain a single, uniform transformation that can also deal with initially unforeseen features. This gives us the basis to define a first type of feature interference: when the levels obtained by composing the features are no longer refinements. Features that preserve refinement can thus be proved never to yield interference of this type. Up to now we have mainly used substitutions as the transformations representing features, but more is possible. In particular, for graphical formalisms, some graph transformations could be adequate.

Infinitary rewriting

Dr Fer-Jan de Vries (The University of Leicester)
Monday 27/06, 11:30 in Ben LT5

The application of rewriting theory to functional languages leads naturally to the consideration of infinite rewriting sequences and their limits. Our theory of transfinite rewriting puts this intuitive concept on a sound footing through the concept of a strongly convergent rewriting sequence, of which the crucial property is that not only does the sequence of terms tend to a limit, but the sequence of redex positions tends to infinite depth.
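Concretely, the definition can be stated as follows (my rendering of the standard notion, not a quotation from the talk): a reduction sequence $(t_\beta)_{\beta\le\lambda}$, contracting at each step $t_\beta \to t_{\beta+1}$ a redex at depth $d_\beta$, is strongly convergent if

$$ \text{for every limit ordinal } \mu \le \lambda:\quad t_\beta \to t_\mu \ \text{ and } \ d_\beta \to \infty \ \text{ as } \beta \to \mu. $$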

This notion allows us to demonstrate some classical results for orthogonal systems. However, it transpires that the most important of these, confluence, fails in certain cases. These cases can be precisely characterised, and confluence can be re-established modulo the equality of the set of terms which obstruct precise confluence. The offending terms are of a form that can reasonably be viewed as representing infinite computations that produce no result, and in this sense are undefined. Further consideration of this set of terms reveals that any set which satisfies certain natural axioms can serve as the class of undefined terms.

All this leads to the following construction on a finite orthogonal system R: (0) identify a set of undefined terms; (1) add a new bottom element to the syntax; (2) consider the set of finite and infinite terms; (3) add a new rule that rewrites undefined terms to bottom. This infinite extension of R is confluent and normalising, and immediately yields a syntactic model of the original orthogonal system.

For the lambda calculus this has yielded a new uniform characterisation of several known models (e.g. the Böhm model), and a construction of many more. The latter is current work in progress by Paula Severi and me.
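As a concrete instance of the construction above (my gloss): taking the undefined terms of the pure lambda calculus to be the unsolvable ones, step (3) adds the rule $M \to \bot$ for every unsolvable $M$, and the infinitary normal forms one then obtains are exactly Böhm trees.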

Abstract rewriting theory

Dr Marc Aiguier (Université d'Évry, France)
Monday 27/06, 10:30 in Ben LT5

When rewriting is used to generate convergent and complete rewrite systems in order to answer the validity problem for some theories, all the rewriting theories rely on the same set of notions, properties and methods. Rewriting techniques have mainly been used to answer the validity problem of equational theories, that is, to compute congruences. However, recently they have been extended in order to be applied to other algebraic structures such as pre-orders and orders. In this talk, we investigate an abstract form of rewriting, following the ``logical-system independent'' paradigm. Hence, we provide a few simple conditions (or axioms) under which rewriting (and thus the set of classical properties and methods) can be modelled, understood, studied, proven and generalized. This enables us to extend rewriting techniques to algebraic structures other than congruences and pre-orders, such as congruences closed under monotonicity and modus ponens. This allows us to introduce convergent rewrite systems, and to describe both deduction procedures for their corresponding theories and a Knuth-Bendix style completion procedure.

Operational domain theory and topology of a sequential programming language.

Dr Weng Kin Ho (The University of Birmingham)
Friday 03/06, 14:30

A number of authors have exported domain-theoretic techniques from denotational semantics to the operational study of contextual equivalence and pre-order. We develop this further and additionally export topological techniques. In particular, we work with an operational notion of compact set and show that total programs with values of certain types are uniformly continuous on compact sets of total elements. We apply this and other conclusions to prove the correctness of non-trivial programs that manipulate infinite data. What is interesting is that the development applies to sequential programming languages.

Paper: M.H. Escardo and W.K. Ho. Operational domain theory and topology of a sequential programming language. LICS 2005. See http://www.cs.bham.ac.uk/~mhe/papers/index.html

Structured CSP - A Process Algebra as Institution

Dr Markus Roggenbach (The University of Swansea)
Friday 27/05, 14:30

In this talk we extend the process algebra CSP by a 'module concept' that allows us to build complex specifications out of simpler ones. Here we re-use typical structuring mechanisms from algebraic specification as they are realised, e.g., in the algebraic specification language CASL. We demonstrate through various examples that these structuring mechanisms (e.g. extension, union, renaming, parametrisation) are suitable for CSP.

On the theoretical side our approach requires us to formulate the process algebra CSP as an institution. We show that the CSP trace model fits into this setting. Besides the practical outcome in the form of a module concept for CSP, this formulation can also be seen as a natural framework for stating process algebraic laws. Furthermore, formulating a process algebra as an institution links two hitherto unrelated specification formalisms.

Approximating the Distortion

Dr Alexander Hall (ETH Zurich)
Thursday 26/05, 10:30 in MCS G4

Kenyon et al. (STOC 04) compute the distortion between one-dimensional finite point sets when the distortion is small; Papadimitriou and Safra (SODA 05) show that the problem is NP-hard to approximate within a factor of 3, albeit in 3 dimensions. We solve an open problem in these two papers by demonstrating that, when the distortion is large, it is hard to approximate within large factors, even for 1-dimensional point sets. We also introduce additive distortion, and show that it can be easily approximated within a factor of two. This is joint work with Christos Papadimitriou.
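For readers unfamiliar with the term (my gloss, not part of the abstract): the distortion of a bijection $f : A \to B$ between finite point sets is

$$ \mathrm{dist}(f) = \left( \max_{x \neq y} \frac{|f(x)-f(y)|}{|x-y|} \right) \cdot \left( \max_{x \neq y} \frac{|x-y|}{|f(x)-f(y)|} \right), $$

and the distortion between $A$ and $B$ is the minimum of $\mathrm{dist}(f)$ over all bijections $f$.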

Foundations of Modular Structural Operational Semantics

Prof P.D. Mosses (The University of Swansea)
Friday 06/05, 14:30

Modular SOS (MSOS) is a variant of conventional Structural Operational Semantics (SOS). Using MSOS, the transition rules for each construct of a programming language can be given incrementally, and they do not need reformulation when further constructs are added to the described language. MSOS thus provides an exceptionally high degree of modularity in semantic descriptions, removing a shortcoming of the original SOS framework.

The crucial feature of MSOS is that labels on transitions are now morphisms of a category, and are exploited to ensure the appropriate flow of information (such as environments and stores) during computations.
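A small example of the resulting modularity (my rendering in MSOS style, not taken from the talk): the rule for sequential composition simply propagates the whole label $X$,

$$ \frac{c_1 \xrightarrow{X} c_1'}{c_1\,;\,c_2 \xrightarrow{X} c_1'\,;\,c_2} $$

so it never has to be rewritten when new label components (environments, stores, exceptions, ...) are added to the described language.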

The talk explains the foundations of MSOS, and illustrates how MSOS descriptions are written in practice. It also discusses standard notions of semantic equivalence for MSOS, and indicates how MSOS descriptions can be translated to Prolog.

Familiarity with the basic ideas of conventional SOS and labelled transition systems is assumed.

The canonical relation lifting and coalgebraic modal logic

Dr Bartek Klin (The University of Sussex)
Friday 29/04, 14:30 in Ben LT5

In the coalgebraic approach to state-based processes, relation lifting is used to model bisimulation abstractly. On the other hand, predicate lifting is used in coalgebraic modal logics. I will consider both liftings in a fibrational setting, show how they are related, and describe how the least relation liftings give rise to modal logics expressive for a wide class of functors. In the resulting logics, modalities of multiple arity can appear.

Energy efficient broadcast trees in the Euclidean plane

Christoph Ambühl (Dalle Molle Institute for Artificial Intelligence, Lugano)
Friday 18/03, 14:30 in Ben LT5

Computing energy efficient broadcast trees is one of the most prominent operations in wireless networks. For stations embedded in the Euclidean plane, the best analytic result known to date is a 7.6-approximation algorithm based on computing a Euclidean minimum spanning tree. We improve the analysis of this algorithm and show that its approximation ratio is 6, which matches a previously known lower bound for this algorithm.

The main ingredient of the proof is the following lemma:

Let S be any point set which

  • is a subset of the unit disk around the origin, and
  • includes the origin.

Let T be the Euclidean minimum spanning tree of S. Then the sum of the squared edge lengths of T is bounded by 6.
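The lemma also invites a quick empirical sanity check (a sketch of mine, not part of the proof): sample points in the unit disk together with the origin, compute the Euclidean MST, and sum the squared edge lengths.

    # Empirical check of the lemma: the sum of squared MST edge lengths
    # for points in the unit disk (including the origin) stays below 6.
    import math, random

    def mst_squared_length(points):
        # Prim's algorithm on the complete Euclidean graph, working with
        # squared distances throughout.
        n = len(points)
        in_tree = [False] * n
        dist2 = [float("inf")] * n  # squared distance to the tree
        dist2[0] = 0.0
        total = 0.0
        for _ in range(n):
            u = min((i for i in range(n) if not in_tree[i]),
                    key=lambda i: dist2[i])
            in_tree[u] = True
            total += dist2[u]
            for v in range(n):
                if not in_tree[v]:
                    d2 = ((points[u][0] - points[v][0]) ** 2
                          + (points[u][1] - points[v][1]) ** 2)
                    dist2[v] = min(dist2[v], d2)
        return total

    pts = [(0.0, 0.0)]  # the origin
    for _ in range(200):  # uniform sample from the unit disk
        r, a = math.sqrt(random.random()), random.uniform(0.0, 2.0 * math.pi)
        pts.append((r * math.cos(a), r * math.sin(a)))
    print(mst_squared_length(pts))  # < 6, as the lemma guarantees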

OMML: A Behavioural Model Interchange Format

Dr Andrea Zisman (City University London)
Friday 11/03, 14:30 in Ben LT5

Large distributed systems are normally developed by combining various nodes that are produced by different stakeholders, using different technologies, languages, and formalisms. The number and diversity of existing languages for describing behavioural specifications (models) of systems do not enable the integration, sharing, or reuse of models between tools. Incompatible node models cannot be used to help validate overall combined system behaviour. In this talk we present an XML-based model interchange format named OpenModel Modeling Language (OMML). OMML is part of a larger research project called OpenModel to support design, analysis, and validation of multi-stakeholder distributed systems. OMML is a function-rich procedural language that expresses functionality in terms of function/object theories and represents behavioural models in a tool-independent way. OMML is composed of 5 different document types describing executable specification models of the services running at the nodes, information about connections between the various nodes, information about the (abstract) state of the services, and domain specific information to allow standardisation of the terminology used by model developers. We present how OMML can be used to support interchange of models in email, instant messaging, and web services applications. We discuss prototype tools that we have developed to support translation between models expressed in P-EBF, OMML, and SCR and evaluate our approach by validating applications composed of models expressed in different languages in the GSTView validation tool.

Inductive-recursive definitions and generic programming

Dr Anton Setzer (The University of Wales - Swansea)
Friday 18/02, 14:30 in Ben LT5

(joint work with P. Dybjer, Gothenburg, Sweden)

We give a brief introduction to dependent type theory and its role in generic/generative programming. We then investigate the basic constructs for introducing types in dependent type theory and generalize them to the notion of inductive-recursive definitions. We then introduce a data type of inductive-recursive definitions and discuss how this data type can be used for defining generic functions, which can take not only elements of data types but also codes for data types, and generate from them both codes for data types and elements of those types.

Automatic Presentations for Finitely Generated Groups

Graham Oliver (The University of Leicester)
Friday 11/02, 14:30 in Ben LT5

Structures presentable by finite automata are of growing interest, particularly as the closure properties of automata give a generic algorithm for deciding first order properties.

One catalyst for this idea was the theory of automatic groups, which has been useful in developing implementable algorithms for group-theoretic problems. However, the theories are not equivalent. The classification presented here, of those finitely generated groups which have automatic presentations, allows a direct comparison; it also adds to the limited number of classification theorems already established.
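For reference (my gloss of the standard definition): a structure $\mathcal{A}=(A, R_1, \ldots, R_m)$ has an automatic presentation if there are a regular language $L$ and a surjection $\nu : L \to A$ naming the elements, such that the equality relation $\{(u,v) : \nu(u)=\nu(v)\}$ and the preimages $\nu^{-1}(R_i)$ are recognised by synchronous multi-tape automata, which read their argument words in lockstep, padded to equal length.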

Nominal Reasoning Techniques in Isabelle/HOL

Dr C. Urban (The University of Munich)
Friday 28/01, 14:30 in Ben LT5

I will describe how standard (informal) proofs about the lambda-calculus can be formalised with ease in Isabelle/HOL using a variant of Pitts' nominal logic work. First, I shall present an inductive definition of alpha-equated lambda-terms, and then a strong induction principle that looks very much like the Barendregt variable convention. The technical novelty of my work is that it is compatible with the axiom of choice and therefore can be based on standard set theory.

Predicate transformers, predicatively

Dr Peter G. Hancock
Friday 21/01, 14:30 in Ben LT5

In the refinement calculus (Back and von Wright, Morgan, et al.) predicate transformers model both specifications and programs. It involves a nice category in which the objects are monotone predicate transformers over a state-space, and the morphisms are so-called downward simulations. The natural logical habitat of this category is impredicative higher-order classical logic. By putting together some notions in the literature of constructive mathematics one can reconstruct this category in a fully constructive setting such as Martin-Löf's type theory. This gives a "working" model of command-response interfaces and components. Joint work with Pierre Hyvernat (Chalmers/Marseilles) and Anton Setzer (Swansea).

Categorical Quantum Mechanics: a high-level approach for quantum computation and information

Dr Bob Coecke (Oxford University Computing Laboratory)
Friday 14/01, 14:30 in Ben LT5

The current methods available for developing quantum algorithms and protocols are deficient on two main levels. Firstly, they are too low-level. One finds a plethora of ad hoc calculations with `bras' and `kets', normalizing constants, matrices etc. The arguments for the benefits of a high-level, conceptual approach to designing, programming and reasoning about quantum computational systems are just as compelling as for classical computation. Moreover, there is the whole issue of integrating quantum and classical features, which would surely be mandatory in any practicable system.

At a more fundamental level, the standard mathematical framework for quantum mechanics (due to [von Neumann 1932]) is actually insufficiently comprehensive for informatic purposes: in particular, the flow of information from quantum to classical arising from measurements, and the dependence of subsequent stages of the computation on this information, is not adequately covered by the standard formalism. As quantum protocols and computations grow more elaborate and complex, this point is likely to prove of increasing importance.

In [Abramsky and Coecke 2004] we addressed these problems, the key ingredient being a purely *categorical axiomatization of quantum mechanics*. The concepts of abstract quantum mechanics are formulated in any *strongly compact closed category with biproducts* (of which the category FdHilb of finite dimensional Hilbert spaces and linear maps is an example). Preparations, measurements, and classical data exchange are all morphisms in the category, and their types fully reflect their kinds. Correctness properties of standard quantum protocols such as quantum teleportation can be proved at this abstract level. In particular, due to the compositional nature of quantum entanglement in this abstract formalism, proofs actually become much simpler, and so does the discovery of new protocols. Although this setting seems purely qualitative, in fact even the Born rule, which specifies how probabilities are calculated, can be derived in it.

J. von Neumann. Mathematische Grundlagen der Quantenmechanik. Springer-Verlag (1932). English translation in Mathematical Foundations of Quantum Mechanics. Princeton University Press (1955).

S. Abramsky and B. Coecke. A categorical semantics of quantum protocols. In the proceedings of LiCS'04 (2004). An extended version is available at arXiv:quant-ph/0402130

Algebra for Information Update in Multi-Agent systems

Dr Mehrnoosh Sadrzadeh (Oxford University Computing Laboratory)
Friday 14/01, 15:30 in Bennett LT5

Joint work with A. Baltag and B. Coecke

We model information flow in multi-agent systems where communication between agents changes the information state of each agent. I shall first present an alternative and more general non-boolean algebraic semantics for the usual Kripke structures used in dynamic epistemic logic (e.g. Baltag-Moss). This algebraic setting consists of an epistemic system $(M,Q,\{f_A\}_{A \in {\cal A}})$, which is a quantale-module pair $(M,Q)$ endowed with a family of "appearance maps" for each agent, $f_A = (f_A^M: M \to M,\ f_A^Q: Q \to Q)$. These maps satisfy some natural conditions. The right Galois adjoint to the appearance map gives rise to the "box" modality of epistemic logic (expressing knowledge or belief of some agent). The algebra can be used to encode and solve, in an elegant manner, standard dynamic epistemic puzzles such as "cheating muddy children". I will also present a sequent calculus that is complete with respect to our algebra and apply it to an example of a security protocol. Finally, I shall briefly show how moving to a sup-enriched categorical setting allows us to relativize information updates to agents and thus deal with classical problems such as logical omniscience.
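In symbols, the adjunction mentioned above can be rendered as follows (my gloss): for each agent $A$,

$$ f_A^M(m) \le m' \iff m \le \Box_A m', $$

so $\Box_A m'$ is the largest epistemic state whose appearance to agent $A$ lies below $m'$.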

On minimizing the number of ADMs in WDM/SONET rings

Dr Alexey Fishkin (Max-Planck-Institut für Informatik, Saarbrücken)
Thursday 13/01, 10:30 in Bennett LT4

SONET (Synchronous Optical NETworks) add-drop multiplexers (ADMs) are the dominant cost factor in WDM (Wavelength Division Multiplexing)/SONET rings. The number of SONET ADMs required by a set of traffic streams is determined by the lightpath routing and wavelength assignment of the traffic streams. We address the WDM/SONET ring network design problem where each traffic stream has a predetermined routing given by a lightpath, and a careful wavelength assignment for the lightpaths is used to minimize the number of SONET ADMs. This can be modeled as the problem of partitioning a set of circular arcs over a ring into segments of non-intersecting arcs where the objective is to minimize the number of non-circle segments. We present and analyse two randomized approximation algorithms.

Semester 1

Seminar details

Data Types for Differentiable Functions

Dr Dirk Pattinson (Ludwig Maximilian University of Munich, on leave at Imperial College)
Friday 10/12, 14:30 in Ben LT5

We introduce a domain-theoretic computational model for multi-variable differential calculus, which gives rise to data-types for differentiable functions. The model, a continuous Scott domain for differentiable functions of n variables, is built as a sub-domain of the product of n+1 copies of the function space on the domain of intervals by tupling together consistent information about locally Lipschitz (piecewise differentiable) functions and their differential properties (partial derivatives). We show, in two stages, that consistency is decidable on basis elements and discuss two applications of our model: The effective calculation of inverse and implicit functions and improved methods for solving initial value problems.

How to untie knots in stores: Hoare-Logic for Higher-Order Store

Bernhard Reus (The University of Sussex)
Friday 03/12, 15:30 in Ben LT5

We present a Hoare logic for a simple imperative while-language with stored commands, i.e. stored parameterless procedures. Stores that may contain procedures are often called higher-order. Soundness of our logic is established by using denotational rather than operational semantics. The former seems more suitable, as it elegantly accounts for an inherent difficulty of higher-order store, namely that assertions necessarily describe recursive predicates on a recursively defined domain.

Our findings should be relevant to logics for object-oriented languages where methods can be loaded or created at runtime.

Towards a Calculus for the Development of Security Protocols

Dr Gavin Lowe (Oxford)
Friday 03/12, 14:30 in Ben LT5

During the last 10 years, we've become pretty good at analysing security protocols. However, we still lack a solid framework for designing correct protocols: the best technique we have is to use our experience and judgement to come up with a protocol that we think is correct, and then to verify it. How to develop protocols that are correct by construction is the question this research addresses.

I will outline a calculus for developing correct security protocols. Protocol descriptions are annotated with assertions that state properties that will be true when the protocol execution reaches that point; I will give some proof rules that allow the assertions to be verified. A novel feature of the calculus is that the initial development of a protocol uses abstract messages that describe the intention of a message, rather than the concrete implementation; I will outline some rules that allow these abstract messages to be suitably implemented. I will sketch the semantic model that is used to formalise the calculus, and illustrate it with some examples.

Empirical studies of Software evolution - The last 20 years

Dr J.F. Ramil (Computing Dept., The Open University, UK)
Friday 26/11, 14:30 in Ben LT5

Informally speaking, software evolution is what happens after the first operational release. More formally, software evolution is the process of continual software change. The term is mainly used to refer to enhancements, adaptations and fixes that occur after the first version of a software system is released to its users. According to different surveys, the largest portion of effort is subsumed by software evolution, not by initial development. In a 1984 IEEE TSE paper, Prof Barry Boehm identified the modelling of software life-cycle evolution as one of seven outstanding issues in software engineering economics. Despite the advances in understanding during the last 20 years, this is still an active area of research with many unanswered questions. For example, in a world driven by short-term gains, it is still not clear how one can decide how much refactoring and restructuring effort should be applied in order to maximize the value that a software system provides to its stakeholders. This talk will summarise our findings in the study of Open Source evolution and their implications for implementing and managing software evolution. The talk will also argue for quantitative models and views that reflect different levels of abstraction and aggregation (macro-evolution and micro-evolution) in order to derive a richer, more meaningful and useful picture of the long-term evolution of a software system.

Juan F. Ramil is a lecturer at the Computing Dept., The Open University. He joined the OU in October 2001 after five years or so as a researcher at Imperial College London.

Andrea Capiluppi is a visiting researcher at the Computing Dept., The Open University, Milton Keynes, UK. He is planning to submit his PhD thesis at the Dipartimento di Automatica e Informatica, Politecnico di Torino, Italy, early next year.

Recent advances in Dependent Type Theory

Dr Nicola Gambino (The University of Cambridge, UK)
Friday 25/11, 10:30 in Bennett LT4

Dependent type theories have been considered both as foundational frameworks for constructive mathematics, and as programming languages for the simultaneous development of programs and specifications.

In my talk, I will discuss some recent work on dependent type theories, developed in collaboration with Peter Aczel and Martin Hyland. Much of this research focuses on relating dependent type theories, constructive set theories, and categorical structures. The connections between these three settings lead to some nice proof-theoretic applications.

Modular Construction of Modal Logics

Dr Corina Cirstea (The University of Southampton)
Friday 12/11, 14:30 in Ben LT5

(joint work with Dirk Pattinson)

The coalgebraic approach to modelling state-based systems allows various system combinations to be modelled at the level of coalgebraic signatures using operations such as functor composition, product or coproduct. We show that this modularity at the level of signatures can be lifted to a logical level, thereby allowing the derivation of logics for coalgebras of functor combinations from logics for coalgebras of the functors being combined. Furthermore, we show that relevant properties of the logics, including expressiveness, soundness and completeness, are preserved by these operations for combining logics. These results are used to derive expressive, sound and complete logics for a large class of probabilistic system types.

Intersection Types and Lambda Models

Prof Mariangiola Dezani (Università di Torino, Dipartimento di Informatica)
Friday 05/11, 14:30 in Ben LT5

Intersection types can also be viewed as a restriction of the domain theory in logical form to the special case of modelling the pure lambda calculus by means of omega-algebraic complete lattices. Intersection types have been used as a powerful tool both for the analysis and the synthesis of lambda models. On the one hand, intersection type disciplines provide finitary inductive definitions of the interpretation of lambda terms in models. On the other hand, they are suggestive of the shape the domain model has to have in order to exhibit certain properties.
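For orientation, the characteristic typing rules of intersection type disciplines (standard material, not specific to this talk) are

$$ \frac{\Gamma \vdash M : \sigma \quad \Gamma \vdash M : \tau}{\Gamma \vdash M : \sigma \wedge \tau} \qquad \frac{\Gamma \vdash M : \sigma \wedge \tau}{\Gamma \vdash M : \sigma} \qquad \frac{\Gamma \vdash M : \sigma \wedge \tau}{\Gamma \vdash M : \tau} $$

so a single term may receive several types at once, which is what makes finitary inductive descriptions of model interpretations possible.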

Problem and Solution Structures

Prof Michael Jackson (The Open University)
Friday 29/10, 14:30 in Ben LT5

Problem decomposition in software-intensive systems is discussed, and some general principles and ideas are presented. For a problem decomposed into subproblems the task of composing the results demands attention, in both the problem and solution domains. It is recommended to postpone considering composition until the subproblems to be composed have been analysed and understood. Vincenti's distinction between radical and normal engineering is explained. Software developers should practise normal engineering wherever possible.

System Co-development through Requirements Specification

Professor Pericles Loucopoulos (UMIST)
Friday 22/10, 14:30 in Ben LT5

Some of the most challenging system development projects involve multiple stakeholders from different participating organisations, subcontractors, divisions etc, who may have a diversity of expertise, come from different organisational cultures, and often have competing goals. The relatively recent emphasis on process-centred approaches has highlighted the need for appropriate mechanisms for the elicitation, representation and validation of requirements that focus on the co-development activity, whose aim is to ensure alignment between business processes and their supporting technical systems.

A key challenge in the development of systems is the engagement of domain experts in their articulation, agreement, and validation of requirements. This challenge is particularly pronounced at the so-called early requirements phase when multiple stakeholders from different divisions and often different organisations need to reach agreement about the intended systems. Decisions taken at this stage have a profound effect on the technical and economic feasibility of the project. It is no longer appropriate for information systems professionals to focus only on functional and non-functional aspects of the intended system and somehow assume that organisational context and needs are outside their remit.

Traditional requirements engineering (RE) techniques that have emerged over the years in the field of information systems engineering are also being applied to business process modelling. In both cases modelling is advocated as a way of facilitating analyst-client communication. In both cases the focus of the modelling paradigm is on requirements for support systems rather than on co-development issues. In neither case is a modelling paradigm specifically deployed to facilitate communication between stakeholders; and since early requirements elicitation is founded on the need for stakeholders to co-operatively reach agreement on the interactive and inter-dependent nature of all requirements in their totality, neither class fully tackles the problems of early requirements.

This talk will critically examine contemporary approaches to early requirements modelling, discuss the main issues that should be addressed by modelling, and propose an approach that arguably provides a solid and usable framework for addressing the key problem of early requirements, namely engaging stakeholders in defining, exploring and validating early requirements. Examples from large industrial applications, involving business process re-engineering and enterprise integration, will be given to highlight the key issues raised in this talk.

Modelling a biochemical network as a concurrent, distributed system - insightful or insanity?

Prof Muffy Calder (Department of Computing Science, University of Glasgow)
Friday 15/10, 14:30 in Ben LT5

Yes, this really is a computer science seminar! In this talk I will consider how theories and tools from Computing Science can be used to model and reason about biochemical pathways.

I will consider the ERK signalling pathway as an example and show how we can use the Markovian process algebra PEPA to develop and analyse a number of novel, predictive models. I will discuss how these compare with traditional biochemical models, what the biologists say about them, and the implications for computing science. No prior knowledge of biochemistry or PEPA will be assumed.

Non-Interference for Weak Observers

David Clark (King's College London)
Friday 08/10, 14:30 in Ben LT5

Language-based security properties are experiencing a resurgence of attention as part of the more general interest in security. These properties are aimed at curtailing unwanted flows and related covert channels in programs. One much-studied confidentiality property is non-interference (NI), which seeks to guarantee that the activities of high-confidentiality users of the program cannot be observed by low-confidentiality users.
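Classically (my gloss of the standard formulation): writing $s_1 =_L s_2$ when two states agree on all low-confidentiality data, a deterministic program $P$ satisfies NI if

$$ s_1 =_L s_2 \implies [\![P]\!](s_1) =_L [\![P]\!](s_2), $$

i.e. varying the high-confidentiality inputs can never change what a low observer sees.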

Giacobazzi and Mastroeni's 2004 POPL paper, "Abstract Non-Interference: Parameterizing Non-Interference by Abstract Interpretation", indirectly posits what we call a "weak observer": someone who is unable to "see" all the low-confidentiality data available. We characterise a sub-class of their abstractions of the observer by using relations, and show that the presence of unsensed, non-confidential information is experienced by the weak observer as non-determinism. We then demonstrate the obvious: that a possibilistic definition of NI is inadequate; and propose our own, probabilistic definition. We show its relationship with Gray's p-restrictiveness and sketch how it might be weakened to measure flows of secret information using information theory.

This is joint work with Sebastian Hunt and Pasquale Malacaria.

Adjunction Models For Call-By-Push-Value With Stacks

Dr Paul Lévy (The University of Birmingham)
Thursday 23 September, 10:30 in G4

Call-by-push-value is a "semantic machine code", providing a set of simple primitives from which both the call-by-value and call-by-name paradigms are built. We present its operational semantics in the form of a stack machine, and see how this machine suggests a term judgement of stacks. We then see that CBPV, incorporating these stack terms, has a simple categorical semantics based on an adjunction.

We describe this semantics incrementally. First, we introduce locally indexed categories and the opGrothendieck construction, and use these to give the basic structure for interpreting the 3 judgements: values, stacks and computations. Then we look at the universal property required to interpret each type constructor. We define a model to be a strong adjunction with countable coproducts, countable products and exponentials.
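In outline (my gloss, not the speaker's wording): there is a category of values and a category of computations related by an adjunction $F \dashv U$, where $FA$ is the type of computations that return a value of type $A$, and $U\underline{B}$ is the value type of thunks of computations of type $\underline{B}$. The call-by-value function type then decomposes as $U(A \to F A')$, and the call-by-name one as $U\underline{B} \to \underline{B}'$.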

We see a wide range of instances of this structure: we give examples for divergence, storage, erratic choice, continuations, games (with or without a bracketing condition) etc., in each case resolving the familiar strong monad (as advocated by Moggi) into a strong adjunction.

Author: Vincent Schmitt (V.Schmitt at mcs.le.ac.uk), T: 0116 252 3813.