University of Leicester


Computer Science Seminars and Short Courses — 2005/06

You may need this to find your way...

  • The room is likely to be BEN LT5, which is in the Bennett Building and quite straightforward to find; however...
  • If the room is "Ken Edwards LT3", be aware that the entrance is hidden on the left-hand side as you walk towards the main entrance. Don't go to the main entrance: there is no indication there of where LT3 is...
  • If the room is FJ1119, follow the signs to AVS - Graphics and Photography.

Semester 2

Seminar programme


Seminar details

Some computer-based proofs

Prof George Havas (University of Queensland)
Mon 24 July, 14:30 in Bennett LT5

Given a finite presentation of a group, proving properties of the group can be difficult. Indeed, many questions about finitely presented groups are unsolvable in general. Algorithms exist for answering some questions while for other questions algorithms exist for verifying the truth of positive answers. An important tool in this regard is the Todd-Coxeter coset enumeration procedure. It is possible to extract formal proofs from the internal working of coset enumerations. We give examples of how this works, and show how the proofs produced can be mechanically verified and how they can be converted to alternative forms. We discuss these automatically produced proofs in terms of their size and the insights they offer. We compare them to hand proofs and to the simplest possible proofs. We point out that this technique has been used to help solve a longstanding conjecture about an infinite class of finitely presented groups.
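
A full Todd-Coxeter implementation, with its collapsing coset tables, is beyond a short sketch. As a hedged illustration of what coset enumeration produces, the following Python sketch (my toy example: it starts from a concrete permutation representation of S3 rather than from a presentation, which is precisely what Todd-Coxeter avoids) counts the cosets of a subgroup by breadth-first search:

    from collections import deque

    def mult(p, q):
        # Compose permutations given as tuples: (p * q)(i) = p[q[i]].
        return tuple(p[i] for i in q)

    def enumerate_cosets(gens, subgens, identity):
        """Enumerate the cosets of the subgroup H = <subgens> in G = <gens>.
        Here the group is given concretely by permutations; Todd-Coxeter
        builds the same coset table directly from a finite presentation,
        without ever constructing the group elements."""
        # Close H under multiplication (enough for a finite group).
        H, frontier = {identity}, [identity]
        while frontier:
            h = frontier.pop()
            for g in subgens:
                x = mult(h, g)
                if x not in H:
                    H.add(x); frontier.append(x)
        coset = lambda x: frozenset(mult(h, x) for h in H)   # the right coset Hx
        table, queue = {coset(identity): 0}, deque([identity])
        while queue:
            x = queue.popleft()
            for g in gens:
                y = mult(x, g)               # Hx * g = H(xg)
                if coset(y) not in table:
                    table[coset(y)] = len(table); queue.append(y)
        return len(table)

    # S3 = <a, b> with a a transposition and b a 3-cycle; H = <a> has index 3.
    a, b = (1, 0, 2), (1, 2, 0)
    print(enumerate_cosets([a, b], [a], (0, 1, 2)))   # 3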

Completions of ordered structures

Prof John Harding (New Mexico State University)
Friday 17 July, 14:30

The term "ordered structure" is a general one meant to encompass any type of object consisting of a partially ordered set with additional operations or relations. By a completion of an ordered structure we mean an embedding into an ordered structure of the same type whose underlying partial order is a complete lattice, preferably by an embedding preserving some desired set of properties of the original. With only a small lie, one of the earliest and most important examples of such a completion is Dedekind's construction of the ordered field of real numbers from the ordered field of the rationals.

In this survey talk we outline some of the history of this broad area, its most important results, and a number of open problems. The past ten years has seen considerable activity in this area, and we pay particular attention to these developments.

Distribution and Mobility in an Aspect-Oriented Software Architecture

Nour Ali (Polytechnic University of Valencia)
Thursday 06/07, 14:30 in Ben LT5

Distributed and mobile systems are growing in importance and becoming widespread as a platform for ubiquitous computing. They are also usually difficult to develop, as they must take into account functional and non-functional requirements as well as reusability and adaptability mechanisms. Meeting these needs requires separating the distribution and mobility concerns from the rest of the concerns. PRISMA is an approach that integrates the advantages of Component-Based Software Development and Aspect-Oriented Software Development for specifying software architectures. This seminar presents how distribution and mobility primitives have been introduced into PRISMA so that it can describe the software architectures of distributed and mobile systems. These primitives have been implemented in a middleware called PRISMANET, which supports the execution of distributed and mobile PRISMA applications on the .NET platform. In addition, the Ambient Calculus is a formalism for representing the boundaries where computation occurs; to describe the distribution and mobility primitives in a formal and precise way, ambient calculus primitives have been introduced into the PRISMA approach.

This work has been funded by the Department of Science and Technology (Spain) under the National Program for Research, Development and Innovation, DYNAMICA project TIC2003-07776-C02-02. Moreover, it has been funded by Microsoft Research Cambridge under the project "PRISMA: Model Compiler of aspect-oriented component-based software architectures".

Asynchronous models for Diagnosis and Modelling

Dr Stefan Haar (IRISA - Rennes)
Thursday 22/06, 10:30 in TBA

We describe the effective use of partial order semantics in diagnosis of asynchronous and distributed systems, motivated by a telecommunication networks application. Algorithmic and algebraic aspects of distributed computation of unfoldings for Petri net (and other) models will be discussed.

We will describe current work on negotiation and monitoring of QoS for heterogeneous networks and web services, and perspectives of future work.

Promoted Operational Semantics: Expressiveness and Congruence

Dr Mohammad Reza Mousavi (Eindhoven University of Technology)
Wednesday 07/06, 14:30 in Ben LT8

In the first part of this talk, we will present an overview of the developments concerning the meta-theory of Structural Operational Semantics (SOS) in the last 20 years. Then, in the second part, we will focus on the more recent results concerning SOS frameworks and meta-results for Higher-order formalisms (such as lambda calculus and higher-order pi). We present the results of a comparison between such frameworks (called promoted frameworks) and the traditional frameworks suited for ordinary process algebraic formalisms (such as CCS and ACP).

(Joint work with Michel Reniers)

Epistemic puzzles

Hans van Ditmarsch (University of Otago, New Zealand)
Friday 02/06, 14:30 in Ben LT5

Since the 1940s various so-called epistemic puzzles have become known, wherein typically announcements of ignorance surprisingly lead to knowledge. A well-known example is the Muddy Children Problem. Far too well-known, so this talk will NOT be about the Muddy Children Problem. Instead, we will present and analyze some other, possibly less well-known puzzles. Some examples are given below - people interested in attending this presentation are advised to have a go at solving them before attending the seminar. The logic used to solve the problems is typically 'public announcement logic' - we will only use the essentials of the semantics and not go in depth on theoretical logical results.

Puzzle 1 - Consecutive Numbers (a.k.a. the 'Conway Paradox')

Anne and Bill are each going to be told a natural number. Their numbers will be one apart. And they are aware of this scenario. The numbers are now being whispered in their respective ears. Suppose Anne is told 4 and Bill is told 3. The following conversation takes place between Anne and Bill: - Anne: I do not know your number. - Bill: I do not know your number. - Anne: I do not know your number. - Bill: But now I know your number! You've got 4! - Anne: Right. So you've got 3. Why is this a truthful conversation?
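
The puzzle yields to the standard possible-worlds analysis, sketched here in Python (my illustration; the bound N merely keeps the model finite): worlds are pairs of naturals one apart, an agent knows the other's number when all surviving worlds matching its own number agree, and each truthful ignorance announcement deletes the worlds where it fails.

    N = 100                      # bound on the numbers, just to keep things finite
    worlds = {(a, b) for a in range(N) for b in range(N) if abs(a - b) == 1}

    def anne_knows(w, S):
        # Anne sees only her own number w[0]: she knows Bill's number iff
        # all surviving worlds with her number agree on Bill's.
        return len({b for (a, b) in S if a == w[0]}) == 1

    def bill_knows(w, S):
        return len({a for (a, b) in S if b == w[1]}) == 1

    S = set(worlds)
    for speaker in (anne_knows, bill_knows, anne_knows):
        # Public announcement "I do not know": keep worlds where that is true.
        S = {w for w in S if not speaker(w, S)}

    actual = (4, 3)              # Anne was told 4, Bill was told 3
    print(actual in S)           # True: the three announcements were truthful
    print(bill_knows(actual, S)) # True: Bill now knows Anne's number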

Puzzle 2 - Russian Cards

From a pack of seven known cards 0,1,2,3,4,5,6 Anne and Bill each draw three cards and Cath gets the remaining card. How can Anne and Bill openly (publicly) inform each other about their cards, without Cath learning, for any of their cards, who holds it?

Puzzle 3 - What is my number?

Each of agents Anne, Bill, and Cath has a positive integer on its forehead. They can only see the foreheads of others. One of the numbers is the sum of the other two. All the previous is common knowledge. The agents now successively make the truthful announcements: - Anne: I do not know my number. - Bill: I do not know my number. - Cath: I do not know my number. - Anne: I know my number. It is 50. What are the other numbers?

Exploring the Synergies of Abstract Mathematics and Object Oriented Techniques

Dr Marc Conrad (University of Luton)
Friday 26/05, 14:30 in Ben LT5

The development of the object-oriented software design methodology has been driven by the software engineering need to manage large software projects; indeed, it is well known that the object-oriented programming paradigm has its origins in languages designed to simulate real-world processes. It may therefore come as a surprise that various similarities and relations exist between abstract mathematical structures and the features provided by mainstream object-oriented programming languages (such as Java). We discuss how these similarities benefit both Mathematics and Computer Science.
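
One flavour of the analogy, as a hedged Python sketch (my example; the talk's own material uses Java): an abstract class plays the role of an axiomatically defined structure, and anything provable from the axioms alone is inherited by every model of them.

    from abc import ABC, abstractmethod

    class Monoid(ABC):
        """An abstract class in the role of an abstract algebraic structure:
        subclasses supply the operation and identity, and inherit anything
        derivable from the axioms alone, here n-fold repetition."""
        @abstractmethod
        def op(self, a, b): ...
        @abstractmethod
        def identity(self): ...

        def power(self, a, n):
            result = self.identity()
            for _ in range(n):
                result = self.op(result, a)
            return result

    class IntAddition(Monoid):
        def op(self, a, b): return a + b
        def identity(self): return 0

    class StringConcat(Monoid):
        def op(self, a, b): return a + b
        def identity(self): return ""

    print(IntAddition().power(5, 4))      # 20
    print(StringConcat().power("ab", 3))  # 'ababab'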

Computable real analysis without set theory or Turing machines

Paul Taylor
Friday 19/05, 14:30 in Ben LT5

The many schools of computable or constructive analysis accept without question the received notion of set with structure. They rein in the wild behaviour of set-theoretic functions using the double bridle of topology and recursion theory, adding encodings of explicit numerical representations to the epsilons and deltas of metrical analysis. Fundamental conceptual results such as the Heine--Borel theorem can only be saved by set-theoretic tricks such as Turing tapes with infinitely many non-trivial symbols.

It doesn't have to be like that.

When studying computable continuous functions, we should never consider uncomputable or discontinuous ones, only to exclude them later. By the analogy between topology and computation, we concentrate on open subspaces. So we admit $+$, $-$, $\times$, $\div$, "strictly less than", "strictly greater than", $\neq$, $\land$ and~$\lor$, but not $\leq$, $\geq$, $=$, $\lnot$ or~$\Rightarrow$. Universal quantification, which is allowed over \emph{compact} spaces, captures the Heine--Borel theorem. Dedekind completeness can also be presented in a natural logical style that is much simpler than the constructive notion of Cauchy sequence, and also more natural for both analysis and computation.
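
A minimal computational reading of this paragraph, sketched in Python (my illustration, not Taylor's formalism): a real is a shrinking sequence of rational intervals, strict comparison is semi-decided by refining until the intervals separate, and the comparison program simply never terminates when the two reals are equal, which is exactly why $=$ and $\leq$ are not admitted.

    from fractions import Fraction

    # A computable real is modelled as a function n -> (lo, hi): a rational
    # interval of width at most 2**-n containing the real.
    def const(q):
        q = Fraction(q)
        return lambda n: (q, q)

    def sqrt(c):
        """Square root of a non-negative rational, by interval bisection."""
        c = Fraction(c)
        def approx(n):
            lo, hi = Fraction(0), max(Fraction(1), c)
            while hi - lo > Fraction(1, 2 ** n):
                mid = (lo + hi) / 2
                if mid * mid <= c:
                    lo = mid
                else:
                    hi = mid
            return lo, hi
        return approx

    def less_than(x, y):
        # Semi-decides the comparison by refining until the intervals
        # separate; diverges when x == y, so equality is never asserted.
        n = 0
        while True:
            xlo, xhi = x(n); ylo, yhi = y(n)
            if xhi < ylo: return True
            if yhi < xlo: return False
            n += 1

    print(less_than(sqrt(2), const(Fraction(3, 2))))   # True: sqrt(2) < 1.5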

Since open subspaces are defined as continuous functions to the Sierpi\'nski space, rather than as subsets, they enjoy a ``de~Morgan'' duality with closed subspaces that is lost in intuitionistic set-, type- or topos theories. Dual to $\forall$ over compact spaces is $\exists$ over ``overt'' spaces. Classically, all spaces are overt, whilst other constructive theories use explicit enumerations or distance functions instead. Arguments using $\exists$ and overtness are both dramatically simpler and formally dual to familiar ideas about compactness.

Hard-to-Solve Bimatrix Games

Rahul Savani (LSE)
Friday 12/05, 14:30 in MCS G4

A bimatrix game is a two-player game in strategic form, a basic model in game theory. A Nash equilibrium is a pair of (possibly randomized) strategies, one for each player, so that no player can do better by unilaterally changing his or her strategy.

In this talk, we show that the commonly used Lemke-Howson algorithm for finding one equilibrium of a bimatrix game is not polynomial. This question had been open for some time. The algorithm is a pivoting method similar to the simplex algorithm for linear programming. We present a class of square bimatrix games for which the shortest Lemke-Howson path grows exponentially in the dimension d of the game. We construct the games using pairs of dual cyclic polytopes with 2d facets in d-space.
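
Lemke-Howson itself is a pivoting method and needs more machinery than fits here; as a runnable contrast, this Python sketch finds equilibria by the simpler support-enumeration approach (brute force over supports, so itself exponential): for each pair of equal-sized supports, solve the indifference equations and keep solutions that are valid mixed strategies and mutual best responses.

    import itertools
    import numpy as np

    def solve(payoff, rows, cols):
        """Find a probability vector on `rows` making the opponent
        indifferent over `cols`; payoff[i, j] is the opponent's payoff.
        Returns (full strategy vector, equalised value) or None."""
        k = len(rows)
        M = np.zeros((k + 1, k + 1))
        for r, j in enumerate(cols):
            M[r, :k] = [payoff[i, j] for i in rows]
            M[r, k] = -1.0          # the opponent's equalised payoff v
        M[k, :k] = 1.0              # probabilities sum to 1
        rhs = np.zeros(k + 1); rhs[k] = 1.0
        try:
            sol = np.linalg.solve(M, rhs)
        except np.linalg.LinAlgError:
            return None
        p, v = sol[:k], sol[k]
        if (p < -1e-9).any():       # not a genuine mixed strategy
            return None
        full = np.zeros(payoff.shape[0]); full[list(rows)] = p
        return full, v

    def equilibria(A, B):
        m, n = A.shape
        found = []
        for k in range(1, min(m, n) + 1):
            for I in itertools.combinations(range(m), k):
                for J in itertools.combinations(range(n), k):
                    rx = solve(B, I, J)    # x equalises the column player on J
                    ry = solve(A.T, J, I)  # y equalises the row player on I
                    if rx is None or ry is None:
                        continue
                    x, vB = rx; y, vA = ry
                    # Best-response check: nothing outside the support pays more.
                    if (A @ y <= vA + 1e-9).all() and (B.T @ x <= vB + 1e-9).all():
                        found.append((x, y))
        return found

    # Matching pennies: the unique equilibrium is (1/2, 1/2) for both players.
    A = np.array([[1., -1.], [-1., 1.]])
    print(equilibria(A, -A))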

Stochastic Local Search Algorithms for Protein Folding

Dr Kathleen Steinhoefel (King's College London)
Friday 24/03, 14:30 in Ben LT5

The hydrophobic-hydrophilic (H-P) model for protein folding was introduced by Dill et al. in 1995. A problem instance consists of a sequence of amino acids, each labeled as either hydrophobic (H) or hydrophilic (P). The sequence must be placed on a 2D or 3D grid without overlapping, so that adjacent amino acids in the sequence remain adjacent in the grid. The goal is to minimize the energy, which in the simplest variation corresponds to maximizing the number of adjacent hydrophobic pairs. Although the model is extremely simple, it captures the main features of the problem. The protein folding problem in the H-P model is NP-hard in both 2D and 3D. Recently, Fu and Wang (2004) gave an exp(O(n^{1-1/d}) ln(n))-time algorithm for d-dimensional protein folding simulation in the H-P model. Our preliminary results on stochastic search applied to protein folding utilize complete move sets proposed by Lesh et al. (2003) and Blazewicz et al. (2005). We show that after (n/a)^{O(G)} Markov chain transitions, the probability of being in a minimum-energy conformation is at least 1-a, where n is the length of the instance and G is the maximum value of the minimum escape height from local minima of the underlying energy landscape; G depends on the move sets, and future research will focus on upper bounds of this value. To be competitive with the Fu/Wang bound, G less than n^{1-1/d} is required. From previous experiments on stochastic search applied to computationally hard problems, we expect G to be much smaller for real protein sequences.
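
For concreteness, here is a small Python sketch of the 2D H-P objective being optimised (my illustration of the model, not the authors' algorithm): a conformation is a self-avoiding walk on the grid, and the energy is -1 for each pair of H monomers that are grid neighbours but not chain neighbours. A stochastic local search perturbs the walk using moves from a complete move set and accepts or rejects by energy.

    MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

    def hp_energy(seq, walk):
        """Energy of a 2D H-P conformation: -1 per pair of H monomers that
        are grid neighbours but not chain neighbours. `seq` is over
        {'H','P'}, `walk` is a string of moves; returns None on overlap."""
        pos = [(0, 0)]
        for m in walk:
            dx, dy = MOVES[m]
            x, y = pos[-1]
            pos.append((x + dx, y + dy))
        if len(set(pos)) != len(pos):
            return None                          # not self-avoiding
        index = {p: i for i, p in enumerate(pos)}
        energy = 0
        for i, (x, y) in enumerate(pos):
            if seq[i] != 'H':
                continue
            for dx, dy in MOVES.values():
                j = index.get((x + dx, y + dy))
                if j is not None and j > i + 1 and seq[j] == 'H':
                    energy -= 1                  # counted once, since j > i
        return energy

    print(hp_energy("HPPH", "RUL"))   # -1: the two H ends fold into contact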

New Results on Dynamic Bin Packing

Dr Prudence Wong (The University of Liverpool)
Friday 17/03, 14:30 in Ben LT5

We study the dynamic bin packing problem, in which items arrive and depart at arbitrary times. We want to pack a sequence of items, each of size in (0,1], into unit-size bins such that the maximum number of bins used over all time is minimized. An on-line algorithm has to pack an item when it arrives, without knowing future arrivals and departures. The performance is measured by the competitive ratio, which is the worst-case ratio between the maximum number of bins used by the on-line algorithm and that of the optimal off-line algorithm. Dynamic bin packing of arbitrary size items and unit fraction items are studied, where a unit fraction item is an item with size 1/w for some integer w >= 1.

For unit fraction items, tight and almost-tight performance bounds are found for the family of any-fit algorithms, including first-fit, best-fit, and worst-fit. We show that the competitive ratio of BF and WF is 3, which is tight, and the competitive ratio of FF lies between 2.45 and 2.4985. We also show that no on-line algorithm is better than 2.428-competitive. For arbitrary size items, we improve the general lower bound from 2.388 to 2.5. We also study resource augmentation and show that using bins of size 2 is necessary and sufficient to achieve 1-competitiveness.
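
As a minimal Python sketch of the setting (the first-fit rule is as described; the peak-counting convention and the toy items are mine): each arriving item goes into the first bin with room, departures free space, and the algorithm is judged by the maximum number of occupied bins over time.

    class FirstFit:
        """Online first-fit for dynamic bin packing with unit-size bins."""
        def __init__(self):
            self.bins = []        # each bin maps item id -> size
            self.peak = 0         # max number of non-empty bins over time

        def arrive(self, item, size):
            for b in self.bins:
                if sum(b.values()) + size <= 1.0:
                    b[item] = size
                    break
            else:
                self.bins.append({item: size})
            self.peak = max(self.peak, sum(1 for b in self.bins if b))

        def depart(self, item):
            for b in self.bins:
                b.pop(item, None)

    ff = FirstFit()
    for i, s in enumerate([0.5, 0.5, 0.4]):
        ff.arrive(i, s)
    ff.depart(1)                  # the second 0.5-item leaves
    ff.arrive(3, 0.6)             # fits next to the 0.4-item
    print(ff.peak)                # 2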

PSPACE-Bounds for Rank-1 Modal Logics

Dr Lutz Schroeder (The University of Bremen)
Friday 10/03, 14:30 in Ben LT5

For lack of general algorithmic methods that apply to wide classes of logics, establishing a complexity bound for a given modal logic is often a laborious task. The present work is a step towards a general theory of the complexity of modal logics. Our main result is that all rank-1 logics enjoy a shallow model property and thus are, under mild assumptions on the format of their axiomatisation, in PSPACE. This leads not only to a unified derivation of (known) tight PSPACE-bounds for a number of logics including K, coalition logic, and graded modal logic (and to a new algorithm in the latter case), but also to a previously unknown tight PSPACE-bound for probabilistic modal logic, with rational probabilities coded in binary. This generality is made possible by a coalgebraic semantics, which conveniently abstracts from the details of a given model class and thus allows covering a broad range of logics in a uniform way.

A logic of formal pragmatics: reasoning about assertions, obligations and causality

Dr Kurt Ranalter (Queen Mary University of London)
Friday 24/02, 14:30 in Ben LT5

In this talk we introduce the project of formal pragmatics, a framework intended to formalise concepts from speech act theory. A fundamental role is played by the notion of illocutionary force, represented in the formal system of pragmatics by illocutionary force operators. We will discuss some of the technical issues related to such a system: in particular the categorical semantics and proof theory. We will also talk about possible applications of formal pragmatics to computer science, especially in the fields of software agents and artificial intelligence.

Designing languages for agent communication

Dr Peter McBurney (Department of Computer Science, The University of Liverpool)
Friday 17/02, 14:30 in Ben LT5

Considerable efforts in recent years have been devoted to the design of artificial languages for communication between intelligent, autonomous software agents. Perhaps the most famous of these languages is the Agent Communications Language (ACL) of the Foundation for Intelligent Physical Agents (FIPA), which has been given a semantics drawing on speech act theory from philosophy and using a modal logical formalism. Although proposed as a standard communications language, there are many problems with FIPA ACL, and I discuss some of these in this talk. In particular, I will explore ideas of argumentation and the particular issues that arise when communication concerns action.

Random testing via uniform generation of combinatorial structures

Prof Marie-Claude Gaudel (Université Paris XI (Orsay))
Thursday 16/02, 10:30 in Ben LT10

In the area of software testing, numerous methods have been proposed and used for selecting finite test sets and automating this selection. Among these methods some are deterministic and some are randomized (sometimes called "statistical"). In recent years, the general problem of studying and simulating random processes has particularly benefited from progress in the area of random generation of combinatorial structures. In this talk, we explore the idea of using such concepts and tools for software testing. We describe a generic method for using these tools as soon as there is a graphical description of the behaviour of the system under test. Uniform generation is used either for drawing paths (or traces) from the set of execution paths (or traces) of the system under test, or, more efficiently, among some trace subsets satisfying some coverage conditions. The talk presents the general method and concludes with some experimental results on applying it to C programs and model-based testing.
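
The core trick can be sketched in Python for the simplest case, fixed-length paths in a toy directed graph (my example; the actual tools handle traces and coverage-constrained subsets): count the completions below each node by dynamic programming, then walk forward choosing each successor with probability proportional to its count, which makes every path of the given length equally likely.

    import random

    def count_paths(graph, length):
        """count[v][k] = number of length-k paths starting at node v."""
        count = {v: [1] + [0] * length for v in graph}
        for k in range(1, length + 1):
            for v in graph:
                count[v][k] = sum(count[w][k - 1] for w in graph[v])
        return count

    def uniform_path(graph, start, length):
        """Draw uniformly among all length-`length` paths from `start`:
        at each step, pick a successor with probability proportional to
        the number of ways the path can be completed from it."""
        count = count_paths(graph, length)
        path = [start]
        for k in range(length, 0, -1):
            succs = graph[path[-1]]
            weights = [count[w][k - 1] for w in succs]
            path.append(random.choices(succs, weights=weights)[0])
        return path

    # A toy control-flow graph: node -> successor nodes.
    g = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d', 'a'], 'd': ['a']}
    print(uniform_path(g, 'a', 5))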

Some early ideas for a formal model of adaptive systems

Dr Simon Dobson (Systems Research Group, University College Dublin)
Friday 10/02, 14:30 in Ben LT5

The notion of an adaptive system whose behaviour changes in response to changing user needs, environmental circumstances, system events and so forth underpins research in pervasive and autonomic computing. From a design perspective it is essential to understand the ways in which a system will react to changing circumstances, in order to ensure that desirable properties such as privacy and quality of service are maintained. However, most systems built to date use models and technology that do not provide good support for such analysis in even simple cases.

In this lecture we discuss the rationale for a more mathematically well-founded notion of adaptive system, and present some early ideas for its realisation. Developing such a model is proving to be an interesting compromise between mathematical generality and the desire to provide analysable descriptions of realistic systems. We suggest some directions in which the research might lead.

Bayesian Network Models and their application to the management of software development

Dr Elena Perez-Minana (Motorola)
Friday 03/02, 14:30 in Ben LT5

Predicting faults early in software projects has become possible thanks to the utilization of innovative techniques that can combine observations of key quality drivers with expert opinion and historical data in causal models.

Motorola Labs has developed a methodology to build a set of Bayesian Network (BN) models for predicting the faults found at each phase of the software development process used in an organization. The phase containment effectiveness predictions computed with these models are then compared against product and process quality goals to help the organization drive corrective actions in a more timely manner. The methodology also contributes to a reduction of latent defects and an associated Cost Of Quality (COQ) improvement.
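
In miniature, and with entirely hypothetical numbers (the variables and probabilities below are illustrative, not Motorola's), the mechanism is Bayes' rule over a small causal model:

    # A toy two-node Bayesian network: P(defect level | review quality),
    # with hypothetical conditional probability table values.
    P_review = {'good': 0.7, 'poor': 0.3}
    P_defects_given_review = {
        'good': {'low': 0.8, 'high': 0.2},
        'poor': {'low': 0.3, 'high': 0.7},
    }

    def posterior_review(defect_level):
        """P(review quality | observed defect level), by enumeration."""
        joint = {r: P_review[r] * P_defects_given_review[r][defect_level]
                 for r in P_review}
        z = sum(joint.values())
        return {r: p / z for r, p in joint.items()}

    print(posterior_review('high'))  # poor review quality becomes more likely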

These models have been further extended to cover defect types and areas of software development such as system testing.

This talk covers details of the methodology, showing how different business units in Motorola have used it to successfully model the lifecycle of the telecommunications applications they develop. It concludes by describing how the model predictions can help drive process improvement activities more effectively.

Specification and Verification of Reconfigurable Component Based Systems

Prof Tom Maibaum (McMaster University)
Friday 27/01, 14:30 in Ben LT5

The problem of specifying reconfigurable systems in a declarative way is still largely an open one in the software architecture world. Almost all languages for specifying reconfiguration are operational (graph grammars, process algebras, etc) and thus reasoning about reconfiguration must be done in an, often informal, metalanguage. We use temporal logic to specify the behaviours of components and extend the language with a formalisation of connectors. The simplest connector is like an association in OO languages. Using component specifications and connector specifications, we build a coarse grained structural unit called a 'subsystem', which incorporates reconfiguration operations and their properties.

A Hierarchical Analysis of Propositional Temporal Logic based on Intervals

Dr Ben Moszkowski (De Montfort University - Software Technology Research Laboratory)
Thursday 12/01, 10:30 in Ben LT3

We present a hierarchical framework for analysing propositional linear-time temporal logic (PTL) to obtain standard results such as a small model property, decision procedures and axiomatic completeness. Both finite time and infinite time are considered. The treatment of PTL with both the operator Until and past time is readily reduced to that for a version of PTL without either one. Our method is based on a low-level normal form for PTL called a transition configuration. In addition, we employ reasoning about intervals of time. Besides being hierarchical and interval-based, the approach differs from other analyses of PTL which in general utilise sets of formulas and sequences of such sets. Models are instead described by means of time intervals represented as finite and countably infinite sequences of states. The analysis naturally relates larger intervals with smaller ones. The steps involved are expressed in a propositional version of Interval Temporal Logic (ITL) since it is better suited than PTL for sequentially combining and decomposing formulas. Consequently, we can articulate various issues in PTL model construction which are equally relevant within a more conventional analysis but normally only considered at the metalevel. We also briefly describe a prototype implementation of a decision procedure based on Binary Decision Diagrams.

Beyond the specific issues involving PTL, the research can be viewed as a significant application of ITL and interval-based reasoning and also illustrates a general approach to formally reasoning about a number of issues involving discrete linear time. For example, it makes use of various techniques for performing sequential and parallel composition. The formal notational framework for a hierarchical reduction of infinite-time reasoning to simpler finite-time reasoning could also be used in model checking. The work includes some interesting representation theorems and exploits properties of fixpoints of a certain interval-oriented temporal operator. In addition, it has relevance to hardware description and verification since the property specification languages PSL/Sugar (just made IEEE standard 1850) and 'temporal e' (part of IEEE candidate standard 1647) both contain temporal constructs concerning intervals of time.

Semester 1

Seminar programme


Seminar details

Evolutionary Equilibrium in Bayesian Routing Games: Specialisation and Niche Formation

Dr Petra Berenbrink (Simon Fraser University)
Friday 09/12, 15:30 in Ben LT5

I talk about Nash Equilibria for the selfish routing model where a set of $n$ users with tasks of different sizes try to access $m$ parallel links with different speeds. In this model, a player can use a mixed strategy (using different links, each with positive probability); he is then indifferent between the different link choices. This means that the player may well deviate to a different strategy over time. In the paper we propose the concept of evolutionarily stable strategies (ESS) as a criterion for stable Nash Equilibria, i.e. equilibria where no player is likely to deviate from his strategy. An ESS is a steady state that can be reached by a user community via evolutionary processes in which more successful strategies spread over time. The concept has been used widely in biology and economics to analyze the dynamics of strategic interactions.

I first present a symmetric version of a Bayesian parallel links game, where each player is assigned not a task of a fixed size but a task drawn from a distribution. Then I characterize ESS for the Bayesian parallel links game and show that in an ESS links acquire "niches", meaning that there is minimal overlap in the tasks served by different links. Furthermore, in an ESS all links with the same speed are interchangeable for every task with weight $w$: every player must place a task with weight $w$ on links having the same speed with the same probability. Also, bigger tasks must be assigned to faster links, and faster links must carry bigger loads.

Finally I introduce a "clustering" condition---roughly, links must form disjoint niches---that is sufficient for evolutionary stability. For uniform task sizes there is a unique ESS; if the $m$ links are uniform as well, the unique ESS places the task on each link with probability $1/m$.

This is joint work with Oliver Schulte.
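
As a toy illustration of the final claim (a deliberate simplification, not the paper's Bayesian model): with identical tasks on identical links, a replicator-style dynamics, in which probability mass flows toward links whose congestion cost beats the average, settles at the uniform strategy.

    import numpy as np

    def replicator(m=4, steps=2000, lr=0.1):
        """Replicator-style dynamics for identical tasks on m identical
        links: a link's cost grows with its load, fitness is negative
        cost, and mass flows toward better-than-average links."""
        p = np.random.dirichlet(np.ones(m))
        for _ in range(steps):
            cost = p                  # load on link j is proportional to p[j]
            fitness = -cost
            p = p * (1 + lr * (fitness - fitness @ p))
            p = np.clip(p, 1e-12, None); p /= p.sum()
        return p

    print(replicator())   # approximately [0.25, 0.25, 0.25, 0.25]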

Transform-domain image processing solutions

Dr Helmut Bez (Loughborough University)
Friday 09/12, 14:30

Topic 1: Moving cast shadow detection for video processing: the development of a shadow model from physical principles and its application to shadow removal from video sequences.

Topic 2: Two image fusion problems: (i) fusion of images taken at different apertures to produce an optimal image; (ii) similarly for images taken at different focus settings.

The formal engineering of generic requirements for failure management

Dr. Mike Poppleton (The University of Southampton)
Friday 02/12, 14:30

We consider the failure detection and management function for engine control systems as an application domain where product line engineering is indicated. The work is complicated by the high levels of verification demanded by this safety-critical domain, which is subject to avionics industry standards.

We present our case study experience in this area as a candidate methodology for the engineering, validation and verification of generic requirements using domain engineering and Formal Methods techniques and tools. For a defined class of systems, the case study produces a generic requirement set in UML and an example instantiation in tabular form. Domain analysis and engineering produce a model which is integrated with the formal specification/verification method B by the use of our UML-B profile. The formal verification both of the generic requirement set, and of a simple system instance, is demonstrated using our U2B and ProB tools.

In conclusion we consider some issues as we move towards the next-generation Event-B method.

This work is a demonstrator for a tool-supported method which will be an output of EU project RODIN IST 511599 - Rigorous Open Development Environment for Complex Systems http://rodin.cs.ncl.ac.uk/

Finding Frequent Patterns in a String in Sublinear Time

Dr. Tom Friedetzky (The University of Durham)
Thursday 24/11, 10:30 in Bennett LT3

We consider the problem of testing whether (a large part of) a given string X of length n over some finite alphabet is covered by multiple occurrences of some (unspecified) pattern Y of arbitrary length in the combinatorial property testing model. Our algorithms randomly query a sublinear number of positions of X, and run in sublinear time in n. We first focus on finding patterns of a given length, and then discuss finding patterns of unspecified length.

Joint work with Petra Berenbrink and Funda Ergun (both SFU, Vancouver)
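
A property-testing flavoured sketch in Python of the simplest sub-question, where the pattern is known (cruder than the algorithms of the talk, which in particular keep the total number of queried positions sublinear): sample positions of X at random and check that each lies inside some occurrence of Y; the number of samples depends on the error parameters, not on n.

    import random

    def mostly_covered(X, Y, trials=200):
        """Accepts when X is fully covered by occurrences of Y; rejects
        with high probability when a constant fraction of positions is
        uncovered, using a number of samples independent of len(X)."""
        n, k = len(X), len(Y)
        for _ in range(trials):
            i = random.randrange(n)
            # Is position i inside some occurrence of Y?
            if not any(X[s:s + k] == Y
                       for s in range(max(0, i - k + 1), min(i, n - k) + 1)):
                return False
        return True

    print(mostly_covered("abcabcabcabc", "abc"))       # True
    print(mostly_covered("abcabcXYZabc" * 10, "abc"))  # False, w.h.p.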

Simple questions about simple programs

Dr Igor Potapov (The University of Liverpool - Department of Computer Science)
Friday 18/11, 14:30 in Ben LT5

It is an amazing fact that even very simple programs can show enormously complex behaviour. In principle they can be far more complex than typical programs, and they often appear as core elements in some natural computational processes. To find an example of such a program we can look at iterative maps, which are the simplest model of difference equations. The main characteristic of iterative maps is that the future value of some variable at time n+1 depends only on its value at the present time n, i.e. it is of the form X(n+1)=f(X(n)).

Let us consider the sequence of iterations starting from some point x: x, f(x), f(f(x)), f(f(f(x))) and so on. Can we predict the behaviour of such a system? In the sense of computational theory, the frontiers between predictability and unpredictability of iterative maps are very closely related to the border between decidability and undecidability.
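
For instance, even a two-line map already defeats prediction in practice. A standard example (not from the talk) is the Collatz map, for which the question of whether every orbit reaches 1 remains open:

    def iterate(f, x, steps):
        """The orbit x, f(x), f(f(x)), ... of an iterative map."""
        orbit = [x]
        for _ in range(steps):
            x = f(x)
            orbit.append(x)
        return orbit

    # The Collatz map: halve even numbers, map odd n to 3n + 1.
    collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
    print(iterate(collatz, 27, 111)[-1])   # 1: the orbit of 27 takes 111 steps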

In this talk I present the latest research on predictability problems for iterative maps that have natural connections with computability questions in matrix semigroups, hybrid systems, algorithms and combinatorics on words, counter automata, number theory, control theory, etc.

An Abstract Interpretation-based Semantics for Radiation-Hard Asynchronous Circuits

Dr Sarah Thompson (University of Cambridge Computer Laboratory)
Friday 11/11, 14:30

The effectiveness of program analysis techniques based upon sound mathematical abstract semantics has long been recognised. Though normally applied to software, this approach is also applicable to digital electronics. Most circuits are designed assuming a synchronous model; a single global clock signal drives the clock inputs of every flip-flop in the system, and unclocked feedback loops are outlawed. This design approach works well, primarily because it makes circuits easy to reason about. The real world, however, is fundamentally asynchronous. Even something as simple as a push button exhibits the characteristic that it can change state at any time, requiring designers to be very careful when constructing appropriate interface circuits. Intuition based on the synchronous model tends to break down due to the need to consider timing information that is continuous in nature. Formal reasoning is generally difficult, so engineers often resort to inexact discrete-time simulations that can miss possible pathological behaviour.

Abstract interpretation provides a sound formal framework that allows abstractions of concrete systems to be reasoned about. In our SAS'04 paper, we introduced transitional logic, an abstract interpretation technique resembling a multi-valued logic that is capable of supporting reasoning about asynchronous behaviour of combinational circuits. In this talk, I will first present a gentle overview of transitional logic, then go on to show how it can be used to reason about the radiation hardness of majority voting circuits.

Our analysis showed that, whilst permanent subsystem failure is tolerated as expected, brief single-event transients (SETs) of the kind typically caused by relativistic heavy ion (cosmic ray) impacts are not reliably rejected. This result can be generalised to show that, in general, delay-insensitive circuits are fundamentally incapable of rejecting SETs.
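
For reference, here is the voting circuit in question as a plain Boolean truth-function in Python; note that the analysis in the talk works on a continuous-time abstract semantics, which Boolean simulation of this kind cannot capture, and that is rather the point.

    from itertools import product

    def majority(a, b, c):
        """Triple-modular-redundancy voter: follows any two agreeing inputs."""
        return (a and b) or (a and c) or (b and c)

    # A permanent fault on any single replica is masked, for every input
    # value and every faulty value:
    print(all(majority(x, x, fault) == x
              for x, fault in product((0, 1), repeat=2)))   # True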

It's the people, stupid! Observations from real software development projects.

Prof Mike Holcombe (The University of Sheffield)
Friday 28/10, 14:30

Over the last 15 years I have managed over 50 commercial software development projects. When I started I tried to apply what I had learnt and had been teaching from the academic software engineering textbooks and current research in the field. Sadly, it soon became clear that this was not going to work.

There are many different aspects of a real development project, some of the most important are not related to technology and design at all. They are concerned with the roles of people and organisations. We have experienced many problems, an occasional disaster but many successes. What I have learned has completely changed my view of software engineering. In this session I will introduce some of the issues I have had to face and discuss with you how 'you' might have dealt with them.

Matching problems and algorithms

Dr. Rob Irving (The University of Glasgow)
Friday 21/10, 14:30

In a matching problem, we typically wish to allocate the members of one set to the members of another (or the same) set so as to meet certain specified criteria. For example we may wish to match men to women, families to houses, doctors to hospitals, students to schools, and so on, taking into account, in some precise way, the wishes of those involved.

Matching problems have been studied from an algorithmic point of view over a period of many years, often in some real-world context. Many challenging computational problems have emerged from these studies, and a range of elegant algorithms have been devised for solving at least some of them.

This talk presents an overview of selected matching problems from the standpoint of the algorithmicist. We focus on cases where some or all of the participants express preferences over possible outcomes, leading to the consideration of criteria such as stability and matching profiles.
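
The classic case is stable marriage, solved by the Gale-Shapley deferred-acceptance algorithm. A compact Python sketch, with toy preference lists of my own:

    def gale_shapley(men_prefs, women_prefs):
        """Deferred acceptance: returns a stable matching, i.e. no man and
        woman both prefer each other to their assigned partners."""
        free = list(men_prefs)
        next_proposal = {m: 0 for m in men_prefs}
        engaged = {}                       # woman -> man
        rank = {w: {m: i for i, m in enumerate(prefs)}
                for w, prefs in women_prefs.items()}
        while free:
            m = free.pop()
            w = men_prefs[m][next_proposal[m]]
            next_proposal[m] += 1
            if w not in engaged:
                engaged[w] = m
            elif rank[w][m] < rank[w][engaged[w]]:
                free.append(engaged[w])    # w trades up; her old partner is free
                engaged[w] = m
            else:
                free.append(m)             # w rejects m
        return {m: w for w, m in engaged.items()}

    men = {'A': ['x', 'y'], 'B': ['y', 'x']}
    women = {'x': ['B', 'A'], 'y': ['A', 'B']}
    print(gale_shapley(men, women))   # {'A': 'x', 'B': 'y'}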

An action language for modelling norms and institutions

Prof Marek Sergot (Imperial College)
Friday 14/10, 14:30

The action language C+ of Giunchiglia, Lee, Lifschitz, McCain, and Turner is a formalism for specifying and reasoning about the effects of actions and the persistence (`inertia') of facts over time. An `action description' in C+ is a set of C+ laws which define a labelled transition system of a certain kind. (C+)++ is an extended form of C+ designed for representing norms of behaviour and institutional aspects of (human or computer) societies. There are two main extensions. The first is a means of expressing `counts as' relations between actions, also referred to as `conventional generation'. The second extension is a way of specifying the permitted (acceptable, legal) states of a transition system and its permitted (acceptable, legal) transitions. There are implementations supporting a wide range of temporal reasoning and planning tasks (based on the 'Causal Calculator' for C+ from the University of Texas), for obtaining event-calculus-like computations with C+ and (C+)++ action descriptions, and for using C+ and (C+)++ with standard model checking systems for verifying system properties expressed in temporal logics such as CTL.

Games computer scientists play

Prof Faron Moller (The University of Swansea)
Friday 07/10, 14:30

Short abstract

When a computer runs a program, it is in essence playing a game against the user who is providing the input to the program. The program represents a strategy which the computer is using in this game, and the computer wins the game if it correctly computes the result.

In this lecture, we explore concepts of games and strategies, and expand on the analogy outlined above to justify the thesis that Game Theory provides a useful paradigm for understanding the nature of computation.

Long abstract

Computer Science is a relatively young discipline, and its nature is changing rapidly. Historically, a Computer Science Department typically grew out of either a Mathematics Department or an Engineering Department, and a Computer Science degree was predominantly about writing programs for a computer (software) as well as about the computer itself (hardware). Textbooks typically referred to programming as an "art" or a "craft" with little scientific basis compared, e.g., to traditional engineering subjects.

When a computer runs a program, it is in a sense playing a game against the user who is providing the input to the program. The program represents a strategy which the computer is using in this game, and the computer wins the game if it correctly computes the result. In this game, the user is the adversary of the computer and is naturally trying to confound the computer, which itself is attempting to defend its claim that it is computing correctly, that is, that the program it is running is a winning strategy.

This view suggests an approach to addressing three basic problems in the design of computer systems:

1) Specification refers to the problem of defining the problem to be solved, and what constitutes a solution. This problem reduces naturally to the problem of defining a winning strategy.

2) Synthesis refers to the problem of devising a solution to the problem which respects the specification. This problem reduces naturally to the problem of implementing a winning strategy.

3) Verification refers to the problem of proving that the devised solution does indeed respect the specification. This problem reduces naturally to the problem of proving that a given strategy is in fact a winning strategy.

In this lecture, we explore concepts of games and strategies, and expand on the analogy outlined above to justify the thesis that Game Theory provides a paradigm for understanding the nature of computation.

Some working experiences in Estimation of Distribution Algorithms (EDAs)

Dr Qingfu Zhang (The University of Essex)
Friday 30/09, 14:30

EDAs have been recognized as a major paradigm in evolutionary computation. There is no crossover or mutation in EDAs. Instead, they explicitly extract global statistical information from the selected solutions (often called parents) and build a posterior probability distribution model of promising solutions, based on the extracted information. New solutions are sampled from the model thus built and fully or partially replace the old population.
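
A minimal Python sketch of that loop, as a univariate EDA (in the style of UMDA) on the toy OneMax problem, with all parameter values chosen arbitrarily for illustration:

    import random

    def umda_onemax(n=30, pop=100, parents=30, gens=40):
        """Univariate EDA sketch: estimate per-bit marginals from the
        selected parents, then sample the next population from that
        model. No crossover or mutation is used."""
        p = [0.5] * n                         # initial distribution model
        for _ in range(gens):
            population = [[int(random.random() < p[i]) for i in range(n)]
                          for _ in range(pop)]
            population.sort(key=sum, reverse=True)  # OneMax fitness = # of 1s
            selected = population[:parents]
            # Re-estimate the marginal probability of a 1 at each position,
            # clamped so that no bit freezes at 0 or 1.
            p = [min(0.95, max(0.05,
                 sum(ind[i] for ind in selected) / parents))
                 for i in range(n)]
        return sum(max(population, key=sum))

    print(umda_onemax())   # typically close to 30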

In this talk, I will present some of our work mainly in Estimation of Distribution Algorithms (EDAs) for hard optimisation problems. The contents of my talk are:

  • Background
  • EDA + Two Local Searches
  • Guided Mutation: EDA + GA
  • A New EDA for Multi-objective Optimisation
  • Orthogonal Crossover: Genetic Algorithm + Experimental Design

Modelling Concurrent Interactions

Dr Juliana Küster Filipe (The University of Birmingham)
Friday 23/09, 14:30

In UML 2.0, sequence diagrams have been considerably extended, but their expressiveness and semantics remain problematic in several ways. For instance, it is still not possible to distinguish between a message that, if sent, may be received and one that must be received, or to enforce progress of an instance along its lifeline. One proposed way of addressing this limitation, and consequently obtaining a richer language for inter-object specification, is to combine sequence diagrams with liveness constraints expressed in UML's Object Constraint Language (OCL).

In this talk, I describe the main features of sequence diagrams in UML 2.0 and present a semantics based on labelled event structures. Further, I describe a proposed OCL template to specify liveness properties and show how it can be used. Formally, the liveness-enriched sequence diagrams can be specified as a collection of formulae in a true-concurrent two-level logic interpreted over labelled event structures. The top level logic, called communication logic, is used to describe inter-object specification, whereas the lower level logic, called home logic, describes intra-object behaviour. I discuss some practical benefits of this logical framework as well as ongoing work on extending the Edinburgh Concurrency Workbench.

Author: Vincent Schmitt (V.Schmitt at mcs.le.ac.uk), T: 0116 252 3813.
© University of Leicester, 1st September 2005. Last modified: 13th September 2010, 08:00:04