Computer Science Seminars and Short Courses — 2003/04
Semester 2

Seminar programme
Seminar details

Simplified External Memory Algorithms for Planar DAGs
Dr Laura Toma (Bowdoin College, Brunswick)
In recent years a large number of I/O-efficient algorithms have been developed for fundamental planar graph problems. Most of these algorithms rely on the existence of small planar separators, as well as on an O(sort(N)) I/O algorithm for computing a partition of a planar graph based on such separators, where sort(N) is the number of I/Os needed to sort N elements. In this paper we simplify and unify several of the known planar graph results by developing linear-I/O algorithms for the fundamental single-source shortest path, breadth-first search and topological sorting problems on planar directed acyclic graphs, provided that a partition is given; thus our results give O(sort(N)) I/O algorithms for the three problems. While algorithms for all these problems were already known, the previous algorithms are all considerably more complicated and use O(sort(N)) I/Os even if a partition is known. Unlike the previous algorithm, our topological sorting algorithm is simple enough to be of practical interest. This is joint work with Lars Arge.

Dr Vincent van Oostrom (University of Utrecht)
We present an implementation of the core of functional programming languages, i.e. an implementation of beta-reduction on the lambda-calculus. More precisely, lambda-terms are implemented as graphs and each beta-reduction step is decomposed into a number of graph rewrite steps. This idea is quite common in the implementation of functional programming languages. The novel part lies in the way in which beta-steps are decomposed. We show that a beta-step contracting (\lambda x.M)N to M[x:=N] can be decomposed into two operations:
- one dealing with correctly managing the scope of the variables when substituting N in M (to avoid confusion of variables), and
- another dealing with replicating the argument N as many times as x occurs in M.
Accordingly the implementation, which we've dubbed lambdascope, consists in making the above two operations into explicit operators. This yields a graph rewrite system which has the following features:
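As a rough illustration of the two-phase decomposition of a beta step described in the abstract above, here is a minimal sketch in Python over a plain tuple representation of terms. The representation and all function names are invented for this example; it separates scope management from replication but does not reflect the lambdascope graph encoding or its actual operators.

    # A hedged sketch only: one beta step (\x.M)N -> M[x:=N] is split into
    #   1. a "scope" phase that renames binders in M which would capture a
    #      free variable of N (managing variable scope), and
    #   2. a "replicate" phase that copies N into each occurrence of x.
    # Fresh names v0, v1, ... are assumed not to occur in the input terms.

    import itertools

    fresh = (f"v{i}" for i in itertools.count())

    def free_vars(t):
        tag = t[0]
        if tag == 'var':
            return {t[1]}
        if tag == 'lam':
            return free_vars(t[2]) - {t[1]}
        return free_vars(t[1]) | free_vars(t[2])      # application

    def scope_adjust(m, x, n):
        """Phase 1: rename binders in m that would capture free variables of n."""
        tag = m[0]
        if tag == 'var':
            return m
        if tag == 'app':
            return ('app', scope_adjust(m[1], x, n), scope_adjust(m[2], x, n))
        y, body = m[1], m[2]
        if y == x:                        # x is shadowed here, nothing to do below
            return m
        if y in free_vars(n):             # this binder would capture a variable of n
            z = next(fresh)
            body = replicate(body, y, ('var', z))     # harmless renaming of y to z
            y = z
        return ('lam', y, scope_adjust(body, x, n))

    def replicate(m, x, n):
        """Phase 2: copy n into every free occurrence of x (capture already avoided)."""
        tag = m[0]
        if tag == 'var':
            return n if m[1] == x else m
        if tag == 'lam':
            return m if m[1] == x else ('lam', m[1], replicate(m[2], x, n))
        return ('app', replicate(m[1], x, n), replicate(m[2], x, n))

    def beta(redex):
        """One beta step on a redex of the form ('app', ('lam', x, m), n)."""
        (_, (_, x, m), n) = redex
        return replicate(scope_adjust(m, x, n), x, n)

    # Example: (\x. \y. x) y  ->  \v0. y   (the inner binder is renamed first)
    print(beta(('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('var', 'y'))))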
Irene Lopez de Vallejo (University College London)
The talk is structured around the relationship between society, space and technology. The author focuses on the interaction of new smart technologies and people in the workplace, and assumes that a sociological understanding of this association is a key input both for the research community and for the construction of the future workplace. The Ubiquitous Computing and Ambient Intelligence paradigms will be discussed as potential frameworks for this research. The discussion with the public will centre on the following topics:
An overview of past and current research related to this area will be given.

Dr Matus Mihalak (ETH Zurich)
Orthogonal Variable-Spreading-Factor (OVSF) codes are used in new-generation wireless networks (e.g. UMTS) to share the radio spectrum of the system among users while allowing them different data rates. The codes can be represented as nodes of a complete binary tree $T$ of height $h$, where codes on different levels provide different data rates. As users enter and leave the system, we assign them codes from the tree according to their data rate demands. The constraint on the assignment is that no two assigned codes lie on a common leaf-root path. When a user requests a code, it may be necessary to reassign some of the already assigned codes in order to find a code to serve the request. A natural goal is to minimize the number of code reassignments. We discuss online and offline variants of this optimization problem and present hardness results as well as algorithms.

Qualitative vs Quantitative Program Analysis
Dr Herbert Wiklicky (Imperial College London)
Practice and theory aimed at investigating the meaning, semantics, properties, etc. of programs and software - as with any investigation of human artifacts - can be divided into two dialectically interacting approaches: synthesis and analysis. Arguably, large parts of software engineering practice concentrate on the synthetic aspects, i.e. the specification, implementation, construction, etc. of (large) programs. Analytical aspects, ranging from verification and model checking to performance analysis and testing, often play a much less important role in everyday practice. This talk aims at presenting a coherent approach to program analysis covering qualitative (e.g. verification) as well as quantitative (e.g. performance) aspects. Central to such a unifying point of view are two closely related notions of "approximation" which formalise the concepts of "correctness" and "closeness" in different ways. This leads to the framework of classical Abstract Interpretation based on so-called Galois connections - introduced by Cousot and Cousot in the 1970s - and of Probabilistic Abstract Interpretation based on the so-called Moore-Penrose pseudo-inverse - developed recently by Di Pierro and Wiklicky.

Prof. Meir Lehman (School of Computing, Middlesex University)
Interest in the phenomenon of software evolution and its exploitation is spreading rapidly. First identified by Lehman following his one-year IBM Programming Process study at Yorktown in 1968/9, its reality, significance and potential practical importance for software product and process improvement have only recently become more widely acknowledged. It is, however, now receiving widespread attention in, for example, a number of international networks and other groupings. The phenomenon results from an intrinsic need and desire to continually adapt software systems to a changing world, to improve performance and functionality and to ensure continuing reliability and economic viability. There are two separate, though related, aspects to its study. On the one hand it is of considerable interest to understand what software evolution is, why it occurs, why it is inevitable and what impact it has in practice. On the other hand industry needs to know how to recognise a need or opportunity and then plan, manage, control and, above all, implement evolution predictably, reliably, economically and on time.
Industrial and academic interest has, in the main, focussed on the latter aspects, the means whereby evolution is achieved. Full achievement of this goal requires, however, a sound understanding of the nature and characteristics of the phenomenon. Sound answers to these questions will speed up process improvement to help meet the challenges posed by the ever-increasing dependence of 21st-century society on computer systems that function satisfactorily despite an ever-changing world. The talk introduces and discusses aspects of software and the software process that provide a basis for the study of these questions and for research addressing them. The talk will introduce one of the underlying sources of continuing evolution, identify its true nature, present examples of evolutionary behaviour, briefly discuss aspects of the modelling and interpretation of that behaviour, illustrate the dynamics of the evolution process and indicate some initial elements of a theory of software evolution. This provides background for an outline of practical implications, management and technical guidelines and rules that follow from the more theoretical discussion outlined above. These, in turn, suggest improvements to the software evolution process to achieve more reliable, timely and economic evolution that keeps software systems satisfactory in a dynamic, always changing world.

Modal logics of submaximal spaces
Dr David Gabelaia (King's College)
(Joint work with Guram Bezhanishvili and Leo Esakia.) The operations of closure and derivation on a topological space can be understood as normal modal operators working on the boolean algebra of all subsets of the underlying set. This gives rise to modal logics associated with topological spaces. It is from this point of view that we consider the subclasses of submaximal spaces (among others, door spaces and maximal spaces), which have been the subject of increasing interest in the topological community over the past decades. Such spaces are of interest in digital topology as well (e.g. the Khalimsky line is submaximal). It will be demonstrated that equations involving the closure operator alone fail to distinguish these classes. On the other hand, the derivation operator offers more expressivity and allows the individual characteristics of each of them to be captured precisely. For each of the topological classes concerned, a simple class of finite Kripke frames generating the same modal validities will be exhibited. It will follow that all the modal logics arising from submaximal spaces have the finite model property and are decidable.

Homological finiteness conditions for term rewriting systems
Dr Philippe Malbos (Université de Montpellier 2, visiting the University of Bangor)
By using the Knuth-Bendix completion algorithm, in some cases the word problem for a monoid can be reduced to a computation of normal forms. In general, however, the word problem for a monoid cannot be solved in this way. A programme, initiated by Squier, aims to characterize, by an algebraic invariant, the class of monoids decidable by rewriting. In this talk, we will briefly survey constructions of such invariants for string rewriting. We will show how the finiteness conditions for string rewriting can be generalized to first-order term rewriting systems. For this purpose, we will consider a categorical semantics of first-order terms via algebraic theories in the sense of Lawvere, and a homology theory of these theories with coefficients in cartesian natural systems.
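To make concrete the reduction mentioned in the last abstract, namely deciding the word problem by comparing normal forms with respect to a finite, terminating and confluent (e.g. Knuth-Bendix completed) string rewriting system, here is a small Python sketch. The two-rule presentation used is an arbitrary toy example, not one from the talk.

    # Toy rewriting system over the alphabet {a, b}, assumed terminating and
    # confluent: ba -> ab (letters commute) and aa -> empty word.
    RULES = [("ba", "ab"), ("aa", "")]

    def normal_form(word):
        """Apply rules at the leftmost possible position until none applies."""
        changed = True
        while changed:
            changed = False
            for lhs, rhs in RULES:
                i = word.find(lhs)
                if i != -1:
                    word = word[:i] + rhs + word[i + len(lhs):]
                    changed = True
                    break
        return word

    def equal_in_monoid(u, v):
        """Decide u = v in the presented monoid by comparing normal forms."""
        return normal_form(u) == normal_form(v)

    print(equal_in_monoid("aba", "b"))    # True:  aba -> aab -> b
    print(equal_in_monoid("ab", "ba"))    # True:  ba -> ab
    print(equal_in_monoid("a", "b"))      # False: distinct normal forms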
Dr Sasha Rubin (The University of Auckland)
Automatic structures are a natural extension of regular languages into algebra. After presenting the formal definition, I will show how basic algebraic structures can be realised as automatic structures, and then present some fundamental properties. I will outline more recent research that displays the richness of the area; this involves the classification problem, the isomorphism problem, and the relationship between definability and regularity in automatic structures. The latter results are joint with Bakhadyr Khoussainov, Frank Stephan and Andre Nies. Some technical reports can be found at http://www.cs.auckland.ac.nz/staff-cgi-bin/mjd/secondcgi.pl

Homotopical Aspects of Multiagent Systems
Prof. Timothy Porter (University of North Wales, Bangor)
The talk will examine some simple-minded combinatorial models for multiagent systems, the notion of morphism between them, and the notion of paths. It will then pose some questions using analogues of constructions from homotopy theory. This suggests that it may be worth exploring a locally defined analogue of a multiagent system, which will be briefly explored, although interpretations of these latter systems are still at a very elementary stage.

Addressing the model management challenges of systems biology
Dr Peter Saffrey (University College London)
Biological modelling is an increasingly complex and diverse field. As well as developing new models for biological phenomena, there is a need to integrate existing models and "scale up" to investigate higher-level behaviour. To achieve this integration, techniques and tools are required to catalogue and understand existing models, and to support the development of new models ready for integration. We describe our approach to this problem and validate the approach with examples. We also present some preliminary results demonstrating prototype architectures and running code.

Formal Development of a Safe Wheelchair Control Program
Dr Christoph Lutz (Universitaet Bremen)
In the talk, I will report on ongoing work in the SafeRobotics project, which is concerned with the formal development of a safe control program for a wheelchair for severely handicapped patients. The talk will introduce the system architecture of our wheelchair, and discuss how safety can be ensured by the formal development of a small but critical safety module. The formal development itself is done with the higher-order theorem prover Isabelle/HOL. First we formalise the wheelchair's kinematics in higher-order logic and formulate a safety requirements specification. From this specification, we derive a design specification in the executable fragment of higher-order logic, from which we derive a second design specification using imperative features such as state and exceptions, arriving finally at an imperative program.

Proof nets for unit-free multiplicative-additive linear logic
Prof. Rob van Glabbeek (University of Edinburgh)
(Joint work with D. Hughes, Stanford.) A cornerstone of the theory of proof nets for unit-free multiplicative linear logic (MLL) is the abstract representation of cut-free proofs modulo inessential commutations of rules. The only known extension to additives, based on monomial weights, fails to preserve this key feature: a host of cut-free monomial proof nets can correspond to the same cut-free proof.
Thus the problem of finding a satisfactory notion of proof net for unit-free multiplicative-additive linear logic (MALL) has remained open since the inception of linear logic in 1986. We present a new definition of MALL proof net which remains faithful to the cornerstone of the MLL theory.

Business Rule Configuration for a Credit Recovery System
Michel Wermelinger (New University of Lisbon, Portugal)
This talk reports on joint work with G. Koutsoukos, R. Avillez, H. Lourenço and J. Gouveia from ATX Software SA on the use of coordination contracts in a project for a credit recovery company. We have designed and implemented a framework that allows users to define several business rules according to pre-defined parameters. However, some rules require changes to the services provided by the system. For these, we use coordination contracts to intercept the calls to the underlying services and superpose whatever behaviour is imposed by the business rules applicable to that service. Such contracts can be added and deleted at run-time. Hence, our framework includes a configurator that, whenever a service is called, checks the applicable rules and configures the service with the given parameters and contracts before proceeding with the call (a rough sketch of this idea is given below). Using this framework we have also devised a way to generate rule-dependent SQL code for batch-oriented services.

Process-aware collaboration systems - Models and Architectures for Virtual Teamwork
Schahram Dustdar (Technical University of Vienna)
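The following Python fragment is a very rough sketch of the configurator idea in the Wermelinger abstract above. The class, service and rule names are invented for illustration, and the simple wrapper mechanism only mimics the interception and superposition role that coordination contracts play in the actual framework.

    # Invented names throughout; not the actual ATX Software framework.
    class Configurator:
        def __init__(self):
            self.rules = {}                     # service name -> list of contracts

        def add_rule(self, service, contract):  # contracts can be added at run-time
            self.rules.setdefault(service, []).append(contract)

        def remove_rule(self, service, contract):
            self.rules[service].remove(contract)

        def call(self, service, func, *args):
            """Intercept a service call and superpose the applicable contracts."""
            wrapped = func
            for contract in self.rules.get(service, []):
                wrapped = contract(wrapped)
            return wrapped(*args)

    # An invented business rule: add a 5% surcharge to the recovered amount.
    def late_payment_surcharge(service):
        def with_surcharge(amount):
            return service(amount) * 1.05
        return with_surcharge

    def recover_debt(amount):                   # stand-in for an underlying service
        return amount

    configurator = Configurator()
    configurator.add_rule("recover_debt", late_payment_surcharge)
    print(configurator.call("recover_debt", recover_debt, 100.0))   # -> 105.0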
Wavelength Conversion in Optical Networks with Shortest-Path Routing
Thomas Erlebach (ETH Zurich)
All-optical networks offer the capacity that is required to meet the ever-increasing demands for transmission bandwidth. A connection is established by assigning it a wavelength and a path from transmitter to receiver. Wavelength blocking can reduce the usable capacity of the network. The use of wavelength converters has been proposed in order to alleviate this problem. While previous work on wavelength converter placement has studied the case where arbitrary sets of paths must be accommodated, we focus on the variant of the problem where all connections are routed along shortest paths. We present efficient algorithms for deciding whether a placement of wavelength converters allows the network to run at maximum capacity, and for finding an optimal wavelength assignment if such a placement of converters is given (a toy illustration of the wavelength assignment constraint follows below). Concerning the problem of computing an optimal converter placement, we prove MAX SNP-hardness. Devising a good approximation algorithm remains an interesting open problem. This is joint work with Stamatis Stefanakos.

Prof. Meir Lehman (School of Computing, Middlesex University)
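To make the wavelength assignment constraint from the Erlebach abstract concrete: connections whose paths share a fibre (an edge) must receive different wavelengths. The Python sketch below is a plain first-fit heuristic for this constraint only; it is not the optimal assignment algorithm referred to in the abstract and it ignores converters entirely.

    def assign_wavelengths(paths):
        """paths: list of edge sets, one per connection; returns one wavelength per path."""
        assignment = []
        for path in paths:
            # wavelengths already used by connections sharing an edge with this path
            used = {assignment[i] for i, other in enumerate(paths[:len(assignment)])
                    if path & other}
            w = 0
            while w in used:                     # first-fit: smallest free wavelength
                w += 1
            assignment.append(w)
        return assignment

    # Three connections on a line network 1-2-3-4, edges named by their endpoints.
    paths = [{(1, 2), (2, 3)}, {(2, 3), (3, 4)}, {(3, 4)}]
    print(assign_wavelengths(paths))             # -> [0, 1, 0]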
Dr Thorsten Altenkirch (School of CS and IT, Nottingham University)
Prof. Luciano Baresi (Politecnico di Milano)
Grid Middleware: Dressing the Emperor!
Prof. Gordon Blair (Lancaster University)
There has recently been a major investment in the UK and elsewhere in the computational Grid and e-Science generally. A part of this investment has been directed towards appropriate middleware for the Grid, with current thinking favouring an application of the web services approach in this area. This seminar will discuss such developments and also associated developments in the greater middleware community (distributed objects, components, etc.). It will be argued that the basic web services approach is insufficient to meet the needs of all classes of e-Science application. The talk will conclude with a short presentation of ongoing research addressing more open and flexible middleware architectures for Grid computing.

Developing Software Engineering Environments for Collaborative Software Development
Prof. Cornelia Boldyreff (Durham University)
For a number of years, the deployment of large development teams has been a common practice in software engineering, and global teams are now not uncommon. However, it is only recently that collaborative software development environments have been under construction, and studies on the nature of global software development processes are only just emerging. While many generic Computer Supported Cooperative Work (CSCW) systems and groupware products exist, specific CASE groupware has been lacking. Web-based CVS offers a simple example of such a tool, and more general support can be found via SourceForge. Nevertheless, the CASE support offered for globally distributed teams is limited. Over the past two years, the GENESIS and CoDEEDS projects have been addressing the development of CASE support for globally distributed teams. The CASE environments and tools being built in both of these projects are focused on supporting the collaborative working of teams engaged in distributed software engineering projects, and both projects are releasing their software as open source. An area addressed by both projects is support across projects, both in time and space, i.e. historical views of an enterprise's projects and global views of current projects and their related software artefacts. Our initial concern is to develop support for such views for the software engineers actively engaged in global software development. It is not uncommon for such an engineer to have worked on, and to be working on, a number of projects, with each project using a different CASE environment and tools, without any overarching support for their work.

Prof. Anthony Finkelstein (University College London)
xlinkit is a flexible application service that checks the integrity of distributed documents, databases and web content. It provides powerful diagnostics that generate hyperlinks between elements that violate data integrity. xlinkit was specifically designed to: validate complex documents, where traditional database mechanisms and XML schema languages are not sufficient; integrate distributed data, by allowing you to check your data wherever it resides, and by checking integrity across your data sources; bridge data heterogeneity, by providing mechanisms to check data whatever format or storage it is in; and generate portals that provide structured access to complex, distributed data. In this talk I will show how constraints across data resources can be expressed and checked. I will describe a method for generating links based on the results of the checks and show how this can yield high-quality diagnostic information.
I will look at how repairs to data integrity flaws can be identified. I will present the architecture of the service and the results of a substantial 'real-world' evaluation. The talk, which may be of interest to anybody with an interest in XML technologies, will be supported by a demo.

Formal redesign of UML class diagrams with OCL constraints
Dr Piotr Kosiuczenko (University of Leicester)
The refactoring of program code has been widely studied. Much less attention has been paid to the problem of specification redesign and tracing, especially at the formal level. The specification of a larger piece of software is usually spread across several documents and changes constantly during the software life cycle. In such a case it is very hard to trace the requirements; similarly, it is hard to validate the compliance of the software with the requirements. In this talk I will present my recent results on transforming and tracing OCL constraints in UML class diagrams. I will present a general notion of interpretation function and show how to use it to transform constraints and to define a trace relationship. This notion of trace allows one to navigate through requirements in forward and backward directions across spread-out and changing specifications. I will also discuss a possible implementation.

Logical Interpolation and Projection in the Duration Calculus
Dr Dimitar P Guelev (University of Birmingham, School of Computing)
I briefly present the Duration Calculus (DC), which is an extension of real-time Interval Temporal Logic by state expressions and duration terms for states. I review earlier results of mine on Craig interpolation and interval-related interpolation for ITL. I then discuss the possibility of generalising these results and present a new interpolation result on a subset of DC, where "interval-related" is replaced by "projection-related", projection being a DC modality which I introduce and motivate as well. I give a counter-example showing that neither Craig nor the more general interval-related interpolation holds for DC without restrictions such as the ones appearing in the interpolation theorems I present. Time permitting, I show how Craig interpolants can be constructed in a subset of DC widely regarded as the "propositional" subset, in a fashion similar to the one known for classical propositional logic.
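As background for the last abstract, the classical propositional form of Craig interpolation that the talk's constructions parallel is the following standard fact (recalled here for orientation only, not a result of the talk): if $\models A \rightarrow B$, then there is an interpolant $I$, built only from propositional variables occurring in both $A$ and $B$ (possibly just $\top$ or $\bot$ when they share none), such that $\models A \rightarrow I$ and $\models I \rightarrow B$. In classical propositional logic such an $I$ can be obtained from $A$ by eliminating, one at a time, each variable $p$ of $A$ that does not occur in $B$, using

    \exists p\, A \;\equiv\; A[\top/p] \vee A[\bot/p].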
Author: Vincent Schmitt (V.Schmitt at mcs.le.ac.uk), T: 0116 252 3813.