University of Leicester

Computer Science Seminars and Short Courses

Semester 2, 2008/9

Seminar programme


Seminar details

Creating and Processing Visual Domain-Specific Languages

Gergely Mezei (Budapest University of Technology)
Tuesday, July 7, 10:00 in KE LT3 (Host: Reiko Heckel, Denes Bisztray)

Software reuse has always been a central concern. Contrary to the claims of certain methodologies, reuse is not a free side effect of a good development method: software must be designed for reuse in the first place. It is fairly obvious that a single software product cannot address every issue in the world. Generative programming aims at addressing all the issues of a particular domain: it captures variability, and a configurable generator produces the software for a given set of decisions. We then face two challenges: the inherent complexity of these systems and the increasing demand to represent software mechanisms in a visual, easy-to-understand way. Domain-specific modeling offers a solution to both problems: it raises the level of abstraction beyond classical programming languages and visualizes the concepts in a domain-specific notation. Users work with graphical domain-specific models, while the final products are generated from these high-level specifications. The key to automated code generation lies in limiting the scope of the modeling languages to a well-defined domain instead of trying to be highly generic. Experience shows that development is 5-10 times faster than with current practices, not to mention the benefit of user-friendly graphical representations.

The aim of the presentation is to introduce the theoretical and practical background of generative domain-specific modeling. The presentation is based on experience gained from more than eight years of intensive research and software development. The basic concepts of visual language engineering, model transformation, code generation, refactoring and simulation techniques will be elaborated. Besides the theoretical concepts, it will also be discussed how to create a software application capable of supporting all the aforementioned techniques in an efficient yet flexible way. The presentation will introduce several case studies to illustrate the methods.


Intelligence Augmentation using Semantics and Social Computing

Vania Dimitrova, Lydia Lau (University of Leeds)
Friday, July 3, 14:00 in BEN LT2 (Host: Effie Law)

Technological advancements can transform human life, practice and expectations. Combining modern technologies with creative design methodologies can create real opportunities to augment human intelligence. The talk will start by revisiting Engelbart's vision of using computers to augment human intellect, pointing at new possibilities and challenges for implementing this vision. We will argue that a broader view of intelligence augmentation is needed, which considers the design and deployment of technologies that turn experience into knowledge and apply it in turn to improve practice in the relevant context. A co-evolutionary approach will be presented which brings together smart technologies and innovative design methodologies to augment intelligence by enabling people to understand and profit from their experience. Individual and collective viewpoints are important to this holistic approach. The second part of the talk will illustrate this approach with several interdisciplinary projects at the University of Leeds, focusing on the use of semantics and social computing techniques for intelligence augmentation. We will present:

- research conducted in collaboration with the Leeds University Business School to design intelligent environments that provide new opportunities for on-the-job training in emergency decision making by using Activity Theory and ontological reasoning;

- research conducted in collaboration with the School of Education at the University of Leeds on designing an online community for academic writing where semantics is employed to implement social scaffolding; we will show how semantics has been used to scaffold social exchanges and to derive a holistic model of a community;

- research conducted in collaboration with the School of Chemistry at the University of Leeds on the design of an Electronic Laboratory Notebook which aims to facilitate the sharing of provenance data in modelling experiments.


A Decision Procedure for Fusion Logic

Antonio Cau (De Montfort University)
Friday, June 19, 14:00 in BEN LT2 (Host: Monika Solanki)

Slides

Developing a decision procedure for Propositional Interval Temporal Logic (PITL) is a non-trivial task due to the non-elementary complexity of PITL. We introduce the syntax and semantics of Fusion Logic (FL), which has the same expressiveness as PITL but for which it is easier to construct a decision procedure. The main differences between PITL and FL concern computational complexity, naturalness of expression for analytical purposes, and succinctness. Fusion Logic augments the conventional Propositional Temporal Logic (PTL) of Manna and Pnueli with the fusion operator, essentially a restricted "chop" without an explicit negation on the left-hand side. We will show a detailed construction of the decision procedure for Fusion Logic.
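
For orientation, the interval reading of chop-style composition, of which fusion is the negation-restricted form, can be sketched in standard notation (a textbook formulation, not taken from the talk):

    \sigma \models f_1 \,;\, f_2 \;\iff\; \exists k \le |\sigma| .\; \sigma_{0..k} \models f_1 \ \wedge\ \sigma_{k..|\sigma|} \models f_2

The two subintervals share the state at position k, i.e. they are "fused" at that point.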

Fusion Logic can be used to specify history-based access control policies, and the decision procedure can therefore be used to verify properties about them. History-based access control policies govern the behaviour of a system based on previously observed events in the execution of the system. The increase in expressiveness comes at the cost of enforcement efficiency, with respect to both space and time. We will show that a slightly modified version of the decision procedure can be used to enforce these policies efficiently.


Modelling change of awareness and knowledge

Hans van Ditmarsch (Sevilla)
Friday, June 12, 14:00 in BEN LT2 (Host: Fer-Jan de Vries, Alexander Kurz)

Slides

(Joint work with Tim French, University of Western Australia.) We propose various logical semantics for change of awareness. The setting is that of multiple agents that may become aware of facts or of other agents, or forget about them. We model these dynamics by quantifying over propositional variables and agent variables in a multi-agent epistemic language with awareness operators, employing a notion of bisimulation with a clause for 'same awareness'. The quantification is over all the different ways in which an agent can become aware (or forget). Logics for change of awareness combine well with logics for informational change, as when a public announcement simultaneously makes you aware of an issue ('a plane just crashed at Schiphol Airport').
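
As a reference point, the static awareness operator in the Fagin-Halpern tradition is interpreted via an awareness set attached to each agent i and world w (a standard semantics, recalled here for illustration):

    M, w \models A_i \varphi \;\iff\; \varphi \in \mathcal{A}_i(w)

Becoming aware of (or forgetting) an atom then amounts to growing or shrinking \mathcal{A}_i, and the dynamic quantifiers range over the possible such changes.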


Constructing Final Coalgebras

Jiri Velebil (Prague)
Friday, May 22, 14:00 (Host: Alexander Kurz)

Final coalgebras (for an endofunctor) contain all possible behaviours of recursive systems (given by that endofunctor). This property makes the existence of final coalgebras an important problem. So far, final coalgebras have mainly been constructed for "good" endofunctors on "good" categories. We provide an explicit method for constructing final coalgebras on "not so good" categories.
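
To fix terminology: for an endofunctor F on a category C, a coalgebra is a pair (X, x : X \to FX), and a final coalgebra (Z, \zeta) admits exactly one homomorphism from every coalgebra:

    \forall (X, x) \;\exists!\, h : X \to Z \quad \text{such that} \quad \zeta \circ h = F(h) \circ x

The unique h maps each state of X to its behaviour in Z, which is why Z collects all possible behaviours.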

We will focus mainly on examples; only a little category theory will be used.

This is joint work with Apostolos Matzaris and Panagis Karazeris from the University of Patras, Greece.


Specification and verification of model transformations using algebraic triple patterns

Fernando Orejas (Barcelona)
Friday, May 8, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

Model transformation is one of the key notions in the model-driven engineering approach to software development. Most work in this area concentrates on designing methods and tools for defining or implementing transformations, on defining interesting specific classes of transformations, or on proving properties about given transformations, such as confluence or termination. However, little attention has been paid to the verification of transformations. This may be due, in part, to the fact that there is also little work trying to provide formal foundations in this area.

In this talk I will first introduce a formal approach to the specification of model transformations using algebraic triple patterns, a slight generalization of the approach introduced by Guerra and de Lara at the last ICGT. Then we will see how these specifications can be compiled into a set of rules, proving the termination, soundness and completeness of these rules. Finally, we will discuss what it means to verify a model transformation, providing a specific method also based on the use of algebraic triple patterns.


Antichain algorithm for Visibly Pushdown Model Checking

Mizuhito Ogawa (Japan Advanced Institute of Science and Technology (JAIST))
Tuesday, Mar 31, 10:00 in ATT SB2.06 (Host: Fer-Jan de Vries)

Alur et al. (ACM STOC 2004) proposed Visibly Pushdown Automata (VPA), which enjoy nice boolean closure properties, yet cover parenthesis languages. Their result shows the theoretical possibility of visibly pushdown model checking, but due to the high complexity O(2^{n^2}) of the determinization step, there are virtually no implementations. We attack the problem with an antichain idea, which combines the determinization, minimization, and reachability steps in an on-the-fly manner. Although the theoretical complexity is not improved, preliminary experiments on randomly generated VPAs show significant improvements.


MCMAS: A model checker for the verification of multi-agent systems

Hongyang Qu (Imperial College London)
Friday, Mar 27, 14:00 in BEN LT2 (Host: Monika Solanki)

We present MCMAS, a symbolic model checker specifically tailored to agent-based specifications and scenarios. MCMAS supports specifications based on Computation Tree Logic (CTL), epistemic logic (including operators of common and explicit knowledge), Alternating-time Temporal Logic (ATL), and deontic modalities for correctness. The model input language includes variables and basic types, and it implements the semantics of interpreted systems, thereby naturally supporting the modularity present in agent-based systems. MCMAS implements Ordered Binary Decision Diagram (OBDD)-based algorithms optimised for interpreted systems and supports fairness, counter-example generation, and interactive execution (in both explicit and symbolic mode). It has been tested in a variety of scenarios including web services, diagnosis and security. MCMAS comes with a GUI aimed at assisting the verification process.

In this talk, we describe not only the functionalities of the tool, with examples, but also the internal design and optimisation of the verification algorithms. We would also like to share the experience gained during development and discuss experimental results.


Balancing Conflicting Objectives for UK Football

Graham Kendall (Nottingham)
Thursday, March 26, 10:00 in BEN LT5 (Host: Shengxiang Yang)

Every year the English football authorities produce a set of fixtures for the four main divisions in England. Over the Christmas and New Year period every team has to play two fixtures: one at their home venue and the other at an opponent's venue. Various other constraints also have to be respected, with the overall objective being to minimise the total distance travelled by all teams. In this talk, we will define the problem, discuss the data collection that was undertaken and present the various algorithms we used to produce good-quality solutions. We show that our results, using data from the last seven seasons, create better schedules than those currently used. Additionally, we look at the multi-objective nature of the problem and show how we are able to balance conflicting objectives. We demonstrate that, on occasion, we are able to minimise both objectives with no loss of solution quality.
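
As a toy illustration of the distance objective only (not the algorithms from the talk), the following Python sketch greedily pairs teams for the holiday fixtures so that each team gets one home and one away game; the grounds, coordinates and pairing scheme are all invented for illustration:

    import math
    import random

    # Hypothetical data: team -> (x, y) coordinates of its ground.
    random.seed(0)
    grounds = {t: (random.random(), random.random()) for t in range(10)}

    def dist(a, b):
        (x1, y1), (x2, y2) = grounds[a], grounds[b]
        return math.hypot(x1 - x2, y1 - y2)

    def greedy_matching(teams):
        # Pair each team with its nearest still-unmatched opponent.
        free, pairs = set(teams), []
        while free:
            h = free.pop()
            a = min(free, key=lambda t: dist(h, t))
            free.remove(a)
            pairs.append((h, a))
        return pairs

    def holiday_cost(pairs):
        # Each pair plays home-and-away, so both teams travel once.
        return sum(2 * dist(h, a) for h, a in pairs)

    print(holiday_cost(greedy_matching(list(grounds))))

A real instance adds the further constraints mentioned above, which is what makes local search and multi-objective methods necessary.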


Reversible computation and reversible programming languages

Tetsuo Yokoyama (Nagoya)
Friday, March 20, 14:00 in BEN LT2 (Host: Irek Ulidowski)

We will discuss the fundamental properties of reversible computation from a programming language perspective. We show principles for the good design of a reversible high-level language and a reversible machine code, in which the reversibility of each layer is guaranteed independently. Many language constructs and properties from irreversible languages can be used in reversible programming languages; for example, we show that the structured program theorem holds for reversible programming languages. On the other hand, reversible programming languages have their own modularity and programming techniques. The practicality of reversible programming languages is demonstrated by implementations of a reversible fast Fourier transform and a reversible physical simulation.
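
A minimal Python sketch of per-construct reversibility, assuming a Janus-style restriction that an update's expression may not mention the updated variable; the encoding is invented for illustration:

    def invert(stmt):
        # The inverse of x += e is x -= e, and vice versa.
        op, var, expr = stmt
        return ('-=' if op == '+=' else '+=', var, expr)

    def run(prog, store, forward=True):
        # Run a list of reversible updates forwards, or invert each
        # statement and reverse their order to run backwards.
        steps = prog if forward else [invert(s) for s in reversed(prog)]
        for op, var, expr in steps:
            d = expr(store)  # must not depend on `var` itself
            store[var] += d if op == '+=' else -d
        return store

    prog = [('+=', 'x', lambda s: s['y']),  # x += y
            ('-=', 'y', lambda s: 2)]       # y -= 2
    s = run(prog, {'x': 1, 'y': 5})         # gives {'x': 6, 'y': 3}
    assert run(prog, dict(s), forward=False) == {'x': 1, 'y': 5}

Because each statement is invertible and composition reverses cleanly, the reversibility of whole programs is guaranteed by construction.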


The pi-calculus and Logics for Concurrent and Sequential Programs

Kohei Honda (Queen Mary, London)
Friday, March 13, 14:00 in BEN LT2 (Host: Emilio Tuosto, Alexander Kurz)

This talk is about how we can represent computational behaviour with mathematical rigour and uniformity, and how such a representation can be useful for high-level specification and reasoning. Diverse software behaviours can be found on our PCs, on the Internet and on the web. A perhaps surprising fact is that all of this behaviour can be represented faithfully, up to some abstraction, in a small calculus of name passing: the pi-calculus. Related to other semantic theories such as game semantics and Linear Logic, and coupled with rich theories of types, the calculus can represent diverse computational behaviour with precision: sequential and concurrent, stateless and stateful, shared-variable and message-passing, first-order and higher-order.
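
At the core is a single interaction primitive, communication of names over named channels; in standard notation, the reduction rule is

    \overline{a}\langle b \rangle . P \mid a(x) . Q \;\longrightarrow\; P \mid Q\{b/x\}

An output of the name b on channel a synchronises with an input on a, and b replaces x in the receiver's continuation. Since channels are themselves names, received names can be used for further communication, which is the source of the calculus's expressiveness.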

In this talk, we shall discuss why such a representation is possible and what it can give us, drawing on our experience so far in applying it to the logical validation of a wide range of programming languages, with examples from typed business protocols, imperative programs and higher-order functions. The logic, the classical Hennessy-Milner Logic applied to typed processes, supports reasoning about processes through two proof systems corresponding to partial and total correctness, characterising the standard May and Must preorders respectively; their combination gives a proof system characterising bisimilarity. The logic allows compositional reasoning for diverse classes of process behaviour, including typed communicating processes and first-order and higher-order programs, which will be illustrated through examples. The logic can also embed the assertions and proof rules of program logics, including Hoare logic for imperative programs.


Faster algorithm for the set variant of the string barcoding problem

Leszek Gasieniec (Liverpool)
Friday, 6 March, 14:00 in BEN LT2 (Host: Rajeev Raman)

The "string barcoding problem" asks for a minimum set of substrings that distinguish between all strings in a given set of strings S. In a biological setting the given strings represent a set of genomic sequences and the substrings serve as probes in a hybridisation experiment. We study a variant of the string barcoding problem in which the substrings have to be chosen from a particular set of substrings of cardinality n. This variant can also be obtained from the more general 'test set problem' by fixing appropriate parameters. We present an almost optimal O(n|S|log^3 n)-time approximation algorithm for this problem. Our approximation procedure is a modification of the algorithm due to Berman et al., which achieves the best possible approximation ratio (1+ln n), provided NP is not contained in DTIME(n^{loglog n}). The improved time complexity is a direct consequence of more careful management of the processed sets, the use of several specialised graph and string data structures, and a tighter time complexity analysis based on an amortised argument.
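
To make the set-cover flavour concrete, here is a naive greedy baseline in Python; this is not the Berman et al. algorithm nor the data structures of the talk, just the simple greedy that underlies the (1+ln n) ratio, with invented inputs:

    from itertools import combinations

    def greedy_barcode(strings, candidates):
        # A substring t distinguishes a pair (s1, s2) if it occurs
        # in exactly one of the two strings.
        left = set(combinations(strings, 2))
        chosen = []
        while left:
            gain = lambda t: sum((t in s1) != (t in s2) for s1, s2 in left)
            best = max(candidates, key=gain)
            if gain(best) == 0:
                raise ValueError("candidates cannot distinguish all strings")
            chosen.append(best)
            left = {(s1, s2) for s1, s2 in left if (best in s1) == (best in s2)}
        return chosen

    print(greedy_barcode(["acgt", "aggt", "acct"], ["cg", "gg", "cc"]))

Each round picks the probe separating the most still-identical pairs; the speed-up in the talk comes from supporting exactly this loop with faster data structures.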

This is joint work with Cindy Y. Li and Meng Zhang.


Sorting out Higher-order Contexts by playing Games with the Int-construction

Thomas Hildebrandt (Copenhagen)
Friday, 27 Feb, 14:00 in BEN LT2 (Host: Emilio Tuosto, Alexander Kurz)

A number of recent papers employ some variant of 'higher-order' contexts to provide concise semantics of process calculi with parametric or non-local reaction rules.

Inspired by the usefulness of higher-order contexts demonstrated by these papers, and the apparent similarity between the approaches, we have provided a general method for constructing (categories of) higher-order contexts from (categories of) ordinary first-order contexts.

In the talk we first give a gentle introduction to bigraphs, which form an example of a general category of contexts. We then proceed to define and motivate higher-order contexts, leading to the definition of higher-order bigraphs. We show that Miculan and Grohmann's 'directed' bigraphs arise as a special, limited instance of higher-order bigraphs.

Finally, we will sketch the general construction of higher-order contexts, which uses 1) the Int-construction (lifting traced monoidal categories to compact closed ones), 2) the recent formulation of sortings for reactive systems, and 3) games for multiplicative linear logic.

The material is presented in the recent technical report: "Higher-order Contexts via Games and the Int-construction" by Lars Birkedal, Mikkel Bundgaard, Soren Debois, Davide Grohmann and Thomas Hildebrandt. IT University Technical Report Series, TR-2009-117, ISSN 1600-6100, January 2009, http://www.itu.dk/people/debois/pubs/tr09.pdf


Potential applications of semantic technologies within the cultural heritage sector

Ross Parry (Museum Studies, Leicester)
Friday, 20 Feb, 14:00 in BEN LT1 (Host: Jose Fiadeiro)

Museums and galleries have for some forty years been grappling with the potential that computers afford to their primary functions of collecting, documenting and displaying objects. Today, one of the most complex and open discussions and areas of development surrounds the impact that the Semantic Web (and semantic technologies) may or may not have on institutions such as museums that hold large collections of rich and complex information that need to be discoverable by many different user groups.

Ought there to be a co-ordinated initiative within the museum sector to make the best possible use of this technology? Will it require industry standards and collaboration within the sector, or will applications simply emerge 'from below'? Is a revolution in collections management required (and a new way of looking at museum collections 'semantically'), or are we more likely to witness an inevitable evolution? Will museums find themselves adapting to an all-encompassing Semantic Web, or is the future more likely to be one of smaller, more localised, more specific (and possibly more ad hoc) semantic technologies being applied to particular projects and collections? What could be possible for museums with this new technology? What role might there be for the term-lists, thesauri and other documentation standards that already exist in the museum world? And what are the specific technologies (topic maps? RDF? microformats?) on which the museum sector might be wise to focus?

Drawing upon the findings of an AHRC-funded national thinktank ('UK Museums and the Semantic Web'), this seminar will attempt to explore the ways in which the cultural heritage sector is engaging with the potential of the 'machine-readable Web'. It will be an opportunity for participants to offer their advice (from a computer science perspective) on where museums might now strategically head with this development.


Introduction to Quantum Computing

Thorsten Altenkirch (Nottingham)
Friday, 13 Feb, 14:00 in BEN LT1 (Host: Jose Fiadeiro, Alexander Kurz)

We give an overview of the basic concepts of the emerging field of quantum computing, a computational formalism based on the laws of quantum mechanics. The most famous example of a quantum algorithm is Shor's factorisation algorithm, a probabilistic algorithm which can find the factors of a number in polynomial time. We close with a summary of current research directions in quantum computing.
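
For reference, the basic object is the qubit, a superposition of the classical bits whose squared amplitudes give measurement probabilities (standard textbook formulation):

    |\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

Measuring |\psi\rangle yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2; algorithms like Shor's manipulate such superpositions so that the desired answer is measured with high probability.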


Attacks on the Trusted Platform Module, and solutions

Mark Ryan (Birmingham)
Friday, 6 Feb, 14:00 in BEN LT1 (Host: Jose Fiadeiro, Alexander Kurz)

(Joint work with Liqun Chen.) The Trusted Platform Module (TPM) is a hardware chip designed to enable computers to achieve greater security. It is currently shipped in high-end laptops, desktops and servers made by all the major manufacturers, and is destined to be in all devices within the next few years. It is specified by an industry consortium, and the specification is now an ISO standard. Around 100M TPMs were in existence as of 2008, a figure expected to reach 250M by 2010. The TPM provides hardware-secured storage, secure platform integrity measurement and reporting, and platform authentication. Software that uses this functionality will be rolled out over the coming years.

We explain some of the internal workings of the TPM, then present two attacks relating to the way authorisation on the TPM is handled, together with their consequences. By using the attacks, an attacker can circumvent some crucial operations of the TPM, and impersonate a TPM user to the TPM, or impersonate the TPM to its user. We describe modifications to the TPM protocols that avoid these attacks, and use protocol verification techniques to prove their security. I also hope to give some ideas for future research in trusted computing.


Service-Level Agreements for Service-Oriented Computing

Stephen Gilmore (Edinburgh)
Friday, 30 Jan, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

Service-oriented computing is dynamic. There may be many possible service instances available for binding, leading to uncertainty about where service requests will execute. We present a novel Markovian process calculus which allows the formal expression of uncertainty about binding as found in service-oriented computing. We show how to compute meaningful quantitative information about the quality of service provided in such a setting. These numerical results can be used to allow the expression of accurate service-level agreements about service-oriented computing.
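
Since the calculus is Markovian, each activity has an exponentially distributed duration, so quantitative SLA questions reduce to computations over a Markov chain; the basic building block (a standard fact, recalled for orientation) is

    \Pr[\text{an activity with rate } r \text{ completes within time } t] = 1 - e^{-rt}

An SLA such as "90% of requests are answered within 2 seconds" then becomes a numerical constraint on passage-time distributions of the underlying chain.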

Joint work with Allan Clark and Mirco Tribastone, Laboratory for Foundations of Computer Science, University of Edinburgh, Scotland.


Semester 1, 2008/9

Seminar programme


Seminar details

General Structural Operational Semantics through Categorical Logic

Sam Staton (Cambridge)
Friday, Dec 12, 14:00 in BEN LT2 (Host: Alexander Kurz)

I will explain how ideas from categorical logic are helpful in understanding the semantics of programming languages. There is a variety of general techniques in structural operational semantics, for instance rule induction and congruence results. In their simplest form, these techniques do not apply to languages with alpha-equivalence and substitution, such as the pi-calculus or the spi calculus. I will explain how to make the techniques apply, by using categorical logic to interpret them in different categories. The talk incorporates some results I presented at LICS this year. A preprint is on my homepage.


Tolerating Faults in Interconnection Networks

Iain Stewart (Durham)
Friday, Dec 5, 14:00 in BEN LT2 (Host: Rajeev Raman)

As distributed-memory parallel machines are built containing more and more processors, there is an increasing likelihood of faults arising. These faults can be faults on the links between processors or faults involving the processors themselves. Of course, we still wish our (expensive) machines to continue to compute in the presence of a limited number of faults. In this talk I will introduce some interconnection networks prevalent within distributed-memory parallel computing and some resulting graph-theoretic questions which arise in this context. I shall also highlight some recent results involving fault-tolerance.


Completeness of Moss's Coalgebraic Logic

Alexander Kurz (Leicester)
Friday, Nov 28, 14:00 in BEN LT2

Moss's coalgebraic logic was the first proposal for a logic for coalgebras that is uniform across all types of coalgebras. Giving a complete calculus was an open problem until recently. This is joint work with Clemens Kupke and Yde Venema.
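
For orientation, Moss's cover modality \nabla takes as argument an element \alpha \in F(\text{Formulas}) for the coalgebra type functor F, and its semantics is given by relation lifting (a standard presentation, sketched here):

    w \models \nabla\alpha \;\iff\; (\xi(w), \alpha) \in \overline{F}(\models)

where \xi is the coalgebra map and \overline{F}(\models) is the lifting of the satisfaction relation to F. The uniformity in F is exactly what makes a uniform complete calculus challenging.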


Dynamics, robustness and fragility of trust

Dusko Pavlovic (Oxford)
Friday, Nov 21, 14:00 in BEN LT2 (Host: Alexander Kurz)

Slides

I present a model of the process of trust building which suggests that trust is like money: the rich get richer. The proviso is that the cheaters do not wait too long, on average, with their deceit. The model explains the results of some recent empirical studies pointing to a remarkable phenomenon of *adverse selection*: a greater percentage of unreliable or malicious web merchants are found among those with certain (most popular) types of trust certificates than among those without. While some such findings can be attributed to a lack of diligence, and even to conflicts of interest in trust authorities, the model suggests that public trust networks would remain attractive targets for spoofing even if trust authorities were perfectly diligent. If time permits, I shall discuss some ways to decrease this vulnerability, and some problems for exploration.
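
Schematically, a rich-get-richer dynamic of the kind described can be written as preferential attachment over trust scores, given here purely as an illustration rather than as the talk's exact model:

    \Pr[\text{the next unit of trust accrues to } i] = \frac{t_i}{\sum_j t_j}

so merchants who already hold trust t_i attract new trust in proportion to it, regardless of their actual reliability.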


Proof and Definition in Logic and Type Theory

Robin Adams (Royal Holloway, University of London)
Friday, Nov 14, 14:00 in BEN LT2 (Host: Nicola Gambino)

Slides

Most approaches to logic focus on proof. The logic consists of the axioms and rules of deduction that may be used when proving a theorem. Definitions are secondary: to say that a function is definable typically means just that a certain formula is provable. Type theory takes the opposite approach. A type theory specifies which operations are acceptable when constructing an object. With type theory, proof is secondary: to say that a theorem is provable means just that an object of a certain type is constructible.

I shall present a recent hybrid approach known as logic-enriched type theories (LTTs), which contain two parts: a type theory for definitions, and a separate, primitive logical mechanism for proving theorems. I shall describe a recently constructed LTT that formalises predicative mathematics as developed by Hermann Weyl. I shall argue that LTTs provide a clean, controllable "laboratory" environment for experimenting with logical systems, in which we can adjust the definable objects or the provable theorems independently.
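
Schematically, an LTT keeps two judgement forms side by side, one for constructing objects and one for proving propositions (a sketch of the general shape, not Adams's exact rules):

    \Gamma \vdash a : A \qquad \text{(definition: } a \text{ is a constructed object of type } A\text{)}
    \Gamma \vdash \varphi\ \mathsf{true} \qquad \text{(proof: the proposition } \varphi \text{ holds)}

Adjusting the type theory changes what is definable; adjusting the logic changes what is provable; and the two can be tuned independently.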


Formal Specification and Analysis of Real-Time Systems in Real-Time Maude

Peter Ölveczky (University of Oslo)
Thursday, Nov 13, 10:00 in GP LTB (Host: Artur Boronat)

Slides

Real-Time Maude is a tool that extends the rewriting-logic-based Maude system to support the executable formal modeling and analysis of real-time systems. Real-Time Maude is characterized by its general and expressive, yet intuitive, specification formalism, and offers a spectrum of formal analysis methods, including rewriting for simulation purposes, search for reachability analysis, and temporal logic model checking. Our tool is particularly suitable for specifying real-time systems in an object-oriented style, and its flexible formalism makes it easy to model different forms of communication. This modeling flexibility, and the usefulness of Real-Time Maude for both simulation and model checking, has been demonstrated in advanced state-of-the-art applications, including scheduling and wireless sensor network algorithms, communication and cryptographic protocols, and the discovery of several bugs in embedded car software that were not found by the standard model checking tools employed in industry. This talk gives a high-level overview of Real-Time Maude and some of its applications, and briefly discusses completeness of analysis for dense-time systems.


Syntax graphs and substitution by pushout

Richard Garner (Cambridge)
Friday, Nov 7, 14:00 in BEN LT2 (Host: Nicola Gambino)

Equational theories find their categorical expression in the theory of monads; a crucial result tells us that, if presented with a signature of operations of finite arity, we may generate from it a monad on the category of sets whose algebras are sets equipped with an interpretation of that signature.
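
Concretely, the monad generated by a signature \Sigma sends a set X of variables to the set of \Sigma-terms over X; as a least fixed point (the standard free-monad formula, stated for orientation):

    T_\Sigma X = \mu Y .\; X + \coprod_{\sigma \in \Sigma} Y^{\mathrm{ar}(\sigma)}

A term is either a variable or an operation applied to as many terms as its arity demands.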

A less well-known result tells us that any such signature also generates a comonad on the category of sets, whose coalgebras can be regarded as DAG (directed acyclic graph) representations of closed terms built from the signature. In this talk, I will generalise this result by describing a comonad on a suitable category whose coalgebras are DAG representations of open terms. Substitution of such terms into each other is given by pushout, which makes precise the intuition that substitution is really just "glueing terms into holes". If time permits, I will also describe how this viewpoint allows us to explain evaluation of terms as an orthogonality property.


Learning Operational Requirements from Scenarios

Alessandra Russo (Imperial College London)
Friday, Oct 24, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

Requirements Engineering involves the elicitation of high-level stakeholders' goals and their refinement into operational requirements. A key difficulty is that stakeholders typically convey system goals "indirectly", through intuitive narrative-style scenarios of desirable and undesirable system behaviour, whereas goal refinement methods usually require goals to be expressed declaratively using, for instance, a temporal logic. Currently, the extraction of formal requirements from scenario-based descriptions is a tedious and error-prone process that would benefit from automated tool support. In this talk I will present a methodology for learning operational requirements from scenarios and an initial but incomplete requirements specification. The approach uses an Inductive Learning system, called XHAIL, which, applied to an appropriate event-based logic programming encoding of requirements and scenarios, computes missing operational requirements in the form of event preconditions and trigger conditions. A brief introduction to the XHAIL system will be given, and the various phases of the approach will be defined and illustrated through a case study. I will then conclude by showing how the approach can be generalized to a larger framework for the elaboration of "complete" operational requirements specifications that satisfy a given set of high-level system goals. The contribution of this work is a novel application of ILP to requirements engineering that also demonstrates the need for non-monotonic learning.


High assurance systems: correctness by development

Alistair McEwan (Department of Engineering, Leicester)
Friday, Oct 17, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

In this talk I shall discuss some of the challenges faced in specifying, designing, and building systems where high levels of assurance in behaviour are essential. I shall describe some recent advances in the area of Formal Methods, and demonstrate how they have enabled a more rigorous approach to the development and verification of systems. A simple case study consisting of a safety-critical system and its associated control system (implemented in hardware) will be presented.


Session Types

Simon Gay (University of Glasgow)
Friday, Oct 10, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

Type systems are familiar in most programming languages, for example Java. There are rules about which operations can be applied to which data types, and the compiler can check that the rules are followed, resulting in a guarantee that there will be no nonsensical operations at run-time.

The idea of "session types" is to use compile-time typechecking to guarantee run-time correctness of communication between agents in a concurrent or distributed system. A session type describes the sequence and type of messages that should be sent on a particular communication channel. In fact, a session type is a formalized protocol. For example, the protocol might be to send an integer, then receive a boolean, then do nothing further. Compile-time typechecking with session types can verify that an implementation of the protocol is correct.
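
In the usual notation, the example protocol is the session type S below, and the other endpoint of the channel must implement its dual, in which every send is matched by a receive (standard session-type notation):

    S = \,!\mathsf{int}.\,?\mathsf{bool}.\,\mathsf{end} \qquad \overline{S} = \,?\mathsf{int}.\,!\mathsf{bool}.\,\mathsf{end}

Duality is what makes the compile-time check compositional: if both endpoints conform to dual types, the session cannot go wrong at run-time.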

The talk will introduce the idea of session types, explain some of the technical machinery needed to make them work, and discuss applications in areas such as web services.


Sublinear Algorithms

Artur Czumaj (University of Warwick)
Friday, Oct 3, 14:00 in BEN LT2 (Host: Jose Fiadeiro)

In many massive data applications, polynomial-time algorithms that are efficient on relatively small inputs may become impractical for input sizes of several gigabytes. Managing and analyzing such data sets forces us to revisit the traditional notions of efficient algorithms and to aim at designing algorithms whose runtime is sublinear. Constructing a sublinear-time algorithm may seem an impossible task, since it allows one to read only a small fraction of the input. However, in recent years we have seen the development of sublinear-time algorithms for optimization problems arising in such diverse areas as graph theory, geometry, algebraic computations, and computer graphics.

In this talk, we will present a few examples of sublinear algorithms. Our main focus will be on techniques used to design sublinear-time algorithms to estimate basic graph parameters (e.g., the average degree of the graph or the cost of its minimum spanning tree). We will also discuss the framework of *property testing*, an alternative notion of approximation for decision problems, which has been applied to give sublinear algorithms for a wide variety of problems.
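
As a first illustration of the flavour of such algorithms (deliberately naive; the estimators discussed in the talk are more refined), here is a Python sketch that estimates the average degree by uniform vertex sampling, assuming each degree query costs O(1):

    import random

    def estimate_avg_degree(degree, n, samples=1000):
        # Sublinear: reads `samples` degrees, not the whole graph.
        # Caveat: plain vertex sampling can badly underestimate when
        # a few vertices carry most of the edges; refined estimators
        # query neighbours as well to correct for this.
        return sum(degree(random.randrange(n)) for _ in range(samples)) / samples

    # Toy usage: a path on n vertices (average degree close to 2).
    n = 10**6
    print(estimate_avg_degree(lambda v: 1 if v in (0, n - 1) else 2, n))

The interesting theory starts where this sketch fails, namely on graphs whose degree distribution is too skewed for uniform sampling alone.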


Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356.