Computer Science Seminars and Short Courses
Semester 2, 2009/10
Seminar programme
Seminar details

Towards Building a Knowledge Base for Research on Andean Weaving
Sven Helmer (Birkbeck, University of London) We are working on a
knowledge base to store 3D Andean textile patterns together
with rich cultural and historic context information. This will
allow ontological studies in museum collections as well as on
ethnographic and archaeological fieldwork. We build on an
existing ontology, extending it to incorporate more content
and make it more accessible. This goes well beyond storing and
retrieving textile patterns and enables us to capture the
semantics and wider context of these patterns.
Permissive Nominal Logic
Murdoch James Gabbay (Heriot-Watt University) Permissive Nominal
Logic (PNL) is an extension of first-order logic
where term-formers can bind names in their arguments. This
allows for direct axiomatisations with binders, such as the
forall-quantifier of first-order logic itself and the
lambda-binder of the lambda-calculus. This also allows us to
finitely axiomatise arithmetic. Like first- and
higher-order logic and unlike other nominal logics, equality
reasoning is not necessary to alpha-rename. All this gives
PNL much of the expressive power of higher-order logic, but
terms, derivations and models of PNL are first-order in
character, and the logic seems to strike a good balance
between expressivity and simplicity. Using a paper in
PPDP'10 as a basis, I will give a tour of PNL. More
information can be found on my webpage.
Modelling and Analysis of Quantum Protocols
Rajagopal Nagarajan (University of Warwick) The novel field of quantum
computation and quantum information has gathered significant
impetus in the last few years, and it has the potential to
radically impact the future of information technology. While
the successful construction of a large-scale quantum computer
may be some years away, secure communication involving quantum
cryptography has already been implemented, and equipment for
quantum cryptography is commercially available. The major
selling point of quantum cryptography is unconditional
security. But can this be guaranteed in practice? Even when
protocols have been mathematically proved to be secure, it is
notoriously difficult to achieve robust and reliable
implementations of secure systems. Techniques based on formal
verification are now widely used by industry to ensure that
(classical) systems meet their specifications. In this talk, I
will give an overview of our ongoing work in using such
techniques for the modelling and analysis of quantum protocols
and, eventually, their implementations. (Joint work with
Simon Gay and Nick Papanikolaou).
Smart cards in public transport: the Mifare Classic Case
Bart Jacobs (Nijmegen, The Netherlands) The Mifare Classic chipcard has been
reverse-engineered. Between 1 and 2 billion copies have been sold
worldwide. The card is mainly used for access to buildings
and for public transport. The talk will give an overview of
these developments and pay special attention to the
OV-chipcard in the Netherlands and to London's Oyster card.
Both are broken: data on the card, including balances, can
be changed easily.
Transforming XML - the XSLT language
Michael Kay (Saxonica Limited) XSLT is a widely-used language for transforming XML - sometimes into HTML for display in a browser, often into other XML vocabularies. Michael Kay is the editor of the XSLT 2.0 specification, author of a reference book on the language, and developer of Saxon, a leading implementation available in open-source and commercial variants. The purpose of the talk is to explore some interesting questions about XSLT.
The talk should be interesting both to people who know XML and XSLT well, and to those with a more general interest in the factors that influence the design and adoption of programming languages.
Local Algorithms for Robotic Formation Problems
Friedhelm Meyer auf der Heide (Heinz Nixdorf Institute & Computer Science Department, University of Paderborn)
Assume a scenario with a set of autonomous mobile robots having initial
positions in the plane. Their goal is to move in such a way that they
eventually reach a prescribed formation. Such a formation may be a straight
line between two given endpoints (short communication chain), a circle or any
other geometric pattern, or just one point (gathering problem). In this talk,
I consider simple local strategies for such robotic formation problems: the
robots are limited to see only robots within a bounded radius; they are
memoryless, without common orientation. Thus, their decisions where to move
next are solely based on the relative positions of robots within the bounded
radius. I will present local strategies for short communication chains and
gathering, and present runtime bounds assuming different time models. All
previous algorithms with a proven time bound assume global view on the
positions of all robots.
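A standard example of such a local strategy is to move each robot to the centroid of the robots it can see. The following Python sketch is purely illustrative (the viewing radius, synchronous rounds, and centroid rule are assumptions for exposition, not the specific strategies analysed in the talk):

```python
import math

def gathering_step(positions, radius):
    """One synchronous round of a local gathering strategy: each robot
    moves to the centroid of all robots it can see (those within
    `radius` of it, including itself). Decisions use only relative
    positions of visible robots -- no memory, no global view."""
    new_positions = []
    for (x, y) in positions:
        visible = [(px, py) for (px, py) in positions
                   if math.hypot(px - x, py - y) <= radius]
        cx = sum(p[0] for p in visible) / len(visible)
        cy = sum(p[1] for p in visible) / len(visible)
        new_positions.append((cx, cy))
    return new_positions

# Three robots on a line, each seeing only its near neighbours.
robots = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
for _ in range(20):
    robots = gathering_step(robots, radius=1.5)
```

After a few rounds the robots coincide; proving how fast this happens under different time models is the subject of the talk.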
Linear-Time Haplotype Inference on Pedigree without Recombinations
Bethany Chan (Hong Kong) In diploid organisms,
such as humans, chromosomes come in pairs, and experiments
often yield genotypes, which blend haplotypes for the
chromosome pair. This gives rise to the problem of inferring
haplotypes from genotypes. Given a pedigree P and each node
associated with a genotype, the Haplotyping Pedigree Data
(with no recombinations) Problem (HPD-NR) is to find a
consistent haplotype configuration for P such that the
haplotypes of each child are Mendelian-consistent, i.e., one
of the child's haplotypes is exactly the same as one of its
father's and the other is the same as one of its mother's. Li
and Jiang formulated the problem as an mn x mn matrix and
solved HPD-NR by Gaussian elimination in
polynomial time (O(m^3 n^3)), where n is the number of
individuals in the pedigree and m is the number of loci for
each individual. In this paper, we propose a new algorithm
that can either find a consistent haplotype configuration (CHC)
or report "no solution" in O(mn) time, which is optimal.
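The Mendelian-consistency condition on a child's haplotypes can be stated very compactly. This is a minimal sketch assuming haplotypes are encoded as tuples of alleles (one entry per locus) and each individual carries exactly two haplotypes; the encoding is an assumption for illustration, not the paper's data structure:

```python
def mendelian_consistent(child, father, mother):
    """Mendelian consistency with no recombination: one of the child's
    two haplotypes must equal one of the father's, and the other must
    equal one of the mother's. Each argument is a pair of haplotypes,
    each haplotype a tuple of alleles, one per locus."""
    h1, h2 = child
    return ((h1 in father and h2 in mother) or
            (h2 in father and h1 in mother))

# Two loci, alleles coded 0/1 (hypothetical data).
father = ((0, 1), (1, 1))
mother = ((0, 0), (1, 0))
```

For example, a child with haplotypes ((0, 1), (1, 0)) is consistent (paternal (0, 1), maternal (1, 0)), whereas ((0, 1), (1, 1)) is not, since (1, 1) occurs in neither parent's pair in the right position.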
Reasoning about multi-core x86 programs
Scott Owens (Cambridge) With the rise of multi-core
processors, shared-memory concurrency has become a widespread
feature of computation, from hardware, to operating systems,
to programming languages such as C++ and Java. However, none
of these provide sequentially consistent views of shared
memory; instead they have relaxed memory models, which make
concurrent programs even more challenging to understand.
Making matters worse, the public vendor architectures,
supposedly specifying what programmers can rely on, are often
in informal prose, which leads to confusion on exactly what
behaviours a processor is allowed to have. In this talk,
I will present a formal model of the x86's relaxed memory
architecture from the viewpoint of an assembly language
programmer, and I will relate this model to a few versions of
the Intel and AMD documentation, highlighting their various
ambiguities and unsoundnesses. The model is based on the
insertion of store buffers between each processor and main
memory, and so it provides strong guarantees relative to many
other relaxed memory architectures. I will briefly discuss an
approach to reasoning about low-level concurrency primitives
that takes advantage of these guarantees, and its application
to examples, including a spin lock taken from the Linux
kernel.
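The store-buffer behaviour described above can be seen in the classic "SB" litmus test. The following Python sketch is a toy model, not the formal model from the talk: each processor's stores enter a private FIFO buffer, a processor's reads consult its own buffer first (store forwarding), and buffers drain to memory later. This allows both reads below to return 0, an outcome impossible under sequential consistency:

```python
# Shared memory and one store buffer per processor.
memory = {"x": 0, "y": 0}
buffers = {0: [], 1: []}

def write(p, var, val):
    buffers[p].append((var, val))       # the store is only buffered

def read(p, var):
    # Store forwarding: a processor sees its own buffered stores first.
    for v, val in reversed(buffers[p]):
        if v == var:
            return val
    return memory[var]

def flush(p):
    # Drain p's buffer to main memory, in FIFO order.
    for var, val in buffers[p]:
        memory[var] = val
    buffers[p].clear()

# SB litmus test: P0 does x=1; r0=y.  P1 does y=1; r1=x.
# Here both writes sit in buffers before either read executes,
# so both reads observe the stale value 0.
write(0, "x", 1)
write(1, "y", 1)
r0 = read(0, "y")   # 0
r1 = read(1, "x")   # 0
flush(0)
flush(1)
```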
M. Andrew Moshier (Chapman University, CA, USA) We discuss two ways to mix information (from
a domain theoretic point of view) and propositional
logic. The first has its roots in Kleene's
metamathematical investigations of three-valued logic,
where evaluation of propositions may possibly diverge. The
second is closely related to Belnap's proposal (of some
interest for knowledge-based systems in distributed
environments) to consider a four-valued logic: true, false,
undefined and contradiction. Kleene's system is closely
related to domain theory anyway, as divergence is a
fundamental notion that is well-modelled in
domains. Belnap, on the other hand, explicitly discusses
the idea that the four truth values come equipped with two
natural lattice orders: logic, in which false and true are
the bounds, and a second order which he calls
"information," in which undefined is least informative,
contradiction is most informative and true and false are
incomparable. In this talk, we will take Belnap's
terminology seriously, by allowing for possibly infinitary
accumulation of information along the lines of domain
theory (and implicit in Kleene's approach). This ties
Kleene and Belnap closely together. As a result, we
develop two systems that mix logic and information based
on three- and four-valued logic. In the end, however, we
will show that the two approaches are equivalent. We
conclude with a discussion of logic and information in a
more general setting.
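Belnap's two lattice orders on the four values can be written down directly. This sketch simply tabulates both orders as relations (the encoding as sets of pairs is a presentational choice, not anything from the talk):

```python
# Belnap's four truth values.
F, T, U, C = "false", "true", "undefined", "contradiction"

# Logical order: false is the bottom, true is the top,
# undefined and contradiction sit (incomparably) in between.
logic_leq = {(F, F), (T, T), (U, U), (C, C),
             (F, U), (F, C), (F, T), (U, T), (C, T)}

# Information order: undefined is least informative, contradiction
# is most informative, true and false sit (incomparably) in between.
info_leq = {(F, F), (T, T), (U, U), (C, C),
            (U, T), (U, F), (U, C), (T, C), (F, C)}

def leq(order, a, b):
    """Is a below b in the given lattice order?"""
    return (a, b) in order
```

The point of the talk is to take the second order seriously as an information order in the domain-theoretic sense, allowing infinitary accumulation of information.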
A Traceability Attack Against e-Passports
Tom Chothia (Birmingham) I will briefly describe
two pieces of work we have been doing at Birmingham, one on
analysing traceability attacks in the applied pi-calculus and
the other on using information theory to measure information
leaks. Then I'll describe a case study involving
"e-passports". Since 2004, many nations have started issuing
passports that contain an RFID tag that, when powered,
broadcast information. It is claimed that these passports are
more secure and that our data will be protected from any
possible unauthorised attempts to read it. We have shown that
there is a flaw in one of the passport's protocols that makes
it possible to trace the movements of a particular passport,
without having to break the passport's cryptographic key. All
an attacker has to do is to record one session between the
passport and a legitimate reader, then by replaying a
particular message, the attacker can distinguish that passport
from any other. We have implemented our attack and tested it
successfully against passports issued by a range of
nations.
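The shape of the attack can be captured in a toy model. In this hypothetical Python sketch, HMAC stands in for the passport's message authentication code, and the two distinguishable failure responses ("MAC_ERROR" vs "NONCE_ERROR") model the observable difference the attack exploits; none of the names or message formats are from the real protocol:

```python
import hashlib
import hmac
import os

def make_passport(key):
    """Toy model of the passport's check: verify the MAC first, then
    the session nonce. The two failure modes give distinguishable
    responses, which is the flaw the attack exploits."""
    current_nonce = os.urandom(8)          # fresh nonce per session
    def respond(message, mac):
        if hmac.new(key, message, hashlib.sha256).digest() != mac:
            return "MAC_ERROR"             # wrong key: not this passport
        if message[:8] != current_nonce:
            return "NONCE_ERROR"           # right key, but a stale replay
        return "OK"
    return respond

key_a, key_b = os.urandom(16), os.urandom(16)

# The attacker records one valid message from a session with passport A.
nonce = os.urandom(8)
recorded_msg = nonce + b"get-data"
recorded_mac = hmac.new(key_a, recorded_msg, hashlib.sha256).digest()

# Later, the attacker replays it to an unknown passport and observes
# which error comes back, distinguishing A from any other passport.
passport = make_passport(key_a)   # a new session with passport A
other = make_passport(key_b)      # some other passport
```

Replaying to passport A yields "NONCE_ERROR" (its key verifies the old MAC), while any other passport answers "MAC_ERROR" - so the attacker traces A without ever breaking its key.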
Global Caching for Coalgebraic Description Logics
Dirk Pattinson (Imperial College London) Coalgebraic
description logics offer a common semantic umbrella for
extensions of description logics with reasoning principles
outside relational semantics, e.g. quantitative
uncertainty, non-monotonic conditionals, or coalitional
power. Specifically, we work in coalgebraic logic with
global assumptions (i.e. a general TBox), nominals, and
satisfaction operators, and prove soundness and
completeness of an associated tableau algorithm of optimal
complexity EXPTIME. The algorithm is based on global
caching, which raises hopes of practically feasible
implementation. Instantiation of this result to concrete
logics yields new algorithms in all cases including
standard relational hybrid logic.
This is joint work with Rajeev Gore (ANU), Clemens Kupke
(Imperial) and Lutz Schroeder (DFKI Bremen).
Programming Logics for Dynamic Data
Uday Reddy (Birmingham) 'Programming logic' refers to logical systems devised by Computer
Scientists that allow us to reason about computer programs and prove
their properties. The quintessential logic of this kind was devised
by Tony Hoare in the 70's for proving properties of simple imperative
programs with assignment statements for ordinary variables. The
extension of Hoare Logic for dealing with dynamic data, accessed via
pointers, was devised by John Reynolds in 2000, and refined by Peter
O'Hearn and colleagues. This system is called Separation Logic. Over
the last 10 years, the subject has developed rapidly, dealing with
data abstraction, concurrency, verification tools and program analysis
tools. This talk introduces the logic and explains the rationale
behind its basic features.
Reference:
J. C. Reynolds, Separation Logic: A logic for shared mutable data
structures. IEEE Symposium on Logic in Computer Science, 2002, pp
55-74. URL: http://www.cs.cmu.edu/~jcr/seplogic.pdf
Pedro Molina (Software Architect at Capgemini Spain) Model Driven Development (MDD) is an engineering
approach to software development based on models and code
generation. The gain in quality and productivity is reported
to be an order of magnitude compared to traditional
development. This talk will introduce to the audience the
key concepts of the approach, a review of the current
technologies that support it, the contexts where it can be
applied, the economic foundations in terms of ROI, and some
real usage scenarios.
Meta-Modelling for Language Engineering
Juan de Lara (Universidad Autónoma in Madrid) Model-Driven
Engineering (MDE) is a software engineering paradigm that
seeks to develop software faster and with higher levels of
quality. It does so by increasing the level of abstraction at
which engineers work, promoting the use of models in contrast
to programs. In this way, developers deal with less
“accidental” details, as the notations are closer to the
application domain. In MDE, models are not only used for
documentation, but they are the core asset in the process,
being actively used, e.g. to (re-)generate code, for
validation and verification, and for maintenance. Thus, a
crucial aspect of MDE is to design high-level, domain-specific
(visual) languages, appropriate for the domain of
interest. The description of such languages is also done
through models. This activity is called meta-modelling and
will be the topic of the presentation. The presentation will
introduce the basic techniques of meta-modelling, giving an
overview on how to describe the abstract and concrete syntax
of visual languages. We will discuss tools and standards, and
end the presentation showing current limitations and trends in
current research.
Towards the Maturation of Usability and User Experience Evaluation Methods (U2XEMs)
Effie Law (Leicester) The field of usability has
expanded thanks to recent advances in mobile, ubiquitous,
social, and tangible computing technologies, which have led
to a shift (the "Third Wave" of HCI) towards an emphasis on
user experience (UX), where users' feelings, motivations,
and values are given a lot of attention. Nonetheless, it
remains unclear how UX should be defined and
measured. Researchers and practitioners tend to view UX
through the lens of usability, though the scope of the former is
broader than that of the latter, considering elusive attributes
such as affect, emotion, fun, excitement, engagement,
empathy, value, to name just a few. Paradoxically, without
clearly defining these componential attributes, a range of
new UX evaluation methods (UXEMs), which are derived by
adapting known usability evaluation methods (UEMs) and by
tapping into the imaginative power of their proposers, have been
put forward to the wider HCI communities. These
emerging UXEMs clearly require further validation and
consolidation. This talk aims to present the state of the art of
the UEM and UXEM development and discuss the related
theoretical/practical issues. It comprises three major
parts: (1) To address the scoping, strengths and weaknesses
of the popular UEMs and UXEMs; (2) To review the
definitional issue of User Experience (UX) and various
models of UX; (3) To illustrate with concrete case studies
which combinations of traditional and emergent usability and
UX evaluation methods are deployed in different sectors and
to discuss associated issues as well as the outlook for
future research. Examples: 80Days - evaluation of adaptive
digital educational games (DEGs).
Semester 1, 2009/10
Seminar programme
Seminar details
Tony Clark (Thames Valley University)
Splitting and scheduling jobs in a hybrid flowshop
Jiyin Liu (Business School, Loughborough University) This talk addresses lot
streaming problems, which involve both lot sizing and
scheduling decisions, in a two-stage hybrid flowshop with m
identical machines at the first stage and a single machine
at the second stage. The objective is to minimise the
production lead times. We first analyze some special
cases of the problem, the single-job problem with given
number of sublots and the single-job problem with equal
sublot sizes, and provide efficient optimal solutions. The
general single-job problem and the multi-job problem are
then studied. Heuristic solutions are proposed to solve
these problems. Worst-case analysis and computational
experiment show that the proposed heuristic algorithms
generate close-to-optimal solutions.
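The effect of splitting a job into sublots can be illustrated with a small simulator. This sketch is an assumption-laden toy, not the paper's algorithms: equal sublot sizes, sublots assigned to the earliest-free stage-1 machine, and FIFO processing on the single stage-2 machine:

```python
def lead_time(job_size, num_sublots, m, p1, p2):
    """Simulate one job split into `num_sublots` equal sublots in a
    two-stage hybrid flowshop: m identical machines at stage 1, one
    machine at stage 2; p1/p2 are per-unit processing times. Returns
    the completion time of the last sublot (the job's lead time)."""
    size = job_size / num_sublots
    stage1_free = [0.0] * m                  # next free time per machine
    stage2_free = 0.0
    finish = 0.0
    for _ in range(num_sublots):
        k = min(range(m), key=lambda i: stage1_free[i])
        done1 = stage1_free[k] + size * p1   # sublot leaves stage 1
        stage1_free[k] = done1
        start2 = max(done1, stage2_free)     # single machine at stage 2
        finish = start2 + size * p2
        stage2_free = finish
    return finish
```

With a job of 12 units, two stage-1 machines, and unit processing times, one unsplit lot finishes at time 24.0, while four equal sublots overlap the stages and finish at 15.0 - the lead-time reduction that lot streaming is after.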
Lazy Systematic Unit Testing for Java
Tony Simons (University of Sheffield) Testing from state-based
specifications is a proven technique for exercising software
components, to check for all expected and unwanted
behaviours. However, in the Agile Methods community, there
is a resistance even to simple state-based specifications
and an over-reliance on manually created regression tests,
which fail to find all faults. The "lazy systematic unit
testing" method was created in response. It is based on the
two notions of "lazy specification", the ability to defer
finalising the software's apparent specification until a
late stage in development; and "systematic testing", the
ability to explore the state-space of a software component
exhaustively, to bounded depth. The approach is
exemplified in a Java unit testing tool, JWalk. This infers
the apparent specification of compiled Java classes, by a
combination of static and dynamic analysis, and limited
interaction with the programmer. The tool is able to
exercise all method protocols, all algebraic constructions
and all high-level state transitions, to bounded depth.
Once the oracle for the specification has been acquired,
testing is fully automatic. The test sets adapt gracefully
when changes are made to the code; and re-generated tests
perform significantly better than regression tests. In a
testing challenge, a JWalk tester was able to test two
orders of magnitude more paths than an expert tester writing
manual tests for JUnit. The talk will include a
demonstration of the JWalk tool.
Unification and Narrowing in Maude
Santiago Escobar (Technical University of Valencia) Maude is a high-performance reflective language
and system supporting both equational and rewriting logic
specification and programming for a wide range of
applications, and has a relatively large worldwide user and
open-source developer base. We have recently introduced
support for unification and narrowing in Maude
2.4. Unification is supported in Core Maude, the core
rewriting engine of Maude, with commands and metalevel
functions for order-sorted unification modulo some frequently
occurring equational axioms. Narrowing is currently supported
in its Full Maude extension. I will give a brief summary of
these novel features of Maude 2.4 and talk about some
applications.
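To fix intuitions about what a unifier is, here is a minimal sketch of plain syntactic first-order unification in Python. It is deliberately simpler than what Maude 2.4 provides - no order-sorting, no unification modulo axioms such as associativity or commutativity, and no occurs check (omitted for brevity); the term encoding is an assumption for illustration:

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[0].isupper()

def walk(t, subst):
    """Follow variable bindings in the substitution."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Syntactic unification. Terms are variables ("X"), constants
    ("a"), or tuples (functor, arg1, ...). Returns a substitution
    dict, or None if the terms do not unify."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None
```

For example, unifying f(X, b) with f(a, Y) yields the substitution {X -> a, Y -> b}. Narrowing then uses such unifiers to rewrite terms containing variables.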
Evolutionary Multiobjective Optimisation and User Preferences (Slides)
Juergen Branke (Warwick University) Many practical optimization problems require
the consideration of multiple, conflicting objectives. In
such cases, usually no single optimal solution
exists. Instead, there is a set of so-called efficient or
Pareto-optimal solutions with different trade-offs. In the
absence of additional user preference information, they all
have to be regarded as equally good. Evolutionary
algorithms, i.e., heuristics inspired by natural evolution,
are gaining increasing popularity for such problems. Because
they work on a population of solutions, they can be used to
search for a well-distributed set of Pareto-optimal
solutions in one run, which are then given to the decision
maker to choose from. This talk will give an introduction
to evolutionary multiobjective optimisation, and then
discuss why and how the decision maker’s preferences should
be incorporated already during the optimisation, rather than
only after optimisation, as currently most evolutionary
multiobjective optimisation approaches do.
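Pareto optimality, the central notion above, is easy to compute for a finite candidate set. This sketch assumes minimisation of all objectives and a naive quadratic filter (real evolutionary multiobjective algorithms maintain the front incrementally):

```python
def dominates(a, b):
    """a dominates b: no worse in every objective and strictly better
    in at least one (all objectives to be minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """The efficient set: solutions not dominated by any other."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# Hypothetical two-objective values (e.g. cost vs. delivery time).
points = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
front = pareto_front(points)
```

Here (3, 4) and (5, 5) are dominated and drop out; the remaining trade-offs (1, 5), (2, 3), (4, 1) are what would be handed to the decision maker.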
Graphical Reasoning in Symmetric Monoidal Categories (Slides)
Lucas Dixon (Edinburgh) Symmetric monoidal categories provide a
foundational formalism for a variety of important domains,
including quantum computation. These categories have a
natural visualisation as a form of graphs. I will present a
formalism for equational reasoning about such graphs and
develop this into a generic proof system with a fixed
logical kernel. A salient feature of our system is that it
provides a formal and declarative account of derived results
that can include 'ellipses'-style notation. I will
illustrate the framework by describing the graphical
language applied to quantum computation and boolean
circuits, and show how this can be used to perform
computations symbolically.
A completeness result for gs-monoidal categories (Slides)
Fabio Gadducci (University of Pisa) Building on previous results concerning the
functorial semantics of partial and multi-algebras, the talk
discusses the category of free semi-modules over a semiring
(considered as the Kleisli category of the semiring
monad). First, it is proved that such a Kleisli category is
gs-monoidal (i.e., it is symmetric monoidal and each object
has a comonoid structure), whenever the underlying category
is cartesian: hence, it represents a suitable semantical
domain for multi-algebras. Furthermore, and more importantly,
two arrows of the freely generated gs-monoidal category are
shown to be isomorphic iff they are identified by every
possible interpretation in the category of free semi-modules
for the semiring of natural numbers.
Relations on Graphs and Hypergraphs (Slides)
John Stell (University of Leeds) The theory of rough
sets provides one approach to understanding data at
different levels of detail. An equivalence relation on a set
can represent a notion of indistinguishability and arbitrary
subsets can then be described approximately with respect to
this relation. While this is very well-known, the
generalization to graphs raises a number of issues such as:
what should we mean by a relation on a graph or a
hypergraph, and what role do these relations play in
describing graphs at multiple levels of detail? Another
approach to granularity is provided by mathematical
morphology which has applications in image processing, but
which, in a sense, subsumes rough set theory. In this talk I
will discuss relations on graphs and hypergraphs and their
connections with the lattice-theoretic account of
mathematical morphology.
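The rough-set approximations mentioned above take one line each once the indistinguishability relation is given by its equivalence classes. This Python sketch is a textbook illustration (the partition-based encoding is an assumption; the talk's generalisation to graphs is precisely where this simple picture breaks down):

```python
def approximations(classes, subset):
    """Lower and upper approximations of `subset` with respect to a
    partition `classes` (the equivalence classes of an
    indistinguishability relation on the universe).
    Lower: elements certainly in the subset (whole class inside).
    Upper: elements possibly in the subset (class meets it)."""
    lower = {x for c in classes if set(c) <= subset for x in c}
    upper = {x for c in classes if set(c) & subset for x in c}
    return lower, upper

# Universe {1..6}, three indistinguishability classes.
classes = [{1, 2}, {3}, {4, 5, 6}]
lower, upper = approximations(classes, {1, 2, 3, 4})
```

The subset {1, 2, 3, 4} is approximated from below by {1, 2, 3} and from above by the whole universe: element 4 drags its entire class {4, 5, 6} into the upper approximation.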
Minimal Social Situations (Slides)
Andrew Colman (School of Psychology, University of Leicester) This talk will focus on research not yet
published on minimal social situations -- repeated games in
which players learn to cooperate without awareness of the
payoff structure or even of the involvement of other players
in the outcomes. We have used experimental methods to test
theoretical predictions about adaptive learning of
cooperative responses in minimal social situations of
varying group sizes. We have also used Monte Carlo methods,
which have suggested that players approach minimal social
situations using a noisy version of the win-stay,
lose-change decision rule, deviating from the deterministic
rule less frequently after rewarding than unrewarding
outcomes.
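The noisy win-stay, lose-change rule can be sketched directly. In this illustrative Python fragment, actions are coded 0/1 and `epsilon` is an assumed noise parameter (the talk's Monte Carlo work fits such a parameter to experimental data; the specific form here is only a plausible reading):

```python
import random

def wsls(last_action, last_outcome_rewarding, epsilon=0.1):
    """Noisy win-stay, lose-change: repeat the last action after a
    rewarding outcome, switch after an unrewarding one, and deviate
    from that deterministic choice with probability epsilon."""
    if last_outcome_rewarding:
        deterministic = last_action          # win-stay
    else:
        deterministic = 1 - last_action      # lose-change
    if random.random() < epsilon:
        return 1 - deterministic             # occasional deviation
    return deterministic
```

With epsilon = 0 this is the deterministic rule; the finding reported above corresponds to using a smaller epsilon after rewarding outcomes than after unrewarding ones.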
The `Archaeology' of Transformations (Slides)
Yiyun Yu (The Open University) During a software project's lifetime, the
software's artefacts and developers go through many
transformations: components are added, removed and modified
to fix bugs and add new features, people join and leave the
project or change their responsibilities, etc. This tutorial
looks at some of the challenges faced, techniques developed
and findings obtained in reconstructing, measuring and
visualising the historical transformations that happen
during the evolution of software and developer teams.
Is there always an algorithm? An introduction to computability theory (Slides)
Rick Thomas (University of Leicester) Computability theory is concerned with the
following type of question: given a problem, does there exist
an algorithm to solve that problem? This is of fundamental
importance in computing; there is no point trying to design an
algorithm to solve a problem if no such algorithm exists!
In this talk we will give a gentle introduction to
computability theory. We will give some idea of the history
of the subject, mentioning some of the key figures involved;
as we will see, this sort of question preceded the modern
computer. We will then give some examples of problems for
which no algorithm exists.
Savara - from Art to Engineering: It’s all in the description (Slides)
Steve Ross-Talbot (Cognizant Technology Solutions) If we cannot describe it, we cannot understand
it, much less talk about any of its properties. So why is
description so often the poor relation of computational science? What
can we do to change that?
In this session we shall present a new open source community
project called Savara that focusses on describing distributed systems
in terms of the behavior and behavioral typing which leads to a robust
and accurate description and yields a better understanding of what
we build.
The purpose of Savara is to develop a new methodology for Enterprise
Architecture, called “Testable Architecture”, and provide appropriate
tooling support. Savara's initial aim is to support SOA solutions.
In presenting Savara we shall demonstrate how an eClaims system can be
designed and implemented from requirements gathering through
successive refinement to a solution. Savara is very new but much of
what we show here is already in use in both Red Hat and Cognizant
Technology Solutions. The tools that we use to demonstrate are freely
available as community open source and the example will also be
provided.
This methodology aims to ensure that any artifacts defined during the
development lifecycle can be validated against other artifacts in
preceding and subsequent phases. This ensures that the final delivered
system is guaranteed to meet the original business
requirements. Through runtime governance, it is also possible to
ensure that the running system continues to meet those requirements.
As a system evolves, through subsequent enhancements to the business
requirements, having such a precise understanding of the system, from
requirements, through design, implementation and deployment, enables
more sophisticated change management techniques to be considered.
It is very much a working session so you will be asked, if you have
the time, to bring your laptops and download the pi4soa toolsuite
from www.pi4soa.org. Enclosed
is a little project that forms the basis of the talk. At the start of
the talk we shall show you how to load the project and then you can
follow or try out things during the session.
Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356. |