Computer Science Seminars
Semester 2, 2013/14

DVS scheduling for various processor models and task types
Minming Li (City University of Hong Kong)
Dynamic Voltage Scaling (DVS) techniques provide the
capability for processors to adjust their speed when executing
jobs. Energy consumption can thus be saved through wise
scheduling. In this talk, I will summarize the work we have
done in this field over the last several years. On the one hand,
we study different processor models: the ideal model, where
the speed of the processor can change to any value instantly;
the discrete model, where the possible speeds are given as
input; the accelerate model, where the processor can only
gradually change its speed; the memory model, where the
processor is equipped with a memory hierarchy; and the
network DVS model. On the other hand, besides studying
general independent jobs, we also study different types of
tasks like jobs with agreeable deadlines, jobs with multiple
active intervals and jobs with memory operations.
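As a rough illustration of the ideal model (a sketch with assumed parameters, not the speaker's algorithms): running at speed s is commonly modelled as drawing power s^alpha for some constant alpha, so the energy of a job is convex in its speed and slowing down saves energy.

```python
# Illustrative sketch of the ideal DVS model; the function name and the
# value of alpha are assumptions for this example, not from the talk.

def energy(work, duration, alpha=3.0):
    """Energy to finish `work` units of computation in `duration` time
    at constant speed; by convexity of s**alpha, a constant speed is
    optimal for a single job in the ideal model."""
    speed = work / duration
    return (speed ** alpha) * duration

# Stretching the same work over a longer interval saves energy:
print(energy(10, 1))  # 1000.0 (speed 10)
print(energy(10, 2))  # 250.0  (speed 5)
```

This convexity is what makes scheduling decisions matter: deadlines limit how far each job can be stretched.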
Algorithms for Conformance Checking
Josep Carmona (Universitat Politecnica de Catalunya (UPC)) Conformance checking is one of the crucial aspects of the
novel field of process mining, an area which
focuses on the process perspective of a system, providing mechanisms to
facilitate the elicitation and monitoring of processes. Conformance
checking techniques assess whether, and to what extent, a process model
(what should happen) is followed in the implementation (what is
happening). These techniques take as input a model and a log containing
the footprints of the process executions. In this talk we will describe
recent contributions in conformance checking that, among other
features, allow one to decompose the problem, representing a big step
towards applying the techniques in the large.
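To make the inputs concrete (a toy sketch, not the contributions of the talk): one crude conformance measure is the fraction of log traces the model can replay end to end. Here the model is assumed, purely for illustration, to be a deterministic automaton given as a transition map.

```python
# Toy conformance check: all encodings here are illustrative assumptions.

def fitness(model, start, accepting, log):
    """Fraction of traces in `log` that `model` can replay end to end.
    `model` maps (state, event) pairs to successor states."""
    def replays(trace):
        state = start
        for event in trace:
            if (state, event) not in model:
                return False  # the model cannot mimic this footprint
            state = model[(state, event)]
        return state in accepting
    return sum(replays(t) for t in log) / len(log)

# Toy model accepting exactly the trace a, b, c:
model = {(0, "a"): 1, (1, "b"): 2, (2, "c"): 3}
log = [["a", "b", "c"], ["a", "c"]]
print(fitness(model, 0, {3}, log))  # 0.5
```

Real conformance-checking techniques go well beyond this yes/no replay, e.g. by aligning deviating traces to the closest model run.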
Coinductive proofs for the infinitary lambda calculus
Lukasz Czajka (Warsaw) Infinitary lambda calculus may be regarded as an
abstract model of a functional programming language with
(potentially) infinite data structures. A real-world example
of such a language is Haskell, where e.g. the data type of
streams (infinite lazy lists) may be defined. Coinduction is
a method for reasoning about properties of infinite
objects. The talk will present a certain informal style of
doing proofs by coinduction, similar to ordinary inductive
proofs. An elementary and reasonably direct justification of
such proofs will be given. The method will then be illustrated
on the infinitary lambda calculus, and (some parts of) a
coinductive proof of confluence, up to equivalence of
root-active subterms, of infinitary lambda calculus will be
presented.
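The infinite data structures mentioned above can be sketched outside Haskell too; the following is a hypothetical Python rendering (names are mine, not the speaker's) of a corecursively defined stream, with laziness supplied by generators.

```python
# Infinite streams of the kind the infinitary calculus models,
# sketched with lazy Python generators (Haskell would use lazy lists).
from itertools import islice

def nats(n=0):
    """The infinite stream n, n+1, n+2, ... defined corecursively:
    its head is n and its tail is nats(n+1)."""
    while True:
        yield n
        n += 1

def stream_map(f, s):
    """Map f over an infinite stream, again corecursively."""
    for x in s:
        yield f(x)

# Only finite prefixes are ever forced:
print(list(islice(stream_map(lambda x: x * x, nats()), 5)))  # [0, 1, 4, 9, 16]
```

Properties of such streams (e.g. that stream_map preserves a predicate) are exactly the kind of statement one proves by coinduction.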
Intensional Partial Metric Spaces
Steve Matthews (Warwick) (slides)
Partial metric spaces (1992) generalised Fréchet’s metric
spaces (1906) by allowing self-distance to be a non-negative
real number. Originally motivated by the goal of reconciling
metric space topology with the logic of computable functions
and Dana Scott's innovative theory of topological domains, they
are increasingly too static a form of mathematics to be of use
in modelling contemporary applications software (aka Apps)
which is increasingly pragmatic, interactive, and inconsistent
in nature. This talk faces up to the reality that if partial
metric spaces are to survive in future research then they must
become much more scalable. Wadge’s hiaton time delay is used
as a working example to study requirements for scaling partial
metric spaces.
Hans van Ditmarsch (LORIA (CNRS - University of Lorraine) and IMSc Chennai (associate)) This is joint work with Laura Bozzelli, Tim French, James Hales, and Sophie
Pinchinat.
In this talk I will present refinement modal logic. A refinement is
like a bisimulation, except that from the three relational
requirements only 'atoms' and 'back' need to be satisfied. Our logic
contains a new operator 'forall' in addition to the standard
modalities box for each agent. The operator 'forall' acts as a
quantifier over the set of all refinements of a given model. We call
it the refinement operator. As a variation on a bisimulation
quantifier, it can be seen as a refinement quantifier over a variable
not occurring in the formula bound by the operator. The logic combines
the simplicity of multi-agent modal logic with some powers of monadic
second order quantification. We present a sound and complete
axiomatization of multiagent refinement modal logic. There is an
extension to the modal mu-calculus. It can be applied to dynamic
epistemic logic: it is a form of quantifying over action models. There
are results on the complexity of satisfiability, and on succinctness.
(Accepted for Information and Computation)
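To make the 'atoms'-and-'back' definition concrete, here is a small sketch (the model encoding and function names are assumptions for illustration, restricted to a single agent) that computes the greatest relation satisfying the two clauses:

```python
# A refinement drops the 'forth' clause of bisimulation: related states
# must agree on atoms, and every step on the refined side must be
# matched by a step on the original side ('back'). A model is a pair
# (val, rel): val maps states to frozensets of atoms, rel maps states
# to sets of successors. Single agent, for simplicity.

def refinement_relation(m1, m2):
    """Greatest Z over states(m1) x states(m2) satisfying 'atoms' and
    'back', computed by iterated pruning (a greatest fixpoint)."""
    val1, rel1 = m1
    val2, rel2 = m2
    Z = {(s, t) for s in val1 for t in val2 if val1[s] == val2[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in set(Z):
            # back: every successor of t is matched by a successor of s
            if not all(any((s2, t2) in Z for s2 in rel1.get(s, ()))
                       for t2 in rel2.get(t, ())):
                Z.discard((s, t))
                changed = True
    return Z

# m2 prunes one branch of m1, so state 0 of m2 refines state 0 of m1:
m1 = ({0: frozenset({"p"}), 1: frozenset(), 2: frozenset({"q"})},
      {0: {1, 2}})
m2 = ({0: frozenset({"p"}), 1: frozenset()}, {0: {1}})
print((0, 0) in refinement_relation(m1, m2))  # True
```

The 'forall' operator of the logic then quantifies over all models refining the current one in this sense.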
Multiparty session types and their application in large distributed systems
Nobuko Yoshida (Imperial College London) We give a summary of our recent research developments on multiparty session types for verifying distributed and concurrent programs, and our collaborations with industry partners and a major, long-term, NSF-funded project (Ocean Observatories Initiative) to provide an ultra large-scale cyberinfrastructure (OOI CI) for 25-30 years of sustained ocean measurements to study climate variability, ocean circulation and ecosystem dynamics.
We shall first talk about how Robin Milner, Kohei Honda and Yoshida started collaborating with industry to develop a web service protocol description language called Scribble, and discovered the theory of multiparty session types through these collaborations. We then talk about the recent developments in Scribble and the runtime session monitoring framework currently used in the OOI CI.
Matthias Englert (Warwick) In a classical online problem, arriving requests
have to be processed immediately and in order of arrival. In
some applications, on the other hand, it is possible to
temporarily store arriving requests in a fixed-size buffer
before processing them. Only when the buffer is full is the
algorithm forced to decide which request from the buffer to process
(and how). We will discuss problems with this property
and analyze them. In particular we will talk about a model in
which colored items, which arrive one by one, have to be
grouped together according to their color to minimize cost.
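A minimal sketch of this buffering model (the greedy rule and the cost function are illustrative assumptions, not the algorithms analysed in the talk): coloured items arrive one by one, a buffer of size k may reorder them, and a cost of 1 is paid each time the output colour changes.

```python
# Toy sorting-buffer sketch; the greedy policy here is an assumption.
from collections import Counter

def greedy_sorting_buffer(items, k):
    """Emit items through a buffer of size k, greedily continuing the
    colour emitted last; returns (output order, number of colour changes)."""
    buffer, out, cost, last = [], [], 0, None
    def emit(choice):
        nonlocal cost, last
        buffer.remove(choice)
        out.append(choice)
        if choice != last:
            cost += 1
        last = choice
    def pick():
        # prefer the colour just emitted, else the most frequent colour
        return last if last in buffer else Counter(buffer).most_common(1)[0][0]
    for x in items:
        buffer.append(x)
        if len(buffer) > k:
            emit(pick())
    while buffer:  # flush the remaining items
        emit(pick())
    return out, cost

out, cost = greedy_sorting_buffer(list("ababab"), 2)
print(out, cost)  # ['a', 'a', 'a', 'b', 'b', 'b'] 2
```

Without the buffer the sequence forces six colour changes; even a size-2 buffer brings this down to two, which is why buffer management is worth analysing.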
Using Developer Knowledge in Automated Software Remodularisation
Mathew Hall (Sheffield)
Careful organisation of source code entities into cohesive modules
is an important aspect of software design. Without a continuous
effort to maintain the structure of the system, the organisation
can decay as the scope of the project changes. Maintenance of the
system's structure is a costly process that requires domain
knowledge to do correctly. Access to this domain knowledge is
scarce, or sometimes impossible (consider a system architect who
leaves without documenting the system's structure). To
overcome the cost of maintaining the structure of the system, we
can look to machines to help us optimise the structure of the
system. Techniques from machine learning can help us discover
patterns from names or dependencies in the code which we can use
to produce a new structure based on the information already
present in the code. Unfortunately, this information can lead
automated approaches astray, ultimately requiring human
understanding to correct erroneous groupings. In this talk,
I'll outline the problems around automated remodularisation and
then introduce SUMO, our constraint-based refinement
algorithm that makes automated remodularisation more tractable by
focusing on acquiring feedback. I'll detail some comprehensive
experiments we conducted on SUMO, including a user study conducted
at Leicester and large-scale simulation experiments. The talk will
conclude with our findings on how well SUMO can help to close the
gap between automation tools and human intuition, and the factors
that affect its performance.
Digital Elevation Models as Fuzzy Sets and Fuzzy Numbers
Pete Fisher (University of Leicester)
Propositional rewriting techniques for knowledge representation
Igor Razgon (Birkbeck, University of London) The following scenario is common in knowledge
representation. There is a knowledge base and types of
queries to be answered. The requirement is that all these
queries must be answered in polynomial time. However, some
queries are intractable (e.g. NP-complete to answer) in the
given representation of the knowledge base. A possible
solution is *rewriting*: the knowledge base is compiled into a
different format where the queries can be answered
efficiently. Of course the compilation may take a lot of
time. However, it is performed *offline*, at the preprocessing
stage and if the knowledge base is not changed over a *long*
period of time, the time savings in query answering will
amply compensate for the additional preprocessing time. The
real difficulty in applying the above methodology is
that the knowledge base resulting from the rewriting may
become prohibitively large. Therefore the main effort of
researchers is concentrated on designing rewriting
algorithms whose output has reasonable space
complexity. In this talk I will concentrate on
*propositional* rewriting, also known as knowledge
compilation. In the relevant scenario the initial format of
the knowledge base is Conjunctive Normal Form (CNF) and a
typical query is whether or not the given partial assignment
to its variables can be extended to a satisfying assignment of
this CNF. This kind of query is called *clausal entailment*
and it is clearly NP-complete since it is a generalization of
satisfiability testing, a classical NP-complete problem.
The methodology of knowledge compilation is the transformation
of the input CNF into an equivalent *good* representation for
which the clausal entailment query can be answered in
polynomial time. I will overview a number of such
representations including: decision trees, ordered binary
decision diagrams, free binary decision diagrams, decomposable
negation normal forms. I will also overview the relationship
between these representations, their advantages and
disadvantages. In the final part of my talk I will
give an intuitive overview of an approach for obtaining a space-efficient
good representation from the initial CNF. This approach is
based on the assumption that the CNF is associated with a
natural parameter that is very small compared to the size of
the CNF. The resulting representation is exponential in terms
of this small parameter but polynomial in the input size.
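For concreteness, the clausal entailment query can be phrased as follows; this is the naive exponential check, and avoiding exactly this brute force is the point of compiling the CNF into a good representation (the encoding of clauses is an assumption for this sketch).

```python
# Naive clausal entailment: can a partial assignment be extended to a
# satisfying assignment of the CNF? Exponential in the free variables.
from itertools import product

def extends_to_sat(cnf, variables, partial):
    """`cnf` is a list of clauses; a clause is a list of (var, sign)
    literals, with sign True for positive occurrences. `partial` maps
    some variables to booleans."""
    free = [v for v in variables if v not in partial]
    for bits in product([False, True], repeat=len(free)):
        assign = dict(partial, **dict(zip(free, bits)))
        if all(any(assign[v] == sign for (v, sign) in clause)
               for clause in cnf):
            return True
    return False

# (x or y) and (not x or z):
cnf = [[("x", True), ("y", True)], [("x", False), ("z", True)]]
print(extends_to_sat(cnf, ["x", "y", "z"], {"x": True, "z": False}))  # False
print(extends_to_sat(cnf, ["x", "y", "z"], {"x": True}))              # True
```

On representations such as OBDDs, the same query reduces to conditioning the diagram on the partial assignment and testing for the constant-false node, in polynomial time.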
Designing Trajectories Through Museum Experiences
Steve Benford (Mixed Reality Laboratory, University of Nottingham) I will discuss how a series of research projects
with artists to design mobile performances, and with amusement
parks to design interactive rides and souvenir systems,
revealed how professional designers incorporate digital
technologies into extended visiting experiences. I will
distill their extensive craft knowledge into a general design
framework for planning, managing and recounting such
experiences based on the approach of interleaved
trajectories. I will discuss how trajectories can be applied
to the design of museum and gallery experiences so as to
support visitors in making their own personalized
interpretations as part of a group visit.
Expectation Modelling and its Economic Consequences
Christian Richter (University of Bedfordshire Business School) A key question in economics and social sciences
in general is: what determines expectations? At the outset it
is not at all clear why expectations are so important. The
reason is that if expectations are determined by “irrational
behaviour” in a wider sense, then they may destabilise the
entire economy (or not as we will see later on). How can this
happen? Research in Psychology has shown that there exists for
example “herd behaviour” among decision makers. If that is
true, then an influential decision maker can steer the market
into a direction that benefits only him by persuading other
market participants to follow him. This can have disastrous
consequences, as large wealth may be destroyed when the market
collapses. Strangely, although this is a serious risk,
one endangering financial markets in particular,
mainstream economics has neglected it by claiming that
financial markets are informationally efficient. However,
defining a problem away does not make it non-existent. So
research into the determinants of expectations formation is
more important than ever, especially because of the most
recent events in financial markets. In this talk, we will
give a brief overview of important theories on expectations
formation. However, we will also search for new approaches
which look at expectations formation from a different point of
view.
Susanne Albers (TU München) We study algorithmic techniques for energy
savings in computer systems. We consider power-down
mechanisms that transition an idle system into low power
stand-by or sleep states. Moreover, we address dynamic speed
scaling, a relatively recent approach to save energy in
modern, variable-speed microprocessors. In the first
part of the talk we survey important results in the area of
energy-efficient algorithms. In the second part we
investigate a setting where a variable-speed processor is
equipped with an additional sleep state. This model integrates
speed scaling and power-down mechanisms. We consider
classical deadline-based scheduling and settle the complexity
of the offline problem. As the main contribution we present
an algorithmic framework that allows us to develop a number of
significantly improved constant-factor approximation
algorithms.
From Logic to Computer Science, back and forth. (slides)
Antonino Salibra (Universita Ca'Foscari Venezia) In this talk we explain the origins of computer science in logic and mathematics, and we discuss the importance of computer science for mathematics.
Semester 1, 2013/14

Variations on balanced trees - lazy red-black trees
Stefan Kahrs (University of Kent) Search trees are a commonly used data structure
to implement sets or finite maps. To prevent degenerate
scenarios one normally uses a variant of search trees with a
built-in mechanism to keep them in balance. Best known of
these are AVL trees and red-black trees, of which the latter
are cheaper to maintain, although they do not stay quite as well
balanced as the former. Inspired by differences between
iterative and recursive versions of red-black trees, we are
attempting to create a hybrid that tries to combine the better
aspects of both. The hybrid's approach to tree maintenance
could be described as "lazy" - though effectively it is merely
working with a different invariant. Interestingly, the
mechanism still operates well with a yet weaker invariant, in
which the trees are used for the majority of the time as
ordinary search trees, i.e. without maintenance overhead.
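The invariants involved can be made concrete; the checker below (a sketch with an assumed node encoding, not the hybrid's actual code) verifies the classical red-black conditions that balancing mechanisms maintain: no red node has a red child, and every root-to-leaf path passes the same number of black nodes.

```python
# Red-black invariant checker; a node is None (leaf, counted as black)
# or a tuple (colour, left, key, right) with colour "R" or "B".

def black_height(node):
    """Return the black height of `node` if it satisfies the red-black
    invariants, else None."""
    if node is None:
        return 1
    colour, left, _, right = node
    if colour == "R":
        for child in (left, right):
            if child is not None and child[0] == "R":
                return None  # a red node with a red child
    hl, hr = black_height(left), black_height(right)
    if hl is None or hl != hr:
        return None  # invalid subtree, or unequal black heights
    return hl + (1 if colour == "B" else 0)

ok = ("B", ("R", None, 1, None), 2, ("R", None, 3, None))
bad = ("B", ("R", ("R", None, 0, None), 1, None), 2, None)
print(black_height(ok), black_height(bad))  # 2 None
```

Relaxing or "lazily" maintaining such invariants, as the talk describes, trades balance guarantees against cheaper maintenance.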
Local Search Algorithms for Set Packing Problems
Justin Ward (University of Warwick)
In the standard, unweighted set packing problem, we are given a collection of sets of elements and must find the largest possible sub-collection of sets such that no element occurs in more than one set. In this talk, I will consider a restriction of the set packing problem in which all of the given sets contain at most k elements, for some constant k. This restricted setting, which is called k-set packing, captures a wide variety of allocation and optimization problems, and is in fact NP-hard for k > 2.
One simple algorithmic approach to the k-set packing problem is local search. The standard local search algorithm maintains a current collection of disjoint sets and repeatedly tries to enlarge this collection by adding some small number of new sets to the collection and removing the sets that share elements with these sets. The algorithm terminates when no such small change improves the current collection. Despite its relative simplicity, this approach (or variants thereof) is the state of the art for k-set packing problems.
In this talk, I will present current algorithmic results for both the unweighted and weighted variants of k-set packing, as well as the monotone submodular variant, in which the value of a group of sets satisfies the natural economic property of diminishing returns. I will discuss some known performance barriers for local search algorithms in each of these settings. Then, I will show how novel modifications of the standard technique can be used to move past these barriers, giving algorithms that return solutions with a higher guaranteed value than standard local search algorithms. Although the results are stated (and hold) in the theoretical setting of worst-case analysis, this talk will focus primarily on general algorithmic design principles that may also be employed in a variety of practical settings.
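The standard local search step described above can be sketched as follows (an illustrative toy, not the talk's improved algorithms): start from a greedy maximal packing and repeatedly apply swaps that add two new disjoint sets while removing at most one conflicting set.

```python
# Unweighted set packing via greedy start plus (2,1)-swaps; this is a
# minimal sketch of the standard local search, with assumed names.
from itertools import combinations

def local_search_packing(sets):
    sets = [frozenset(s) for s in sets]
    sol, used = [], set()
    for s in sets:  # greedy maximal packing to start from
        if not (s & used):
            sol.append(s)
            used |= s
    improved = True
    while improved:
        improved = False
        for a, b in combinations(sets, 2):
            if a in sol or b in sol or (a & b):
                continue  # need two new, mutually disjoint sets
            conflicts = [t for t in sol if (a & t) or (b & t)]
            if len(conflicts) <= 1:  # net gain of at least one set
                sol = [t for t in sol if t not in conflicts] + [a, b]
                improved = True
                break
    return sol

# Greedy alone may stop at {1,2}; one swap replaces it with two sets:
print(len(local_search_packing([{1, 2}, {1, 3}, {2, 4}])))  # 2
```

Larger swap sizes give better approximation guarantees at higher running-time cost, which is the trade-off the improved algorithms in the talk navigate.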
Designing a Simple Folder Structure for a Complex Domain
Torkil Clemmensen (Copenhagen Business School)
In this talk, I present an emerging framework for human work
interaction design (HWID), and explore a case of designing a simple
folder structure for a new e-learning software program for a university
study program. The aim is to contribute to the theoretical base for
human work interaction design (HWID) by identifying the type of
relations connecting design artifacts with work analysis and
interaction design processes. I present a piece of research in which
the action research method was used, with me in a double role as
university researcher and project manager of a developer group within
the university. The analysis was conducted through grounded theory,
inspired by the HWID framework. I outline how the findings support the
use of a holistic framework with asymmetrical relations between work
analysis and design artifacts, and between design artifacts and
interaction design. In conclusion, I give suggestions for modifying the
general framework, and recommendations for a HWID approach to design
artifacts. The talk is intended to inspire discussion about how to
connect empirical work analysis and interaction design activities.
Empty Promises and Half Truths: Participatory Design with Children
Janet Read (University of Central Lancashire)
This talk will put participatory design of interactive technology with
children under scrutiny. I will examine the practices of the author, and
others, around the inclusion of children in participatory design
activities and will highlight some of the concerns that are often skimmed
over in the literature. An example PD activity which resulted in
technologies being provided for a school in Uganda will be used as a
backdrop to the discussion. The talk will be well suited to anyone
interested in the ethical involvement of children in research, in anyone
interested in cross cultural design and will also be relevant to those
considering user based research and / or development in a range of
contexts.
On the Preciseness of Subtyping in Session Types
Mariangiola Dezani (Università di Torino) The subtyping systems of mobile processes have
been extensively studied since the early 1990s as one of the most
applicable concepts in types for concurrency. Their
correctness has usually been established as the soundness of
subtyping relations with respect to type safety. The
converse direction, the completeness, has been largely ignored
in spite of its usefulness to define the greatest subtyping
relation possible without violating type safety, solely based
on operational semantics. This talk discusses a technique
for formalising and proving that a subtyping system is precise
(i.e. both sound and complete) with respect to the type safety
in the pi-calculus, and applies it to synchronous and
asynchronous session calculi. The well-known session
subtyping, the branching-selection subtyping, turns out to be
sound and complete in the synchronous calculus. Instead in the
asynchronous calculus, this common subtyping is incomplete
with respect to type safety: that is, there exist session
types T and S such that T can safely be considered a
subtype of S, but T is not a subtype of S in this system. We then propose an
asynchronous subtyping system which is sound and complete with
respect to the asynchronous calculus. The method gives a
general guidance to design rigorous channel-based subtyping
systems with respect to desired safety properties.
Composing and Scaling Algebraic Computations
Phil Trinder (SCIEnce Team, HPC-GAP Team, Glasgow University)
The talk describes interdisciplinary work between Computer Science and
Mathematics. We start by outlining Computer Algebra Systems (CAS), an
important class of Mathematical software. We describe the design and
implementation of a new way of combining CAS, the Symbolic Computation
Software Composability Protocol (SCSCP). We show how SCSCP-compliant
components may be combined to solve scientific problems that cannot be
solved within a single system, or may offer mathematical software
services. The SCSCP Protocol has become a de facto standard for
mathematical software with implementations available for 7 CAS, and
libraries for C/C++ and Java.
We sketch the challenges posed by large-scale algebraic
computations, and outline the SymGrid-Par architecture designed to
address these challenges. We report performance results showing that
SymGrid-Par delivers performance comparable with bespoke parallel CAS,
copes with high levels of irregular parallelism, can generate new
Mathematical results, and can provide performance portability. We
briefly outline the design of SymGrid-Par2, a scalable, reliable
successor to SymGrid-Par, and report good performance on an
1800-core architecture.
The work is supported by both the EU Framework Programme and the EPSRC.
Representing and Generating Curve-based Diagrams
Andrew Fish (University of Brighton) The EPSRC funded project
Automatic Diagram Generation aims to build a unified framework
for the automatic generation of mixed-type diagrams arising as
the integration of Euler diagrams, knot diagrams, and
graphs. The intent is to bring theoretical benefits via
methods which make use of commonality of abstraction, together
with practical-oriented benefits in terms of generic tool
support for areas such as diagrammatic logics, or ontology and
network visualisations. The talk will primarily discuss a new
encoding for Euler diagrams, adapting the notion of Gauss
Paragraphs for knot diagrams, providing a richer abstraction
than has been previously considered, encapsulating the
structure of the diagram, whilst also providing generation
machinery. We provide a further connection between knots and
Euler diagrams via the construction of rewriting methods
which, for example, yield a family of Brunnian links which
project to Edwards' construction of Venn diagrams. The talk
should be accessible, containing examples demonstrating the
underlying algorithmic procedures. A live demo of the
associated software, which integrates theories about knots and
Euler diagrams, whilst utilising existing graph drawing
libraries, may even be provided.
Statistical Machine Learning in Interactive Theorem Proving.
Katya Komendantskaya (University of Dundee) Development of Interactive Theorem Provers has
led to the creation of big libraries and varied
infrastructures for formal mathematical proofs and
verification of hardware and software systems. These
frameworks usually involve thousands of definitions and
theorems (for instance, there are approximately 4200
definitions and 15000 theorems in the formalisation of the
Feit-Thompson theorem; and more than 5000 theorems in the
formalisation of a C compiler). The size of the libraries, as
well as their technical and notational sophistication, often
stand in the way of efficient knowledge re-use. Very often, it
is easier to start a new library from scratch rather than
search the existing proof libraries for potentially common
heuristics and techniques. The main question we address in
this talk is: Can statistical pattern recognition help in
creation of more convenient and user-friendly proof
environments, in which statistically significant proof
patterns could be used as proof-hints? We will survey the
most successful state-of-the art machine-learning tools for
theorem proving, and will present the tool ML4PG
(“Machine-Learning for Proof General”) – an Emacs-based
interface that allows Coq/SSReflect users to find interesting
proof patterns interactively during the proof-development. We
will show how ML4PG works for mathematical and industrial
libraries. The ML4PG tool is the main outcome of the EPSRC
First Grant “Machine-Learning Automated Proofs”. The webpage
of ML4PG with downloadable software and papers can be found
here.
Homotopy Type Theory For Dummies
Thorsten Altenkirch (University of Nottingham) I have recently spent a semester at the Institute for Advanced Study in Princeton
to attend a special year on "Homotopy Type Theory" (HoTT) and participated
in creating the HoTT book
which is freely available on the internet. In
my talk I will give my own account of why I think that HoTT is interesting,
especially for Computer Science; I will give you a taste of HoTT and also tell
you what the missing pieces of the puzzle are.
Dynamic defence in the Cloud via Introspection
Behzad Bordbar (University of Birmingham) The Cloud is intended to handle large amounts of
data. In addition, to benefit from the economies of scale, the
applications and Operating Systems are homogenized to a few
images restricting the variations of products used within the
Cloud. As a result, a vulnerability can be exploited on a large
number of machines and the attacker can be sure of a high
pay-off for their activities. This makes the Cloud a prime
target for malicious activities. There is a clear need to
develop automated, adaptive and computationally-inexpensive
methods of discovering malicious behaviour as soon as it
starts, so that remedial actions can be adopted before
substantial damage is done.
In this seminar, we will describe a method of detecting
malware by identifying the symptoms of malicious behaviour as
opposed to looking for the malware itself. This can be
compared to the use of symptoms in human pathology, where the
study of symptoms directs physicians to the diagnosis of a disease
or the possible causes of illnesses. The main advantage of
shifting the attention to the symptoms is that a wide range of
malicious behaviour can result in the same set of symptoms. We
will also describe our current implementation of the proposed
approach with the help of very small Virtual Machines (VMs)
that can monitor other VMs to discover the symptoms. These FVMs
collaborate with each other in identifying symptoms by
exchanging messages via secure channels. The FVMs report to a
Command and Control module that collects and correlates the
information so that suitable remedial actions can take place
in real-time. The Command and Control can be compared to the
physician who infers possibility of an illness from the
occurring symptoms. A sketch of our current implementation
which involves using Mini-OS on the Xen virtualisation
platform will also be presented. This research is in
collaboration with Cloud and Security lab at HP.
What happened to my plan? How Model-Based Diagnosis can help Plan Repair in Multiagent Scenarios
Roberto Micalizio (Università degli Studi di Torino) This seminar focuses on the problem of designing
robust agent-based systems. More precisely, we restrict our
attention to systems that can be modeled as Multiagent Plans
(MAPs). MAPs are in fact very common in real-world
applications, and present some interesting characteristics:
agents cooperate with one another in order to reach a common
goal, agents are assigned tasks/actions, and the causal
dependencies among them are known in advance. The actual
execution of a MAP in the real world, however, may be
challenging. First of all, the environment where the agents
operate is in general just partially observable. Moreover, the
actual execution of an action can be affected by a plan
threat; i.e., an unexpected event that changes abruptly the
health state of some agent's devices. As a consequence of
the partial observability, an agent must be able to deal with
some form of ambiguous belief state. On the other hand, since
plan threats may occur, an agent must be able to deal with
action failures. The purpose of the seminar is to present
an approach to robust execution of MAPs based on a control
loop involving three main steps: plan execution
monitoring, agent diagnosis, and recovery from action
failure. The basic idea is that each agent in the system
is in charge of monitoring and diagnosing its own actions and,
in case of an action failure, of repairing its own plan. The
recovery process is modeled as a replanning problem aimed at
restoring the nominal conditions in the faulty components
identified by the agent diagnosis. However, since the
diagnosis is in general ambiguous (a failure may be explained
by alternative faults), the recovery has to deal with such an
uncertainty. This seminar advocates the adoption of a
conformant planner, which guarantees that the recovery plan,
if it exists, is executable no matter what the actual cause of
the failure is.
Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356. |