Computer Science Seminars
Marija Slavkovik (University of Bergen)
Machine ethics is an emerging discipline in artificial intelligence research that is concerned with ensuring that the behaviour of machines towards humans, and towards other machines they interact with, is ethical. The idea of machine ethics has tickled the fancy of numerous science fiction writers and aficionados, but recent advances in robotics and automated systems have shaped this idea into a research topic. Any machine ethics has to be based on human understanding of morality. What kinds of autonomous systems should be enhanced with ethical reasoning capabilities? What are the main approaches and challenges in transforming vague ideas of right and wrong into machine-comprehensible specifications? Once we specify how autonomous machines should behave ethically, how can we make sure that they will? How does machine ethics impact other facets of society, and, when a machine misbehaves, who is to blame? This talk will explore these questions and the ongoing research efforts towards answering them.
Helen Petrie (University of York)
Danica Vukadinovic Greetham (University of Reading)
In this talk, I will look at the problem of centrality ranking in dynamic networks. Centrality is an intuitive concept that tells us how important or involved an individual node is in a network. Many different definitions exist, but not all of them work in evolving networks where edges are created and destroyed at each time step. We focus here on one that, slightly modified, works well in evolving networks: Katz centrality. We show how it can be calculated efficiently for relatively large networks.
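Katz centrality scores a node by summing walks of all lengths that reach it, discounted geometrically by a factor alpha. A minimal illustrative sketch (not the talk's efficient algorithm for large evolving networks) computes it as the fixed point of x ← alpha · Aᵀx + 1; the graph and parameter values below are made up for illustration:

```python
# Katz centrality via fixed-point iteration: x = sum_k alpha^k (A^T)^k 1,
# i.e. the solution of x = alpha * A^T x + 1. Converges when alpha is
# smaller than 1 over the spectral radius of the adjacency matrix A.
def katz_centrality(adj, alpha=0.1, beta=1.0, tol=1e-10, max_iter=1000):
    n = len(adj)
    x = [beta] * n
    for _ in range(max_iter):
        # node j gains centrality from every node i with an edge i -> j
        new = [beta + alpha * sum(adj[i][j] * x[i] for i in range(n))
               for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, x)) < tol:
            return new
        x = new
    return x

# Star graph: node 0 points at nodes 1..3, so they rank above node 0.
adj = [[0, 1, 1, 1],
       [0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
scores = katz_centrality(adj)
print(scores)  # nodes 1..3 receive a walk from node 0 and score higher
```

In an evolving network the adjacency matrix changes at every time step, which is why the plain definition needs the modification mentioned in the abstract.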
We then use this calculation to inform a decision on a network based intervention and report the results from several small scale human studies where we observed (and interfered with) the mood dynamics on evolving communication networks.
This is joint work with Zhivko Stoyanov from Reading and Peter Grindrod from Oxford University.
Kirsten Barrett (Department of Geography)
The amount of energy emitted by a fire is directly related to the amount of biomass consumed and subsequent emissions of greenhouse gases such as carbon dioxide and methane. Quantifying fire intensity is therefore useful in considering the climate impacts of wildfire disturbance. Optical satellites, which record both reflected and emitted energy (light and heat), can be used to identify where fires are active and how much energy they emit at the time of the overpass. The methods for analysing these data are typical array operations using satellite imagery. Active fires are identified as pixels that are brighter and hotter than the mean of neighboring pixels within a given window. The total amount of energy released by a fire is a function of the intensity at the time of observation, and the duration of a fire within a given pixel. Additional processing allows us to estimate the amount of burning that is likely to be obscured by clouds within an image. Such methods are currently used to estimate global fire emissions, which are estimated to be about half of those from fossil fuels each year.
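The contextual detection described above (a pixel counts as an active fire if it is notably hotter and brighter than the mean of its neighbours within a window) can be sketched as a simple array operation. The window size and the 10-degree threshold below are illustrative assumptions, not the operational satellite-product values:

```python
# Hedged sketch of contextual active-fire detection: flag pixels whose
# brightness temperature exceeds the mean of the surrounding window by
# a fixed offset.
def detect_fires(temps, window=1, threshold=10.0):
    rows, cols = len(temps), len(temps[0])
    fires = []
    for r in range(rows):
        for c in range(cols):
            # gather neighbours inside the window, excluding the centre
            neigh = [temps[i][j]
                     for i in range(max(0, r - window), min(rows, r + window + 1))
                     for j in range(max(0, c - window), min(cols, c + window + 1))
                     if (i, j) != (r, c)]
            if temps[r][c] > sum(neigh) / len(neigh) + threshold:
                fires.append((r, c))
    return fires

# A 300 K background scene with one 340 K hotspot at (1, 2).
scene = [[300.0] * 4 for _ in range(3)]
scene[1][2] = 340.0
print(detect_fires(scene))  # → [(1, 2)]
```

Real products additionally use reflected-band tests, cloud masks, and per-band thresholds; this sketch only captures the window-mean comparison the abstract describes.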
Iain Stewart (Durham)
The design of data centre networks is becoming an important aspect of computing provision as software and infrastructure services increasingly migrate to the cloud and the demand for bigger and bigger data centres grows. This design is ordinarily undertaken by network engineers and is usually holistic in that not only is an interconnection topology prescribed but more practical aspects are often encompassed relating to, for example, routing, fault-tolerance and throughput. Consequently, these initial designs, though often impressive, tend to be prototypical with the intention being to demonstrate proof-of-concept and practical viability rather than to detail the finished article. Not only does improving and optimizing data centre networks provide opportunities for theoreticians to delve into their mathematical toolboxes but particular aspects of data centre network design can also lead to new problems that perhaps might not be immediately practically relevant but are interesting in their own right from a theoretical perspective. In this talk I shall introduce some of the practical considerations behind data centre network design and show how different aspects of mathematics impinge on this world; such aspects might include isoperimetric sets within graphs, the utilization of combinatorial design theory and the reduction of core data centre problems to combinatorics.
Dietrich Kuske (TU Ilmenau)
We consider Presburger arithmetic (PA) extended with modulo counting quantifiers. We show that its complexity is essentially the same as that of PA, i.e., we give a doubly exponential space bound. This is done by giving and analysing a quantifier elimination procedure similar to Reddy and Loveland's procedure for PA. Furthermore, we show that the complexity of the automata-based decision procedure for PA with modulo counting quantifiers has the same triple-exponential complexity as the one for PA when using least significant bit first encoding. (joint work with Peter Habermehl, LIAFA)
Jonathan Hayman (Cambridge)
Causal reasoning is a powerful tool in analysing security protocols, as seen in the popularity of the strand space model. However, for protocols that branch, more subtle models are needed to capture the ways in which protocols can interact with a possibly malign environment, for example allowing the expression of security properties of the form ``no matter what the attacker does, the protocol can still achieve a particular goal''. We study a model for security protocols encompassing causal reasoning and interaction, developing a semantics for a simple security protocol language based on concurrent games played on event structures. We show how it supports causal reasoning about a protocol for secure digital signature exchange. The semantics paves the way for the application of more sophisticated forms of concurrent game, for example including symmetry and probability, to the analysis of security protocols.
Alessandro Abate (Oxford)
This talk looks at the development of abstraction techniques based on quantitative approximations, in order to investigate the dynamics of complex systems and to provide computable approaches for the synthesis of control architectures. The approach employs techniques and concepts from the formal verification area, such as that of (approximate probabilistic) bisimulation, over models and problems known in the field of systems and control. While emphasising the generality of the approach over a number of diverse model classes, this talk zooms in on stochastic hybrid systems, which are probabilistic models with heterogeneous dynamics (continuous/discrete, i.e. hybrid, as well as nonlinear). A case study in energy networks, dealing with the problem of demand response, is employed to clarify concepts and techniques.
Mehrnoosh Sadrzadeh (Queen Mary)
Vector spaces offer a contextual way of reasoning about word meanings and have been rather successfully applied to natural language processing tasks. But the basic contextuality hypothesis of these models does not quite work when it comes to phrases and sentences. Herein, one needs to apply Frege's principle of Compositionality and use operations on word vectors to combine them. The high level categorical models of vector spaces (inspired by the work of Abramsky and Coecke on quantum models) have provided a general such setting for compositionality of contextual models. In this talk, I will present this model (joint work with Clark and Coecke), go over a number of its concrete instantiations, and present experimental results on language tasks (joint work with Grefenstette and Kartsaklis). The latter have outperformed the results of the simple, non-categorical models.
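One way to see the difference between the simple and the categorical models: baselines combine word vectors pointwise (e.g. addition), while in the categorical model relational words such as adjectives act as linear maps on noun vectors. A toy sketch with made-up 2-dimensional vectors (all numbers and word choices are illustrative, not trained embeddings):

```python
# Compare pointwise-additive composition with composition by a linear
# map, the shape of composition used in the categorical model.
def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

noun = [1.0, 2.0]                 # hypothetical vector for a noun
additive_adj = [0.5, 0.5]         # baseline: adjective as a vector
matrix_adj = [[2.0, 0.0],         # categorical: adjective as a matrix
              [0.0, 0.5]]

baseline = [a + n for a, n in zip(additive_adj, noun)]   # vector addition
categorical = mat_vec(matrix_adj, noun)                  # linear map
print(baseline, categorical)  # → [1.5, 2.5] [2.0, 1.0]
```

The matrix form lets the adjective reweight the noun's dimensions rather than merely shift them, which is one reason the compositional-categorical models can outperform the pointwise baselines on the language tasks mentioned above.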
Pawel Sobocinski (Southampton)
In recent years there have been a number of developments in the area of symmetric monoidal theories (SMTs), a generalisation of algebraic theories in which one considers operations of arbitrary arity and coarity together with equations, and in which linearity is default: variables cannot be copied and discarded. String diagrams are an intuitive and powerful graphical syntax for the terms of SMTs. I will report on recent joint work with Filippo Bonchi and Fabio Zanasi, in which, building on the work of Steve Lack on composing SMTs via distributive laws and Yves Lafont on a string-diagrammatic theory of boolean circuits, we discovered the theory of Interacting Hopf Algebras. As well as being closely related to Coecke and Duncan's ZX-calculus for quantum circuits, the associated string diagrams amount to a rigorous and intuitive diagrammatic universe for linear algebra: familiar concepts such as linear transformations and linear spaces appear as certain string diagrams. As well as giving a new way to look at classical topics in linear algebra, string diagrams sometimes carry useful computational information. I will show how a special case of the theory of Interacting Hopf Algebras captures signal flow graphs, which are a classical circuit notation for linear dynamical systems. When signal flow graphs are considered as string diagrams, diagrammatic reasoning gives a novel, sound and complete technique to reason about them. I will explain how the underlying mathematics leads us to reconsider some popular assumptions about the roles of causality and direction of signal flow, which---in contrast to the primary role traditionally given to them---turn out to be secondary, derived notions: in this way, this work connects with recent trends in control theory, in particular the Behavioural Approach, popularised by Jan Willems.
Rami Bahsoon (University of Birmingham)
The architecture of the system is the first design artifact that realizes the non-functional requirements of the system such as security, reliability, availability, modifiability, real-time performance and their trade-offs. We discuss the challenges in managing change and guiding evolution in large scale software architecture, where uncertainty and dynamism are the norm. We report on ongoing work on evaluating software architectures for sustainability. We posit that the linkage between architectural design decisions, evolution trends of non-functional requirements and sustainability should be explicit; such linkage can provide insights on the success (or failure) of software evolution. We argue that one of the major indicators of architecture sustainability is the extent to which the architecture design decisions can endure evolutionary changes in non-functional requirements, while continuing to add value. We motivate the need for an economics-driven approach based on the principles of Real Options for valuing software architecture for sustainability. The objective is to assist the architect in strategic “what if” analysis for sustainability, involving valuing the flexibility of the architecture to change and/or valuing the worthiness of designing or reengineering for change.
Stephane Ducasse (INRIA)
In this presentation I will argue for the need for dedicated tools and present the approach developed around Moose (http://www.moosetechnology.org) and engineered in Synectique (http://www.synectique.eu), a spinoff of the RMOD group. I will present some industrial cases and the meta-tools that we have built, such as Roassal and Glamour.
Fernando Schlindwein (Bioengineering Research Group, University of Leicester)
Atrial fibrillation (AF) is the most common cardiac arrhythmia. Patients with AF are five times more likely to have a stroke. We are going to show how real-time signal processing can help cure AF using catheter ablation of the areas responsible for the generation or maintenance of AF.
Michel Chaudron (Chalmers)
(Slides) Modeling is a common part of modern-day software engineering practice, yet little is known about how models are used and if/how they help in producing better software. In this talk I will present highlights from the last decade of research in the area of modeling software designs using UML. Topics that will be addressed in this talk are: What is the state of UML modeling in practice? How can we assess the quality of UML models? Does UML modelling actually help in creating better software? If time permits, I will show some snapshots of ongoing research in these areas.
(Imperial College London)
Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. In this talk, I will present (i) an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs); and (ii) a sound and complete characterisation of a subset of safe CFSMs from which global graphs can be constructed.
Alberto Lluch Lafuente (Technical University of Denmark)
Memory consistency is sometimes given up for the sake of performance, both in concurrent and in distributed systems. Operational models of consistency relaxations, such as relaxed memory models, offer suitable abstractions that can be used for various purposes (e.g. verification), but they pose some challenges in state space exploration activities, in part due to their inherent non-determinism, which contributes to the state space explosion. Several techniques have been proposed to mitigate those problems so as to make the analysis of programs under relaxed consistency feasible. We discuss how to adopt some of those techniques in a Maude-based approach to language prototyping, and suggest the use of other techniques that have been shown successful for similar state space exploration purposes.
Amal Elgammal (Trinity College Dublin)
Today's enterprises demand a high degree of compliance of business processes to meet regulations, such as Sarbanes-Oxley and Basel III. To ensure continuous guaranteed compliance, it should be enforced during all phases of the business process lifecycle, from the phases of analysis and design to deployment, monitoring and adaptation. This course of research primarily concentrates on design-time aspects of compliance management and secondarily on business process runtime monitoring; hence, it provides preventive lifetime compliance support. Current manual or ad-hoc solutions provide only limited assurance that business processes adhere to relevant constraints, and there is still no established generic framework for managing these constraints, integrating their relationships, maintaining their traceability to sources and processes, and automatically verifying their satisfiability. In this talk, I will present my research results that address the problems of automating the compliance verification activities during design-time and the reasoning over and analysis of detected compliance violations.
Massimo Bartoletti (University of Cagliari)
Distributed applications can be constructed by composing services which interact by exchanging messages according to some global communication pattern, called choreography. Under the assumption that each service adheres to its role in the choreography, the overall application is communication-correct. However, in wild scenarios like the Web or cloud environments, services may be deployed by different participants, which may be mutually distrusting (and possibly malicious). In these cases, one cannot assume (nor enforce) that services always adhere to their roles. Many formal techniques focus on verifying the adherence between services and choreographic roles, under the assumption that no participant is malicious; in this case, strong communication-correctness results can be obtained, e.g. that the application is deadlock-free. However, in wild scenarios such techniques cannot be applied. In this talk we present a paradigm for designing distributed applications in wild scenarios. Services use contracts to advertise their intended communication behaviour, and interact via sessions once a contractual agreement has been found. In this setting, the goal of a designer is to realise honest services, which respect their contracts in all execution contexts (including those where other participants are malicious). A key issue is that the honesty property is undecidable in general. In this talk we discuss verification techniques for honesty, targeted at agents specified in the contract-oriented calculus CO2. In particular, we show how to safely over-approximate the honesty property by a model-checking technique which abstracts from the contexts a service may be engaged with.
Jorge Perez (Groningen, The Netherlands)
(based on joint work with Luis Caires, Frank Pfenning, and Bernardo Toninho) A central concept in the theory of programming languages is the so-called Curry-Howard isomorphism: it tightly relates, on the one hand, logical propositions and types; on the other hand, it connects proofs and functional programs. This isomorphism has proved essential to endow reasoning techniques over sequential programs with clean principles, which have both operational and logical meanings. In 2010, Caires and Pfenning put forward an interpretation of linear logic propositions as session types for communicating processes. Remarkably, this result defines a propositions-as-types/proofs-as-programs correspondence, in the style of the Curry-Howard isomorphism, but in the context of concurrent processes defined in the pi-calculus. In this talk, I will give an overview of Caires and Pfenning's interpretation, from the perspective of languages for concurrency. I will also present some associated developments, which complement the interpretation by Caires and Pfenning: logical relations, typed behavioral equivalences, and type isomorphisms. If time permits, I will discuss two further developments. The first concerns the analysis of choreographies (multiparty protocols defined by session types) using the type discipline induced by Caires and Pfenning's interpretation. The second extends Caires and Pfenning's interpretation to a domain-aware setting, relying on a linear logic augmented with "worlds" from hybrid logic, here interpreted as distributed locations (or sites).
Pat Nicholson (MPI Saarbruecken)
Given an array A containing arbitrary (positive and negative) numbers, we consider the problem of supporting range maximum-sum segment queries on A: i.e., given an arbitrary range [i,j], return the subrange [i',j'] ⊆ [i,j] such that the sum of the numbers in A[i'..j'] is maximized. Chen and Chao [Disc. App. Math. 2007] presented a data structure for this problem that occupies Θ(n) words, can be constructed in Θ(n) time, and supports queries in Θ(1) time. Our first result is that if only the indices [i',j'] are desired (rather than the maximum sum achieved in that subrange), then it is possible to reduce the space to Θ(n) bits, regardless of the numbers stored in A, while retaining the same construction and query time. We also improve the best known space lower bound for any data structure that supports range maximum-sum segment queries from n bits to 1.89113n bits. Finally, we provide a new application of this data structure which simplifies a previously known linear time algorithm for finding k-covers: i.e., given an array A of n numbers and a number k, find k disjoint subranges [i_1,j_1],...,[i_k,j_k] such that the total sum of all the numbers in the subranges is maximized.
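To fix the semantics of the query, here is a naive baseline: Kadane's classical algorithm run over A[i..j] answers a single maximum-sum segment query in O(j - i + 1) time, returning the subrange indices. (The data structure in the talk answers the same query in O(1) time after Θ(n)-time preprocessing; this sketch is only the brute-force reference.)

```python
# Kadane's algorithm restricted to A[i..j]: return indices (i', j') of a
# maximum-sum contiguous subrange.
def max_sum_segment(A, i, j):
    best_sum, best_range = A[i], (i, i)
    cur_sum, cur_start = A[i], i
    for k in range(i + 1, j + 1):
        if cur_sum < 0:          # a negative running prefix never helps
            cur_sum, cur_start = A[k], k
        else:
            cur_sum += A[k]
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, k)
    return best_range

A = [2, -4, 3, -1, 5, -9, 4]
print(max_sum_segment(A, 0, 6))  # → (2, 4): 3 - 1 + 5 = 7
```

Note that the result returns only indices, matching the talk's observation that the index-only version of the problem admits a smaller, Θ(n)-bit data structure.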
Andrzej Murawski (University of Warwick)
We present APEX, a tool for analysing probabilistic programs that are open, i.e. where variables or even functions can be left unspecified. APEX transforms a program into an automaton that captures the program’s probabilistic behaviour under all instantiations of the unspecified components. The translation is compositional and effectively leverages state reduction techniques. APEX can then further analyse the produced automata; in particular, it can check two automata for equivalence which translates to equivalence of the corresponding programs under all environments. In this way, APEX can verify a broad range of anonymity and termination properties of randomised protocols and other open programs, sometimes with an exponential speed-up over competing state-of-the-art approaches. This is joint work with S. Kiefer, J. Ouaknine, B. Wachter and J. Worrell.
Anders Bruun (Aalborg University)
The concept of User Experience (UX) in Human-Computer Interaction has evolved over the past 10 years. UX is typically considered to cover a broad range of dimensions going beyond the usability of interactive products. This talk will firstly provide a brief overview of state-of-the-art UX research. Secondly, the talk will present results from a recent experiment questioning the reliability of current practices for assessing UX. UX is typically measured retrospectively through subjective questionnaire ratings, yet we know little of how well these retrospective ratings reflect concurrent experiences of an entire interaction sequence. The talk will present findings from an empirical study of the gap between concurrent and retrospective ratings of UX. Alternative methods of assessing UX will be discussed, which have considerable implications for practice.
Jun Zhao (Lancaster University)
Scientific integrity is facing growing scrutiny. We read more and more high-profile stories about scientific ‘flaws’ and embarrassment, which have a serious impact not only on the lives of generations but also on those of the scientists themselves. All of this has recently led to serious doubts about the significance of published results and the quality of the peer-review process, with peer review costing an estimated $2 billion US dollars each year. Provenance information records where a data item came from and how it was produced. A driving theme in provenance research is to understand how best to use provenance as the critical evidence to support scientists in justifying and validating their findings, and to improve the efficiency and efficacy of the peer-review process. In this talk, I will discuss the role of provenance research in enhancing the reproducibility of computational research and examine the critical gap in existing provenance research for fulfilling this goal. I will outline our approach of applying semantic web technologies to provenance data mining and argue for an important role to be played by data mining technologies in large-scale provenance data analytics.
Cecilia Mascolo (University of Cambridge)
In this talk I will describe our research in the area of mobile sensing and mobile phone sensing. I will introduce a variety of projects which we have conducted to monitor people’s mobility and behaviour with handheld devices as well as embedded technology. I will introduce the challenges related to mobile sensing and the solutions which we have proposed in terms of power preservation and behaviour inference. The specific studies described include Emotion Sense, a platform to monitor mood and behaviour for psychological studies, our work on using location and accelerometer data to understand parking and driving patterns and a study using RFID technology to monitor interaction and use of space in buildings.
Shin-Ichi Minato (Hokkaido University, Japan)
In many scientific communities using experiment databases, one of the crucial problems is how to assess the statistical significance (p-value) of a discovered hypothesis. Combinatorial hypothesis assessment in particular is a hard problem because it requires a multiple-testing procedure with a very large p-value correction factor. Recently, Terada et al. proposed a novel method of p-value correction, called the "Limitless Arity Multiple-testing Procedure" (LAMP), which is based on frequent itemset enumeration to exclude meaninglessly infrequent itemsets which can never be significant. LAMP makes a much more accurate p-value correction than previous methods, and it empowers scientific discovery. However, the original LAMP implementation is sometimes too time-consuming for practical databases. We propose a new LAMP algorithm that essentially executes an itemset mining algorithm once, while the previous one executes it many times. Our experimental results show that the proposed method is much (10 to 100 times) faster than the original LAMP. This algorithm enables us to discover patterns with significant p-values very quickly, even for very large-scale databases.
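The counting primitive behind LAMP is frequent itemset enumeration: itemsets below a minimum support can never reach significance, so only the itemsets at or above that support enter the multiple-testing correction. A minimal Apriori-style enumeration over tiny toy transactions (the data and support threshold are illustrative, and this sketch omits LAMP's actual search for the optimal threshold):

```python
# Enumerate all itemsets with support >= min_support; their count is
# the (LAMP-style) correction factor for the testable hypotheses.
def frequent_itemsets(transactions, min_support):
    items = sorted({x for t in transactions for x in t})
    frequent = {}
    level = [frozenset([i]) for i in items]
    while level:
        next_level = set()
        for s in level:
            support = sum(1 for t in transactions if s <= t)
            if support >= min_support:
                frequent[s] = support
                # grow candidates one item at a time
                for i in items:
                    if i not in s:
                        next_level.add(s | {i})
        # Apriori pruning: keep only candidates all of whose subsets
        # one item smaller are themselves frequent
        level = [s for s in next_level
                 if all(s - {i} in frequent for i in s)]
    return frequent

transactions = [frozenset(t) for t in
                [("a", "b"), ("a", "b", "c"), ("a", "c"), ("b",)]]
freq = frequent_itemsets(transactions, min_support=2)
print(len(freq))  # → 5 testable itemsets: {a}, {b}, {c}, {a,b}, {a,c}
```

Correcting over these 5 testable itemsets instead of all 7 possible non-empty itemsets illustrates, in miniature, how excluding infrequent itemsets tightens the correction.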
Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356.