Computer Science Seminars
Julien Lange (Imperial College London)
Alberto Lluch Lafuente (Technical University of Denmark)
Amal Elgammal (Trinity College Dublin)
Today's enterprises must ensure a high degree of compliance of their business processes with regulations such as Sarbanes-Oxley and Basel III. To provide continuous guarantees, compliance should be enforced during all phases of the business process lifecycle, from analysis and design to deployment, monitoring and adaptation. This line of research concentrates primarily on design-time aspects of compliance management and secondarily on runtime monitoring of business processes, hence providing preventive, lifetime compliance support. Current manual or ad-hoc solutions provide only limited assurance that business processes adhere to the relevant constraints: there is still no established generic framework for managing these constraints, integrating their relationships, maintaining their traceability to sources and processes, and automatically verifying their satisfiability. In this talk, I will present my research results, which address the automation of design-time compliance verification and the reasoning about and analysis of detected compliance violations.
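Compliance rules of this kind are often expressed as declarative constraints over process executions. As a toy illustration (not the framework presented in the talk; the function and activity names are invented for the example), a "response" constraint — every approval must eventually be followed by a payment — can be checked over a single execution trace as follows:

```python
def satisfies_response(trace, a, b):
    """Toy design-time compliance check for a 'response' rule:
    every occurrence of activity a must eventually be followed by b."""
    pending = False  # is there an occurrence of a still awaiting b?
    for event in trace:
        if event == a:
            pending = True
        elif event == b:
            pending = False
    return not pending

# A trace where every "approve" is eventually followed by "pay" complies;
# one ending with an unanswered "approve" violates the rule.
satisfies_response(["receive", "approve", "pay"], "approve", "pay")      # True
satisfies_response(["receive", "approve", "archive"], "approve", "pay")  # False
```

A full framework would check such constraints against all possible executions of a process model rather than a single trace, which is where automated verification comes in.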
Massimo Bartoletti (University of Cagliari)
Distributed applications can be constructed by composing services which interact by exchanging messages according to some global communication pattern, called a choreography. Under the assumption that each service adheres to its role in the choreography, the overall application is communication-correct. However, in wild scenarios like the Web or cloud environments, services may be deployed by different participants, which may be mutually distrusting (and possibly malicious). In these cases, one cannot assume (nor enforce) that services always adhere to their roles. Many formal techniques focus on verifying the adherence between services and choreographic roles under the assumption that no participant is malicious; in this case, strong communication-correctness results can be obtained, e.g. that the application is deadlock-free. In wild scenarios, however, such techniques cannot be applied. In this talk we present a paradigm for designing distributed applications in wild scenarios. Services use contracts to advertise their intended communication behaviour, and interact via sessions once a contractual agreement has been found. In this setting, the goal of a designer is to realise honest services, which respect their contracts in all execution contexts (including those where other participants are malicious). A key issue is that the honesty property is undecidable in general. In this talk we discuss verification techniques for honesty, targeted at agents specified in the contract-oriented calculus CO2. In particular, we show how to safely over-approximate the honesty property by a model-checking technique which abstracts from the contexts a service may be engaged with.
Jorge Perez (Groningen, The Netherlands)
(based on joint works with Luis Caires, Frank Pfenning, and Bernardo Toninho) A central concept in the theory of programming languages is the so-called Curry-Howard isomorphism: it relates logical propositions to types on the one hand, and proofs to functional programs on the other. This isomorphism has proved essential to endow reasoning techniques over sequential programs with clean principles, which have both operational and logical meanings. In 2010, Caires and Pfenning put forward an interpretation of linear logic propositions as session types for communicating processes. Remarkably, this result defines a propositions-as-types/proofs-as-programs correspondence, in the style of the Curry-Howard isomorphism, but in the context of concurrent processes defined in the pi-calculus. In this talk, I will give an overview of Caires and Pfenning's interpretation, from the perspective of languages for concurrency. I will also present some associated developments which complement the interpretation: logical relations, typed behavioural equivalences, and type isomorphisms. If time permits, I will discuss two further developments. The first concerns the analysis of choreographies (multiparty protocols defined by session types) using the type discipline induced by Caires and Pfenning's interpretation. The second extends Caires and Pfenning's interpretation to a domain-aware setting, relying on a linear logic augmented with "worlds" from hybrid logic, here interpreted as distributed locations (or sites).
Pat Nicholson (MPI Saarbruecken)
Given an array A containing arbitrary (positive and negative) numbers, we consider the problem of supporting range maximum-sum segment queries on A: i.e., given an arbitrary range [i,j], return the subrange [i',j'] ⊆ [i,j] such that the sum of the numbers in A[i'..j'] is maximized. Chen and Chao [Disc. App. Math. 2007] presented a data structure for this problem that occupies Θ(n) words, can be constructed in Θ(n) time, and supports queries in Θ(1) time. Our first result is that if only the indices [i',j'] are desired (rather than the maximum sum achieved in that subrange), then it is possible to reduce the space to Θ(n) bits, regardless of the numbers stored in A, while retaining the same construction and query time. We also improve the best known space lower bound for any data structure that supports range maximum-sum segment queries from n bits to 1.89113n bits. Finally, we provide a new application of this data structure which simplifies a previously known linear time algorithm for finding k-covers: i.e., given an array A of n numbers and a number k, find k disjoint subranges [i_1,j_1],...,[i_k,j_k] such that the total sum of all the numbers in the subranges is maximized.
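For the full range [0, n-1], a maximum-sum segment and its indices can be found with Kadane's classic linear scan; the data structures discussed in the talk answer the same question in Θ(1) time for arbitrary subranges [i,j] after preprocessing. A minimal sketch of the base problem (function name invented here):

```python
def max_sum_segment(A):
    """Return (i, j, s): indices of a maximum-sum segment A[i..j] and its
    sum s, via Kadane's algorithm. This answers the query for the full
    range only; the talk's data structures handle arbitrary subranges."""
    best_i = best_j = cur_i = 0
    best = cur = A[0]
    for k in range(1, len(A)):
        if cur < 0:
            # Restarting at k beats extending a negative-sum prefix.
            cur, cur_i = A[k], k
        else:
            cur += A[k]
        if cur > best:
            best, best_i, best_j = cur, cur_i, k
    return best_i, best_j, best

max_sum_segment([-2, 1, -3, 4, -1, 2, 1, -5, 4])  # → (3, 6, 6)
```

Note that the scan handles all-negative arrays correctly by returning the single largest element, and that returning only (i, j) without s corresponds to the index-only queries for which the space reduction to Θ(n) bits is achieved.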
Andrzej Murawski (University of Warwick)
We present APEX, a tool for analysing probabilistic programs that are open, i.e. where variables or even functions can be left unspecified. APEX transforms a program into an automaton that captures the program’s probabilistic behaviour under all instantiations of the unspecified components. The translation is compositional and effectively leverages state reduction techniques. APEX can then further analyse the produced automata; in particular, it can check two automata for equivalence which translates to equivalence of the corresponding programs under all environments. In this way, APEX can verify a broad range of anonymity and termination properties of randomised protocols and other open programs, sometimes with an exponential speed-up over competing state-of-the-art approaches. This is joint work with S. Kiefer, J. Ouaknine, B. Wachter and J. Worrell.
Anders Bruun (Aalborg University)
The concept of User Experience (UX) in Human-Computer Interaction has evolved over the past 10 years. UX is typically considered to cover a broad range of dimensions going beyond the usability of interactive products. This talk will firstly provide a brief overview of the state of the art in UX research. Secondly, the talk will present results from a recent experiment questioning the reliability of current practices for assessing UX. UX is typically measured retrospectively through subjective questionnaire ratings, yet we know little of how well these retrospective ratings reflect concurrent experiences of an entire interaction sequence. The talk will present findings from an empirical study of the gap between concurrent and retrospective ratings of UX. Alternative methods of assessing UX will be discussed, which have considerable implications for practice.
Jun Zhao (Lancaster University)
Scientific integrity is facing growing scrutiny. We read more and more high-profile stories about scientific 'flaws' and embarrassment, which have a serious impact not only on the lives of whole generations but also on those of the scientists themselves. All of this has recently led to serious doubts about the significance of published results and the quality of the peer-review process, with peer review costing an estimated US$2 billion each year. Provenance information records where a data item came from and how it was produced. A driving theme in provenance research is to understand how best to use provenance as the critical evidence with which scientists can justify and validate their findings, and to improve the efficiency and efficacy of the peer-review process. In this talk, I will discuss the role of provenance research in enhancing the reproducibility of computational research and examine the critical gap in existing provenance research for fulfilling this goal. I will outline our approach of applying semantic web technologies to provenance data mining and argue that data mining technologies have an important role to play in large-scale provenance analytics.
Cecilia Mascolo (University of Cambridge)
In this talk I will describe our research in the area of mobile sensing and mobile phone sensing. I will introduce a variety of projects which we have conducted to monitor people's mobility and behaviour with handheld devices as well as embedded technology. I will introduce the challenges related to mobile sensing and the solutions which we have proposed in terms of power preservation and behaviour inference. The specific studies described include Emotion Sense, a platform to monitor mood and behaviour for psychological studies; our work on using location and accelerometer data to understand parking and driving patterns; and a study using RFID technology to monitor interaction and use of space in buildings.
Shin-Ichi Minato (Hokkaido University, Japan)
In many scientific communities using experiment databases, one of the crucial problems is how to assess the statistical significance (p-value) of a discovered hypothesis. In particular, combinatorial hypothesis assessment is a hard problem because it requires a multiple-testing procedure with a very large p-value correction factor. Recently, Terada et al. proposed a novel method of p-value correction, called the "Limitless Arity Multiple-testing Procedure" (LAMP), which is based on frequent itemset enumeration to exclude meaninglessly infrequent itemsets which can never be significant. LAMP yields a much more accurate p-value correction than previous methods, and it empowers scientific discovery. However, the original LAMP implementation is sometimes too time-consuming for practical databases. We propose a new LAMP algorithm that essentially executes the itemset mining algorithm only once, whereas the previous one executes it many times. Our experimental results show that the proposed method is much (10 to 100 times) faster than the original LAMP. This algorithm enables us to discover significant p-value patterns in a very short time, even for very large-scale databases.
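The pruning idea can be illustrated in miniature. For Fisher's exact test, a pattern occurring in only sigma of n samples (n1 of which are positive) has a smallest achievable p-value of C(n1, sigma)/C(n, sigma), reached when all sigma occurrences fall in the positive class; patterns too infrequent to ever reach significance can therefore be excluded from the multiple-testing correction. A toy sketch under simplifying assumptions — candidate supports are enumerated explicitly rather than mined, and the function names are invented:

```python
from math import comb

def min_pvalue(sigma, n, n1):
    """Smallest achievable Fisher exact-test p-value for a pattern with
    support sigma in a dataset of n samples, n1 of them positive."""
    sigma = min(sigma, n1)
    return comb(n1, sigma) / comb(n, sigma)

def lamp_correction_factor(supports, n, n1, alpha=0.05):
    """Tarone/LAMP-style pruning: find the smallest support threshold lam
    such that k(lam) * min_pvalue(lam) <= alpha, where k(lam) counts the
    candidate patterns with support >= lam.  Patterns below lam can never
    be significant, so only k(lam) tests need correcting (alpha / k)."""
    for lam in range(1, n1 + 1):
        k = sum(1 for s in supports if s >= lam)  # still-testable patterns
        if k == 0:
            break
        if k * min_pvalue(lam, n, n1) <= alpha:
            return lam, k
    return None

# 20 samples, 10 positive; six candidate patterns with these supports:
lamp_correction_factor([8, 7, 5, 5, 3, 2], 20, 10)  # → (6, 2)
```

In the real procedure the supports are not given up front: the frequent-itemset miner and the threshold search are interleaved, which is exactly where the speed-up of the new single-pass algorithm comes from.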
Author: Alexander Kurz (kurz mcs le ac uk), T: 0116 252 5356.