University of Leicester


Computer Science Seminars


Semester 1, 2015/16


Seminar details

Frank Wolter (University of Liverpool)
Fri 11th Dec, 14:00 in BENL LT (Host: Paula Severi, Alexander Kurz)

Geographic Information and Information Geographies

Stefano De Sabbata (University of Leicester)
Fri 27th Nov, 14:00 in BENL LT (Host: Alexander Kurz)

Information has always had geography. It is from somewhere; about somewhere; it evolves and is transformed somewhere. The first part of the talk will be concerned with the place where geography meets information retrieval and mobile information services. What is special about the spatial component of the data that describe our world, that we store, map, and retrieve? What criteria do we have to take into account when searching for geographic information, and how can we assess the geographic relevance of information? The second part of the talk will focus on the different geographies of information, and in particular on the geographies of the internet. What factors influence internet access, and how is this reflected in content production? Where does internet content come from, which places are more or less represented, and how does this matter?

Putting the lab on the map: A wireless sensor network system for border security and surveillance

Mohammad Hammoudeh (Manchester Metropolitan University)
Fri 20th Nov, 14:00 in BENL LT (Host: Alexander Kurz)

Traditionally, countries viewed international border control as a mostly immigration- and customs-based challenge. However, with the increased risks of terrorism and the illegal movement of drugs, weapons, contraband and people, these countries face unprecedented challenges in securing borders effectively. Securing international borders is a complex task that involves international collaboration, the deployment of advanced technological solutions and professional skill-sets. In the current financial climate, governments strive to secure their borders while also ensuring that costs are kept low.

Wireless Sensor Networks (WSNs) are a low-cost technology that can provide an effective solution to the range of problems faced in securing borders effectively. The ability of a WSN to operate without human involvement, and in situations where other surveillance technologies are impractical, has made it a favourite for deployment in hostile, hazardous environments. This technology offers an intelligence-led approach to strengthening vulnerable points on international borders. This class of WSN applications imposes a linear network topology, in which nodes are daisy-chained using radio communication. Linear WSN topologies are characterised by sparse node deployment, long data-transmission distances and the alignment of nodes along a virtual line. This talk presents solutions to the new challenges introduced by linear WSNs, including: What is the minimum network density needed to achieve k-barrier coverage in a given belt region? Given an appropriate network density, how can we determine whether a region is indeed k-barrier covered? How can we find a path connecting the two ends of the border such that every point on the path is covered by a sensor node? How can we balance workload across sensor nodes? How can we prolong network lifetime while meeting quality-of-service requirements?
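As a flavour of how such questions can be attacked, the classical reduction of Kumar et al. decides k-barrier coverage by counting node-disjoint paths between two virtual nodes representing the ends of the belt in a coverage graph. The Python sketch below (using networkx) follows that reduction; all names and parameters are illustrative, and this is not the algorithm presented in the talk.

```python
import math
import networkx as nx

def barrier_coverage_level(sensors, r, x_left, x_right):
    """Return the largest k such that the belt is k-barrier covered,
    via the reduction to node-disjoint paths (Kumar et al.).

    sensors: list of (x, y) positions; r: sensing radius;
    x_left, x_right: x-coordinates of the two ends of the belt.
    """
    G = nx.Graph()
    G.add_nodes_from(["L", "R"])
    for i, (x, _) in enumerate(sensors):
        G.add_node(i)
        if x - r <= x_left:       # sensing disk reaches the left end
            G.add_edge("L", i)
        if x + r >= x_right:      # sensing disk reaches the right end
            G.add_edge(i, "R")
    for i, p in enumerate(sensors):
        for j in range(i + 1, len(sensors)):
            if math.dist(p, sensors[j]) <= 2 * r:   # overlapping disks
                G.add_edge(i, j)
    # Max number of internally node-disjoint L-R paths = largest such k.
    return nx.node_connectivity(G, "L", "R")
```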

Are Semantics and Scalability Incompatible?

Ian Horrocks (University of Oxford)
Fri 13th Nov, 14:00 in BENL LT (Host: Paula Severi)

So called "Semantic Technologies" are rapidly becoming mainstream technologies, with RDF and OWL now being deployed in diverse application domains, and with major technology vendors starting to augment their existing systems accordingly. This is, however, only the first step for Semantic Web research; we need to demonstrate that the Semantic Technologies we are developing can (be made to) exhibit robust scalability if deployments in large scale applications are to be successful. In this talk I will review the evolution of Semantic Technologies to date, and show how research ideas from logic based knowledge representation developed into a mainstream technology. I will then go on to examine the scalability challenges arising from deployment in large scale applications, and discuss ongoing research aimed at addressing them.

Level-based Analysis of Evolutionary Algorithms under Uncertainty

Per Kristian Lehre (University of Nottingham)
Fri 6th Nov, 14:00 in BENL LT (Host: Thomas Erlebach)

Evolutionary algorithms have been applied for decades to tackle hard, real-world optimisation problems in logistics, the automotive industry, the pharmaceutical industry, and elsewhere. Despite their widespread use, the theoretical understanding of evolutionary algorithms is still relatively poorly developed. Partly due to a lack of methods, time-complexity analysis of evolutionary algorithms has mainly focused on simple algorithms, such as the (1+1) EA, which does not use a population or sophisticated genetic operators.

This talk introduces a new method for the time-complexity analysis of evolutionary algorithms, the level-based method (Corus et al., PPSN 2014). The method provides upper bounds on the expected hitting times of a wide class of stochastic population models from easy-to-verify conditions. Level-based analysis has already been applied to study complex algorithms, such as genetic algorithms using crossover, and estimation of distribution algorithms.

As an application of the level-based method, we investigate how evolutionary algorithms cope under different forms of uncertainty in pseudo-Boolean optimisation. We first consider the classical additive noise model, where the objective function is perturbed by a noise term. Secondly, we consider partial evaluation, in which only a random subset of the terms in the objective function are evaluated. The analysis shows that with appropriate settings of mutation rate, selective pressure, and population size, an evolutionary algorithm can cope with surprisingly high levels of uncertainty.
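To make the additive noise setting concrete, here is a minimal Python sketch (illustrative only, not the algorithm or parameter settings analysed in the talk) of a population-based evolutionary algorithm optimising OneMax when every fitness evaluation is perturbed by Gaussian noise:

```python
import random

def onemax(x):
    # Pseudo-Boolean objective: number of 1-bits.
    return sum(x)

def noisy(x, sigma):
    # Additive noise model: observed fitness = f(x) + N(0, sigma^2).
    return onemax(x) + random.gauss(0.0, sigma)

def noisy_ea(n=100, pop_size=50, chi=0.98, generations=1000, sigma=2.0):
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            # Binary tournament selection on *noisy* fitness values.
            a, b = random.sample(pop, 2)
            parent = a if noisy(a, sigma) >= noisy(b, sigma) else b
            # Bitwise mutation with rate chi/n.
            next_pop.append([bit ^ (random.random() < chi / n) for bit in parent])
        pop = next_pop
    return max(pop, key=onemax)   # best individual by true fitness
```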

Engineering machine ethics in contemporary artificial intelligence research

Marija Slavkovik (University of Bergen)
Fri 30th Oct, 14:00 in RAT LT (Host: Alexander Kurz)

Machine ethics is an emerging discipline in artificial intelligence research concerned with ensuring that the behaviour of machines towards humans, and towards the other machines they interact with, is ethical. The idea of machine ethics has tickled the fancy of numerous science fiction writers and aficionados, but recent advances in robotics and automated systems have shaped this idea into a research topic. Any machine ethics has to be based on a human understanding of morality. What kinds of autonomous systems should be enhanced with ethical reasoning capabilities? What are the main approaches and challenges in transforming vague ideas of right and wrong into machine-comprehensible specifications? Once we design autonomous machines that should behave ethically, how can we make sure that they will? How does machine ethics impact other facets of society; for example, when a machine misbehaves, who is to blame? This talk will explore these questions and the ongoing research efforts towards answering them.

Accent the Positive

Alistair Edwards (University of York)
Fri 23rd Oct, 14:00 in RAT LT (Host: Effie Law)

The way people speak tells us a lot about their origins, both geographical and social, but when someone can only speak with the aid of an artificial voice (such as Stephen Hawking), conventional expectations are subverted. This seminar will explore some of the limitations and possibilities of speech technology.

Radius of centrality in dynamic networks (or Why grumpy people shouldn't get an ice-cream)

Danica Vukadinovic Greetham (University of Reading)
Fri 16th Oct, 14:00 in RAT LT (Host: Thomas Erlebach)

In this talk, I will look at the problem of centrality ranking in dynamic networks. Centrality is an intuitive concept that tells us how important or involved an individual node is in a network. Many different definitions exist, but not all of them work in evolving networks, where edges are created and destroyed at each time step. We focus here on one that, slightly modified, works well in evolving networks, Katz centrality, and show how it can be calculated efficiently for relatively large networks.
We then use this calculation to inform a decision on a network-based intervention, and report the results of several small-scale human studies in which we observed (and interfered with) the mood dynamics on evolving communication networks.
This is joint work with Zhivko Stoyanov from Reading and Peter Grindrod from Oxford University.
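For concreteness, here is a small Python sketch of Katz centrality, together with one possible dynamic variant in the spirit of Grindrod et al.'s dynamic communicability (an assumption on our part: the exact modification used in the talk may differ).

```python
import numpy as np

def katz(A, alpha=0.05):
    # Static Katz centrality (one common convention):
    # x = (I - alpha * A^T)^{-1} 1; requires alpha < 1/spectral_radius(A).
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * A.T, np.ones(n))

def dynamic_katz(adjacency_sequence, alpha=0.05):
    # Dynamic variant: multiply the resolvents of the per-time-step
    # adjacency matrices in order, so that counted walks respect time.
    n = adjacency_sequence[0].shape[0]
    Q = np.eye(n)
    for A in adjacency_sequence:
        Q = Q @ np.linalg.inv(np.eye(n) - alpha * A)
    return Q.sum(axis=1)   # row sums rank nodes as broadcasters
```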

Calculating wildfire intensity and greenhouse gas emissions using earth observation data

Kirsten Barrett (Department of Geography)
Fri 9th Oct, 14:00 in RAT LT (Host: Alexander Kurz)

The amount of energy emitted by a fire is directly related to the amount of biomass consumed and the subsequent emissions of greenhouse gases such as carbon dioxide and methane. Quantifying fire intensity is therefore useful in assessing the climate impacts of wildfire disturbance. Optical satellites, which record both reflected and emitted energy (light and heat), can be used to identify where fires are active and how much energy they emit at the time of the overpass. The methods for analysing these data are typical array operations on satellite imagery. Active fires are identified as pixels that are brighter and hotter than the mean of neighbouring pixels within a given window. The total amount of energy released by a fire is a function of its intensity at the time of observation and the duration of the fire within a given pixel. Additional processing allows us to estimate the amount of burning that is likely to be obscured by clouds within an image. Such methods are currently used to estimate global fire emissions, which are estimated to be about half of those from fossil fuels each year.
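The contextual-thresholding step lends itself to a short array-based sketch; the Python below (window size and threshold are illustrative, not the operational values of any particular fire product) flags pixels that are much hotter than their local neighbourhood:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_active_fires(bt, window=21, k=3.0):
    """Flag pixels whose brightness temperature exceeds the local mean
    by k local standard deviations.

    bt: 2-D array of brightness temperatures (e.g. a mid-infrared band).
    """
    local_mean = uniform_filter(bt, size=window)
    local_sq_mean = uniform_filter(bt ** 2, size=window)
    local_std = np.sqrt(np.maximum(local_sq_mean - local_mean ** 2, 0.0))
    return bt > local_mean + k * local_std
```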

Semester 2, 2014/15


Seminar details

Some mathematical aspects of data centre networks

Iain Stewart (Durham)
Fri, May 15, 14:00 in PHY LTA (Host: Rajeev Raman)

The design of data centre networks is becoming an important aspect of computing provision as software and infrastructure services increasingly migrate to the cloud and the demand for bigger and bigger data centres grows. This design is ordinarily undertaken by network engineers and is usually holistic in that not only is an interconnection topology prescribed but more practical aspects are often encompassed relating to, for example, routing, fault-tolerance and throughput. Consequently, these initial designs, though often impressive, tend to be prototypical with the intention being to demonstrate proof-of-concept and practical viability rather than to detail the finished article. Not only does improving and optimizing data centre networks provide opportunities for theoreticians to delve into their mathematical toolboxes but particular aspects of data centre network design can also lead to new problems that perhaps might not be immediately practically relevant but are interesting in their own right from a theoretical perspective. In this talk I shall introduce some of the practical considerations behind data centre network design and show how different aspects of mathematics impinge on this world; such aspects might include isoperimetric sets within graphs, the utilization of combinatorial design theory and the reduction of core data centre problems to combinatorics.

Integer arithmetic with counting quantifiers

Dietrich Kuske (TU Ilmenau)
Fri, Mar 27, 14:00 in PHY LTA (Host: Michael Hoffmann)

We consider Presburger arithmetic (PA) extended with modulo counting quantifiers. We show that its complexity is essentially the same as that of PA; i.e., we give a doubly exponential space bound. This is done by giving and analysing a quantifier-elimination procedure similar to Reddy and Loveland's procedure for PA. Furthermore, we show that the automata-based decision procedure for PA with modulo counting quantifiers has the same triply exponential complexity as the one for PA when using least-significant-bit-first encoding. (Joint work with Peter Habermehl, LIAFA.)
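To illustrate the notation (schematically; the exact syntax, and the treatment of infinitely many witnesses, vary across presentations), a modulo counting quantifier $\exists^{\equiv r \pmod{m}} x\,\varphi(x)$ asserts that the number of values of $x$ satisfying $\varphi$ is congruent to $r$ modulo $m$. For example,

\[
\exists^{\equiv 1 \pmod{2}} x \,\big(0 \le x < y \;\wedge\; \exists z\,(x = z + z)\big)
\]

states that the number of even values below $y$ is odd.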

Interaction and causality in digital signature exchange protocols

Jonathan Hayman (Cambridge)
Fri, Mar 20, 14:00 in PHY LTA (Host: Reiko Heckel)

Causal reasoning is a powerful tool in analysing security protocols, as seen in the popularity of the strand space model. However, for protocols that branch, more subtle models are needed to capture the ways in which protocols can interact with a possibly malign environment, for example allowing the expression of security properties of the form "no matter what the attacker does, the protocol can still achieve a particular goal".

We study a model for security protocols encompassing causal reasoning and interaction, developing a semantics for a simple security protocol language based on concurrent games played on event structures. We show how it supports causal reasoning about a protocol for secure digital signature exchange. The semantics paves the way for the application of more sophisticated forms of concurrent game, for example including symmetry and probability, to the analysis of security protocols.

Computable analysis and control synthesis of complex dynamical systems via formal verification

Alessandro Abate (Oxford)
Fri, Mar 13, 14:00 in PHY LTA (Host: Nir Piterman)

This talk looks at the development of abstraction techniques based on quantitative approximations, in order to investigate the dynamics of complex systems and to provide computable approaches for the synthesis of control architectures. The approach employs techniques and concepts from the formal verification area, such as that of (approximate probabilistic) bisimulation, over models and problems known in the field of systems and control. While emphasising the generality of the approach over a number of diverse model classes, this talk zooms in on stochastic hybrid systems, which are probabilistic models with heterogeneous dynamics (continuous/discrete, i.e. hybrid, as well as nonlinear). A case study in energy networks, dealing with the problem of demand response, is employed to clarify concepts and techniques.

Multi-Linear Algebraic Semantics for Natural Language

Mehrnoosh Sadrzadeh (Queen Mary)
Fri, Mar 6, 14:00 in PHY LTA (Host: Alexander Kurz)

Vector spaces offer a contextual way of reasoning about word meanings and have been rather successfully applied to natural language processing tasks. But the basic contextuality hypothesis of these models does not quite work when it comes to phrases and sentences. Here, one needs to apply Frege's principle of compositionality and use operations on word vectors to combine them. High-level categorical models of vector spaces (inspired by the work of Abramsky and Coecke on quantum models) have provided a general setting for the compositionality of contextual models.

In this talk, I will present this model (joint work with Clark and Coecke), go over a number of its concrete instantiations, and present experimental results on language tasks (joint work with Grefenstette and Kartsaklis). The latter have outperformed the results of the simple, non-categorical models.
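For a flavour of how composition works in such models, here is a toy Python sketch: a transitive verb is represented as a rank-3 tensor, and the sentence vector is obtained by contracting it with the subject and object vectors. The dimensions and the random "learned" vectors are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4   # illustrative; real distributional spaces are much larger

# Toy distributional vectors for nouns (in practice learned from corpora).
cats = rng.random(dim)
milk = rng.random(dim)

# A transitive verb lives in the tensor space N (x) S (x) N; here the
# sentence space S is taken to have the same dimension (an assumption).
drink = rng.random((dim, dim, dim))

# Sentence meaning: s_j = sum_{i,k} subj_i * verb_{ijk} * obj_k.
sentence = np.einsum('i,ijk,k->j', cats, drink, milk)
print(sentence)
```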

Diagrammatic mathematics and signal flow graphs

Pawel Sobocinski (Southampton)
Fri, Feb 27, 14:00 in PHY LTA (Host: Alexander Kurz)

In recent years there have been a number of developments in the area of symmetric monoidal theories (SMTs), a generalisation of algebraic theories in which one considers operations of arbitrary arity and coarity together with equations, and in which linearity is the default: variables cannot be copied or discarded.

String diagrams are an intuitive and powerful graphical syntax for the terms of SMTs. I will report on recent joint work with Filippo Bonchi and Fabio Zanasi, in which, building on the work of Steve Lack on composing SMTs via distributive laws and Yves Lafont on a string-diagrammatic theory of boolean circuits, we discovered the theory of Interacting Hopf Algebras. As well as being closely related to Coecke and Duncan's ZX-calculus for quantum circuits, the associated string diagrams amount to a rigorous and intuitive diagrammatic universe for linear algebra: familiar concepts such as linear transformations and linear spaces appear as certain string diagrams.

As well as giving a new way to look at classical topics in linear algebra, string diagrams sometimes carry useful computational information. I will show how a special case of the theory of Interacting Hopf Algebras captures signal flow graphs, a classical circuit notation for linear dynamical systems. When signal flow graphs are considered as string diagrams, diagrammatic reasoning gives a novel, sound and complete technique for reasoning about them. I will explain how the underlying mathematics leads us to reconsider some popular assumptions about the roles of causality and direction of signal flow, which, in contrast to the primary role traditionally given to them, turn out to be secondary, derived notions: in this way, this work connects with recent trends in control theory, in particular the Behavioural Approach popularised by Jan Willems.

Sustainable Software Architectures: An Economics-Driven Perspective

Rami Bahsoon (University of Birmingham)
Fri, Feb 21, 14:00 in ATT LT1 (Host: Stephan Reiff-Marganiec)

The architecture of the system is the first design artifact that realizes the non-functional requirements of the system, such as security, reliability, availability, modifiability and real-time performance, and their trade-offs. We discuss the challenges in managing change and guiding evolution in large-scale software architecture, where uncertainty and dynamism are the norm. We report on ongoing work on evaluating software architectures for sustainability. We posit that the linkage between architectural design decisions, evolution trends of non-functional requirements and sustainability should be explicit; such linkage can provide insights into the success (or failure) of software evolution. We argue that one of the major indicators of architecture sustainability is the extent to which the architectural design decisions can endure evolutionary changes in non-functional requirements while continuing to add value. We motivate the need for an economics-driven approach, based on the principles of real options, for valuing software architecture for sustainability. The objective is to assist the architect in strategic “what if” analysis for sustainability, involving valuing the flexibility of the architecture to change and/or valuing the worthiness of designing or reengineering for change.

Building dedicated tools

Stephane Ducasse (INRIA)
Fri, Feb 13, 14:00 in ATT LT1 (Host: Neil Walkinshaw)

In this presentation I will argue for the need for dedicated tools and present the approach developed around Moose and engineered in Synectique, a spin-off of the RMOD group. I will present some industrial cases and the meta-tools that we have built, such as Roassal and Glamour.

Signal processing helping cure cardiac arrhythmias

Fernando Schlindwein (Bioengineering Research Group, University of Leicester)
Fri, Feb 6, 14:00 in ATT LT1 (Host: Alexander Kurz)

Atrial Fibrillation (AF) is the most common cardiac arrhythmia. Patients with AF are five times more likely to have a stroke. We will show how real-time signal processing can help cure AF using catheter ablation of the areas responsible for the generation or maintenance of AF.

Is Modeling Any Good? Empirical Evidence on Modeling of Design in Software Development

Michel Chaudron (Chalmers)
Fri, Jan 30, 14:00 in ATT LT1 (Host: Reiko Heckel)

Modeling is a common part of modern-day software engineering practice. Little evidence is available about how models are used and if/how they help in producing better software. In this talk I will present highlights from the last decade of research in the area of modeling software designs using UML. Topics that will be addressed in this talk are:

- What is the state of UML modeling in practice?

- How can we assess the quality of UML models?

- Does UML modelling actually help in creating better software?

- If time permits, I will show some snapshots of ongoing research in these areas.

Semester 1, 2014/15


Seminar details

From Communicating Machines to Graphical Choreographies

Julien Lange (Imperial College London)
12th Dec, 14:00 in GP LTB (Host: Emilio Tuosto)

Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. In this talk, I will present (i) an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs); and (ii) a sound and complete characterisation of a subset of safe CFSMs from which global graphs can be constructed.

On state space problems when designing languages with relaxed consistency in Maude

Alberto Lluch Lafuente (Technical University of Denmark)
5th Dec, 14:00 in GP LTB (Host: Emilio Tuosto)

Memory consistency is sometimes given up for the sake of performance, both in concurrent and in distributed systems. Operational models of consistency relaxations, such as relaxed memory models, offer suitable abstractions that can be used for various purposes (e.g. verification), but they pose some challenges in state-space exploration activities, in part due to their inherent non-determinism, which contributes to the state-space explosion. Several techniques have been proposed to mitigate these problems so as to make the analysis of programs under relaxed consistency feasible. We discuss how to adopt some of those techniques in a Maude-based approach to language prototyping, and suggest the use of other techniques that have been shown successful for similar state-space exploration purposes.

Toward a Comprehensive Framework for Business Process Compliance

Amal Elgammal (Trinity College Dublin)
4th Dec, 10:00 in BEN LT (Host: Artur Boronat)

Today's enterprises demand a high degree of compliance of business processes to meet regulations, such as Sarbanes-Oxley and Basel III. To ensure continuous guaranteed compliance, it should be enforced during all phases of the business process lifecycle, from the phases of analysis and design to deployment, monitoring and adaptation. This course of research primarily concentrates on design-time aspects of compliance management and secondarily on business process runtime monitoring; hence, providing preventive lifetime compliance support. While current manual or ad-hoc solutions provide limited assurances that business processes adhere to relevant constraints, there is still a lack of an established generic framework for managing these constraints; integrating their relationships and maintaining their traceability to sources and processes; and automatically verifying their satisfiability. In this talk, I will present my research results that address the problems of automating the compliance verification activities during design-time and the reasoning and analysis of detected compliance violations.

Choreographies in the wild

Massimo Bartoletti (University of Cagliari)
28th Nov, 14:00 in GP LTB (Host: Emilio Tuosto)

Distributed applications can be constructed by composing services which interact by exchanging messages according to some global communication pattern, called choreography. Under the assumption that each service adheres to its role in the choreography, the overall application is communication-correct.

However, in wild scenarios like the Web or cloud environments, services may be deployed by different participants, which may be mutually distrusting (and possibly malicious). In these cases, one cannot assume (nor enforce) that services always adhere to their roles.

Many formal techniques focus on verifying the adherence between services and choreographic roles, under the assumption that no participant is malicious; in this case, strong communication-correctness results can be obtained, e.g. that the application is deadlock-free. However, in wild scenarios such techniques cannot be applied.

In this talk we present a paradigm for designing distributed applications in wild scenarios. Services use contracts to advertise their intended communication behaviour, and interact via sessions once a contractual agreement has been found. In this setting, the goal of a designer is to realise honest services, which respect their contracts in all execution contexts (also in those where other participants are malicious).

A key issue is that the honesty property is undecidable in general. In this talk we discuss verification techniques for honesty, targeted at agents specified in the contract-oriented calculus CO2. In particular, we show how to safely over-approximate the honesty property by a model-checking technique which abstracts from the contexts a service may be engaged with.

Logical Foundations for Session-Based Concurrency: Overview and Recent Developments

Jorge Perez (Groningen, The Netherlands)
21st Nov, 14:00 in GP LTB (Host: Paula Severi, Alexander Kurz)

(based on joint works with Luis Caires, Frank Pfenning, and Bernardo Toninho) A central concept in the theory of programming languages is the so-called Curry-Howard isomorphism: it tightly relates, on the one hand, logical propositions and types; on the other hand, it connects proofs and functional programs. This isomorphism has proved essential to endow reasoning techniques over sequential programs with clean principles, which have both operational and logical meanings.

In 2010, Caires and Pfenning put forward an interpretation of linear logic propositions as session types for communicating processes. Remarkably, this result defines a propositions-as-types/proofs-as-programs correspondence, in the style of the Curry-Howard isomorphism, but in the context of concurrent processes defined in the pi-calculus.

In this talk, I will give an overview of Caires and Pfenning's interpretation from the perspective of languages for concurrency. I will also present some associated developments that complement the interpretation by Caires and Pfenning: logical relations, typed behavioural equivalences, and type isomorphisms.

If time permits, I will discuss two further developments. The first concerns the analysis of choreographies (multiparty protocols defined by session types) using the type discipline induced by Caires and Pfenning's interpretation. The second result extends Caires and Pfenning's interpretation to a domain-aware setting, relying on a linear logic augmented with "worlds" from hybrid logic, here interpreted as distributed locations (or sites).
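As a rough sketch of the interpretation (one common presentation; notation and details vary between papers), the linear logic connectives read as session behaviours:

\begin{align*}
A \otimes B &: \text{output a channel of type } A, \text{ then behave as } B\\
A \multimap B &: \text{input a channel of type } A, \text{ then behave as } B\\
A \oplus B &: \text{internal choice between } A \text{ and } B\\
A \mathbin{\&} B &: \text{external choice between } A \text{ and } B\\
{!}A &: \text{shared (replicated) server offering sessions of type } A\\
\mathbf{1} &: \text{terminated session}
\end{align*}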

Encodings of Range Maximum-Sum Segment Queries and Applications

Pat Nicholson (MPI Saarbruecken)
7th Nov, 14:00 in GP LTB (Host: Rajeev Raman)

Given an array A containing arbitrary (positive and negative) numbers, we consider the problem of supporting range maximum-sum segment queries on A: i.e., given an arbitrary range [i,j], return the subrange [i',j'] ⊆ [i,j] such that the sum of the numbers in A[i'..j'] is maximized. Chen and Chao [Disc. App. Math. 2007] presented a data structure for this problem that occupies Θ(n) words, can be constructed in Θ(n) time, and supports queries in Θ(1) time. Our first result is that if only the indices [i',j'] are desired (rather than the maximum sum achieved in that subrange), then it is possible to reduce the space to Θ(n) bits, regardless of the numbers stored in A, while retaining the same construction and query time. We also improve the best known space lower bound for any data structure that supports range maximum-sum segment queries from n bits to 1.89113n bits. Finally, we provide a new application of this data structure which simplifies a previously known linear-time algorithm for finding k-covers: i.e., given an array A of n numbers and a number k, find k disjoint subranges [i_1,j_1],...,[i_k,j_k], such that the total sum of all the numbers in the subranges is maximized.
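For comparison, the naive per-query baseline that such encodings improve upon is a linear-time scan of the query range with Kadane's algorithm; a Python sketch (not the constant-time data structure from the talk):

```python
def max_sum_segment(A, i, j):
    """Return (i2, j2, s): the subrange of A[i..j] (inclusive bounds)
    whose sum s is maximum, found by a single Kadane-style scan in
    O(j - i + 1) time."""
    best_sum = cur_sum = A[i]
    best = (i, i)
    cur_start = i
    for k in range(i + 1, j + 1):
        if cur_sum < 0:
            cur_sum, cur_start = A[k], k   # restart the running segment
        else:
            cur_sum += A[k]
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (cur_start, k)
    return best[0], best[1], best_sum
```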

APEX: An Analyzer for Open Probabilistic Programs

Andrzej Murawski (University of Warwick)
31st Oct, 14:00 in ATT LT1 (Host: Thomas Erlebach)

We present APEX, a tool for analysing probabilistic programs that are open, i.e. where variables or even functions can be left unspecified. APEX transforms a program into an automaton that captures the program’s probabilistic behaviour under all instantiations of the unspecified components. The translation is compositional and effectively leverages state reduction techniques. APEX can then further analyse the produced automata; in particular, it can check two automata for equivalence which translates to equivalence of the corresponding programs under all environments. In this way, APEX can verify a broad range of anonymity and termination properties of randomised protocols and other open programs, sometimes with an exponential speed-up over competing state-of-the-art approaches.

This is joint work with S. Kiefer, J. Ouaknine, B. Wachter and J. Worrell.

Measuring User Experience in Human-Computer Interaction: Current status, caveats and alternative methods

Anders Bruun (Aalborg University)
24th Oct, 14:00 in ATT LT1 (Host: Effie Law)

The concept of User Experience (UX) in Human-Computer Interaction has evolved over the past 10 years. UX is typically considered to cover a broad range of dimensions going beyond the usability of interactive products. This talk will firstly provide a brief overview of the state of the art in UX research. Secondly, the talk will present results from a recent experiment questioning the reliability of current practices for assessing UX.

UX is typically measured retrospectively through subjective questionnaire ratings, yet we know little of how well these retrospective ratings reflect concurrent experiences of an entire interaction sequence. The talk will present findings from an empirical study of the gap between concurrent and retrospective ratings of UX. Alternative methods of assessing UX will be discussed, which have considerable implications for practice.

Hansel and Gretel: Connecting the dots of evidences via provenance trails

Jun Zhao (Lancaster University)
17th Oct, 14:00 in ATT LT1 (Host: Ruzanna Chitchyan)

Scientific integrity is facing growing scrutiny. We read more and more high-profile stories about scientific ‘flaws’ and embarrassment, which have a serious impact not only on the lives of generations but also on those of the scientists themselves. All of this has recently led to serious doubts about the significance of published results and the quality of the peer-review process, with peer review costing an estimated US$2 billion each year.

Provenance information records where a data item came from and how it was produced. A driving theme in provenance research is to understand how best to use provenance as critical evidence to support scientists in justifying and validating their findings, and to improve the efficiency and efficacy of the peer-review process.

In this talk, I will discuss the role of provenance research in enhancing the reproducibility of computational research and examine the critical gap in existing provenance research in fulfilling this goal. I will outline our approach of applying semantic web technologies to provenance data mining, and argue that data mining technologies have an important role to play in large-scale provenance data analytics.

The Power of Mobile Sensing and its Applications

Cecilia Mascolo (University of Cambridge)
10th Oct, 14:00 in ATT LT1 (Host: Thomas Erlebach)

In this talk I will describe our research in the area of mobile sensing and mobile phone sensing. I will introduce a variety of projects which we have conducted to monitor people’s mobility and behaviour with handheld devices as well as embedded technology. I will introduce the challenges related to mobile sensing and the solutions which we have proposed in terms of power preservation and behaviour inference. The specific studies described include Emotion Sense, a platform to monitor mood and behaviour for psychological studies, our work on using location and accelerometer data to understand parking and driving patterns and a study using RFID technology to monitor interaction and use of space in buildings.

Fast Statistical Assessment for Combinatorial Hypotheses Based on Frequent Itemset Enumeration

Shin-Ichi Minato (Hokkaido University, Japan)
Fri, 19th Sept, 13:00 in Bennett LT5 (Host: Rajeev Raman)

In many scientific communities using experiment databases, one of the crucial problems is how to assess the statistical significance (p-value) of a discovered hypothesis. Combinatorial hypothesis assessment is especially hard because it requires a multiple-testing procedure with a very large p-value correction factor. Recently, Terada et al. proposed a novel method of p-value correction, called the "Limitless Arity Multiple-testing Procedure" (LAMP), which is based on frequent itemset enumeration and excludes meaninglessly infrequent itemsets that can never be significant. LAMP makes a much more accurate p-value correction than previous methods, and it empowers scientific discovery. However, the original LAMP implementation is sometimes too time-consuming for practical databases. We propose a new LAMP algorithm that essentially executes the itemset mining algorithm once, whereas the previous one executes it many times. Our experimental results show that the proposed method is much (10 to 100 times) faster than the original LAMP. This algorithm enables us to discover significant p-value patterns in a very short time, even for very large-scale databases.
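A toy sketch of the pruning idea behind LAMP follows. It is heavily simplified: the brute-force enumeration below is exponential (real implementations use an efficient frequent-itemset miner), and the bound used is the smallest one-sided Fisher exact p-value achievable at a given support.

```python
from itertools import combinations
from math import comb

def min_pvalue(sigma, n, n_pos):
    # Smallest one-sided Fisher exact p-value a pattern occurring in
    # sigma of n transactions can achieve: all occurrences positive.
    return comb(n_pos, sigma) / comb(n, sigma)

def support_count(transactions, itemset):
    return sum(itemset <= t for t in transactions)

def frequent_itemsets(transactions, sigma):
    # Brute-force enumeration, for illustration only.
    items = sorted({i for t in transactions for i in t})
    return [set(c) for size in range(1, len(items) + 1)
                   for c in combinations(items, size)
                   if support_count(transactions, set(c)) >= sigma]

def lamp_support_threshold(transactions, n_pos, alpha=0.05):
    """Lower the support threshold sigma until (#itemsets with support
    >= sigma) * min_pvalue(sigma) exceeds alpha. Itemsets below the
    returned support can never be significant, so they need not be
    counted in the multiple-testing correction.
    transactions: list of sets; n_pos: number of positive-class rows."""
    n = len(transactions)
    for sigma in range(n_pos, 0, -1):
        k = len(frequent_itemsets(transactions, sigma))
        if k * min_pvalue(sigma, n, n_pos) > alpha:
            return sigma + 1
    return 1
```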

