# informatics

## PhD Seminar: 2010

### Seminar details

On Space Efficient Two Dimensional Range Minimum Data Structures

Pooya Davoodi (University of Aarhus, Denmark)
15 February 2010, 2pm in LIB SR

The Two Dimensional Range Minimum Query (\mbox{2D-RMQ}) problem is to preprocess a static two-dimensional array~$A$ of size $m\times n$, where~$m\le n$, such that subsequent queries asking for the index of the minimum element in a rectangular range within~$A$ can be answered efficiently. We show that every algorithm that is allowed to access~$A$ during the query and uses~$O(N/c)$ bits of additional space requires query time~$\Omega(c)$, for $N=m\cdot n$ and any value of~\mbox{$1 \le c \le N$}. In particular, this lower bound holds for the 1D-RMQ problem. We complement this lower bound with an algorithm that, with~$O(N)$ preprocessing time and~$O(N/c)$ bits of additional space, achieves~$O(c\log^2 c)$ query time. For~$c=1$, this is the first optimal algorithm using $O(N)$ bits of additional space with $O(1)$ query time. We also consider the problem in the encoding model, where the query algorithms use an encoding data structure to solve the problem without consulting~$A$. For this model, we present an~$O(1)$ query time algorithm using~$O(mn\log n)$ bits for the encoding data structure. We also give an alternative proof of the previously known~$\Omega(mn\log m)$-bit lower bound on the size of the encoding data structure.
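The space/time trade-off has a simple, much weaker analogue that may help fix intuition. The sketch below is illustrative only and is not the talk's algorithm: it splits a 1D array into blocks of size $c$ and stores one minimum index per block (roughly $N/c$ words, not bits); a query scans the two partial blocks at the ends plus the stored block minima, giving $O(c + N/c)$ query time rather than the talk's $O(c\log^2 c)$ with $O(N/c)$ bits.

```python
def preprocess(a, c):
    """Store the index of the minimum of each length-c block of a."""
    block_min = []
    for start in range(0, len(a), c):
        idx = range(start, min(start + c, len(a)))
        block_min.append(min(idx, key=lambda i: a[i]))
    return block_min

def rmq(a, block_min, c, i, j):
    """Index of a minimum element of a[i..j] (inclusive)."""
    best, k = i, i
    while k <= j and k % c != 0:   # left partial block: O(c) elements
        if a[k] < a[best]:
            best = k
        k += 1
    while k + c - 1 <= j:          # whole blocks: use stored minima
        b = block_min[k // c]
        if a[b] < a[best]:
            best = b
        k += c
    while k <= j:                  # right partial block: O(c) elements
        if a[k] < a[best]:
            best = k
        k += 1
    return best
```

Storing one index per block already shows the shape of the trade-off: more blocks (smaller $c$) mean more extra space but cheaper partial scans.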

Word problems of groups and intersections of context-free languages

Tara Brough (University of Warwick, Host: Rick Thomas, Chair: Christian Kissig)
17 March 2010, 4pm in MA1.19

The word problem of a group $G$ with respect to a finite generating set $X$ is the set of all words in $X^*$ (the free monoid on $X$) which are equal to the identity in $G$. We regard this as a formal language over the alphabet $X$, and consider what the formal language type of the word problem can tell us about the structure of the group.

My research is on the class of groups whose word problem is poly-CF, that is, an intersection of finitely many context-free languages. Muller and Schupp proved that the only groups with context-free word problem are the virtually free groups. I conjecture that the word problem of $G$ is poly-CF if and only if $G$ is virtually a direct product of finitely many free groups.

I will present two methods of proving a language not to be poly-CF and explain how I used these to show the word problems of certain groups not to be poly-CF, resulting in a proof of my conjecture in the cases of metabelian and torsion-free soluble groups.

I will not assume any knowledge of group theory in this talk.
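As a concrete illustration of the poly-CF notion (my example, not taken from the talk): the word problem of $\mathbb{Z}^2$ with generators $a, b$ and inverses $A, B$ consists of the words with equally many $a$'s and $A$'s and equally many $b$'s and $B$'s. Each counting condition on its own defines a context-free language (a single counter suffices), so the word problem is an intersection of two context-free languages, as expected for a direct product of two free groups. A minimal membership check:

```python
def balanced(word, gen, inv):
    """Membership in the context-free language { w : #gen(w) = #inv(w) }."""
    return word.count(gen) == word.count(inv)

def in_word_problem_Z2(word):
    """Word problem of Z^2 as an intersection of two CF languages."""
    return balanced(word, 'a', 'A') and balanced(word, 'b', 'B')
```

Note that the intersection itself is not context-free, which is exactly why poly-CF is a strictly larger class.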

Process-oriented Robotics

Carl Ritson and Jon Simpson (University of Kent)
22 April 2010, 10am in BEN LT5

Process-oriented programs are composed of concurrently executing processes communicating synchronously over channels. This talk will cover the work carried out by the authors and others at Kent over the past five years in using a process-oriented methodology for robotic control, motivated both by pedagogy and by the effective application of parallelism and concurrency to robot control problems. We will also cover our current work, in collaboration with Leicester, to support and develop tools for parallel programming on the Mindstorms NXT.
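The process/channel decomposition can be sketched in any language with threads. The Python sketch below is illustrative only (Python's `queue.Queue` is buffered, unlike occam's truly synchronous channels), but it shows the basic shape of two communicating processes:

```python
import queue
import threading

def sensor(out_chan, readings):
    """One 'process': emits readings down its output channel."""
    for r in readings:
        out_chan.put(r)        # send a reading down the channel
    out_chan.put(None)         # end-of-stream marker

def controller(in_chan, results):
    """A second 'process': consumes readings and reacts to them."""
    while True:
        r = in_chan.get()      # receive (blocks until data arrives)
        if r is None:
            break
        results.append(r * 2)  # stand-in for a control computation

chan, results = queue.Queue(), []
t1 = threading.Thread(target=sensor, args=(chan, [1, 2, 3]))
t2 = threading.Thread(target=controller, args=(chan, results))
t1.start(); t2.start(); t1.join(); t2.join()
```

The appeal for robotics is that each sensor, filter, and actuator becomes a small sequential process, and the wiring between them is explicit.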

Relevant papers:

1. Toward Process Architectures for Behavioural Robotics
Jonathan Simpson and Carl G. Ritson
In Communicating Process Architectures 2009, volume 67 of Concurrent Systems Engineering. IOS Press, Amsterdam, The Netherlands, November 2009
http://jonsimpson.co.uk/papers/2009/toward-process-architectures-behavioural-robotics.pdf
2. Patterns for Programming in Parallel, Pedagogically
Matthew C. Jadud, Jonathan Simpson and Christian L. Jacobsen.
In SIGCSE '08: Proceedings of the 39th SIGCSE Technical Symposium on Computer Science Education. ACM Press, New York, NY, USA, March 2008.
http://jonsimpson.co.uk/papers/2008/patterns-programming-parallel-pedagogically.pdf

Functional Hybrid Modelling

George Giorgidze (University of Nottingham, Host: Christian Kissig)
28 April 2010, 4pm in KE 3.22

Modelling and simulation of physical systems plays an important role in the design, implementation and analysis of systems in numerous areas of science and engineering, e.g., electrical engineering, astronomy, particle physics, biology, climatology and finance (to mention just a few). To cope with the ever-increasing size and complexity of real-world systems, a number of languages have been developed specifically for modelling and simulation.

We will give an overview of the state-of-the-art languages for modelling and simulation and identify their key advantages and shortcomings, with a particular emphasis on the language aspects of reusability, composability and hybrid (i.e. mixed discrete and continuous) simulation. Next, we will introduce a new approach to the design of modelling and simulation languages called Functional Hybrid Modelling (FHM). The FHM approach extends a functional programming language with a notion of a model defined using implicitly formulated Differential Algebraic Equations (DAEs). Models are first-class entities in FHM, and functional combinators are provided for their composition and discrete switching.

Our hypothesis is that the FHM approach will result in modelling languages that are relatively simple, have a clear, purely declarative semantics, and, aided by this, advance the state of the art of modelling and simulation languages. At present, our central research vehicle to this end is the design and implementation of a new such language as a domain-specific language embedded in Haskell (a modern purely functional programming language). We will give an overview of the current implementation and emphasise capabilities that go beyond what can be simulated using current modelling languages.

This is joint work with Henrik Nilsson.

Abstract Interpretation for Liveness using Metric Spaces

Aziem Chawdhary (Queen Mary University of London and Durham University, Host: Christian Kissig)
19 May 2010, 4pm in MA1.19

We will give a brief overview of an abstract-interpretation-based framework for defining static analyses for liveness properties. Abstract interpretation currently provides an elegant framework for defining and proving the soundness of abstract interpreters for safety properties; unfortunately, we do not have an equivalent understanding for liveness properties. In this work we make a novel use of metric spaces in order to prove the soundness of an abstract interpreter that proves termination for a simple language with arbitrary recursion. We make use of existing ideas from the metric-space semantics of programming languages to define a general framework for proving liveness properties using abstract interpretation.

Synchronous Kleene Algebra

Cristian Prisacariu (University of Oslo, Norway; Host: Tomoyuki Suzuki)
14 June 2010, 2pm in MA1.19

The work presented investigates the combination of Kleene algebra with the synchrony model of concurrency from Milner's SCCS calculus. The resulting algebraic structure is called synchronous Kleene algebra. Models are given in terms of sets of synchronous strings and finite automata accepting synchronous strings. The extension of synchronous Kleene algebra with Boolean tests is presented together with models on sets of guarded synchronous strings and the associated automata on guarded synchronous strings. Completeness w.r.t. the standard interpretations is given for each of the two new formalisms. Decidability follows from completeness. Kleene algebra with synchrony should be included in the class of true concurrency models. In this direction, a comparison with Mazurkiewicz traces is made which yields their incomparability with synchronous Kleene algebras (one cannot simulate the other). On the other hand, we isolate a class of pomsets which captures exactly synchronous Kleene algebras. We present an application to Hoare-like reasoning about parallel programs in the style of synchrony.

Keywords: Universal algebra, Kleene algebra, Boolean tests, synchrony, SCCS calculus, concurrency models, automata theory, completeness, Hoare logic
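As a rough illustration of the synchrony operation (my informal sketch, which only approximates the models of the talk): a synchronous string is a sequence of sets of basic actions, where the actions in one set fire simultaneously. The synchrony product combines two strings stepwise by taking unions of the simultaneous action sets, keeping the tail of the longer string:

```python
def sync_product(x, y):
    """Synchrony product of two synchronous strings (lists of frozensets)."""
    n = min(len(x), len(y))
    head = [x[i] | y[i] for i in range(n)]  # pointwise simultaneous union
    return head + x[n:] + y[n:]             # leftover tail of the longer string

a, b, c = frozenset('a'), frozenset('b'), frozenset('c')
# ({a}{b}) x ({c}) combines the first steps and keeps {b} as the tail
product = sync_product([a, b], [c])
```

This contrasts with interleaving concurrency, where the steps of the two strings would be shuffled rather than merged.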

A Little Goes a Long Way: Semantic-based Approaches to Publishing Archaeological Data

Leif Isaksen (University of Southampton; Host: Monika Solanki)
23 June 2010, 4pm in MA1.19

"Interoperability" is often cited as a fundamental end-goal for archaeological information systems, but the highly abstract nature of this supposed benefit sits uneasily with the task-oriented realities of day-to-day data management. The approach most frequently advocated is to increase the number of formal standards used by the system. This increases the possibilities for system integration, but raises additional barriers to entry that reduce the potential pool of systems to interoperate with. Semantic technologies in particular (and associated ontological approaches, such as the CIDOC CRM) have frequently faced accusations that the costs associated with development outweigh the perceived benefits of use.

We identify several different communities that publish Cultural Heritage information and argue that, in order to encourage contribution across the spectrum, a multi-level conception of digital semantics should be established. This is especially necessary for data-driven microproviders, the group within which most archaeologists fall, as they are currently poorly served by most of the semantic technologies developed to date.

We discuss this problem in reference to current doctoral research being undertaken in archaeological data integration. We argue that technologies that either heavily front-load or defer dealing with semantic complexity are unlikely to be viable across the user spectrum. An approach which offers multiple "pay-off points" is inherently more attractive to potential adopters.

An Overview Of Data Parallel Programming With CUDA And Accelerator

Alexander Cole (University of Leicester)
14 July 2010, 4pm in Library, Seminar Room, Ground Floor

Note the new date

CUDA is the industry de facto standard for programming general-purpose GPUs, but it is very hard to use well. Removing this barrier to entry would allow non-expert programmers to leverage the data-parallel nature of these systems to do more work faster. Accelerator is a high-level model for GPGPU programming which is significantly easier to program; it is compared here to CUDA and other systems. The high-level model also allows for speed improvements without any change to the program code.
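The contrast between the two models can be sketched in Python, purely as an analogy (neither library is used here): low-level GPGPU code is written per element, with the programmer managing thread indices, while a high-level model like Accelerator exposes whole-array operations with no explicit indexing.

```python
def saxpy_kernel(i, alpha, x, y, out):
    """CUDA-style: the body executed by 'thread' i for one element."""
    out[i] = alpha * x[i] + y[i]

def saxpy_lowlevel(alpha, x, y):
    """Low-level style: explicitly launch one kernel body per index."""
    out = [0.0] * len(x)
    for i in range(len(x)):   # on a GPU these iterations run concurrently
        saxpy_kernel(i, alpha, x, y, out)
    return out

def saxpy_highlevel(alpha, x, y):
    """Accelerator-style: one whole-array expression, no indices."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]
```

The high-level version leaves the mapping onto hardware to the runtime, which is what permits speed-ups without source changes.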

Programming robots using occam and Handel-C

Daniel Slipper (University of Leicester)
13 August 2010, 2pm in MA1.19

Note the new date

This seminar introduces a research project concerned with programming robotic systems using the occam and Handel-C languages. Since these two programming languages are based around the model of concurrency introduced by the CSP process calculus, we experiment with the development of a common program architecture, using both Field Programmable Gate Array (FPGA) and processor-based implementations. As the basis of these testbeds, a lightweight occam virtual machine is presented for use with the LEGO Mindstorms NXT embedded platform, alongside an equivalent system developed purely in hardware on a Spartan-3A FPGA.

The research highlights the integration issues overcome during development and demonstrates that designing a software program around a common structure does not necessarily guarantee identical behaviour when implemented on different platforms, despite it being possible to model this behaviour at an abstract level.