University of Leicester

informatics

PhD Seminar

The PhD Seminar is a series of presentations given by PhD and research students in computer science and mathematics from Leicester and other universities in the UK. In particular, the talks address students and staff of the University of Leicester. The seminar is open to anyone interested. A strong background in any of the subjects is not assumed.

The presentations reflect the research interests of PhD students in the department of Computer Science and are intended to stimulate the interaction between researchers in Computer Science, Software Engineering, Logic, and Mathematics.

If you are interested in giving a talk yourself or want to receive the weekly reminder, contact


Seminar details

Using Dynamic Meta Modeling for Quality Assurance

Christian Soltenborn (University of Paderborn, Germany, Host: Reiko Heckel, Tamim Ahmed Khan)
17th March 2011, 10:00 in BEN LT3

Visual, behavioral languages such as UML Activities tend to play an increasingly important role within software development processes. However, to make full use of such languages, their syntax as well as their behavioral semantics have to be defined formally - otherwise, it is not possible to automatically analyze the quality of the language itself or of sentences of the language.

One way to specify the behavioral semantics of such languages is Dynamic Meta Modeling (DMM), a semantics specification technique developed at the University of Paderborn. In my Ph.D. work I have developed several techniques that help to ensure the semantic quality of the developed artifacts throughout the complete language lifecycle. In this talk, I will give an overview of that research.

The talk consists of three parts: in the first part, I will introduce DMM itself, i.e., I will show the abstract and concrete syntax of DMM specifications as well as their semantics, which is defined by a transformation into Groove graph grammars.
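DMM defines semantics operationally, as rules that rewrite instances of the language's metamodel; for UML Activities, the rules describe tokens flowing through the activity graph. As a rough, hypothetical illustration of that token-flow idea (in Python, not DMM's actual rule notation), a rule application and its repeated execution might look like this:

```python
# Toy token-flow semantics for a linear UML Activity, in the spirit of
# DMM's operational rules. Purely illustrative; node names are made up.

# The activity as successor edges between named nodes.
edges = {"start": "validate", "validate": "ship", "ship": "final"}

def step(tokens):
    """One rule application: move every token along its outgoing edge."""
    return {node for node in (edges.get(n) for n in tokens) if node}

def run(tokens):
    """Apply the step rule until no rule matches any more."""
    trace = [tokens]
    while True:
        nxt = step(tokens)
        if not nxt or nxt == tokens:
            return trace
        tokens = nxt
        trace.append(tokens)

print(run({"start"}))
```

Analyses like those discussed later in the talk then work on the transition system induced by all such rule applications.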

A semantics specification is useless if it itself contains flaws; therefore, its quality must be ensured. To this end, I have suggested the approach of test-driven semantics specification, which is the topic of the talk's second part.

Finally, the last part discusses how the behavior of models can be analyzed with DMM. Functional requirements can be verified by formulating them in a visual language based on Business Process Patterns. Non-functional requirements are verified by translating a model's semantics into a PEPA model; this is ongoing research with Prof. Reiko Heckel.

Supercompilation by evaluation

Max Bolingbroke (University of Cambridge, UK, Host: Frank Nebel, Julien Lange)
30th March 2011, 16:00 in MA1.19

Supercompilation is a powerful program transformation technique which can be used to both automatically prove theorems about programs and greatly improve the efficiency with which they execute. Despite its remarkable power, the transformation is simple, principled and fully automatic. Supercompilation is closely related to partial evaluation, but can achieve strictly more optimising transformations.

I intend to give an introduction to supercompilation for those new to the topic, using the framework from our recently accepted Haskell Symposium paper. I will also discuss the difficulties involved in extending the algorithm to a language with recursive let bindings, and how we can use well-known techniques from operational semantics to solve them. Time allowing, I will discuss the surprising issues raised when building supercompilers for a call-by-value language.
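Since supercompilation subsumes partial evaluation, a flavour of the shared underlying idea, namely evaluating as much of a program as possible at transformation time, can be conveyed with a toy specializer. The sketch below is purely illustrative and in Python (the talk's framework targets Haskell): it unfolds a recursive power function at a statically known exponent, leaving a residual program with no recursion.

```python
def power(x, n):
    # Ordinary recursive definition: n recursive calls at run time.
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Partially evaluate `power` for a known n, unfolding the recursion
    at specialization time so the residual code contains no recursion."""
    expr = "1"
    for _ in range(n):
        expr = f"x * ({expr})"
    # Residual program, e.g. for n = 3: lambda x: x * (x * (x * (1)))
    return eval(f"lambda x: {expr}")

cube = specialize_power(3)
print(cube(2))  # agrees with power(2, 3)
```

A supercompiler goes further than this kind of unfolding: it also propagates information about unknown arguments and generalizes when it detects that unfolding would not terminate, which is where the difficulties with recursive let bindings mentioned above arise.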

Change Impact Analysis in Product-Line Architectures

Jessica Diaz (Technical University of Madrid, Spain, Host: José Fiadeiro, Frank Nebel)
20th June 2011, 14:00 in MA1.19

Change impact analysis is fundamental in software evolution, since it allows one to determine the potential effects on a system of changing requirements. While prior work has considered change impact analysis at the architectural level in general, there is a distinct lack of support for the kind of architecture used to realize software product lines, the so-called product-line architecture (PLA). In particular, prior approaches do not account for variability, a defining characteristic of software product lines.

This talk presents a new technique for change impact analysis that targets product-line architectures. The technique combines a traceability-based algorithm with a rule-based inference engine to effectively traverse modeling artifacts that account for variability. In contrast to prior approaches, it provides mechanisms for (i) specifying variability in PLAs, (ii) documenting PLA knowledge, and (iii) tracing variability between requirements and PLAs. The technique is exemplified by applying it to the analysis of requirements changes in the product-line architecture of a banking system.
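The traceability-based part of such an analysis can be pictured as a reachability computation over trace links: starting from a changed requirement, follow the links to every artifact that depends on it. A minimal sketch in Python, with hypothetical artifact names (the actual technique additionally consults a rule-based inference engine and variability information, which this sketch omits):

```python
from collections import deque

# Hypothetical trace links: each artifact maps to the artifacts that
# depend on it (requirement -> components -> further components).
trace_links = {
    "REQ-authentication": ["COMP-login", "COMP-audit"],
    "COMP-login": ["COMP-session"],
    "COMP-audit": [],
    "COMP-session": [],
}

def impact_set(changed):
    """Breadth-first traversal of the trace links from a changed artifact,
    collecting every artifact potentially affected by the change."""
    impacted, queue = set(), deque([changed])
    while queue:
        artifact = queue.popleft()
        for dependent in trace_links.get(artifact, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

print(impact_set("REQ-authentication"))
```

In a product-line setting the traversal must also decide, per trace link, whether the dependent artifact is present in the products under consideration, which is why plain reachability alone is insufficient.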


Author: J Lange, F Nebel, D Petrisan, M Birks (ck112, dlp10, mb259 @le.ac.uk).
© University of Leicester. Last modified: 17th June 2011, 10:36:31.
Informatics Web Maintainer.