The PhD Seminar is a series of presentations given by PhD and research students in computer science and mathematics from Leicester and other universities in the UK. The talks are aimed primarily at students and staff of the University of Leicester, but the seminar is open to anyone interested; no strong background in any of the subjects is assumed.
The presentations reflect the research interests of PhD students in the Department of Computer Science and are intended to stimulate interaction between researchers in Computer Science, Software Engineering, Logic, and Mathematics.
If you are interested in giving a talk or would like to receive the weekly reminder, please contact
(University of Paderborn, Germany, Host: Reiko Heckel, Tamim Ahmed Khan)
Visual, behavioral languages such as UML Activities play an increasingly important role in software development processes. However, to make full use of such languages, their syntax as well as their behavioral semantics have to be defined formally; otherwise, it is not possible to automatically analyze the quality of the language itself or of sentences of the language.
One way to specify the behavioral semantics of languages is Dynamic Meta Modeling (DMM), a semantics specification technique developed at the University of Paderborn. In my PhD thesis, I have developed several techniques that help to ensure the semantic quality of the developed artifacts throughout the complete language lifecycle. In this talk, I will give an overview of that research.
The talk consists of three parts. In the first part, I will introduce DMM itself, i.e., I will show the abstract and concrete syntax of DMM specifications as well as their semantics, given by a transformation into Groove graph grammars.
A semantics specification is useless if it is itself flawed; therefore, the quality of the specification must be ensured. To this end, I propose the approach of test-driven semantics specification, which is the topic of the talk's second part.
Finally, the last part discusses how the behavior of models can be analyzed with DMM. Functional requirements can be verified by formulating them in terms of a visual language based on Business Process Patterns. Non-functional requirements are verified by translating a model's semantics into a PEPA model; this is ongoing joint research with Prof. Reiko Heckel.
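DMM specifies behavior through graph transformation rules applied to model states. As a rough illustration of that idea only (a hypothetical token-passing rule and encoding, not the actual DMM formalism or Groove syntax), such a rule might look like this:

```python
# Minimal sketch of a graph-transformation step in the spirit of DMM
# (hypothetical rule and encoding; not the actual DMM rules or Groove syntax).

# An activity graph: nodes, control-flow edges, and the nodes currently holding a token.
graph = {
    "nodes": {"start", "action1", "action2", "end"},
    "edges": {("start", "action1"), ("action1", "action2"), ("action2", "end")},
    "tokens": {"start"},
}

def fire(graph, node):
    """Apply the 'move token' rule: if `node` holds a token and has an
    outgoing edge, consume the token and place it on the target node."""
    if node not in graph["tokens"]:
        return False  # left-hand side does not match: no token here
    for (src, tgt) in graph["edges"]:
        if src == node:
            graph["tokens"].remove(node)  # delete token from source
            graph["tokens"].add(tgt)      # create token on target
            return True
    return False

fire(graph, "start")
print(graph["tokens"])  # → {'action1'}
```

Repeatedly applying such rules yields a transition system over model states, which is what makes automated analysis of the semantics possible.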
(University of Cambridge, UK, Host: Frank Nebel, Julien Lange)
Supercompilation is a powerful program transformation technique which can be used both to automatically prove theorems about programs and to greatly improve the efficiency with which they execute. Despite its remarkable power, the transformation is simple, principled and fully automatic. Supercompilation is closely related to partial evaluation, but can achieve strictly stronger optimising transformations.
I intend to give an introduction to supercompilation for those new to the topic, using the framework from our recently accepted Haskell Symposium paper. I will also discuss the difficulties involved in extending the algorithm to a language with recursive let bindings, and how we can use well-known techniques from operational semantics to solve them. Time permitting, I will discuss the surprising issues raised when building supercompilers for a call-by-value language.
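To give a flavour of the kind of optimisation involved, here is a hand-written before/after pair of the sort a supercompiler can derive automatically: the original program builds an intermediate list, while the residual program has had that structure removed (deforestation). This is an illustrative sketch only, not the algorithm from the paper:

```python
# The kind of transformation a supercompiler performs, shown by hand
# (illustrative only; a real supercompiler derives the residual program automatically).

def square_all_then_sum(xs):
    # Original program: builds an intermediate list of squares, then sums it.
    return sum([x * x for x in xs])

def residual(xs):
    # Residual program a supercompiler could produce: the intermediate
    # list has been deforested into a single accumulating loop.
    acc = 0
    for x in xs:
        acc += x * x
    return acc

assert square_all_then_sum([1, 2, 3]) == residual([1, 2, 3]) == 14
```

Partial evaluation can specialise a program to known inputs, but removing intermediate structure between two unknown-input passes, as above, is where supercompilation goes further.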
(Technical University of Madrid, Spain, Host: José Fiadeiro, Frank Nebel)
Change impact analysis is fundamental in software evolution, since it allows one to determine the potential effects on a system of changing requirements. While prior work has considered change impact analysis at the architectural level in general, there is a distinct lack of support for the kind of architecture used to realize software product lines, the so-called product-line architecture (PLA). In particular, prior approaches do not account for variability, a defining characteristic of software product lines. This talk presents a new change impact analysis technique that targets product-line architectures. The technique combines a traceability-based algorithm with a rule-based inference engine to traverse modeling artifacts that account for variability. In contrast to prior approaches, it provides mechanisms for (i) specifying variability in PLAs, (ii) documenting PLA knowledge, and (iii) tracing variability between requirements and PLAs. The technique is exemplified by applying it to the analysis of requirements changes in the product-line architecture of a banking system.
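The traceability-based part of such an analysis can be pictured as reachability over traceability links. The following is a rough sketch under invented names (the element and link names are hypothetical, and the talk's actual technique additionally employs a rule-based inference engine and dedicated variability mechanisms):

```python
# Rough sketch of traceability-based change impact analysis
# (hypothetical element/link names; the real technique also uses a
# rule-based inference engine and explicit variability modeling).
from collections import deque

# Traceability links: each element maps to the elements it is traced to.
# "VP-" marks a variation point, "V-" its variants, "C-" components.
trace = {
    "REQ-overdraft": ["C-AccountMgr"],
    "C-AccountMgr": ["C-TransactionLog", "VP-overdraft-policy"],
    "VP-overdraft-policy": ["V-standard", "V-premium"],
    "C-TransactionLog": [],
    "V-standard": [],
    "V-premium": [],
}

def impact_set(changed, trace):
    """Elements potentially affected by a change: everything reachable
    from `changed` over traceability links (simple BFS)."""
    seen, queue = set(), deque([changed])
    while queue:
        elem = queue.popleft()
        for succ in trace.get(elem, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

print(sorted(impact_set("REQ-overdraft", trace)))
```

Here a change to the overdraft requirement flags not only the directly traced component but also, through the variation point, every variant that realizes it, which is precisely the variability-awareness that plain architectural impact analysis misses.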