
Prof. Dr. Michael Philippsen


Address

Martensstraße 3, 91058 Erlangen
Room: 05.139

Consultation Hours

  • Monday: 12:00 - 13:00, by appointment (via email)

 

Current Projects

  • Automatic Testing of Compilers


    (Own Funds)
    Project leader:
    Term: 01.01.2018 - 30.11.2028
    Acronym: AutoCompTest
    URL: https://www.ps.tf.fau.de/forschung/forschungsprojekte/autocomptest/
    Compilers for programming languages are very complex applications and their correctness is crucial: If a compiler is erroneous (i.e., if its behavior deviates from that defined by the language specification), it may generate wrong code or crash with an error message. Often, such errors are hard to detect or circumvent. Thus, users typically demand a bug-free compiler implementation.

    Unfortunately, research studies and online bug databases suggest that probably no real compiler is bug-free. Several research works therefore aim to improve the quality of compilers. Since formal verification (i.e., a proof of a compiler's correctness) is usually prohibitively expensive in practice, most recent works focus on techniques for extensively testing compilers in an automated way. For this purpose, the compiler under test is usually fed with a test program and its behavior (or that of the generated program) is checked: If the actual behavior does not match the expectation (e.g., if the compiler crashes when fed with a valid test program), a compiler bug has been found. If this testing process is to be carried out in a fully automated way, three main challenges arise:

    • Where do the test programs come from that are fed into the compiler?
    • What is the expected behavior of the compiler or its output program? How can one determine if the compiler worked correctly?
    • How can test programs that indicate an error in the compiler be prepared to be most helpful in fixing the error in the compiler?
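    For the second challenge, a common oracle is differential testing: feed the same input to several implementations and flag crashes and disagreements. The following is only an illustrative sketch of that idea, not the project's actual tooling; the implementations are modeled as plain callables (in practice they would wrap compiler invocations):

```python
def differential_test(implementations, test_input):
    """Run the same input through several implementations and compare.

    `implementations` maps a name to a callable that either returns an
    output or raises an exception (which counts as a crash).
    Returns a list of findings; an empty list means all implementations agree.
    """
    findings, outputs = [], {}
    for name, run in implementations.items():
        try:
            outputs[name] = run(test_input)
        except Exception as exc:            # a crash on valid input is a bug
            findings.append(f"{name} crashed: {exc!r}")
    if len(set(outputs.values())) > 1:      # disagreement between survivors
        findings.append(f"output mismatch: {outputs}")
    return findings
```

    A crash on a valid program or any output mismatch then becomes a candidate bug report for closer inspection.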

    While the scientific literature proposes several approaches for dealing with the second challenge (which are also already established in practice), the automatic generation of random test programs remains a challenge. If all parts of a compiler are to be tested, the test programs have to conform to all rules of the respective programming language, i.e., they have to be syntactically and semantically correct (and thus compilable). Due to the large number of rules of "real" programming languages, the generation of such compilable programs is a non-trivial task. This is further complicated by the fact that the program generation has to be as efficient as possible: Research suggests that the efficiency of such an approach significantly impacts its effectiveness -- in a practical scenario, a tool can only be used for detecting compiler bugs if it can generate many (and large) programs in a short time.

    The lack of an appropriate test program generator and the high costs associated with the development of such a tool often prevent the automatic testing of compilers in practice. Our research project therefore aims to reduce the effort for users to implement efficient program generators.

    The large programs produced by an efficient random program generator are ill-suited for debugging: typically, only a small part of the program causes the error, and as many of the other parts as possible must be removed automatically before the error can be fixed.
    This so-called test case reduction reuses the solutions mentioned above for determining the expected behavior, so it makes sense to consider both problems together.
    Test case reduction is an essential step when dealing with automatically generated programs and should be designed to process error-triggering programs from all sources.

    Unfortunately, it is often unclear which of the various methods presented in the scientific literature is best suited to a particular situation. Additionally, test case reduction can be a time-consuming process. Our research project aims to create a significant collection of unreduced test cases and to use them to compare and improve existing procedures.
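    The best-known language-agnostic starting point for such methods is delta debugging (ddmin). As a purely illustrative sketch (a simplified variant that only tests complements, unlike the full published algorithm), its core loop on a list of program fragments might look like this:

```python
def ddmin(items, triggers_bug):
    """Minimize `items` (e.g., the lines of a failing test program) while
    `triggers_bug(subset)` stays True; returns a small failing subset."""
    assert triggers_bug(items)
    n = 2                                     # current partition granularity
    while len(items) >= 2:
        chunk = len(items) // n
        subsets = [items[i:i + chunk] for i in range(0, len(items), chunk)]
        reduced = False
        for i in range(len(subsets)):
            # try dropping one chunk: keep the complement if it still fails
            complement = [x for j, s in enumerate(subsets) if j != i for x in s]
            if complement and triggers_bug(complement):
                items, n = complement, max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(items):               # already at single elements
                break
            n = min(n * 2, len(items))        # refine the partition
    return items
```

    The `triggers_bug` oracle is exactly where the solutions for detecting the expected behavior plug in.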

    In 2018, we started the development of such a tool. As input, it requires a specification of a programming language's syntactic and semantic rules by means of an abstract attribute grammar. Such a grammar allows for a short notation of the rules on a high level of abstraction. Our newly devised algorithm then generates test programs that conform to all of the specified rules. It uses several novel technical ideas to reduce its expected runtime. This way, it can generate large sets of test programs in acceptable time, even when executed on a standard desktop computer. A first evaluation of our approach not only showed that it is efficient and effective, but also that it is versatile. Our approach detected several bugs in the C compilers gcc and clang (and achieved a bug detection rate comparable to that of a state-of-the-art C program generator from the literature) as well as multiple bugs in different SMT solvers. Some of the bugs that we detected were previously unknown to the respective developers.
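    As a toy illustration of why generated programs must respect semantic rules (the real tool uses abstract attribute grammars and is far more general), the following hypothetical generator threads a symbol table through the derivation so that every variable is declared before it is used:

```python
import random

def gen_program(rng, num_stmts=5):
    """Generate a tiny straight-line program that is correct by construction:
    a variable may only be read after it has been declared (the kind of
    context-sensitive rule an attribute grammar would express)."""
    declared, stmts = [], []
    for _ in range(num_stmts):
        if not declared or rng.random() < 0.5:
            name = f"v{len(declared)}"            # fresh variable name
            stmts.append(f"int {name} = {rng.randint(0, 9)};")
            declared.append(name)
        else:
            lhs, rhs = rng.choice(declared), rng.choice(declared)
            stmts.append(f"{lhs} = {lhs} + {rhs};")
    return "\n".join(stmts)

program = gen_program(random.Random(42))
```

    Scaling this idea up to a full language while keeping the generation fast is exactly the non-trivial part discussed above.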

    In 2019, we implemented additional features for the definition of language specifications and improved the efficiency of our program generator. These two contributions considerably increased the throughput of our tool. By developing additional language specifications, we were also able to uncover bugs in compilers for the programming languages Lua and SQL. The results of our work led to a publication that we submitted at the end of 2019 (and which has since been accepted). Besides the work on our program generator, we also began working on a test case reduction technique. It reduces the size of a randomly generated test program that triggers a compiler bug, since this eases the search for the bug's root cause.

    In 2020, we focused on language-agnostic techniques for the automatic reduction of test programs. The scientific literature has proposed different reduction techniques, but since there is no conclusive comparison of these techniques yet, it is still unclear how efficient and effective the proposed techniques really are. We identified two main reasons for this, which also hamper the development and evaluation of new techniques. Firstly, the available implementations of the proposed reduction techniques use different implementation languages, program representations and input grammars. Therefore, a fair comparison of the proposed techniques is almost impossible with the available implementations. Secondly, there is no collection of (still unreduced) test programs that can be used for the evaluation of reduction techniques. As a result, the published techniques have only been evaluated with only a few test programs each, which compromises the significance of the published results. Furthermore, since some techniques have only been evaluated with test programs in a single programming language, it is still unclear how well these techniques generalize to other programming languages (i.e., how language-agnostic they really are). To close these gaps, we initiated the development of a framework that contains implementations of the most important reduction techniques and that enables a fair comparison of these techniques. In addition, we also started to work on a benchmark that already contains about 300 test programs in C and SMT-LIB 2 that trigger about 100 different bugs in real compilers. This benchmark not only enables conclusive comparisons of reduction techniques but also reduces the work for the evaluation of future techniques. First experiments already showed that there is no reduction technique yet that performs best in all cases.

    In the same year, we also investigated how the random program generator that has been developed in the context of this research project can be extended to detect not only functional bugs but also performance problems in compilers. A new technique has been developed within a thesis that first generates a set of random test programs and then applies an optimization technique to gradually mutate these programs. The goal is to find programs for which the compiler under test has a considerably higher runtime than a reference implementation. First experiments have shown that this approach can indeed detect performance problems in compilers.
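    The mutation-based search can be pictured as a simple hill climb over the runtime ratio between the compiler under test and a reference. This sketch is an illustration under assumed interfaces (timing functions and a mutation operator passed in as callables), not the thesis implementation:

```python
import random

def find_slow_input(seed_programs, mutate, time_under_test, time_reference,
                    rounds=100, rng=None):
    """Hill-climb towards inputs on which the compiler under test is much
    slower than a reference: keep a mutant whenever it raises the ratio."""
    rng = rng or random.Random(0)
    best = max(seed_programs,
               key=lambda p: time_under_test(p) / time_reference(p))
    best_ratio = time_under_test(best) / time_reference(best)
    for _ in range(rounds):
        cand = mutate(best, rng)               # gradually mutate the program
        ratio = time_under_test(cand) / time_reference(cand)
        if ratio > best_ratio:                 # greedily keep improvements
            best, best_ratio = cand, ratio
    return best, best_ratio
```

    In a real setting, the timing functions would measure compile (or run) times of the two compilers on the candidate program.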

    In 2021, we finished the implementation of the most important test case reduction techniques from the scientific literature as well as the construction of a benchmark for their evaluation. Building upon our framework and benchmark, we also conducted a quantitative comparison of the different techniques; to the best of our knowledge, this is by far the most extensive and conclusive comparison of the available reduction techniques to date. Our results show that there is no reduction technique yet that performs best in all cases. Furthermore, we detected that there are possible outliers for each technique, both in terms of efficiency (i.e., how quickly a reduction technique is able to reduce an input program) and effectiveness (i.e., how small the result of a reduction technique is). This indicates that there is still room for future work on test case reduction, and our results give some insights for the development of such future techniques. For example, we found that the hoisting of nodes in a program's syntax tree is mandatory for the generation of small results (i.e., to achieve a high effectiveness) and that an efficient procedure for handling list structures in the syntax tree is necessary. The results of our work led to a publication submitted and accepted in 2021.

    Also in 2021, we investigated whether and how the effectiveness of our program generator can be increased by considering the coverage of the input grammar during the generation. To this end and within a thesis, several context-free coverage metrics from the scientific literature have been adapted, implemented and evaluated. The results showed that the correlation between the coverage w.r.t. a context-free coverage metric and the ability to detect bugs in a compiler is rather limited. Therefore, more advanced coverage metrics that also consider context-sensitive, semantic properties should be evaluated in future work.

    In 2022, we initiated the development of a new framework for the implementation of language-adapted reduction techniques. This framework introduces a novel domain-specific language (DSL) that allows the specification of reduction techniques in a simple and concise way. The framework and the developed DSL make it possible to easily adapt existing reduction techniques to the peculiarities and requirements of a specific programming language. It is our hope that such language-adapted reduction techniques can be even more efficient and effective than the existing, language-agnostic reduction techniques. In addition, the developed framework should also reduce the effort for the development of future reduction techniques; this way, our framework could make a valuable contribution to the research in this area.

    In 2023, the focus of the research project was on list structures, which had already been briefly addressed in 2021:
    Almost all methods investigated since 2021 group nodes of the syntax tree into lists in order to select only the necessary nodes from these lists using a list reduction. Our experiments have shown that in some cases 70% or more of the reduction time is spent on lists with more than 2 elements. These lists are relevant because the scientific literature offers several list reduction methods, which, however, do not differ for lists with 2 or fewer elements. Since such lists account for a large fraction of the time, we worked on integrating these different list reduction methods into our implementations of the major reduction methods developed in 2020/2021. In addition to the methods found in the literature, we also considered methods that are only described on a website or whose source code is freely accessible.
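    The simplest list reduction variant tries to drop one element at a time and keeps every drop that preserves the failure; published methods differ mainly in how they batch these attempts for lists with more than 2 elements. A minimal sketch of the one-at-a-time variant, for illustration only:

```python
def one_pass_list_reduction(items, still_fails):
    """Greedily try to drop one list element at a time, keeping a drop
    whenever the bug still triggers.  Returns the shrunken list."""
    i = 0
    while i < len(items):
        candidate = items[:i] + items[i + 1:]
        if candidate and still_fails(candidate):
            items = candidate        # keep the smaller list, retry same index
        else:
            i += 1                   # this element is needed, move on
    return items
```

    More sophisticated methods remove whole chunks first and fall back to single elements, which is exactly where their runtime behavior starts to diverge.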

    We also investigated how a list reduction can be interrupted at one point and resumed later. The idea was to reduce another list in the meantime, based on a prioritization, so that the list with the greater impact on the reduction always comes first. In some cases, the hoped-for speedup occurred, but questions remain that require further experiments with prioritizing reducers and interrupted list reduction methods.

    In 2024, we successfully published the first results from the list reduction study: Replacing the list reductions can accelerate established reduction techniques by up to 74.7%. As expected, techniques that generate long lists benefit most from the change. We also found that reordering the list elements can save up to 44.1% of the runtime. However, two aspects reduce the effectiveness of reordering:

    1. The textual order in which the list elements are usually lined up is already quite a good order.
    2. The same aspects that make a list procedure fast make it less sensitive to the order.

    In two final theses we investigated two more aspects:

    1. The tool developed from 2018 to 2021 for generating test programs uses the compiler under test only as a so-called “black box”, i.e., it generates programs without accessing any information from the tested compiler. The thesis used coverage information from the tested compiler to improve the generated programs.
    2. Caching the results of the reductions saves time, as the compiler under test does not need to re-execute reduction candidates. However, naive implementations of these caches become very large. In 2023, a special caching method was introduced that can reduce the size of the cache by about 90%. The thesis dealt with the fact that unfortunately the original caching method was not suitable for all the reduction methods in our framework.
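    The idea behind such a cache can be sketched as a generic memoization of oracle verdicts, keyed by a content hash so that only a digest per candidate is stored (this is an illustration, not the specific 2023 compression scheme):

```python
import hashlib

class OracleCache:
    """Memoize the verdicts of the (expensive) compiler-under-test oracle.

    Reduction methods often re-generate identical candidate programs; a
    cache hit avoids re-executing the compiler under test on them.
    """
    def __init__(self, oracle):
        self.oracle = oracle
        self.verdicts = {}           # content hash -> bool verdict
        self.misses = 0

    def still_fails(self, program_text):
        key = hashlib.sha256(program_text.encode()).digest()
        if key not in self.verdicts:
            self.misses += 1                      # real oracle run needed
            self.verdicts[key] = self.oracle(program_text)
        return self.verdicts[key]
```

    The difficulty addressed by the thesis is that some reduction methods interact with such a cache in ways a naive scheme does not anticipate.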

    In 2025, we investigated two under-explored questions in test case reduction:
    1. First, we compared the language-agnostic reducers from our 2021 framework with the language-specific reducer CReduce. Overall, CReduce tends to produce smaller outputs but takes longer per run; however, its built-in parallelization offsets some of the slowdown and keeps it competitive.
    2. Second, we examined how the size of intermediate artifacts produced during reduction relates to total reduction time. While smaller files are generally processed faster, size alone does not determine runtime; other factors also play a role.

    In addition, two theses explored the following:
    1. Integrating our 2018-2021 test-program generator with Bonsai Fuzzing to keep generated programs small by default. This approach was less successful than hoped because Bonsai Fuzzing relies on significantly slower oracles and has a linearly increasing memory requirement that exceeds commonly available resources.
    2. Examining how the grammar used to build syntax trees affects reduction properties. By transforming grammars without changing the accepted language (and thereby altering syntax trees in predictable ways), the thesis showed that test-case reducers generally rely on specific grammatical features, with some tools far more sensitive than others. When those features change, results can shift substantially. These findings may help explain why reducer characteristics differ so widely across languages and sources.
  • OpenMP for reconfigurable heterogeneous architectures


    (Third Party Funds Group – Sub project)
    Overall project: OpenMP für rekonfigurierbare heterogene Architekturen
    Project leader:
    Term: 01.11.2017 - 31.12.2023
    Acronym: ORKA
    Funding source: Bundesministerium für Forschung, Technologie und Raumfahrt (BMFTR)
    URL: https://www.ps.tf.fau.de/forschung/forschungsprojekte/orka/
    High-Performance Computing (HPC) is an important component of Europe's capacity for innovation and it is also seen as a building block of the digitization of European industry. Reconfigurable technologies such as Field Programmable Gate Array (FPGA) modules are gaining in importance due to their energy efficiency, performance, and flexibility.
    There is also a trend towards heterogeneous systems with accelerators utilizing FPGAs. The great flexibility of FPGAs allows for a large class of HPC applications to be realized with them. However, FPGA programming has mainly been reserved for specialists, as it is very time-consuming. For that reason, the use of FPGAs in scientific HPC is still rare today.
    In the HPC environment, there are various programming models for heterogeneous systems offering certain types of accelerators. Common models include OpenCL (http://www.opencl.org), OpenACC (https://www.openacc.org), and OpenMP (https://www.OpenMP.org). These standards, however, are not yet available for use with FPGAs.

    The goals of the ORKA project are:
    1. Development of an OpenMP 4.0 compiler targeting heterogeneous computing platforms with FPGA accelerators in order to simplify the usage of such systems.
    2. Design and implementation of a source-to-source framework transforming C/C++ code with OpenMP 4.0 directives into executable programs utilizing both the host CPU and an FPGA.
    3. Utilization (and improvement) of existing algorithms mapping program code to FPGA hardware.
    4. Development of new (possibly heuristic) methods to optimize programs for inherently parallel architectures.

    In 2018, the following important contributions were made:
    • Development of a source-to-source compiler prototype for the rewriting of OpenMP C source code (cf. goal 2).
    • Development of an HLS compiler prototype capable of translating C code into hardware. This prototype later served as starting point for the work towards the goals 3 and 4.
    • Development of several experimental FPGA infrastructures for the execution of accelerator cores (necessary for the goals 1 and 2).
    In 2019, the following significant contributions were achieved:
    • Publication of two peer-reviewed papers: "OpenMP on FPGAs - A Survey" and "OpenMP to FPGA Offloading Prototype using OpenCL SDK".
    • Improvement of the source-to-source compiler in order to properly support OpenMP-target-outlining for FPGA targets (incl. smoke tests).
    • Completion of the first working ORKA-HPC prototype supporting a complete OpenMP-to-FPGA flow.
    • Formulation of a genome for the pragma-based genetic optimization of the high-level synthesis step during the ORKA-HPC flow.
    • Extension of the TaPaSCo composer to allow for hardware synchronization primitives inside of TaPaSCo systems.
    In 2020, the following significant contributions were achieved:
    • Improvement of the Genetic Optimization.
    • Engineering of a Docker container for reliable reproduction of results.
    • Integration of software components from project partners.
    • Development of a plugin architecture for Low-Level-Platforms.
    • Implementation and integration of two LLP plugin components.
    • Broadening of the accepted subset of OpenMP.
    • Enhancement of the test suite.
    In 2021, the following significant contributions were achieved:
    • Enhancement of the benchmark suite.
    • Enhancement of the test suite.
    • Successful project completion with live demo for the project sponsor.
    • Publication of the paper "ORKA-HPC - Practical OpenMP for FPGAs".
    • Release of the source code and the reproduction package.
    • Enhancement of the accepted OpenMP subset with new clauses to control the FPGA related transformations.
    • Improvement of the Genetic Optimization.
    • Comparison of the estimated performance data given by the HLS and the real performance.
    • Synthesis of a linear regression model for performance prediction based on that comparison.
    • Implementation of an infrastructure for the translation of OpenMP reduction clauses.
    • Automated translation of the OpenMP pragma `parallel for` into a parallel FPGA system.
    In 2022, the following significant contributions were achieved:
    • Generation and publication of an extensive dataset on HLS area estimates and actual performance.
    • Creation and comparative evaluation of different regression models to predict actual system performance from early (area) estimates.
    • Evaluation of the area estimates generated by the HLS.
    • Publication of the paper “Reducing OpenMP to FPGA Round-trip Times with Predictive Modelling”.
    • Development of a method to detect and remove redundant read operations in FPGA stencil codes based on the polyhedral model.
    • Implementation of the method for ORKA-HPC.
    • Quantitative evaluation of that method to show the strength of the method and to show when to use it.
    • Publication of the paper “Employing Polyhedral Methods to Reduce Data Movement in FPGA Stencil Codes”.
    In 2023, the following significant contributions were achieved:
    • Development and implementation of an optimization method for canonical loop shells (e.g. from OpenMP target regions) for FPGA hardware generation using HLS. The core of the method is a loop restructuring based on the polyhedral model that uses loop tiling, pipeline processing, and port widening to avoid unnecessary data transfers from/to the onboard RAM of the FPGA, increase the number of parallel active circuits, maximize data throughput to FPGA board RAM, and hide read/write latencies.
    • Quantitative evaluation of the strengths and application areas of this optimization method using ORKA-HPC.
    • Publication of the method in the conference paper "Employing polyhedral methods to optimize stencils on FPGAs with stencil-specific caches, data reuse, and wide data bursts".
    • Publication of a reproduction package for the optimization method.
    • Presentation of the method at the conference "14th International Workshop on Polyhedral Compilation Techniques" in a half-hour talk.
    • Development of a method for the fully automatic integration of multi-purpose caches into FPGA solutions generated from OpenMP.
    • Evaluation of multi-purpose caches in combination with HLS generated hardware blocks.
    • Publication of the paper "Multipurpose Cacheing to Accelerate OpenMP Target Regions on FPGAs" (Best Paper Award).
    In 2024, the following significant contributions were achieved:
    • Adaptation of several already published caching approaches to offloaded OpenMP codes and integration of the methods into ORKA-HPC
    • Development and evaluation of novel multi-layer caches for HLS kernels
    • Publication of the results in the publication “Multilayer Multipurpose Caches for OpenMP Target Regions on FPGAs” and presentation of the work at IWOMP 2024 in Perth
  • Software Watermarking


    (Own Funds)
    Project leader:
    Term: 01.01.2016 - 30.11.2028
    Acronym: SoftWater
    URL: https://www.ps.tf.fau.de/forschung/forschungsprojekte/softwater/
    Software watermarking means hiding selected features in code in order to identify it or prove its authenticity. This is useful for fighting software piracy, but also for checking the correct distribution of open-source software (such as projects under GNU licenses). Previously proposed methods assume that the watermark can be introduced at the time of software development and require the understanding and input of the author for the embedding process. The goal of our research is the development of a watermarking framework that automates this process by introducing the watermark during the compilation phase into newly developed or even into legacy code. As a first approach, we studied a method that is based on symbolic execution and function synthesis.
    In 2018, two bachelor theses analyzed two methods of symbolic execution and function synthesis in order to determine the most appropriate one for our approach.
    In 2019, we investigated the idea to use concolic execution in the context of the LLVM compiler infrastructure in order to hide a watermark in an unused register. Using a modified register allocation, one register can be reserved for storing the watermark.
    In 2020, we extended the framework (now called LLWM) for automatically embedding software watermarks into source code (based on the LLVM compiler infrastructure) with further dynamic methods. The newly introduced methods rely on replacing/hiding jump targets and on call graph modifications.
    In 2021, we added further adapted dynamic methods that have already been published, as well as a newly developed method, to LLWM. The added methods are based, among other things, on the conversion of conditional constructs into semantically equivalent loops or on the integration of hash functions that leave the functionality of the program unchanged but increase its resilience. Our newly developed method IR-Mark no longer only selects the functions in which the code generator avoids using a certain register; it now also adds dynamic computations of fake values that use this register to blur what is going on. There is a publication on both LLWM and IR-Mark.
    In 2022, we added another adapted procedure to the LLWM framework. The method uses exception handling to hide the watermark.
    In 2023, we adapted more methods to expand the LLWM framework. These include embedding techniques based on principles of number theory and aliasing.
    In 2024, we developed three new watermarking techniques: Register Expansion, SemaCall, and SideData. They construct hash-like arithmetics that generate a watermarking message from a secret key. The first two techniques have been published in the paper "Register Expansion and SemaCall: 2 Low-overhead Dynamic Watermarks Suitable for Automation in LLVM" in the proceedings of the CheckMATE'24 workshop in Salt Lake City.
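    Such "hash-like arithmetics" can be pictured as deriving a deterministic bit sequence from a secret key and hiding code that recomputes it at runtime. The sketch below is an illustrative stand-in (a simple linear congruential step), not the published Register Expansion, SemaCall, or SideData constructions:

```python
def watermark_bits(secret_key, length):
    """Derive a deterministic watermark message from a secret key.

    The embedder hides code computing the same sequence inside the
    watermarked program; extraction checks that the sequence reappears.
    """
    state, bits = secret_key & 0xFFFFFFFF, []
    for _ in range(length):
        # hash-like arithmetic step (illustrative LCG constants)
        state = (1103515245 * state + 12345) & 0xFFFFFFFF
        bits.append((state >> 16) & 1)
    return bits
```

    Resilience then hinges on how inconspicuously this computation can be woven into the program's own data flow.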
    In 2025, the extended paper "Register Expansion, SemaCall, and SideData: Three Low-Overhead Dynamic Watermarks Suitable for Automation in LLVM" was published in the DTRAP journal. We developed a new technique that embeds the watermark by means of an undecidable problem. We are working on new automated attacking techniques based on LLMs (Large Language Models) and test case reducers that allow to empirically evaluate the resilience of watermarking techniques.
  • Automatic grading of Java and Scala homework assignments


    (Own Funds)
    Project leader: ,
    Term: 18.07.2013 - 30.11.2028
    Acronym: AuDoscore/ScExFuSS
    URL: https://www.ps.tf.fau.de/forschung/forschungsprojekte/audoscore-scexfuss/

    Many students practice object-oriented or functional programming early on by independently implementing homework assignments. The huge number of participants and diverging approaches to solving problems make it difficult for lecturers to grade homework assignments (often exam requirements) according to a uniform standard.
    That is why, in 2013, we developed an extension of JUnit (at that time based on Java-1.7, JUnit-4, and Scala-2.12), whose source code we publish at https://github.com/FAU-Inf2/AuDoscore (Java) and https://github.com/FAU-Inf2/ScExFuSS (Scala). Annotations assign a bonus or penalty score to test cases. The results of the test execution are recorded and used to calculate a total score fully automatically. The evaluation is carried out in four stages, each of which immediately provides detailed feedback if necessary.
    In 2025, we completely redesigned AuDoscore and ScExFuSS after the abrupt evolution of Java, JUnit, and Scala rendered key components unusable that could no longer be kept working through constant adjustments. Since Java-25, the SecurityManager has been disabled as a security infrastructure. The severe restrictions imposed on the Java compiler API rendered it unusable for our purposes. Due to syntactic changes to the source and byte code, the previous pattern-based problem detection became non-deterministic. Newer JUnit versions have fundamentally different extension mechanisms (that are incompatible with the old ones).
    Among others, this raised the following questions:
    - How can we reliably prevent students from (un)intentionally disrupting the assessment system itself (previously through SecurityManager)?
    - How can we detect when students use explicitly prohibited API functions (declared via @Forbidden/@NotForbidden annotations)?
    - How can we take into account in the assessment that students implement functions that build on each other incorrectly (consequential errors)?
    - How can we integrate AuDoscore and ScExFuSS into the latest JUnit infrastructure?
    To solve those problems in AuDoscore, we now use the "Classfile Package" from the Java-25 API. As a replacement for the SecurityManager and to implement the "@[Not]Forbidden" annotations, we use it to directly examine the bytecode for dangerous or prohibited function calls. To avoid consequential errors, we transplant classes, methods, or fields from the bytecode of the sample solution into the student's solution. This involves dealing with many difficult special cases (e.g. due to "type erasure", "lambdas", and many more), for which we may also transfer parts of the bytecode that are not directly in the code block of the method to be replaced and retranslate the tests appropriately for each test case.
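    The Java "Classfile" check has a rough Python analogue: statically scanning compiled bytecode for uses of prohibited functions. This sketch uses CPython's `dis` module only to illustrate the principle; AuDoscore itself inspects JVM class files, and the student function here is hypothetical:

```python
import dis

def uses_forbidden(func, forbidden):
    """Return the set of forbidden global names that `func`'s bytecode loads.

    Loading a name is a conservative proxy for calling it; a real checker
    would follow call instructions and method references instead.
    """
    loaded = {ins.argval for ins in dis.get_instructions(func)
              if ins.opname in ("LOAD_GLOBAL", "LOAD_NAME")}
    return loaded & set(forbidden)

def student_solution(xs):
    return sorted(xs)      # suppose sorted() is prohibited for this task

# uses_forbidden(student_solution, {"sorted", "eval"}) -> {"sorted"}
```

    Because the check runs on bytecode rather than on source text, it cannot be fooled by unusual formatting or aliasing of names in the source.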
    To solve the above questions in ScExFuSS, we currently use the TASTy files (Typed Abstract Syntax Trees) generated by the compiler using the "Scala-3 Tasty Inspector". As a replacement for the SecurityManager and to handle the "@[Not]Forbidden" annotations, we statically check which functions are actually used. The consequential error handling based on the TASTy files is now also available for Scala for the first time.
    For the purpose of migrating to JUnit-6, we ported AuDoscore, ScExFuSS, and all tests to JUnit-Jupiter. The "hooking" into the entire test execution process and the logging of evaluation events was implemented from scratch. As a result, we also updated all existing self-tests and added new ones to ensure that all changes and all new language features of Java-25, Scala-3, and JUnit-6 are handled correctly.

Completed Projects




Ausgewählte Kapitel aus dem Übersetzerbau

Title Ausgewählte Kapitel aus dem Übersetzerbau
Short text inf2-ueb3
Module frequency winter semester only
Semester hours per week 2

No registration is required.

The lecture covers aspects of compiler construction that go beyond the lectures "Grundlagen des Übersetzerbaus" and "Optimierungen in Übersetzern".
Planned topics include:

  • Compilers and optimizations for functional programming languages
  • Compilation of aspect-oriented programming languages
  • Race condition detection
  • Software watermarking
  • Static analysis and symbolic execution
  • Linking object code and support for dynamic libraries
  • Exception handling strategies
  • Just-in-time compilers
  • Memory management and garbage collection
  • LLVM

Course materials are provided via StudOn.

Group 1

Date and time: weekly, Fri 10:15 - 11:45, 17.10.2025 - 06.02.2026 (cancelled: 19.12.2025, 26.12.2025, 02.01.2026)
Lecturers:
  • Prof. Dr. Michael Philippsen
  • Tobias Heineken
  • David Schwarzbeck
  • Lukas Rotsching
Room: 11302.02.133

Grundlagen des Übersetzerbaus

Title Grundlagen des Übersetzerbaus
Short text inf2-ueb
Module frequency winter semester only
Semester hours per week 2

Successful completion of the exercises is a prerequisite for taking the module examination.

Group 1

Date and time: weekly, Wed 12:15 - 13:45, 15.10.2025 - 04.02.2026 (cancelled: 24.12.2025, 31.12.2025)
Lecturer:
  • Prof. Dr. Michael Philippsen
Room: 11301.00.005

Parallele und Funktionale Programmierung

Title Parallele und Funktionale Programmierung
Short text PFP
Module frequency only in the winter semester
Semester hours per week 2

Course materials are provided via StudOn.

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Tue, 12:15 - 13:45 14.10.2025 - 03.02.2026 06.01.2026
30.12.2025
23.12.2025
  • Prof. Dr. Michael Philippsen
  • Dr.-Ing. Norbert Oster
11907.01.030

Machine Learning: Advances

Title Machine Learning: Advances
Short text SemML-II
Module frequency only in the winter semester
Semester hours per week 2

Registration with a topic request via email before the seminar starts; topics are assigned on a first-come, first-served basis.

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
by appointment - -
  • Tobias Feigl
Single session Thu, 14:00 - 15:00 23.10.2025 - 23.10.2025 11302.04.150
Block course (incl. Sat) Sat, 09:00 - 16:00 03.01.2026 - 28.03.2026 06.01.2026
05.01.2026
03.01.2026

Machine Learning: Introduction

Title Machine Learning: Introduction
Short text SemML-I
Module frequency only in the winter semester
Semester hours per week 2

Registration with a topic request via email before the seminar starts; topics are assigned on a first-come, first-served basis.

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
by appointment - -
  • Tobias Feigl
Single session Thu, 14:00 - 15:00 23.10.2025 - 23.10.2025 11302.04.150
Block course (incl. Sat) Sat, 09:00 - 16:00 03.01.2026 - 28.03.2026 05.01.2026
06.01.2026
03.01.2026

Begleitseminar zu Bachelor- und Masterarbeiten

Title Begleitseminar zu Bachelor- und Masterarbeiten
Short text inf2-bs-bama
Module frequency every semester
Semester hours per week 3

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Mon, 12:15 - 13:45 13.10.2025 - 02.02.2026 22.12.2025
05.01.2026
29.12.2025
  • Prof. Dr. Michael Philippsen
11302.04.150

Übungen zu Ausgewählte Kapitel aus dem Übersetzerbau

Title Übungen zu Ausgewählte Kapitel aus dem Übersetzerbau
Short text inf2-ueb3-ex
Module frequency only in the winter semester
Semester hours per week 2

Block course, by appointment, after the lecture period.

The exercises for Übersetzerbau 3 complement the lecture. Among other things, the lecture examines the architecture and inner workings of a virtual machine. The exercises put this into practice: in a block course, the students implement a small virtual machine themselves. The starting point is reading in the bytecode; the end result is a working, optimizing just-in-time compiler.
Course materials are provided via StudOn.
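The interpreter that forms the starting point of such a virtual machine can be sketched as a tiny stack machine. The opcodes and the program encoding below are invented for illustration and are not the actual bytecode format used in the course.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of a stack-based virtual machine: a flat int array holds
// the bytecode, a program counter walks it, and an operand stack carries
// intermediate values.
public class TinyVM {
    // Invented opcodes: PUSH <n> places a constant, ADD/MUL pop two values
    // and push the result, HALT stops and yields the top of stack.
    public static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    public static int run(int[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (true) {
            switch (code[pc++]) {
                case PUSH -> stack.push(code[pc++]);
                case ADD  -> stack.push(stack.pop() + stack.pop());
                case MUL  -> stack.push(stack.pop() * stack.pop());
                case HALT -> { return stack.pop(); }
                default   -> throw new IllegalStateException("unknown opcode");
            }
        }
    }

    public static void main(String[] args) {
        // Encodes (2 + 3) * 4.
        int[] program = {PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT};
        System.out.println(run(program)); // prints 20
    }
}
```

A just-in-time compiler replaces this dispatch loop by translating hot bytecode sequences into native code before executing them.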

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
Block course Mon, 09:00 - 16:00 23.02.2026 - 27.02.2026
  • Tobias Heineken
  • David Schwarzbeck
11302.01.153

Übungen zu Grundlagen des Übersetzerbaus

Title Übungen zu Grundlagen des Übersetzerbaus
Short text inf2-ueb-ex
Module frequency only in the winter semester
Semester hours per week 2

In the exercises, the compiler-construction concepts and techniques presented in the lecture are put into practice. The goal is to implement a working compiler for the example programming language e2 by the end of the semester. The additional knowledge required for this (e.g., basics of x86-64 assembly) is taught in the classroom exercises. The milestones to be reached over the course of the semester are listed in the StudOn entry of the lecture. Course materials are provided via StudOn.
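As a rough illustration of the code-generation step practiced in such a compiler, the hypothetical snippet below lowers a minimal expression tree to x86-64 assembly in AT&T syntax, using a simple stack-machine discipline. The node types are invented; the real language e2 and the expected compiler structure are more involved.

```java
// Sketch of expression-tree code generation for x86-64 (AT&T syntax):
// every sub-expression leaves its result on the runtime stack, and Add
// pops two values, adds them, and pushes the sum.
public class ExprCodeGen {
    public interface Expr {}
    public record Num(int value) implements Expr {}
    public record Add(Expr left, Expr right) implements Expr {}

    public static String gen(Expr e) {
        StringBuilder sb = new StringBuilder();
        emit(e, sb);
        sb.append("    popq %rax\n"); // final result ends up in %rax
        return sb.toString();
    }

    private static void emit(Expr e, StringBuilder sb) {
        if (e instanceof Num n) {
            sb.append("    pushq $").append(n.value()).append("\n");
        } else if (e instanceof Add a) {
            emit(a.left(), sb);   // left operand on the stack
            emit(a.right(), sb);  // right operand on the stack
            sb.append("    popq %rbx\n")        // right -> %rbx
              .append("    popq %rax\n")        // left  -> %rax
              .append("    addq %rbx, %rax\n")  // %rax = left + right
              .append("    pushq %rax\n");
        }
    }

    public static void main(String[] args) {
        // Emits assembly for 2 + 3.
        System.out.print(gen(new Add(new Num(2), new Num(3))));
    }
}
```

Real compilers replace this naive push/pop scheme with register allocation, which is exactly one of the later milestones in a compiler lab.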

Parallel group 1

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Tue, 12:15 - 13:45 14.10.2025 - 03.02.2026 23.12.2025
06.01.2026
30.12.2025
  • David Schwarzbeck
11302.00.152


Parallel group 2

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Fri, 12:15 - 13:45 17.10.2025 - 06.02.2026 26.12.2025
02.01.2026
19.12.2025
  • David Schwarzbeck
  • Tobias Heineken
11302.02.133


Parallel group 3

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Tue, 16:15 - 17:45 14.10.2025 - 03.02.2026 06.01.2026
23.12.2025
30.12.2025
  • Tobias Heineken
11302.02.134

Übungen zu Parallele und Funktionale Programmierung

Title Übungen zu Parallele und Funktionale Programmierung
Short text UePFP
Module frequency only in the winter semester
Semester hours per week 2

Parallel group 1

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Wed, 08:15 - 09:45 15.10.2025 - 04.02.2026 31.12.2025
24.12.2025
11302.02.133

Parallel group 2

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Fri, 14:15 - 15:45 17.10.2025 - 06.02.2026 02.01.2026
19.12.2025
26.12.2025
11302.02.133

Parallel group 3

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Mon, 10:15 - 11:45 13.10.2025 - 02.02.2026 29.12.2025
22.12.2025
05.01.2026
11302.02.133

Parallel group 4

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Wed, 16:15 - 17:45 15.10.2025 - 04.02.2026 31.12.2025
24.12.2025
  • Dr.-Ing. Norbert Oster
  • Ludwig Schmotzer
11302.02.133

Parallel group 5

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Tue, 14:15 - 15:45 14.10.2025 - 03.02.2026 30.12.2025
06.01.2026
23.12.2025
  • Dr.-Ing. Norbert Oster
  • Ludwig Schmotzer
11302.00.152

Parallel group 6

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Wed, 10:15 - 11:45 15.10.2025 - 04.02.2026 24.12.2025
31.12.2025
11302.02.133

Parallel group 12

Maximum number of participants: 25

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Fri, 14:15 - 15:45 17.10.2025 - 06.02.2026 19.12.2025
26.12.2025
02.01.2026
11302.00.153

Parallel group 13

Maximum number of participants: 25

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Tue, 14:00 - 16:00 14.10.2025 - 03.02.2026 30.12.2025
06.01.2026
23.12.2025
11302.00.153

Parallel group 14

Maximum number of participants: 25

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Wed, 14:00 - 16:00 15.10.2025 - 04.02.2026 24.12.2025
31.12.2025
11302.00.153

Parallel group 15

Maximum number of participants: 25

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Thu, 16:00 - 18:00 16.10.2025 - 05.02.2026 01.01.2026
25.12.2025
  • Dr.-Ing. Norbert Oster
11302.00.153

Parallel group 11

Maximum number of participants: 25

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Wed, 10:00 - 12:00 15.10.2025 - 04.02.2026 31.12.2025
24.12.2025
11302.00.153

Parallel group 7

Maximum number of participants: 40

Link to Campo

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Thu, 08:15 - 09:45 16.10.2025 - 05.02.2026 01.01.2026
25.12.2025
11302.02.133

Parallel group 16

Date and Time Start date - End date Cancellation date Lecturer(s) Comment Room
weekly Thu, 10:15 - 11:45 16.10.2025 - 05.02.2026 25.12.2025
01.01.2026
  • Dr.-Ing. Norbert Oster
  • Ludwig Schmotzer
14201.00.001

 


  • (Secondary Application: WO2013091908)
    Inventor(s): , ,
  • (Secondary Application: WO2013091907)
    Inventor(s): , ,
  • (Priority Patent Application: DE102011089180)
    Inventor(s): , ,
  • (Priority Patent Application: DE102011089181)
    Inventor(s): , ,
  • (Priority Patent Application: EP2469496 (EP10196851))
    Inventor(s): ,

Programme-/Steering Committees

Working groups, commissions, committees

Current:

Former:

  • Member of the appointment committee W3 Data and Software Engineering (successor to Leis), 12/2022-08/2024.
  • Member of the appointment committee W3 Computer Science (Systems Software), Friedrich-Schiller-Universität (FSU) Jena, 01/2020-06/2022.
  • Chair of the appointment committee W2 Didactics of Computer Science (successor to Romeike), 11/2018-11/2019.
  • Acting head of the Professorship for Didactics of Computer Science, 10/2018-11/2019.
  • Member of the study commission Computer Science, 11/2018-11/2019.
  • Member of the board of the Center for Teacher Education, 10/2018-11/2019.
  • Member of the appointment committee W3 Experimental Astroparticle Physics (successor to Anton), 12/2017-05/2019.
  • Member of the appointment committee W3 Visual Computing (successor to Greiner), 06/2016-12/2017.
  • Member of the room and building commission of the Faculty of Engineering, 10/2013-04/2015.
  • Deputy spokesman of the collegial management of the Department of Computer Science, 10/2011-09/2013.
  • Acting head of the Professorship for Didactics of Computer Science, 08/2012-09/2013.
  • Chair of the appointment procedure W2 professorship Didactics of Computer Science (successor to Brinda), 07/2012-02/2013.
  • Member of the examination board MA International Information Systems, 12/2008-11/2019.
  • Member of the appointment committee W1 professorship for Digital Sports, 12/2009-02/2011.
  • Member of the appointment committee W3 professorship for IT Security Infrastructures, 10/2009-12/2010.
  • Secretary of the appointment committee W2 professorship Open Source Software, 07/2008-09/2009.
  • External member of the appointment committee W3 professorship for Software Systems at the University of Passau, 06/2008-10/2008.
  • Member of the Senate and the University Council of the Friedrich-Alexander-Universität, 10/2007-09/2009.
  • Member of the faculty council of the Faculty of Engineering, 10/2004-09/2009.
  • Member of the commission for the distribution and use of the computer-science tuition fees, 11/2006-09/2009 (for the Computer Science program: 11/2006-09/2007; for the IuK program: 10/2007-09/2009, 05/2010-09/2010).
  • Member of the appointment committee W2 professorship for technical-scientific high-performance computing, 05/2006-12/2007.
  • IT generalist of the DFG expert commission accompanying the project Online Election of the Review Boards 2007, 04/2006-06/2008.
  • Member of the appointment committee W2 professorship for Computer Science (Database Systems, successor to Jablonski), 01/2006-09/2007.
  • Acting head of the Chair of Computer Science 3 (Computer Architecture), 10/2005-02/2009.
  • Export officer of the Department of Computer Science, 10/2005-11/2007.
  • Working group Bachelor/Master for the Computer Science program, 05/2005-08/2007.
  • Working group Bachelor/Master for the Information and Communication Technology program, 05/2005-01/2006.
  • Managing director of the Department of Computer Science, 10/2004-09/2005.
  • Member of the structure commission of the Faculty of Engineering, 10/2004-09/2005.
  • Member of the Consilium Techfak, 10/2004-09/2005.
  • Board member of the Interdisciplinary Center for Functional Genomics (FUGE), 09/2004-06/2009.
  • Working group library modernization, 04/2004-12/2012.
  • Member of the appointment committee W3 professorship for Computer Science (Computer Architecture, successor to Dal Cin), 11/2003-02/2009.
  • Member of the study commission Information and Communication Technology, 10/2003-09/2005.
  • Member of the study commission Information Systems, since 04/2002.
  • Member of the study commission Computer Science, 04/2002-09/2011.
  • Senate rapporteur for the appointment procedure C3 professorship Organic Chemistry (successor to Saalfrank), 08/2004-02/2005.
  • Secretary of the appointment procedure C3 professorship Didactics of Computer Science, 04/2004-03/2005.
  • Member of the appointment committee C3 professorship for Computer Science (successor to Müller), 11/2003-07/2004.
  • Member of the appointment committee C3 professorship for Numerical Simulation with High-Performance Computers, 07/2002-01/2003.
  • Member of the appointment committee C4 professorship for Computer Science (Computer Networks and Communication Systems, successor to Herzog), 04/2002-02/2003.

Reviewing for journals

  • ACM Transactions on Software Engineering and Methodology, TOSEM
  • ACM Transactions on Programming Languages and Systems, TOPLAS
  • IEEE Transactions on Parallel and Distributed Systems
  • Journal of Parallel and Distributed Computing
  • Concurrency - Practice and Experience
  • Software - Practice and Experience
  • Journal of Systems and Software
  • Informatik-Spektrum
  • Informatik - Forschung und Entwicklung

Memberships

Doctoral theses and postdoctoral dissertations


Supervised:



Own:

Student theses

Our theses are managed via StudOn.
Please use the available filters to search for specific entries.

Student theses at KIT (Karlsruhe)

  1. Marc Schanne: Software-Architekturen für lokalitätsabhängige Diensterbringung auf mobilen Endgeräten. [DA]
    Advisor: Philippsen, M.; completed 2002
  2. Sven Buth: Persistenz von verteilten Objekten im Rahmen eines offenen, verteilten eCommerce-Frameworks. [DA]
    Advisor: Philippsen, M.; completed 2002
  3. Jochen Reber: Verteilter Garbage Collector für JavaParty. [SA]
    Advisor: Philippsen, M.; completed 2000
  4. Thorsten Schlachter: Entwicklung eines Java-Applets zur diagrammbasierten Navigation innerhalb des WWW. [SA]
    Advisor: Philippsen, M.; completed 1999
  5. Edwin Günthner: Komplexe Zahlen für Java. [DA]
    Advisor: Philippsen, M.; completed 1999
  6. Christian Nester: Ein flexibles RMI Design für eine effiziente Cluster Computing Implementierung. [DA]
    Advisor: Philippsen, M.; completed 1999
  7. Daniel Lukic: ParaStation-Anbindung für Java. [SA]
    Advisor: Philippsen, M.; completed 1998
  8. Jörg Afflerbach: Vergleich von verteilten JavaParty-Servlets mit äquivalenten CGI-Skripts. [SA]
    Advisor: Philippsen, M.; completed 1998
  9. Thomas Dehoust: Abbildung heterogener Datensätze in Java. [SA]
    Advisor: Philippsen, M.; completed 1998
  10. Guido Malpohl: Erkennung von Plagiaten unter einer Vielzahl von ähnlichen Java-Programmen. [SA]
    Advisor: Philippsen, M.; completed 1997
  11. Bernhard Haumacher: Lokalitätsoptimierung durch statische Typanalyse in JavaParty. [DA]
    Advisor: Philippsen, M.; completed 1997
  12. Matthias Kölsch: Dynamische Datenobjekt- und Threadverteilung in JavaParty. [SA]
    Advisor: Philippsen, M.; completed 1997
  13. Christian Nester: Parallelisierung rekursiver Benchmarks für JavaParty mit expliziter Datenobjekt- und Threadverteilung. [SA]
    Advisor: Philippsen, M.; completed 1997
  14. Matthias Jacob: Parallele Realisierung geophysikalischer Basisalgorithmen in JavaParty. [DA]
    Advisor: Philippsen, M.; completed 1997
  15. Oliver Reiff: Optimierungsmöglichkeiten für Java-Bytecode. [SA]
    Advisor: Philippsen, M.; completed 1996
  16. Marc Schanne: Laufzeitverhalten und Portierungsaspekte der Java-VM und ausgewählter Java-Bibliotheken. [SA]
    Advisor: Philippsen, M.; completed 1996
  17. Edwin Günthner: Portierung der Java VM auf den Multimedia Video Prozessor MVP TMS320C80. [SA]
    Advisor: Philippsen, M.; completed 1996
  18. Matthias Zenger: Transparente Objektverteilung in Java. [SA]
    Advisor: Philippsen, M.; completed 1996
  19. Matthias Winkel: Erweiterung von Java um ein FORALL. [SA]
    Advisor: Philippsen, M.; completed 1996
  20. Roland Kasper: Modula-2*-Benchmarks in einem Netz von Arbeitsplatzrechnern. [SA]
    Advisor: Philippsen, M.; completed 1993
  21. Markus Mock: Alignment in Modula-2*. [DA]
    Advisor: Philippsen, M.; completed 1992
  22. Stefan Hänßgen: Ein symbolischer X Windows Debugger für Modula-2*. [SA]
    Advisor: Philippsen, M.; completed 1992
  23. Paul Lukowicz: Code-Erzeugung für Modula-2* für verschiedene Maschinenarchitekturen. [DA]
    Advisor: Philippsen, M.; completed 1992
  24. Hendrik Mager: Die semantische Analyse von Modula-2*. [SA]
    Advisor: Philippsen, M.; completed 1992
  25. Ernst Heinz: Automatische Elimination von Synchronisationsbarrieren in synchronen FORALLs. [DA]
    Advisor: Philippsen, M.; completed 1991
  26. Stephan Teiwes: Die schnellste Art zu multiplizieren? - Der Algorithmus von Schönhage und Strassen auf der Connection Machine. [SA]
    Advisor: Philippsen, M.; completed 1991
  27. Ralf Kretzschmar: Ein Modula-2*-Übersetzer für die Connection Machine. [DA]
    Advisor: Philippsen, M.; completed 1991


Professional career

04/02 - today: Full Professor (W3), Chair of the Programming Systems Group (Informatik 2) of the Friedrich-Alexander Universität Erlangen-Nürnberg, Germany
06/10: Declined an appointment as Full Professor (W3) for Parallel and Distributed Architectures at the Johannes Gutenberg University Mainz
01/98 - 03/02: Department manager of the Softwaretechnik/Authorized Java Center group at FZI Forschungszentrum Informatik, Karlsruhe, Germany
09/95 - 09/01: Assistant Professor (Hochschulassistent, C1) at IPD, Institute for Programming Systems, Chair of Prof. Tichy, at KIT, Karlsruhe Institute of Technology
09/94 - 08/95: Post-doc at ICSI (International Computer Science Institute) of the University of California, Berkeley
02/90 - 08/94: Research Assistant (BAT IIa) and PhD student at IPD, Institute for Programming Systems, Chair of Prof. Tichy, at KIT, Karlsruhe Institute of Technology

Education 

07/01: Habilitation in Computer Science at KIT, Karlsruhe Institute of Technology; topic: Performance Aspects of Parallel Object-Oriented Programming Languages
11/93: PhD (Dr. rer. nat.) in Computer Science (summa cum laude) at KIT, Karlsruhe Institute of Technology; topic: Optimization Techniques for Compiling Parallel Programming Languages; advisors: Prof. Dr. Walter F. Tichy and Prof. Dr. G. Goos
WS 85/86 - 89/90: Diplom (BA and MA) in Computer Science with a minor in Industrial Engineering and Management (Wirtschaftsingenieurwesen) at KIT, Karlsruhe Institute of Technology
01/90: Diplom/MA (A/sehr gut)
08/89 - 12/89: Diploma/MA thesis at ENC, IBM European Networking Center, Heidelberg; topic: Replication for a distributed file system in a heterogeneous network; advisors: Dr. Ulf Hollberg (IBM) and Prof. Dr. G. Krüger (Institute of Telematics)
01/89 - 03/89: BA thesis (Studienarbeit) at the Institute of Telematics; topic: Classification of consistency protocols for replicated file systems in distributed systems; advisors: Dr. Cora Förster and Prof. Dr. G. Krüger
04/88 - 07/88: Student assistant at IPD, the Institute for Programming Systems, Chair of Prof. Tichy; teaching assistant for Informatik IV
04/88 - 12/88: Student assistant at FZI Forschungszentrum Informatik, Distributed Relational Databases, project Kardamom; principal investigator Prof. Dr. P. Lockemann
10/87: Vordiplom/BA (B/gut)
05/85: Abitur (secondary-school exam/university entrance qualification), (1.6, third of the class of 1985)
08/76 - 05/85: Theodor-Heuss-Gymnasium, Essen-Kettwig, Germany
08/72 - 06/76: Schmachtenbergschule, Kettwig, Germany

Prizes, awards, nominations

2014: nominated by the Faculty of Engineering of the Friedrich-Alexander Universität Erlangen-Nürnberg and its computer-science student council for the Ars legendi Prize for excellent university teaching, awarded by the Stifterverband and the German Rectors' Conference (Hochschulrektorenkonferenz)

International experience

05/15 - 16/15: ICSI (International Computer Science Institute) of UCB, University of California, Berkeley, CA
12/10 - 03/11: Microsoft Research, Research in Software Engineering (RiSE) group, Redmond, WA
09/94 - 08/95: ICSI (International Computer Science Institute) of UCB, University of California, Berkeley, CA
02/96 - 04/96: Another research stay at ICSI in Berkeley, CA
02/92 - 03/92: Research stay at INRIA (Institut National de Recherche en Informatique et en Automatique), Sophia Antipolis, France
02/91 - 03/91: Another research stay at INRIA in Sophia Antipolis, France
02/90 - today: Countless trips to international scientific conferences to give formal presentations

Consulting

04/91 - today: Self-employed management consulting and expert reports for various industrial and craft enterprises
10/99 - 01/13: Design and development of a use-case-specific content-management system for ISO Arzneimittel GmbH & Co. KG
12/95 - 12/97: Design and development of a Java extension for scalable Internet services and electronic trade, for Electric Communities, CA
01/96 - 05/96: Design of an application for Mercedes-Benz Lease & Finanz GmbH (now Mercedes-Benz-Bank AG)
07/85 - 03/91: Working student at Stinnes Organisationsberatung GmbH; various tasks across both Stinnes AG (now DB Schenker AG) and Veba AG (now E.ON AG)
01/84 - 12/86: Freelance systems analyst and software developer at the headquarters of Horten AG (now Galeria Karstadt Kaufhof GmbH)
07/84 - 08/84: Working student at Brenntag Mineralöl GmbH; analysis and black-box testing of an externally procured merchandise planning and control system


Software Watermarking

(Own Funds)


Project leader:
Project members: ,
Start date: 01.01.2016
Acronym: SoftWater
URL: https://www.ps.tf.fau.de/forschung/forschungsprojekte/softwater/

Abstract:

Software watermarking means hiding selected features in programs in order to either identify or authenticate them. This is useful for fighting software piracy, but also for verifying the proper use of open-source projects (for example, projects under the GNU license). Previous approaches assume that the watermark is added while the code is being developed; they therefore require the programmers' understanding of and contribution to the embedding process. The goal of our research project is to develop a watermarking framework whose techniques automatically add watermarks to both newly developed and existing programs at compile time. As a first approach, we investigated a watermarking method based on symbolic execution and subsequent function synthesis.
In 2018, two bachelor's theses examined methods for symbolic execution and function synthesis to determine which are best suited for our approach.
In 2019, we investigated an approach based on the LLVM compiler infrastructure that uses concolic execution (a combination of symbolic and concrete execution) to hide a watermark in an unused hardware register. For this purpose, the LLVM register allocator was modified so that it keeps one register free for the watermark.
In 2020, the framework for automatically inserting software watermarks into source code on top of the LLVM compiler infrastructure, by now called LLWM, was extended with further dynamic techniques. The added techniques are based, among other things, on replacing/obfuscating jump addresses and on modifications of the call graph.
In 2021, the LLWM framework was extended with further adapted dynamic techniques known from the literature, as well as with our own technique, which we now call IR-Mark. The added techniques are based, among other things, on converting conditional constructs into semantically equivalent loops or on integrating hash functions that leave the program's functionality unchanged but increase its resilience. IR-Mark now not only specifically selects the few functions whose register use is changed during code generation, but also includes dynamic aspects that compute plausible-looking decoy values in the reserved registers. An article on LLWM and IR-Mark was published.
In 2022, the LLWM framework gained another adapted technique. The method uses exception handling to camouflage the watermark.
In 2023, more methods were adapted to extend the LLWM framework, including embedding techniques based on principles from number theory and on aliasing.
In 2024, three new watermarks were developed: Register Expansion, SemaCall, and SideData. These techniques construct hash-function-like arithmetic computations that convert a key value into the watermark message at run time. The first two techniques were published in the paper "Register Expansion and SemaCall: 2 Low-overhead Dynamic Watermarks Suitable for Automation in LLVM" at the CheckMATE'24 workshop in Salt Lake City.
In 2025, the extended paper "Register Expansion, SemaCall, and SideData: Three Low-Overhead Dynamic Watermarks Suitable for Automation in LLVM" was published in the DTRAP journal. A new watermarking technique was developed that uses an undecidable problem to embed watermarks into programs. Work is ongoing on automated attack techniques based on LLMs (large language models) and test-case reducers, which make it possible to empirically measure the resilience of individual watermarking techniques.
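The idea of a hash-function-like computation that turns a key into the watermark message at run time can be illustrated with a small stand-alone sketch. The constants and the mixing rounds below are invented; this is not the actual LLWM/IR-Mark code.

```java
// Illustrative sketch of a dynamic watermark: an innocuous-looking,
// hash-like arithmetic chain maps a secret key to the embedded watermark
// message at run time.
public class WatermarkSketch {
    // Invented constants; in a real embedding they are chosen so that the
    // chain blends into the surrounding program's arithmetic.
    public static final long KEY = 0x5EED_F00DL;
    public static final long WATERMARK = mix(KEY); // recoverable only with the key

    // Hash-like mixing chain (xorshift-multiply rounds).
    public static long mix(long x) {
        x ^= x >>> 33;
        x *= 0xFF51AFD7ED558CCDL;
        x ^= x >>> 33;
        x *= 0xC4CEB9FE1A85EC53L;
        x ^= x >>> 33;
        return x;
    }

    public static void main(String[] args) {
        // An extractor that knows KEY can verify the mark; an attacker who
        // tampers with the arithmetic breaks the equality and thereby
        // destroys the mark.
        System.out.println(mix(KEY) == WATERMARK);
    }
}
```

The resilience question studied in the project is precisely how hard it is for an attacker to find and alter such chains without breaking the program.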

Publications: