Dr.-Ing. Josef Adersberger
Design for Diagnosability
(Third Party Funds Single)
Term: 15.05.2013 - 30.09.2018
Funding source: Bayerisches Staatsministerium für Wirtschaft und Medien, Energie und Technologie (StMWIVT) (ab 10/2013)
URL: http://www2.informatik.uni-erlangen.de/research/DfD/
Many software systems behave conspicuously during the test phase or even in normal operation. Diagnosing and curing such runtime anomalies is often time-consuming and complex, sometimes even impossible. Possible consequences for users of the software system include long response times, inexplicable behavior, and crashes. The longer these consequences remain unresolved, the higher the accumulated economic damage.
"Design for Diagnosability" is a tool chain targeted towards increasing the diagnosability of software systems. By using the tool chain that consists of modeling languages, components, and tools, runtime anomalies can easily be identified and solved, ideally already while developing the software system. Our cooperation partner QAware GmbH provides a tool called Software EKG that enables developers to explore runtime metrics of software systems by visualizing them as time series.
The research project Design for Diagnosability enhances the ecosystem around the existing Software EKG. The Software-Blackbox measures technical and functional runtime values of a software system in a minimally intrusive way. We store the measured values as time series in a newly developed time series database called Chronix. Chronix stores time series extremely efficiently, optimizing both disk space and response times. Chronix is an open-source project (www.chronix.io) and is free for everyone to use.
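One intuition behind such disk-space savings is that operational time series are highly regular. The following sketch is only a rough illustration of that regularity, not Chronix's actual storage format: delta encoding turns regularly spaced timestamps into small, repetitive values that a subsequent compressor can shrink drastically.

```java
import java.util.Arrays;

/** Illustration only (not Chronix's storage model): delta encoding of
 *  regularly spaced timestamps yields small, highly repetitive values. */
public class DeltaSketch {
    /** First element is kept as-is; the rest are differences to the predecessor. */
    static long[] deltas(long[] timestamps) {
        long[] d = new long[timestamps.length];
        d[0] = timestamps[0];
        for (int i = 1; i < timestamps.length; i++)
            d[i] = timestamps[i] - timestamps[i - 1];
        return d;
    }

    public static void main(String[] args) {
        // A metric sampled roughly once per second, with slight jitter.
        long[] ts = {1000, 2000, 3000, 4000, 5001, 6001};
        System.out.println(Arrays.toString(deltas(ts))); // [1000, 1000, 1000, 1000, 1001, 1000]
    }
}
```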
The newly developed Time-Series-API analyzes these values, e.g., by means of an outlier detection mechanism. It also provides multiple additional building blocks for implementing further strategies to identify runtime anomalies.
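As a minimal illustration of such an outlier detection mechanism (a standard interquartile-range check, not the Time-Series-API's actual interface), anomalous runtime metrics can be flagged like this:

```java
import java.util.Arrays;
import java.util.List;

/** Illustrative IQR-based ("Tukey fence") outlier detection on runtime metrics. */
public class OutlierSketch {
    /** Returns values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]. */
    static List<Double> outliers(double[] values) {
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        double q1 = sorted[sorted.length / 4];
        double q3 = sorted[(3 * sorted.length) / 4];
        double iqr = q3 - q1;
        double lo = q1 - 1.5 * iqr, hi = q3 + 1.5 * iqr;
        return Arrays.stream(values).filter(v -> v < lo || v > hi).boxed().toList();
    }

    public static void main(String[] args) {
        // Response times in ms; 950 is an obvious runtime anomaly.
        double[] responseTimes = {12, 14, 13, 15, 11, 950, 14, 12, 13, 15, 14, 13};
        System.out.println(outliers(responseTimes)); // prints [950.0]
    }
}
```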
These tools, in combination with the existing Software EKG, will form the so-called Dynamic Analysis Workbench, which enables developers to diagnose, explain, and fix occurring runtime anomalies both quickly and reliably. It will provide diagnosis plans to localize and identify the root causes of runtime anomalies. The full tool chain aims to increase the quality of software systems, particularly with respect to the metrics mean-time-to-repair and mean-time-between-defects.
By the time the project was successfully completed in July 2016, we had made the following contributions:
- We have linked Chronix and a framework for distributed data processing so that our anomaly analyses now scale to huge sets of time series data.
- We extended Chronix with additional components, among them a more efficient storage model, adapters for further time series databases, additional server-side analysis functions, and new time series types.
- We have published our benchmark for time series databases.
- We have investigated and implemented an approach to link application-level calls, e.g., a login of a user, down to the resulting calls on the OS level.
Although funding expired in 2016, we made further contributions in 2017:
- We presented Chronix at the FAST conference in Santa Clara, CA in February 2017.
- We have equipped Chronix with interfaces for attaching time series databases that are used in industry.
- We have developed an approach that determines the ideal cluster configuration (w.r.t. processing time and costs) for a given analysis (specific function and set of time series).
- We extended Spark, a framework for distributed processing of large-scale data, so that it can now use GPUs in distributed time series analyses. We presented the results at the Apache Big Data Conference in Miami, Florida, in May 2017.
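The cluster-configuration contribution above can be reduced to a simple selection problem. The sketch below is a hypothetical illustration (all field names and figures are assumptions, not the project's actual cost model): among configurations whose predicted processing time meets a deadline, pick the cheapest.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Hypothetical sketch of choosing a cluster configuration for an analysis:
 *  among configurations meeting a time budget, select the cheapest one. */
public class ClusterChoice {
    record Config(int nodes, double predictedMinutes, double costEuro) {}

    static Optional<Config> cheapestWithin(List<Config> configs, double deadlineMinutes) {
        return configs.stream()
            .filter(c -> c.predictedMinutes() <= deadlineMinutes)
            .min(Comparator.comparingDouble(Config::costEuro));
    }

    public static void main(String[] args) {
        List<Config> configs = List.of(
            new Config(2, 45, 3.0),   // too slow for a 30-minute budget
            new Config(4, 24, 5.0),   // meets the budget, cheapest such option
            new Config(8, 13, 9.5));  // faster, but more expensive
        System.out.println(cheapestWithin(configs, 30).orElseThrow().nodes()); // prints 4
    }
}
```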
We continued to make further contributions to the research project in 2018:
- We have published a paper at PROFES 2018 that describes techniques and insights on how runtime data in a large software project can be offered to all project participants at the development stage to improve their collaboration.
- We have maintained the Chronix Open Source project and stabilized it further (updating versions, fixing bugs, etc.).
Software Project Control Center
(Third Party Funds Single)
Term: 01.11.2009 - 31.12.2015
Funding source: Bundesministerium für Wirtschaft und Technologie (BMWi)
Prototypical implementation of a new tool for quality assurance during software development
Modern software systems are becoming increasingly complex with respect to functional, technical, and organizational aspects. Both the number of requirements per system and the degree of their interconnection are growing constantly. Furthermore, technical parameters, e.g., for distribution and reliability, are becoming more complex, and software is developed by teams that are not only spread around the globe but also work under increasing time pressure. As a result, the functional, technical, and organizational control of software development projects is becoming more difficult.
The "Software Project Control Center" is a tool that supports the project leader, the software architect, the requirements engineer, and the head of development. Its purpose is to make all aspects of the development process transparent and thus allow for better project control. To achieve this transparency, the tool gathers and distills properties from all artifacts and the correlations between them, and presents this information in a way suited to the individual needs of its users.
The Software Project Control Center unifies access to the relations between artifacts (traceability) and to their properties (metrics) within software development projects, which can significantly increase project efficiency. Artifacts, their relations, and related metrics are gathered and integrated in a central data store, where the data can be analyzed and visualized, metrics can be computed, and rules can be checked.
For the Software Project Control Center project we cooperate with QAware GmbH, Munich. The AIF ZIM program of the German Federal Ministry of Economics and Technology funded the first 30 months of the project.
The Software Project Control Center is divided into two subsystems. The integration pipeline gathers traceability data and metrics from a variety of software engineering tools. The analysis core makes it possible to analyze the integrated data holistically. Each subsystem is developed in a separate subproject.
The project partner QAware GmbH implemented the integration pipeline. The first step was to define TraceML, a modeling language for traceability information in conjunction with metrics. The language comprises a meta-model and a model library and allows customized traceability models to be defined efficiently. The integration pipeline uses TraceML as the lingua franca in all processing steps, from the extraction of traceability information to its transformation and integrated representation. We used the Eclipse Modeling Framework to define the TraceML models on each meta-model level. Furthermore, we used the Modeling Workflow Engine for model transformations and Eclipse CDO as our model repository. Several widespread software engineering tools are connected to the integration pipeline, including Subversion, Eclipse, Jira, Enterprise Architect, and Maven.
The main contribution of our group to this project is the analysis core, i.e., the design and implementation of a domain-specific language for graph-based traceability analysis. Our Traceability Query Language (TracQL) significantly reduces the effort necessary to implement traceability analyses. This is crucial for both industry and the research community, as the lack of expressiveness and the inefficient runtimes of previously known approaches have hindered the implementation of traceability analyses. TracQL eases not only the extraction but also the analysis of traceability data, using graph traversals denoted in a concise functional programming style. The language itself is built on top of Scala, a multi-paradigm programming language, and has been successfully applied in several real-world industrial projects.
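TracQL itself is embedded in Scala; the following Java-stream sketch merely conveys the flavor of such a graph-traversal query. The artifact types and the "testedBy" edge are hypothetical illustrations, not TracQL's actual model: the query finds all requirements that no test case covers.

```java
import java.util.List;
import java.util.Map;

/** Minimal sketch of a TracQL-style traceability query over a property graph.
 *  Artifact types and the edge name "testedBy" are hypothetical. */
public class TraceSketch {
    record Artifact(String id, String type) {}

    /** Traversal: all requirements without an outgoing "testedBy" edge. */
    static List<String> untested(List<Artifact> artifacts, Map<String, List<String>> testedBy) {
        return artifacts.stream()
            .filter(a -> a.type().equals("Requirement"))
            .filter(a -> testedBy.getOrDefault(a.id(), List.of()).isEmpty())
            .map(Artifact::id)
            .toList();
    }

    public static void main(String[] args) {
        List<Artifact> artifacts = List.of(
            new Artifact("R1", "Requirement"),
            new Artifact("R2", "Requirement"),
            new Artifact("T1", "TestCase"));
        Map<String, List<String>> testedBy = Map.of("R1", List.of("T1"));
        System.out.println(untested(artifacts, testedBy)); // prints [R2]
    }
}
```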
In 2014, we improved the modularity of the language to make it both more adaptable and extendable in terms of structure and operations. This not only increases its expressiveness but also improves the reusability of existing traceability analyses.
In 2015, we evaluated and documented our approach in order to emphasize its core attributes and to show its effectiveness. The three core attributes are:
- Representation independence: TracQL can be adapted to various data sources, whose data types are made available in statically typed form.
- Modularity: the approach is both modifiable and extendable in terms of structure and operations.
- Applicability: the language offers better expressiveness and performance than other approaches.
Chronix: Long Term Storage and Retrieval Technology for Anomaly Detection in Operational Data
15th USENIX Conference on File and Storage Technologies (FAST 17) (Santa Clara, CA, 27.02.2017 - 02.03.2017)
In: USENIX Association (ed.): Proceedings of the 15th USENIX Conference on File and Storage Technologies (FAST 17) 2017
Open Access: https://www.usenix.org/conference/fast17/technical-sessions/presentation/lautenschlager
Leveraging the GPU on Spark
Apache: Big Data North America 2017 (Miami, FL, 16.05.2017 - 18.05.2017)
Fast and efficient operational time series storage: The missing link in dynamic software analysis
Symposium on Software Performance (SSP 2015) (Munich, 04.11.2015 - 06.11.2015)
In: Softwaretechnik-Trends (Band 35, Nr. 3): Proceedings of the Symposium on Software Performance (SSP 2015) 2015
Rahmenwerk zur Ausreißererkennung in Zeitreihen von Software-Laufzeitdaten
Fachtagung Software Engineering & Management (SE 2015) (Dresden, Germany, 17.03.2015 - 20.03.2015)
In: Uwe Aßmann, Birgit Demuth, Thorsten Spitta, Georg Püschel, Ronny Kaiser (ed.): Software Engineering & Management (SE 2015), Bonn: 2015
Design for Diagnosability
In: Java Magazin (2014), p. 44-50
Modellbasierte Extraktion, Repräsentation und Analyse von Traceability-Informationen (Dissertation, 2012)
TracQL: A Domain-Specific Language for Traceability Analysis
Joint Working Conference on Software Architecture & 6th European Conference on Software Architecture (WICSA/ECSA 2012) (Helsinki, Finland, 20.08.2012 - 24.08.2012)
In: Ali Babar M., Cuesta C., Savolainen J., Männistö T. (ed.): Proceedings of the 2012 Joint Working Conference on Software Architecture & 6th European Conference on Software Architecture, Los Alamitos, CA: 2012
Das Softwareleitstand-Prinzip: Softwarequalität kontinuierlich messen, analysieren und steuern
German Testing Day 2011 (Frankfurt, 09.11.2011 - 09.11.2011)
In: German Testing Day 2011 2011
ReflexML: UML-based architecture-to-code traceability and consistency checking
5th European Conference on Software Architecture, ECSA 2011 (Essen, 13.09.2011 - 16.09.2011)
In: Ivica Crnkovic, Volker Gruhn, Matthias Book (ed.): Software Architecture - 5th European Conference, ECSA 2011, Berlin Heidelberg: 2011
A Statically Typed Query Language for Property Graphs
15th International Database Engineering and Applications Symposium (IDEAS'11) (Lisbon, Portugal, 21.09.2011 - 23.09.2011)
In: Bernardino, Jorge; Cruz, Isabel; Desai, Bipin C. (ed.): Proceedings of 15th International Database Engineering and Applications Symposium (IDEAS'11), New York: 2011
Dynamische Analyse mit dem Software-EKG
In: Informatik-Spektrum 34 (2011), p. 484-495
Dynamische Analyse mit dem Software-EKG
Software Engineering 2011 - Fachtagung des GI-Fachbereichs Softwaretechnik (Karlsruhe, 24.02.2011 - 25.02.2011)
In: Ralf Reussner, Matthias Grund, Andreas Oberweis, Walter Tichy (ed.): Lecture Notes in Informatics (LNI), P-183, Bonn: 2011