Tobias Feigl

Dr.-Ing. Tobias Feigl

Projects

  • Verification and validation in industrial practice

    (Own Funds)

    Term: since 01.01.2022

    Detection of flaky tests based on software version control data and test execution history

    Regression tests are carried out often and, because of their volume, usually fully automatically. They are intended to ensure that changes to individual components of a software system have no unexpected side effects on the behavior of subsystems that they should not affect. However, even if a test case executes only unmodified code, it can still sometimes succeed and sometimes fail. This so-called "flaky" behavior can have various causes, including race conditions due to concurrent execution or temporarily unavailable resources (e.g., network or databases). Flaky tests are a nuisance to the testing process in every respect: they slow down or even interrupt the entire test execution, and they undermine confidence in the test results. If a test run succeeds, it cannot necessarily be concluded that the program is really error-free, and if a test fails, expensive resources may have to be invested to reproduce and possibly fix the problem.

    The easiest way to detect test flakiness is to run a test case repeatedly on the identical code base until the test result changes or there is reasonable statistical confidence that the test is non-flaky. However, this is rarely possible in an industrial environment, as integration or system tests can be extremely time-consuming and resource-demanding, e.g., because they require the availability of special test hardware. For this reason, it is desirable to classify test cases with regard to their flakiness without repeated re-execution, using only the information already available from the preceding development and test phases.
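
    To make the trade-off concrete, here is a minimal sketch (our illustration, not an artifact of the project) of how many identical consecutive reruns are needed before non-flakiness can be claimed with a given statistical confidence:

    ```python
    import math

    def reruns_for_confidence(p_flip: float, alpha: float) -> int:
        """Number of identical consecutive outcomes needed so that a test
        flipping with probability >= p_flip per run would have revealed
        itself with probability >= 1 - alpha:
        P(no flip in n runs) <= (1 - p_flip)**n <= alpha."""
        return math.ceil(math.log(alpha) / math.log(1.0 - p_flip))

    # To be 95% sure of catching a test that flips at least 10% of the
    # time, 29 consecutive identical runs are required.
    print(reruns_for_confidence(p_flip=0.10, alpha=0.05))  # -> 29
    ```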

    In 2022, we implemented and compared various so-called black-box methods for detecting test flakiness and evaluated them in a real industrial test process with 200 test cases. We classified test cases exclusively on the basis of generally available information from version control systems and test execution tools, i.e., in particular without an extensive analysis of the code base and without monitoring of the test coverage, which would in most cases be impossible for embedded systems anyway. From the 122 available indicators (including the test execution time, the number of lines of code, or the number of changed lines of code in the last 3, 14, and 54 days) we extracted different subsets and examined their suitability for detecting test flakiness using different techniques. The methods applied to the feature subsets include rule-based methods (e.g., "a test is flaky if it has failed at least five times within the observation window, but not five times in a row"), empirical evaluations (including the computation of the cumulative weighted "flip rate", i.e., the frequency of alternating between test success and failure), as well as various methods from the domain of machine learning (e.g., classification trees, random forests, or multi-layer perceptrons). Using AI-based classifiers together with the SHAP approach for explaining AI models, we determined the four most important indicators ("features") for detecting test flakiness in the industrial environment under consideration. So-called "gradient boosting" with the complete set of indicators proved optimal (with an F1-score of 96.5%). The same method with only the four selected features achieved just marginally lower accuracy and recall values (with almost the same F1-score).
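
    The following sketch illustrates two of the building blocks named above, a weighted flip-rate indicator and a gradient-boosting classifier; the decay weighting, the feature layout, and the data are illustrative assumptions, not the project's actual implementation:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    def flip_rate(history, decay=0.9):
        """Cumulative weighted flip rate: weighted fraction of transitions
        between pass (1) and fail (0) in the execution history; the
        exponential weighting of recent flips is an assumption here."""
        flips = np.abs(np.diff(history))                # 1 where outcome changed
        weights = decay ** np.arange(len(flips))[::-1]  # recent flips weigh more
        return float(np.dot(flips, weights) / weights.sum())

    history = [1, 1, 0, 1, 1, 1, 0, 0, 1]   # 1 = pass, 0 = fail
    print(flip_rate(history))

    # Hypothetical feature matrix: one row per test case, columns such as
    # execution time, LOC, lines changed in the last 3/14/54 days, flip rate.
    X = np.random.rand(200, 4)           # stand-in for the 200 industrial tests
    y = np.random.randint(0, 2, 200)     # 1 = flaky, 0 = stable (ground truth)

    clf = GradientBoostingClassifier().fit(X, y)
    # SHAP values (e.g., shap.TreeExplainer(clf)) would then rank the
    # features by their importance for the flakiness prediction.
    ```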

    Synergies of a-priori and a-posteriori analysis methods to explain artificial intelligence

    Artificial intelligence is rapidly conquering ever more domains of everyday life, and machines make increasingly critical decisions: braking or evasive maneuvers in autonomous driving, credit(un)worthiness of individuals or companies, diagnosis of diseases from various examination results (e.g., cancer detection from CT/MRT scans), and many more. For such a system to be trusted in a real-life productive setting, it must be ensured and proven that the learned decision rules are correct and reflect reality. Training a machine model is itself a very resource-intensive process, and the quality of the result can usually only be quantified afterwards, with extremely great effort and well-founded specialist knowledge. The success and quality of the learned model depend not only on the choice of a particular AI method, but are also strongly influenced by the quantity and quality of the training data.

    In 2022, we therefore examined which qualitative and quantitative properties an input set must have ("a-priori evaluation") in order to yield a good AI model ("a-posteriori evaluation"). For this purpose, we compared various evaluation criteria from the literature and derived four basic indicators from them: representativeness, freedom from redundancy, completeness, and correctness. The associated metrics allow a quantitative evaluation of the training data before the model is built. To investigate the impact of poor training data on an AI model, we experimented with the so-called "dSprites" dataset, a popular generator for image files used in the evaluation of image and pattern recognition methods. In this way, we generated different training data sets that differ in exactly one of the four basic indicators and thus have quantitatively different a-priori quality. We used all of them to train two different AI models: random forests and convolutional neural networks. Finally, we quantitatively evaluated the quality of the classification by the respective model using the usual statistical measures (accuracy, precision, recall, F1-score). In addition, we used SHAP (a method for explaining AI models) to determine the reasons for misclassifications in cases of poor data quality. As expected, the model quality correlates strongly with the training data quality: the better the latter is with regard to the four basic indicators, the more precise the classification of unknown data by the trained models. However, a noteworthy discovery emerged while experimenting with freedom from redundancy: if a trained model is evaluated with completely new/unknown inputs, the accuracy of the classification is sometimes significantly worse than if the available input data is split into a training and an evaluation data set. In the latter case, the a-posteriori evaluation of the trained AI system misleadingly suggests a higher model quality.
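
    As an illustration of such an a-priori metric, the following sketch computes a simple freedom-from-redundancy indicator (a deliberately simplified variant of ours, not necessarily the metric defined in the project):

    ```python
    import numpy as np

    def redundancy_freedom(X: np.ndarray) -> float:
        """Share of unique samples in the data set: 1.0 means no
        duplicates, values near 0 indicate heavy redundancy."""
        unique_rows = np.unique(X, axis=0).shape[0]
        return unique_rows / X.shape[0]

    X = np.vstack([np.random.rand(100, 8)] * 3)   # every sample appears 3 times
    print(redundancy_freedom(X))                  # ~0.33: highly redundant
    ```

    Duplicates of this kind leak across a random train/evaluation split, which is exactly why the a-posteriori evaluation of redundant data can suggest a misleadingly high model quality.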

    Few-Shot Out-of-Domain Detection in Natural Language Processing Applications

    Natural language processing (NLP for short) using artificial intelligence has many areas of application, e.g., telephone or written dialogue systems (so-called chatbots) that provide cinema information, book a ticket, register sick leave, or answer various questions arising in certain industrial processes. Such chatbots are often also deployed in social media, e.g., to recognize critical statements and moderate them if necessary. With increasing progress in the field of artificial intelligence in general and NLP in particular, self-learning models are spreading that dynamically (and therefore mostly unsupervised) supplement their technical and linguistic knowledge from concrete practical use. Such approaches, however, are susceptible to intentional or unintentional manipulation. Examples from industrial practice have shown that chatbots quickly "learn", for instance, racist statements in social networks and then make dangerous extremist statements. It is therefore of central importance that NLP-based models are able to distinguish between valid "in-domain (ID)" and invalid "out-of-domain (OOD)" data (i.e., both inputs and outputs). However, the developers of an NLP system need an immense amount of ID and OOD training data for the initial training of the AI model. While the former are already difficult to find in sufficient quantities, a meaningful a-priori choice of the latter is usually hardly possible.

    In 2022, we therefore examined and compared different approaches to OOD detection that work with little to no training data at all (hence called "few-shot"). RoBERTa, a widespread pre-trained transformer-based language model that was state of the art at the time, served as the basis for the experimental evaluation. To improve the OOD detection, we applied fine-tuning and examined how reliably a pre-trained model can be adapted to a specific domain. In addition, we implemented various scoring methods and evaluated them to determine threshold values for the classification of ID and OOD data. To address the problem of missing training data, we also evaluated a technique called data augmentation: with little effort, GPT-3 ("Generative Pre-trained Transformer 3", an autoregressive language model that uses deep learning to generate human-like text) can generate additional and safe ID and OOD data to train and evaluate NLP models.
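
    As an illustration of one common scoring method, the sketch below thresholds the maximum softmax probability (MSP) of a fine-tuned classifier; the checkpoint name, the number of ID intents, and the threshold are assumptions, and MSP is only one of several possible scores:

    ```python
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Assumption: a RoBERTa checkpoint fine-tuned on 10 hypothetical
    # ID intents; "roberta-base" here stands in for such a model.
    tokenizer = AutoTokenizer.from_pretrained("roberta-base")
    model = AutoModelForSequenceClassification.from_pretrained(
        "roberta-base", num_labels=10)

    def msp_score(text: str) -> float:
        """Confidence of the most likely ID class; low values hint at OOD."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1).max().item()

    THRESHOLD = 0.7   # in practice calibrated on held-out ID/OOD data
    text = "Book me a flight to Mars"
    print("OOD" if msp_score(text) < THRESHOLD else "ID")
    ```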

    Application of weighted combinatorics in the generation and selection of parameters and their representatives in software testing

    Some functional testing methods (so-called black-box tests), such as equivalence class testing or boundary value analysis, focus on individual parameters. For these parameters, they determine representatives (values or classes of values) to be considered in the test. Since such tests usually require not just a single parameter but several, representatives of several parameters must be combined with each other for test execution. Well-understood combinatorial methods such as "all combinations", "pair-wise", or "each choice" are usually used for this purpose (see the sketch after this paragraph). These methods do not take into account information about weights (attributes such as importance or priority) of the parameters and equivalence class representatives, although such weights would affect the number of associated test cases (e.g., due to importance) or their recommended order (in terms of prioritization). In addition, in the case of the equivalence class method, there are scenarios in which a combination of several invalid classes in a single test case could be explicitly desired, completely undesirable, or limited to a certain number, in order to specifically test fault combinations on the one hand, but also to simplify fault localization on the other. There is reason to believe that by considering such weights and options, more targeted and ultimately more efficient test cases can be derived.
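
    For reference, the unweighted "all combinations" method is simply the cross product of all representatives, which also illustrates why the number of test cases grows quickly; the parameters and values below are hypothetical:

    ```python
    from itertools import product

    # Representatives per parameter (e.g., equivalence class
    # representatives or boundary values); names are illustrative.
    params = {"payment": ["card", "invoice"],
              "amount": ["0", "1", "max"],
              "locale": ["de", "en"]}

    # "All combinations": the full cross product of all representatives.
    all_combinations = list(product(*params.values()))
    print(len(all_combinations))   # 2 * 3 * 2 = 12 test inputs
    ```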

    In 2023, we evaluated and compared known combinatorial approaches that take weights into account when combining parameters or their values. Based on this, we developed a novel approach to generate and select parameters and their representatives in software testing. The proposed method uses a weighting system to prioritize the individual parameters, their equivalence classes, and concrete representatives within a set of test cases. If necessary, their interactions can also be weighted specifically in order to allow certain combinations to occur more frequently in the generated test cases. To evaluate the approach, we defined a suitable prototype data structure that represents the various weightings. We then implemented evaluation functions for existing sets of test cases in order to quantitatively determine how well such a test case set satisfies the specified combinatorics. In a further step, we used these evaluation functions in combination with various systematic methods and heuristics (the SAT solver Z3, simulated annealing, and genetic algorithms) to generate new test cases that match the weighting, or to optimize existing sets by adding missing test cases. Simulated annealing was the fastest method and gave the best results in the test series. Although the SAT-based approach worked well for small problems, it was no longer practical for larger test sets due to exorbitant runtimes.
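
    The following sketch shows the simulated-annealing idea on a toy version of the problem; the parameters, pair weights, objective function, and cooling schedule are illustrative assumptions rather than the project's actual data structure:

    ```python
    import math
    import random

    # Hypothetical setup: each test case picks one value per parameter;
    # selected pairs of (parameter, value) choices carry importance weights.
    PARAMS = {"browser": ["FF", "Chrome"], "os": ["Linux", "Windows"]}
    WEIGHTS = {(("browser", "FF"), ("os", "Linux")): 3.0,
               (("browser", "Chrome"), ("os", "Windows")): 2.0}

    def random_testcase():
        return tuple(sorted((p, random.choice(v)) for p, v in PARAMS.items()))

    def score(suite):
        """Reward covered high-weight pairs, penalize suite size."""
        covered = {pair for pair in WEIGHTS
                   for tc in suite if set(pair) <= set(tc)}
        return sum(WEIGHTS[p] for p in covered) - 0.1 * len(suite)

    def anneal(steps=2000, temp=2.0, cooling=0.999):
        suite = [random_testcase()]
        cur = score(suite)
        for _ in range(steps):
            cand = list(suite)
            if random.random() < 0.5 or len(cand) == 1:
                cand.append(random_testcase())          # grow the suite
            else:
                cand.pop(random.randrange(len(cand)))   # shrink the suite
            s = score(cand)
            # Accept improvements always, deteriorations with a probability
            # that shrinks as the temperature cools down.
            if s > cur or random.random() < math.exp((s - cur) / temp):
                suite, cur = cand, s
            temp *= cooling
        return suite, cur

    suite, quality = anneal()
    print(len(suite), quality)
    ```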

  • Recurrent Neural Networks (RNNs) for Real-Time Estimation of Nonlinear Motion Models

    (Third Party Funds Single)

    Term: 01.10.2017 - 31.03.2021
    Funding source: Fraunhofer-Gesellschaft
    URL: https://www2.cs.fau.de/research/RuNN/
    With the growing availability of information about an environment (e.g., the geometry of a gymnasium) and about the objects therein (e.g., athletes in the gymnasium), there is an increasing interest in profitably bringing that information together (so-called information fusion) and in processing it. For example, one would like to reconstruct physically correct animations (e.g., in virtual reality, VR) of complex and highly dynamic movements (e.g., in sports situations) in real time. Likewise, manufacturing plants that suffer from unfavorable environmental conditions (e.g., magnetic field interference or missing GPS signal) benefit from, e.g., high-precision localization of goods. Typically, to describe movements, one uses either poses that describe a "snapshot" of a state of motion (e.g., idle state, stoppage), or a motion model that describes movement over time (e.g., walking or running). In addition, human movements may be identified, detected, and sensed by different sensors (e.g., on the body) and mapped in the form of poses and motion models. Different types of modern sensors (e.g., camera, radio, and inertial sensors) provide information of varying quality.

    In principle, with the help of expensive and highly precise measuring instruments, the extraction of poses resp. motion models, for example from positions on small tracking areas, is possible without errors. Positions, e.g., of human extremities, can describe or be described by poses and motion models. Camera-based sensors deliver the required high-frequency and high-precision reference measurements on small areas. However, as the size of the tracking surface increases, the usability of camera-based systems decreases (due to inaccuracies or occlusion issues). Likewise, on large areas, radio and inertial sensors only provide noisy and inaccurate measurements. Although a combination of radio and inertial sensors based on Bayesian filters achieves greater accuracy, it is still inadequate to precisely sense human motion on large areas, e.g., in sports, as human movement changes abruptly and rapidly. Thus, the resulting motion models are inaccurate.

    Furthermore, every human movement is highly nonlinear (or unpredictable). We cannot map this nonlinearity correctly with today's motion models. Bayes filters describe these models, but these (statistical) methods break a nonlinear problem down into linear subproblems, which in turn cannot physically represent the motion. In addition, current methods produce high latency when they require accuracy.

    Due to these three problems (inaccurate position data on large areas, nonlinearity, and latency), today's methods are unusable, e.g., for sports applications that require short response times. This project aims to counteract these nonlinearities by using machine learning methods. The project includes research on recurrent neural networks (RNNs) to create nonlinear motion models. As modern Bayesian filtering methods (e.g., Kalman and particle filters) and other statistical methods can only describe the linear portions of nonlinear human movements (e.g., the relative position of the head w.r.t. the trunk while walking or running), they are physically not completely correct.

    Therefore, the main goal is to evaluate how machine learning methods can describe complex and nonlinear movements. We examined whether RNNs describe the movements of an object physically correctly and can support or replace previous methods. As part of a large-scale parameter study, we simulated physically correct movements and optimized RNN procedures on these simulations. We successfully showed that, with the help of suitable training methods, RNN models can learn either physical relationships or shapes of movement.
    This project addresses three key topics:
    I. A basic implementation investigates how and why methods of machine learning can be used to determine models of human movement.
    In 2018, we first established a deeper understanding of the initial situation and problem definition. With the help of different basic implementations (different motion models), we investigated (1) how different movements (e.g., humans: walk, run, slalom; vehicles: meander, zig-zag) affect the measurement inaccuracies of different sensor families, (2) how measurement inaccuracies of different sensor families (e.g., visible orientation errors, audible noise, and deliberate artificial errors) affect human motion, and (3) how different filter methods for error correction (that balance accuracy and latency) affect both motion and sensing. In addition, we showed (4) how measurement inaccuracies (due to the use of current Bayesian filtering techniques) correlate nonlinearly with human posture (e.g., gait apparatus) and predictably affect health (simulator sickness), using machine learning.

    We studied methods of machine and deep learning for motion detection (humans: head, body, upper and lower extremities; vehicles: single- and bi-axial) and motion reconstruction (5) based on inertial, camera, and radio sensors, as well as various methods for feature extraction (e.g., SVM, DT, k-NN, VAE, 2D-CNN, 3D-CNN, RNN, LSTM, M/GRU). These were interconnected into different hybrid filter models to enrich extracted features with temporal and context-sensitive motion information, potentially creating more accurate, robust, and close-to-real-time motion models. In this way, these mechanisms learned (6) motion models for multi-axis vehicles (e.g., forklifts) based on inertial, radio, and camera data, which generalize to different environments or tracking surfaces (with varying size, shape, and sensory structure, e.g., magnetic field, multipath, texturing, and illumination). Furthermore (7), we gained a deeper understanding of the effects of non-constantly accelerated motion models on radio signals. On the basis of these findings, we trained an LSTM model that predicts different movement speeds and motion forms of a single-axis robot (i.e., a Segway) close to real time and more accurately than conventional methods.
    In 2019, we found that these models can also predict human movement (human motion models). We also determined that the LSTM models can either run fully self-sufficiently at runtime or be integrated as support points into localization estimates, e.g., into Pedestrian Dead Reckoning (PDR) methods.
    II. Based on this, we try to find ways to optimize the basic implementation in terms of robustness, latency, and reusability.
    In 2018, we used the findings from I. (1-7) to stabilize so-called (1) relative Pedestrian Dead Reckoning (PDR) methods using motion classifiers. These enable a generalization to any environment. A deeper understanding of the radio signal (2) allowed us to learn long-term errors in RNN-based motion models. This improved positional accuracy and stability and enabled near real-time prediction. First experiments showed the robustness of the movement models (3) with the help of different real movement trajectories (unknown to the models) for one- and two-axis vehicles. Furthermore, we investigated (4) how hybrid filter models (e.g., the interconnection of feature extractors such as 2D/3D-CNNs and time-series trackers such as RNN-LSTMs) provide more accurate, more stable, and filtered (outlier-corrected) results.
    In 2019, we showed that models of the RNN family can extrapolate movements into the future, so that they compensate for the latency of the processing pipeline and beyond. Furthermore, we examined the explainability, interpretability, and robustness of the models examined here, and their reusability for human movement. With the help of a simulator, we generated physically correct movements, e.g., positions of pedestrians, cyclists, cars, and planes. Based on this data, we showed that RNN models can interpolate between different types of movement, compensate for missing data points, interpret white and random noise as such, and extrapolate movements. The latter enables processing-specific latency to be compensated and human movement to be predicted from radio and inertial data in real time.

    Novel RNN architecture. Furthermore, in 2019, we researched a new architecture, or topology, of a neural network that balances the strengths and weaknesses of flat neural networks (NNs) and recurrent networks. We found this optimal NN for determining physically correct movement in a large-scale parameter study. In particular, we also optimized the model architecture and parameters for human-centered localization. These optimal architectures predict human movement far into the future from as little sensor information as possible. The architecture with the lowest localization error combines two DNNs with an RNN.

    Interpretability of models. In 2019, we examined the functionality of this new model. For this purpose, we researched a new process pipeline for the interpretation and explanation of the model. The pipeline uses the mutual information flow and the mutual transfer entropy in combination with various targeted manipulations of the hidden states and suitable visualization techniques to describe the state of the model at any time, both subjectively and objectively. In addition, we adapted a variational autoencoder (VAE) to better visualize and interpret the extracted features of a neural network. We designed and parameterized the VAE such that the reconstruction error of the signal lies within the range of the measurement noise, while at the same time forcing the model to store disentangled features in its latent space. This disentanglement enabled first subjective statements about the interrelationships of the features that are really necessary to optimally encode the channel state of a radio signal.

    Compression. In 2019, we discovered a side effect of the VAE that offers the possibility of decentralized preprocessing of the channel information directly on the antenna. This compression reduces the data traffic, lowers the communication load, and thus increases the number of possible participants in the communication and localization within a closed sensor network.

    Influence of the variation of the input information. In 2019, we also examined how changes in the input sequence length of a recurrent neural network affect the learning success and the type of results of the model. We discovered that a longer sequence persuades the model to become a motion model, i.e., to learn the form of movement, while with shorter sequences the model tends to learn physical relationships. The optimal balance between short and long sequences yields the highest accuracy.

    We also investigated speed estimation using the new method; when used in a PDR model, this increased the position accuracy. An initial work in 2019 examined in detail which methods are best suited to estimate the speed of human movement from a raw inertial signal. A new approach, a combination of a one-dimensional CNN and a BLSTM, replaced the state of the art.
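
    A minimal sketch of such a 1D-CNN + BLSTM estimator (in PyTorch, with illustrative layer sizes rather than the tuned values from the study) could look like this:

    ```python
    import torch
    import torch.nn as nn

    class SpeedEstimator(nn.Module):
        """1D-CNN feature extractor followed by a bidirectional LSTM that
        maps a raw inertial window (6 channels: accel/gyro) to a speed."""
        def __init__(self, channels=6, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(
                nn.Conv1d(channels, 32, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.Conv1d(32, 32, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            self.blstm = nn.LSTM(32, hidden, batch_first=True,
                                 bidirectional=True)
            self.head = nn.Linear(2 * hidden, 1)

        def forward(self, x):             # x: (batch, time, channels)
            z = self.cnn(x.transpose(1, 2)).transpose(1, 2)  # conv over time
            out, _ = self.blstm(z)
            return self.head(out[:, -1])  # speed from the last time step

    model = SpeedEstimator()
    window = torch.randn(8, 200, 6)       # 8 windows of 200 IMU samples
    print(model(window).shape)            # -> torch.Size([8, 1])
    ```
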
    In 2020, we optimized the architecture of the model with regard to its prediction accuracy and investigated the effects of a deep fusion of Bayesian and DL methods on prediction accuracy and robustness.

    Optimization. In 2020, we improved the existing CNN and RNN architecture and proposed the fusion of a ResNet and a BLSTM. We replaced the CNN with a residual network to extract deeper and higher-quality features from a continuous data stream. We showed that this architecture entails higher computing costs but surpasses the accuracy of the state of the art. In addition, the RNN architecture can be scaled down to counter the blurring of the context vector of the LSTM cells with very long input sequences, as the remaining ResNet network offers more qualitative features.

    Deep Bayesian method. In 2020, we investigated whether methods of the RNN family can extract certain movement properties from recorded movement data in order to replace the measurement-, process-, and transition-noise distributions of a Kalman filter (KF). We showed that highly optimized LSTM cells can reconstruct trajectories more robustly (lower error variance) and more precisely (better positional accuracy) than an equally highly optimized KF. The deep coupling of LSTMs into the KF, so-called Deep Bayes, provided the most robust and precise positions and trajectories. This study also showed that, among methods trained on realistic synthetic data, the Deep Bayesian method needed the least real data to adapt to a new, unknown domain, e.g., unknown motion shapes and velocity distributions.
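
    For orientation, the sketch below shows one predict/update step of a plain linear Kalman filter and marks where a Deep Bayes coupling would substitute learned noise models; the constant-velocity model and all constants are illustrative assumptions:

    ```python
    import numpy as np

    def kf_step(x, P, z, F, Q, H, R):
        """One predict/update step of a linear Kalman filter. In a Deep
        Bayes coupling, a trained LSTM would supply the noise covariances
        Q and R (here fixed constants for illustration)."""
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Constant-velocity model: state = [position, velocity]
    dt = 0.1
    F = np.array([[1, dt], [0, 1]])
    H = np.array([[1, 0]])              # only the position is measured
    Q = 0.01 * np.eye(2)                # <- candidate for a learned model
    R = np.array([[0.5]])               # <- candidate for a learned model
    x, P = np.zeros(2), np.eye(2)
    for z in [0.1, 0.22, 0.35, 0.41]:   # noisy position measurements
        x, P = kf_step(x, P, np.array([z]), F, Q, H, R)
    print(x)                            # estimated position and velocity
    ```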

    III. Finally, the feasibility of the approach shall be demonstrated.
    In 2018, a large-scale social science study opened the world's largest virtual dinosaur museum and showed that (1) a pre-selected (application-optimized) model of human movement robustly and accurately (i.e., without a significant impact on simulator sickness) maps, resp. predicts, human motion. We used this as a basis for comparison tests with other models that are human-centered and generalize to different environments.
    In 2019, we developed two new live demonstrators that are based on the research results achieved in I and II. (1) A model railway that crosses a landscape with a tunnel at variable speeds. The tunnel represents realistic and typical environmental characteristics that lead to nonlinear multipath propagation of a radio transmitter to be located, and ultimately to an incorrectly determined position. This demonstrator shows that the RNN methods researched as part of the project localize highly precisely and robustly, both on complex channel impulse responses and on dimensionally reduced response times, and deliver better results than conventional Kalman filters. (2) We used the second demonstrator to visualize the movement of a person's upper extremities. We recorded human movement using inexpensive inertial sensors attached to both arm joints, classified it using machine and deep learning, and derived motion parameters. A graphical user interface visualizes the movement and the derived parameters in near real time. The planned generalizability, e.g., of human-centered models, and the applicability of RNN-based methods in different environments were demonstrated using (1) and (2).

    In 2019, we applied the proposed methods in the following applications:

    Application: radio signal. We classified the channel information of a radio system hierarchically. We translated the localization problem into a binary line-of-sight (LoS) vs. non-line-of-sight (NLoS) classification problem. Hence, we can now precisely localize a position to within a meter, based on individual channel information from a single antenna, provided the environment offers heterogeneous channel propagation. Furthermore, we simulated LoS and NLoS channel information and used it to interpolate between different channels. This enables the providers of radio systems to respond a priori, in simulation, to changing or new environments in the channel information. By selectively retraining the models with the simulated knowledge, we obtained more robust models.

    Application: camera and radio signal. We have shown how the RNN methods relate to information from other sensor families, e.g., video images: when radio and camera systems are combined in training a model, the two sensor information streams merge smoothly, even in the event of occlusion of the camera. This yields a more robust and precise localization of multiple people.

    Application: camera signal. We used an RNN method to examine the temporal relationships between events in images. In contrast to previous work, which uses heterogeneous sensor information, this network only uses image information. However, the model uses the image information in such a way that it interprets the images differently: as spatial information, i.e., a single image, and as temporal information, i.e., several images in the input. This splitting implies that individual images can be treated as two fictitious virtual sensor information streams, to recognize results spatially (features) and to better predict them temporally (temporal relationships). Another work uses camera images to localize the camera itself. For this purpose, we built a new processing pipeline that breaks up the video signal over time, learns absolute and relative information in different neural networks, and merges their outputs into an optimal pose in a fusion network.

    Application: EEG signal. In a cooperation project, we applied the researched methods to other sensor data. We recorded beta and gamma waves of the human brain in different emotional states. An RNN trained on this data correctly predicted the emotions of a test person from raw EEG data in 90% of all cases.

    Application: simulator sickness. We have shown how visualization in VR affects human perception and movement anomalies, resp. simulator sickness, and how the neural networks researched here can be used to predict these effects.

    In 2020, we developed a new live demonstrator based on the research results achieved in II.

    Application: gait reconstruction in VR. In 2020, we used the existing CNN-RNN model to predict human movement, namely the gait cycle and gait phases, using sensor data from a head-mounted inertial sensor, in order to visualize a virtual avatar in VR in real time. We showed that the DL model has significantly lower latencies than the state of the art, since it recognizes gait phases earlier and predicts future ones more precisely. However, this comes at the expense of the required computing effort and thus the required hardware.

    The project was successfully completed in 2021. In 2021, as part of a successfully completed dissertation, the essential findings from the course of the project were connected, conclusions were drawn, and numerous research questions were addressed and answered.
    As part of the research project, more than 15 qualification theses and 6 patent families were completed, and more than 20 scientific publications were published. The core contribution of the project is the knowledge of the applicability and pitfalls of recurrent neural networks (RNNs), their different cell types, and their architectures in different application areas. Conclusion: the ability of the RNN family to deal with dynamics in data streams, e.g., failures, delays, and different sequence lengths in time-series data, makes them indispensable in a large number of application areas today.
    The project is being continued within the framework of seminars at FAU and of extracurricular research activities at Fraunhofer IIS as part of the ADA Lovelace Center.
    In 2022, time-series augmentation was investigated. For this purpose, various generative methods, namely variational autoencoders (VAEs) and generative adversarial networks (GANs), were evaluated for their ability to generate time series of different application domains, e.g., features of radio signals such as signal strength and channel impulse response, characteristics of GNSS spectra, and multidimensional signals from inertial sensors. A novel architecture called ARCGAN was proposed, which combines all the known advantages of state-of-the-art methods and can therefore generate significantly more similar (effective) time series than the state of the art.
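
    The following sketch shows the basic VAE mechanism behind such time-series augmentation (a deliberately minimal PyTorch version; the ARCGAN architecture developed in the project is considerably more involved):

    ```python
    import torch
    import torch.nn as nn

    class TimeSeriesVAE(nn.Module):
        """Minimal VAE over fixed-length, flattened time series, meant only
        to illustrate the augmentation idea."""
        def __init__(self, length=128, latent=16):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(length, 64), nn.ReLU())
            self.mu = nn.Linear(64, latent)
            self.logvar = nn.Linear(64, latent)
            self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, length))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparam.
            return self.dec(z), mu, logvar

    def vae_loss(recon, x, mu, logvar):
        rec = ((recon - x) ** 2).sum()                            # reconstruction
        kld = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()  # KL term
        return rec + kld

    # After training, sampling z ~ N(0, I) and decoding yields new,
    # similar time series, e.g., synthetic signal-strength traces.
    vae = TimeSeriesVAE()
    augmented = vae.dec(torch.randn(4, 16))   # 4 new series of length 128
    ```
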
    In 2023, we investigated generative methods based on attention mechanisms, transformer architectures, and GPT with respect to their predictive performance for time series. To this end, we evaluated methods such as Legendre Memory Units (LMU), novel transformer architectures, and TimeGPT to better forecast localization information. We showed that, using appropriate input prompts and calibration, pre-configured GPT models can be adapted to new areas of application, which makes training significantly more efficient and also saves energy.

    In 2024, we are further examining GPT-like models with regard to their uncertainty, explainability, and adaptability. In addition, we are analyzing the feasibility of these generative methods in relation to various fields of application, e.g., forecasting and anomaly detection, anomaly characterization, anomaly localization, and anomaly mitigation.

Current courses

Machine Learning: Advances

Title Machine Learning: Advances
Short text SemML-II
Module frequency winter semester only
Semester hours per week 2

Registration with a topic request by e-mail before the start of the seminar; topics are assigned on a first-come, first-served basis.

This seminar introduces the field of deep learning, one of the most sought-after skills in artificial intelligence. Deep learning methods have, for example, far surpassed all previous benchmarks in the classification of images, text, and speech. Deep learning enables and improves some of the most interesting applications in the world, such as autonomous vehicles, genome research, humanoid robotics, and real-time translation, and it has beaten the world's best human Go players.

The goal of the seminar is a comprehensive introduction to deep learning. Building on machine learning, it explains how deep learning works, when and why it is important, and highlights the essential methods.

The methods covered include: (1) architectures and hyperparameters; (2) multi-layer perceptrons; (3) mixtures of neural networks; (4) deep learning for sequences (hidden Markov models, recurrent neural networks, bidirectional/long short-term memory, gated recurrent units, temporal convolutional networks); (5) deep learning for images (convolutional neural networks); (6) deep/reinforcement learning; (7) Markov processes (Gaussian processes and Bayesian optimization, graphical models and Bayesian networks, Kalman and particle filters); (8) online learning and game theory; (9) unsupervised representation learning and generative methods (generative adversarial networks, variational autoencoders); (10) data augmentation and transfer learning.¹

The seminar provides an insight into the world of deep learning and enables students to prepare a scientific presentation and a written report in order to convey individually acquired knowledge to a specialist audience.

¹ The topics are adapted to the current state of research and change annually.

1. Parallel group

Literature references: - I. Goodfellow, Y. Bengio, and A. C. Courville: Deep Learning, mitp-Verlag, 2015
- R. S. Sutton and A. G. Barto: Reinforcement Learning: An Introduction, MIT Press, 1998
- F. V. Jensen: An Introduction to Bayesian Networks, Springer, 1996
- R. Rojas: Theorie der neuronalen Netze - eine systematische Einführung, Springer, 1993
- J. Schmidhuber: Deep Learning in Neural Networks: An Overview, Neural Networks (journal of the INNS), 2015
- D. Silver et al.: Mastering the Game of Go with Deep Neural Networks and Tree Search, Nature, 2016
- F. Chollet: Deep Learning with Python, Manning Publications, 2017
- A. Müller and S. Guido: Introduction to Machine Learning with Python: A Guide for Data Scientists, O'Reilly UK Ltd., 2016
- T. J. Hastie, R. Tibshirani, and J. H. Friedman: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, 2009


Date and time | Date | Lecturer(s) | Room
Single session, Wed 15:00 - 16:00 | 18.10.2023 | Tobias Feigl | 11302.04.150
Single session, Sat 09:00 - 16:00 | 20.01.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 27.01.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 03.02.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 17.02.2024 | | 11302.04.150

Machine Learning: Introduction

Title Machine Learning: Introduction
Short text SemML-I
Module frequency winter semester only
Semester hours per week 2

Registration with a topic request by e-mail before the start of the seminar; topics are assigned on a first-come, first-served basis.

This seminar introduces the field of machine learning (ML). ML is the science of getting computers to act without being explicitly programmed. ML is so ubiquitous today that we probably use it daily without knowing it. In recent years, for example, ML has enabled self-driving cars, practical image and speech recognition, and effective partner and web search.

The goal of the seminar is a comprehensive introduction to machine learning, the analysis and processing of data, and statistical pattern recognition. Topics include: (1) classification and regression problems; (2) supervised learning (parametric and non-parametric algorithms, linear and logistic regression, k-nearest neighbors, support vector machines, decision trees, shallow neural networks); (3) unsupervised learning (k-means, clustering, dimensionality reduction, PCA, LDA, recommender systems); (4) ensemble and online learning; (5) regularization: model diagnostics, error analysis, and quality metrics, as well as interpretation of the results; (6) evolutionary algorithms; (7) anomaly detection and Gaussian distributions; (8) Bayes, Kalman filters, and Gaussian processes.¹

The seminar provides an insight into the world of machine learning and enables students to prepare a scientific presentation and a written report in order to convey individually acquired knowledge to a specialist audience.

¹ The topics are adapted to the current state of research and change annually.

1. Parallel group

Literature references: - A. Müller and S. Guido: Introduction to Machine Learning with Python: A Guide for Data Scientists, O'Reilly UK Ltd., 2016
- K. P. Murphy: Machine Learning - A Probabilistic Perspective, Adaptive Computation and Machine Learning series, MIT Press, 2012
- T. J. Hastie, R. Tibshirani, and J. H. Friedman: The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Springer Series in Statistics, 2009
- T. M. Mitchell: Machine Learning, McGraw-Hill Education Ltd., 1997
- F. V. Jensen: An Introduction to Bayesian Networks, Springer, 1996
- J. A. Freeman: Simulating Neural Networks - with Mathematica, Addison-Wesley Professional, 1993
- J. A. Hertz, A. Krogh, and R. G. Palmer: Introduction to the Theory of Neural Computation, Westview Press, 1991
- R. Rojas: Theorie der neuronalen Netze - eine systematische Einführung, Springer, 1993
- W. Banzhaf, F. D. Francone, R. E. Keller, and P. Nordin: Genetic Programming - An Introduction: On the Automatic Evolution of Computer Programs and Its Applications, Morgan Kaufmann, 1998
- M. Mitchell: An Introduction to Genetic Algorithms, MIT Press, 1996
- Z. Michalewicz: Genetic Algorithms + Data Structures = Evolution Programs, Springer, 1992
- C. M. Bishop: Pattern Recognition and Machine Learning (Information Science and Statistics), Springer, 2006


Date and time | Date | Lecturer(s) | Room
Single session, Wed 15:00 - 16:00 | 18.10.2023 | Tobias Feigl | 11302.04.150
Single session, Sat 09:00 - 16:00 | 20.01.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 27.01.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 03.02.2024 | | 11302.04.150
Single session, Sat 09:00 - 16:00 | 17.02.2024 | | 11302.04.150

Publications

Supervised theses

Sorted alphabetically in UnivIS

Patents