
OPEN FAU

Online publication system of Friedrich-Alexander-Universität Erlangen-Nürnberg

The online publication system OPEN FAU is the central publication platform for Open Access publishing for all members of Friedrich-Alexander-Universität. Qualified works from research and teaching may be published here free of charge, either as a primary or secondary publication. The full texts are permanently available worldwide and are findable and citable via catalogues and search engines.


To search for documents in OPEN FAU, please select "Search" (via the magnifying glass at the top right); this will provide you with various search options. If you want to publish a document, go to "Login" and "My Publications". Then drag your document into the field provided and enter the metadata. In just a few steps, you can submit your document. Please note our guidelines, the publication contract, and the FAQs.


Recent Submissions

Doctoral thesis
Open Access
Evidential Relevance and Expressiveness of Digital Traces: An Investigative Perspective
(2024) Gruber, Jan; Freiling, Felix
In this day and age, almost every criminal investigation involves pieces of digital evidence. Given the wealth of digital data stored on both end-user devices and cloud infrastructure, a tremendous challenge for investigators and prosecutors is to determine the pieces relevant to solving the case; however, given an investigative question, there exists no straightforward method to find "sufficient digital evidence" to do so. The present thesis therefore sets out to improve the understanding and interpretation of digital traces for criminal investigations on a foundational level. As a unifying result, we propose the Cyber-traceological Model, which provides a general way to translate investigative hypotheses into relevant traces - in both idealized and real-world scenarios. The model is grounded in formal definitions of when traces are generally relevant and how they can be expressive on a conceptual level. Building on these concepts, we define an investigative knowledge base in a precise manner. For digital systems, we then show how relevance can be determined to fill the knowledge base by calculating necessary and sufficient evidence in state machine representations. From these concepts, we derive rigorous notions of different classes of reconstructability that investigators can use to uncover and comprehend past events. To demonstrate practical feasibility, we express the concepts of necessity and sufficiency of digital traces in temporal logic and employ a model checker to calculate traces of these classes from a model of the system under investigation. Since this requires a representation of the system under investigation as a transition system, which is often hard to obtain in real-world scenarios, we additionally investigate ways of collecting, representing, and using phenomenon-specific knowledge of criminal phenomena to establish a notion of evidential relevance from a more holistic and realistic perspective. Using cognitive maps as a particular form of expressing node-link relationships, we show how this phenomenon-specific knowledge can build a bridge from abstract process models to case-specific concretizations by constituting a meso-level abstraction that supports the quest to find relevant traces more pragmatically. We illustrate the construction of an instance of such a phenomenon-specific knowledge base and its applicability using the example of botnet crime. Lastly, we study how the expressiveness of digital traces can be hampered by undetected contamination effects. Here, we provide a novel, universal definition of evidence contamination - applicable to both physical and digital evidence - and substantiate and validate the proposed definition by presenting examples, counterexamples, and edge cases of contamination of digital evidence, laying the groundwork for future research on the understanding of contamination. In essence, the results of this dissertation are aggregated in the proposed Cyber-traceological Model, which systematically sketches out how to translate case-related hypotheses into relevant traces. It spans the arc from abstract considerations to concrete investigative work, hinting at the potential to solidify practical application with insights gained from theoretical considerations of fundamental attributes of digital evidence.
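To make the abstract's notions of necessary and sufficient evidence concrete, here is a minimal sketch in Python that classifies traces over a toy transition system by exhaustively enumerating bounded runs. The states, actions, trace artifacts, and hypothesis below are invented for illustration; the thesis's temporal-logic and model-checking machinery is not reproduced.

```python
# Toy system: each action leaves an observable trace artifact (or None).
TRANSITIONS = {
    "idle":        [("login", "shell"), ("noop", "idle")],
    "shell":       [("copy_file", "staged"), ("logout", "idle")],
    "staged":      [("upload", "exfiltrated"), ("delete", "shell")],
    "exfiltrated": [("logout", "idle")],
}
TRACE_OF = {"login": "auth.log", "copy_file": "mft_entry",
            "upload": "netflow", "delete": None,
            "logout": "auth.log", "noop": None}

def runs(state, depth):
    """Enumerate all action sequences of exactly `depth` steps from `state`."""
    if depth == 0:
        yield []
        return
    for action, nxt in TRANSITIONS[state]:
        for rest in runs(nxt, depth - 1):
            yield [action] + rest

def classify(trace, hypothesis, depth=4, start="idle"):
    """Decide whether `trace` is necessary and/or sufficient evidence
    for `hypothesis` over all bounded runs of the toy system."""
    necessary = sufficient = True
    for run in runs(start, depth):
        left = {TRACE_OF[a] for a in run} - {None}
        holds = hypothesis(run)
        if holds and trace not in left:
            necessary = False    # hypothesis can occur without the trace
        if trace in left and not holds:
            sufficient = False   # trace can occur without the hypothesis
    return necessary, sufficient

exfil = lambda run: "upload" in run   # hypothesis: data was exfiltrated
for t in ["netflow", "mft_entry", "auth.log"]:
    print(t, classify(t, exfil))
```

In this toy model, "netflow" comes out both necessary and sufficient for the exfiltration hypothesis, while "mft_entry" and "auth.log" are necessary but not sufficient, mirroring the distinction the abstract draws.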
Doctoral thesis
Open Access
Maschinelles Lernen zur automatisierten Klassifikation von pädiatrischen Anämien
(2024-04-12) Niemeier, Philipp Alexander; Zierk, Jakob
Objectives: Anemia is a common laboratory finding in children, caused by a multitude of common to rare and minor to life-threatening diseases. Current diagnostic tests enable early and accurate recognition of the underlying diagnosis, but their use varies with physician experience and patient setting. The age- and sex-dependent dynamics of the blood count additionally complicate the diagnosis in childhood and call for support of pediatric decision-making. We investigated whether machine-learning algorithms for the automated interpretation of pediatric blood counts can support the differential diagnosis of anemia.
Design & Methods: A comprehensive database of pediatric blood counts from different German pediatric departments (14,604 healthy children [Erlangen] and 926 pathological samples: 45.9% iron deficiency anemia [Erlangen], 9.3% Diamond-Blackfan anemia [Freiburg], 28.7% myelodysplastic syndrome or severe aplastic anemia [Freiburg], 9.0% vitamin B12 deficiency [Erlangen], and 7.1% heterozygous β-thalassemia [Berlin]) was used to develop and validate different machine learning models. We used Python and established machine learning algorithms from the scikit-learn library to implement classification methods for the specified diseases.
Observations & Results: The best-performing models predicted the diagnoses in the validation dataset with an accuracy of 96% (95.0–97.0%) for classification as either "normal" or "abnormal" and 84% (78.0–90.0%) for classification as "healthy" or a specific diagnosis, with a balanced accuracy of 84.0% (78.0–90.0%). Even rare diseases (e.g., Diamond-Blackfan anemia) were correctly classified with an accuracy of 96.0% (95.8–96.2%). Compared to a naive clinical diagnostic pathway, the machine learning algorithms achieved a higher classification accuracy and a higher balanced accuracy.
Conclusions: We implemented different classification methods for pediatric anemias that achieve very good classification accuracy using only frequently performed laboratory tests. This provides a proof-of-concept application of machine learning algorithms to support the diagnosis of complex hematological diseases in children.
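As an illustration of the kind of pipeline the abstract describes, the following minimal sketch trains a scikit-learn classifier on synthetic stand-in data and reports accuracy and balanced accuracy. The feature stand-ins, class labels, class proportions, and model choice are assumptions for illustration, not the thesis's actual setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-ins for blood-count features (e.g., hemoglobin, MCV, RDW, age, sex).
X = rng.normal(size=(1000, 5))
# Imbalanced labels mimic "healthy" vs. several rarer anemia diagnoses.
y = rng.choice(["healthy", "iron_def", "dba", "mds", "b12_def"],
               p=[0.8, 0.1, 0.03, 0.04, 0.03], size=1000)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, stratify=y, random_state=0)

# class_weight="balanced" counteracts the class imbalance that is
# typical when rare diseases are among the diagnoses.
clf = RandomForestClassifier(class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_val)

print("accuracy:", accuracy_score(y_val, pred))
print("balanced accuracy:", balanced_accuracy_score(y_val, pred))
```

Balanced accuracy averages per-class recall, which is why the abstract reports it alongside plain accuracy: with 80% healthy samples, a trivial "always healthy" model would already reach 80% accuracy but only 20% balanced accuracy here.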
Doctoral thesis
Open Access
Untersuchungen von Unsicherheiten mittels Fuzzy-Zahlen unter Verwendung von Metamodellierung
Schriftenreihe Technische Mechanik: 53 (2024) Oberleiter, Thomas; Willner, Kai
Uncertainties in the computation of mechanical systems stem, among other things, from the deviations that inevitably occur in manufacturing, which must be accounted for by specifying manufacturing tolerances. Considering such uncertainties is now common practice for improving simulations of mechanical systems, but it initially increases the computational effort, so suitable approaches for the numerical treatment are required. The present work is embedded in a research group on interdisciplinary tolerance management and deals specifically with uncertainty analysis using so-called fuzzy uncertainty, in which parameters and nominal dimensions of a component are defined as fuzzy numbers. Since fuzzy uncertainty also requires a large number of system evaluations, it is often only indirectly applicable. Surrogate models are therefore needed for complex systems; in this work, they take the form of metamodels. Within interdisciplinary tolerance management, complex process simulations, which constitute a kind of "black box", and elaborate real-world experiments must be merged into a common virtual view. The currently most common approach, neural networks, can hardly play to its strengths here due to limited training data, which is why other metamodels are used; more suitable approaches are the Kriging method and radial basis functions. The interplay between fuzzy uncertainty and these metamodels is of particular interest and offers great potential for scientific investigation. The focus of this work is therefore the optimization of metamodels under uncertainty analysis with fuzzy numbers, considered both in isolation and as a combination of several metamodels. To this end, the present work introduces several approaches: i) the Fuzzy Orientated Sampling Shift (FOSS) method, which integrates fuzzy uncertainty into the construction of the metamodels as early as the design of experiments; ii) the direct evaluation and interpretation of results via fuzzy numbers using the so-called O-index; and iii) the direct evaluation and interpretation of results via fuzzy numbers using a fuzzy-based sensitivity analysis. These approaches provide a practicable way of constructing metamodels, can also be applied in complex chains of metamodels, and thus open up optimization potential in complex metamodel structures as well. The approaches are demonstrated on analytically solvable test functions and finally applied, in a practice-oriented manner, to the product life cycle of gears as a concrete example from interdisciplinary tolerance management.
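The basic mechanism of propagating a fuzzy number through a metamodel can be sketched with alpha-cuts: each membership level defines an input interval, and the surrogate's range over that interval gives the output interval. Below is a minimal sketch using SciPy's RBFInterpolator as a stand-in surrogate; the test function, design of experiments, and triangular fuzzy number are invented, and FOSS and the O-index are not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

f = lambda x: np.sin(3 * x) + 0.5 * x          # "expensive" model (toy stand-in)
x_train = np.linspace(0.0, 2.0, 15)[:, None]   # design of experiments
surrogate = RBFInterpolator(x_train, f(x_train[:, 0]))

# Triangular fuzzy number for the input: (left, peak, right).
left, peak, right = 0.4, 1.0, 1.6
for alpha in (0.0, 0.5, 1.0):
    lo = left + alpha * (peak - left)           # alpha-cut interval of the input
    hi = right - alpha * (right - peak)
    grid = np.linspace(lo, hi, 200)[:, None]
    y = surrogate(grid)                         # many cheap surrogate evaluations
    print(f"alpha={alpha:.1f}: output interval "
          f"[{y.min():.3f}, {y.max():.3f}]")
```

The grid search over each alpha-cut is exactly where the many system evaluations mentioned above arise, which is why they are delegated to a cheap metamodel rather than the original model.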
Doctoral thesis
Open Access
Integrated Multi-GNSS Receivers
(2024-04-12) Rügamer, Alexander; Thielecke, Jörn
The challenges of multi-band, multi-system (i.e., multi-) GNSS reception are higher bandwidths, higher sampling rates, and often multiple reception chains. All of these lead to an increase in complexity, size, and power consumption, especially for integrated radio frequency (RF) front-ends. This thesis aims to provide design procedures for lower-complexity and less power-consuming integrated multi-GNSS receivers and addresses the following research questions: How do RF front-end properties influence multi-GNSS reception and processing? How can different integrated multi-GNSS receiver implementations be compared and benchmarked? How can a multi-GNSS receiver be designed more efficiently than with conventional architectures? First, it is analyzed how RF front-end characteristics affect multi-GNSS reception and processing. Then, the research question of how to compare and benchmark different integrated multi-GNSS receiver implementations is addressed. Figures of merit (FOMs) are introduced that are successfully applied to benchmark the state of the art in integrated multi-GNSS front-ends. The second main contribution is the introduction of a novel receiver “overlay architecture” that allows simultaneous optimization of the FOMs. Since conventional receiver architectures do not allow such optimization, the new “overlay architecture” contributes to the research question of how to design multi-GNSS receivers more efficiently, taking into account and overcoming the challenges of multi-band, multi-system reception. The third main contribution is the theoretical and practical analysis of the new “overlay architecture”, in particular, the overlay loss it generates. It can be modeled using the Spectral Separation Coefficient (SSC) theory. An optimal, analytical solution of a path control that mitigates the overlay-introduced noise loss for a dual-frequency ionospheric-free linear combination is derived. This is validated by measurements in the signal, range, and positioning domains. Finally, an exemplary implementation of a multi-GNSS front-end using this “overlay architecture” is presented, analyzed, and compared to the state of the art in integrated multi-GNSS front-ends using the previously defined FOMs, to demonstrate the potential benefits of this new architecture.
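The dual-frequency ionosphere-free linear combination referenced above is a standard GNSS textbook construction; the following minimal sketch shows it for GPS L1/L5 pseudoranges, with invented example numbers. The thesis's path-control derivation for the overlay architecture is not reproduced here.

```python
# Standard dual-frequency ionosphere-free pseudorange combination.
F_L1, F_L5 = 1575.42e6, 1176.45e6  # GPS carrier frequencies in Hz

def iono_free(p1, p5, f1=F_L1, f2=F_L5):
    """Combine two pseudoranges so the first-order ionospheric delay cancels.
    The combination amplifies measurement noise (by ~2.6x for L1/L5),
    which is why any additional overlay-induced noise loss matters."""
    g = (f1**2) / (f1**2 - f2**2)
    return g * p1 - (g - 1.0) * p5

# Example: one true range, ionospheric delay scaling with 1/f^2.
true_range, iono_l1 = 22_000_000.0, 5.0        # meters
iono_l5 = iono_l1 * (F_L1 / F_L5) ** 2
print(iono_free(true_range + iono_l1, true_range + iono_l5))  # ~true_range
```

Running this recovers the true range to numerical precision, since the frequency-dependent delays cancel exactly in the combination.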
Doctoral thesis
Open Access
Simulation and Intermediate Representations for Visual Inspection
(2024) Penk, Dominik; Stamminger, Marc
The extraction of information from visual data is widely used in academic and industrial applications. These visual inspection tasks often require significant manual effort, even when performed by experienced users. Consequently, (semi-)automated solutions are highly sought after, as they reduce the potential for human error and save time. In this work, we present solutions to specific tasks that exhibit a higher degree of automation than the state of the art. At the same time, we aim to require as little domain expertise as possible for the productive use of our solutions. In the first example, we use interactive volume rendering to visualize the preprocessing and filtering of large, homogeneous atom point clouds. Our approach allows inexperienced users to quickly examine the point clouds and identify the atom types that exhibit interesting structures. Additionally, we demonstrate how this preprocessing can be used to extract complex surface structures fully automatically from the point cloud. Furthermore, we present an efficient and non-destructive method for reconstructing specular surfaces that can be integrated directly into a production line, which makes our solution particularly relevant for industrial applications. In addition to classical approaches, this work also investigates two learning-based applications. Here, the main part of the manual labor is expected to lie in collecting and annotating training data rather than in the productive use of the models. Therefore, we introduce two processes for generating synthetic training data that can be used instead of real data, requiring only a CAD model of the target objects. We show that the quality loss caused by the inevitable simulation-to-reality gap can be minimized if the network is provided with the right type of data. For example, we show that a 6D pose estimator that works with depth maps can easily be trained on synthetic data. Finally, we present a pipeline where color images are first transformed into an abstract line representation, and we show that various image-based tasks can be trained and solved on this representation with synthetic data. Overall, this dissertation contributes to the field of visual inspection by introducing new methods that are more automated, less reliant on expert knowledge, and time-saving compared to existing solutions.
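To illustrate the idea of an abstract line representation that decouples a network from color and texture (and thus narrows the simulation-to-reality gap), here is a minimal sketch using a plain Canny edge map as a stand-in; the thesis's actual intermediate representation is presumably more elaborate, and the file names are hypothetical.

```python
import cv2

img = cv2.imread("part.png")                   # hypothetical input image path
assert img is not None, "image not found"
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress fine texture first
edges = cv2.Canny(gray, threshold1=50, threshold2=150)
cv2.imwrite("part_edges.png", edges)           # abstract line/edge map
```

Because a rendered CAD model and a real photograph of the same part produce very similar edge maps, a network trained on such representations of synthetic images can transfer to real ones more readily than one trained on raw color images.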