A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
About me
This is a page not in the main menu.
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
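For reference, the setting is a single top-level key in the site's configuration file. A minimal sketch, assuming a standard Jekyll-style config.yml as named above:

# config.yml
future: false   # posts dated in the future are not built or published

Setting it back to true restores the default behavior described above, where future-dated posts are shown immediately.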
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Published in DAGM GCPR 2023, 2023
Implicit generative models have gained significant popularity for modeling 3D data and have recently proven to be successful in generating high-quality 3D shapes. However, existing research predominantly concentrates on generating the outer shells of 3D shapes, ignoring the representation of internal details. In this work, we alleviate this limitation by presenting an implicit generative model that facilitates the generation of complex 3D shapes with rich internal geometric details. Our proposed model utilizes unsigned distance fields, enabling the representation of nested 3D shapes by learning from watertight and non-watertight data. Furthermore, we employ a transformer-based auto-regressive model for shape generation that leverages context-rich tokens from vector-quantized shape embeddings. The generated tokens are decoded into unsigned distance field values, which are then rendered into novel 3D shapes exhibiting intrinsic details. We demonstrate that our model achieves state-of-the-art point cloud generation results on the popular ShapeNet classes ‘Cars’, ‘Planes’, and ‘Chairs’. Further, we curate a dataset that exclusively comprises shapes with realistic internal details from the ‘Cars’ class of ShapeNet, denoted FullCars. This dataset allows us to demonstrate our method’s efficacy in generating shapes with rich internal geometry.
Download here
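A rough illustration of the unsigned distance fields the abstract relies on: a UDF stores only the distance to the nearest surface point, with no inside/outside sign, which is why non-watertight and nested surfaces remain representable. The sketch below (NumPy, nested spheres as toy data) is illustrative only, not the paper's learned model or decoder.

import numpy as np

def unsigned_distance(queries, surface_points):
    # Distance from each query to its nearest surface sample; no sign
    # is assigned, so open and nested surfaces (e.g., a car body
    # enclosing its seats) can be represented.
    diffs = queries[:, None, :] - surface_points[None, :, :]
    return np.linalg.norm(diffs, axis=-1).min(axis=1)

# Toy data: points sampled on two nested spheres (radii 1.0 and 0.5)
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
surface = np.concatenate([1.0 * dirs[:1000], 0.5 * dirs[1000:]])

queries = np.array([[0.0, 0.0, 0.0], [0.75, 0.0, 0.0]])
print(unsigned_distance(queries, surface))  # approx. [0.5, 0.25]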
Published in ICCV 2023 Workshop, Paris, 2023
While being very successful in solving many downstream tasks, the application of deep neural networks is limited in real-life scenarios because of their susceptibility to domain shifts such as common corruptions and adversarial attacks. The existence of adversarial examples and data corruption significantly reduces the performance of deep classification models. Researchers have made strides in developing robust neural architectures to bolster the decisions of deep classifiers. However, most of these works rely on effective adversarial training methods and predominantly focus on overall model robustness, disregarding class-wise differences in robustness, which are critical. Exploiting weakly robust classes is a potential avenue for attackers to fool image recognition models. Therefore, this study investigates class-to-class biases across adversarially trained robust classification models to understand their latent space structures and analyze their strong and weak class-wise properties. We further assess the robustness of classes against common corruptions and adversarial attacks, recognizing that class vulnerability extends beyond the number of correct classifications for a specific class. We find that the number of false positives a class attracts as a specific target class significantly impacts its vulnerability to attacks. Through our analysis of the Class False Positive Score, we provide a fair evaluation of how susceptible each class is to misclassification.
Download here
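The Class False Positive Score mentioned above can be pictured with a short sketch. The counting and normalization below are an assumed, plausible formulation for illustration, not necessarily the paper's exact metric.

import numpy as np

def class_false_positives(y_true, y_pred, num_classes):
    # For each class c, count how often c is predicted while the true
    # label differs; classes that attract many false positives are,
    # per the abstract, easier targets for attackers.
    fp = np.zeros(num_classes, dtype=int)
    for t, p in zip(y_true, y_pred):
        if t != p:
            fp[p] += 1
    return fp

y_true = np.array([0, 0, 1, 2, 2, 2])
y_pred = np.array([0, 2, 2, 2, 0, 2])
fp = class_false_positives(y_true, y_pred, num_classes=3)
print(fp / max(1, int((y_true != y_pred).sum())))  # share of all errors absorbed by each class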
Published in WACV 2025, 2025
Deep neural networks are susceptible to adversarial attacks and common corruptions, which undermine their robustness. In order to enhance model resilience against such challenges, Adversarial Training (AT) has emerged as a prominent solution. Nevertheless, adversarial robustness is often attained at the expense of model fairness during AT, i.e., disparity in class-wise robustness of the model. While distinctive classes become more robust towards such adversaries, hard-to-detect classes suffer. Recently, research has focused on improving model fairness specifically for perturbed images, overlooking the accuracy of the most likely non-perturbed data. Additionally, despite their robustness against the adversaries encountered during model training, state-of-the-art adversarially trained models have difficulty maintaining robustness and fairness when confronted with diverse adversarial threats or common corruptions. In this work, we address the above concerns by introducing a novel approach called Fair Targeted Adversarial Training (FAIR-TAT). We show that using targeted adversarial attacks for adversarial training (instead of untargeted attacks) can allow for more favorable trade-offs with respect to adversarial fairness. Empirical results validate the efficacy of our approach.
Download here
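To make the targeted-versus-untargeted distinction concrete, below is a minimal single-step PGD-style sketch in PyTorch. The step form, the epsilon-ball projection, and the convention of passing target labels are illustrative assumptions, not the FAIR-TAT training recipe itself.

import torch
import torch.nn.functional as F

def pgd_step(model, x, labels, eps, alpha, targeted=False):
    # Untargeted: ascend the loss on the true labels (move away from them).
    # Targeted: descend the loss on chosen target labels (move toward them),
    # the kind of attack FAIR-TAT trains against per the abstract.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), labels)
    grad = torch.autograd.grad(loss, x_adv)[0]
    step = (-alpha if targeted else alpha) * grad.sign()
    x_adv = x_adv.detach() + step
    # Project back into the eps-ball around the clean input x.
    return x + torch.clamp(x_adv - x, -eps, eps)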
Talk available online.
Master Thesis: Venkata Sai Tarak Padarthi, Mechatronics, University of Siegen, 2023
Master Student: Abdul-Karym Ismail, Computer Science, University of Siegen, 2024
Academic Experience, University of Mannheim & University of Siegen, 2025