Workshop
Topological, Algebraic, and Geometric Pattern Recognition with Applications
Tegan Emerson · Henry Kvinge · Timothy Doster · Alexander Cloninger · Bastian Rieck · Sarah Tymochko
East 16
Keywords: Geometry-Based Learning
Schedule
Sun 8:45 a.m. - 9:00 a.m.
Opening Remarks
Tegan Emerson · Timothy Doster
Sun 9:00 a.m. - 10:00 a.m.
A Survey of Topological Neural Networks (Keynote)
Topological Neural Networks (TNNs) are deep learning architectures that process signals defined on topological domains, such as hypergraphs and cellular complexes, hence generalizing Graph Neural Networks. The additional flexibility and expressivity of TNN architectures permit the representation and processing of complex natural systems such as proteins, neural activity, and many-body physical systems. This talk synthesizes the recent TNN literature using a single unifying notation and graphical summaries and sheds light on existing challenges and exciting opportunities for future development.
Nina Miolane
Sun 10:00 a.m. - 10:45 a.m.
Coffee Break
Sun 10:45 a.m. - 11:00 a.m.
Topology Preserving Compositionality for Robust Medical Image Segmentation (Spotlight)
Deep learning based segmentation models for medical imaging often fail under subtle distribution shifts, calling into question the robustness of these models. Medical images, however, have the unique feature that there is limited structural variability between patients. We propose to exploit this notion and improve the robustness of deep learning based segmentation models by constraining the latent space to a learnt dictionary of base components. We incorporate a topological prior using persistent homology in the sampling of our dictionary to ensure topological accuracy after composition of the components. We further improve robustness through deep topological supervision applied in a hierarchical manner. We demonstrate the effectiveness of our method under various perturbations and in two single domain generalisation tasks.
Ainkaran Santhirasekaram · Mathias Winkler · Andrea Rockall · Ben Glocker
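The topological prior above hinges on reading off the topology of masks with persistent homology. As a rough illustration of that ingredient only (not the authors' dictionary-sampling pipeline), the sketch below counts connected components and loops of binary masks via a cubical filtration; it assumes the gudhi package is installed, and the two masks are invented examples.

```python
import numpy as np
import gudhi

def foreground_betti(mask: np.ndarray, dim: int) -> int:
    """Number of dim-dimensional features (0: components, 1: loops) of the
    foreground of a 2D binary mask, via a sublevel cubical filtration that
    assigns foreground pixels value 0 and background pixels value 1."""
    filtration = np.where(mask > 0, 0.0, 1.0)
    cc = gudhi.CubicalComplex(top_dimensional_cells=filtration)
    cc.compute_persistence()
    intervals = cc.persistence_intervals_in_dimension(dim)
    # Foreground features are born at 0 and die only once the background
    # (value 1) is added (or never, for essential features).
    return int(sum(1 for birth, death in intervals if birth <= 0 and death > 0.5))

# Hypothetical masks: a ring-shaped ground truth vs. a filled-in prediction.
gt = np.zeros((64, 64)); gt[16:48, 16:48] = 1; gt[24:40, 24:40] = 0
pred = np.zeros((64, 64)); pred[16:48, 16:48] = 1
for m, name in [(gt, "ground truth"), (pred, "prediction")]:
    print(name, [foreground_betti(m, d) for d in (0, 1)])   # [components, loops]: [1, 1] vs [1, 0]
```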
Sun 11:00 a.m. - 11:15 a.m.
Hamming Similarity and Graph Laplacians for Class Partitioning and Adversarial Image Detection (Spotlight)
Researchers typically investigate neural network representations by examining activation outputs for one or more layers of a network. Here, we investigate the potential for ReLU activation patterns (encoded as bit vectors) to aid in understanding and interpreting the behavior of neural networks. We utilize Representational Dissimilarity Matrices (RDMs) to investigate the coherence of data within the embedding spaces of a deep neural network. From each layer of a network, we extract and utilize bit vectors to construct similarity scores between images. From these similarity scores, we build a similarity matrix for a collection of images drawn from two classes. We then apply Fiedler partitioning to the associated Laplacian matrix to separate the classes. Our results indicate, through bit vector representations, that the network continues to refine class detectability, with the last ReLU layer achieving better than 95% separation accuracy. Additionally, we demonstrate that bit vectors aid in adversarial image detection, again achieving over 95% accuracy in separating adversarial and non-adversarial images using a simple classifier.
Huma Jamil · Yajing Liu · Turgay Caglar · Christina Cole · Nathaniel Blanchard · Christopher Peterson · Michael Kirby
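A minimal, self-contained sketch of the partitioning step described above (toy data and a made-up random layer, not the authors' networks): encode ReLU firing patterns as bit vectors, form a Hamming-based similarity matrix, and split the samples by the sign of the Fiedler vector of the graph Laplacian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "layer activations" for two classes of inputs (hypothetical data).
n_per_class, dim = 50, 128
class0 = rng.normal(loc=-1.0, size=(n_per_class, dim))
class1 = rng.normal(loc=+1.0, size=(n_per_class, dim))
W = rng.normal(size=(dim, 64))                    # stand-in hidden layer
pre_act = np.vstack([class0, class1]) @ W

# Bit vectors: which ReLU units fire for each input.
bits = (pre_act > 0).astype(np.int8)

# Hamming similarity: fraction of ReLU units in the same on/off state.
hamming_dist = np.count_nonzero(bits[:, None, :] != bits[None, :, :], axis=-1)
S = 1.0 - hamming_dist / bits.shape[1]

# Unnormalized graph Laplacian and its Fiedler vector.
L = np.diag(S.sum(axis=1)) - S
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                           # second-smallest eigenvalue

# The sign of the Fiedler vector partitions the samples into two groups.
labels = np.array([0] * n_per_class + [1] * n_per_class)
pred = (fiedler > 0).astype(int)
acc = max((pred == labels).mean(), (1 - pred == labels).mean())
print(f"Fiedler partition accuracy: {acc:.2f}")
```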
Sun 11:15 a.m. - 11:30 a.m.
TopFusion: Using Topological Feature Space for Fusion and Imputation in Multi-Modal Data (Spotlight)
We present a novel multi-modal data fusion technique using topological features. The method, TopFusion, leverages the flexibility of topological data analysis tools (namely persistent homology and persistence images) to map multi-modal datasets into a common feature space by forming a new multi-channel persistence image. Each channel in the image is representative of a view of the data from a modality-dependent filtration. We demonstrate that the topological perspective we take allows for more effective data reconstruction, i.e. imputation. In particular, by performing imputation in topological feature space we are able to outperform the same imputation techniques applied to raw data or alternatively derived features. We show that TopFusion representations can be used as input to downstream deep learning-based computer vision models and doing so achieves comparable performance to other fusion methods for classification on two multi-modal datasets.
Tegan Emerson · Audun Myers · Henry Kvinge
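The central data structure here is a multi-channel persistence image with one channel per modality. The sketch below rasterizes two toy persistence diagrams with a persistence-weighted Gaussian kernel and stacks them into a CNN-ready tensor; the diagrams, grid, and bandwidth are invented for illustration and this is not the TopFusion code.

```python
import numpy as np

def persistence_image(diagram, grid_size=32, extent=(0.0, 1.0), sigma=0.05):
    """Rasterize a persistence diagram [(birth, death), ...] into an image.

    Each point is mapped to (birth, persistence) and contributes a Gaussian
    bump weighted by its persistence, following the usual persistence-image
    construction.
    """
    lo, hi = extent
    xs = np.linspace(lo, hi, grid_size)
    ys = np.linspace(lo, hi, grid_size)
    X, Y = np.meshgrid(xs, ys, indexing="ij")      # X: birth axis, Y: persistence axis
    img = np.zeros((grid_size, grid_size))
    for birth, death in diagram:
        pers = death - birth
        bump = np.exp(-((X - birth) ** 2 + (Y - pers) ** 2) / (2 * sigma**2))
        img += pers * bump
    return img

# Hypothetical diagrams from two modalities of the same sample.
diagram_rgb   = [(0.05, 0.60), (0.10, 0.25)]
diagram_depth = [(0.02, 0.40), (0.30, 0.90)]

channels = [persistence_image(d) for d in (diagram_rgb, diagram_depth)]
fused = np.stack(channels, axis=0)                 # shape (2, 32, 32), CNN-ready
print(fused.shape)
```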
Sun 11:30 a.m. - 11:45 a.m.
Quantifying Extrinsic Curvature in Neural Manifolds (Spotlight)
The neural manifold hypothesis postulates that the activity of a neural population forms a low-dimensional manifold whose structure reflects that of the encoded task variables. In this work, we combine topological deep generative models and extrinsic Riemannian geometry to introduce a novel approach for studying the structure of neural manifolds. This approach (i) computes an explicit parameterization of the manifolds and (ii) estimates their local extrinsic curvature, hence quantifying their shape within the neural state space. Importantly, we prove that our methodology is invariant with respect to transformations that do not bear meaningful neuroscience information, such as permutation of the order in which neurons are recorded. We show empirically that we correctly estimate the geometry of synthetic manifolds generated from smooth deformations of circles, spheres, and tori, using realistic noise levels. We additionally validate our methodology on simulated and real neural data, and show that we recover geometric structure known to exist in hippocampal place cells. We expect this approach to open new avenues of inquiry into geometric neural correlates of perception and behavior, while providing a new means to compare representations in biological and artificial neural systems.
Francisco Acosta · Sophia Sanborn · Khanh Dao Duc · Manu Madhav · Nina Miolane
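To make "extrinsic curvature" concrete, the sketch below estimates pointwise curvature of a densely sampled curve embedded in a higher-dimensional state space with plain finite differences, and checks it on a circle of radius 2 (expected curvature 0.5). This is a generic numerical estimate, not the paper's topological generative model or its invariance guarantees.

```python
import numpy as np

def extrinsic_curvature(points: np.ndarray) -> np.ndarray:
    """Pointwise curvature of a curve sampled as an (n_samples, dim) array.

    Uses the general formula kappa = sqrt(|r'|^2 |r''|^2 - (r'.r'')^2) / |r'|^3
    with finite-difference derivatives along the sampling parameter.
    """
    d1 = np.gradient(points, axis=0)
    d2 = np.gradient(d1, axis=0)
    n1 = np.einsum("ij,ij->i", d1, d1)
    n2 = np.einsum("ij,ij->i", d2, d2)
    dot = np.einsum("ij,ij->i", d1, d2)
    return np.sqrt(np.clip(n1 * n2 - dot**2, 0.0, None)) / n1**1.5

# A circle of radius 2 embedded (with a random rotation) in a 10-d state space.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
circle = 2.0 * np.stack([np.cos(t), np.sin(t)], axis=1)          # (2000, 2)
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))
embedded = circle @ Q[:2, :]                                      # isometric embedding into R^10

kappa = extrinsic_curvature(embedded)
print(kappa.mean())   # should be close to 1 / 2 = 0.5
```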
Sun 11:45 a.m. - 1:30 p.m.
Lunch
Sun 1:30 p.m. - 2:30 p.m.
Recognizing Rigid Patterns of Unlabeled Point Clouds (Keynote)
Rigid structures such as cars or any other solid objects are often represented by finite clouds of unlabeled points. The most natural equivalence on these point clouds is rigid motion or isometry, which maintains all inter-point distances. Rigid patterns of point clouds can be fully identified only by complete isometry invariants (also called equivariant descriptors) that should have no false negatives (isometric clouds having different descriptions) and no false positives (non-isometric clouds with the same description). Noise in data motivates a search for invariants that are continuous under perturbations of points in a suitable metric. We propose a continuous and complete invariant for finite clouds of unlabeled points in any Euclidean space. For a fixed dimension, a new metric for this invariant is computable in polynomial time in the number of points. The talk is based on the CVPR 2023 paper with Daniel Widdowson.
Vitaliy Kurlin
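For vocabulary only: the sketch below computes a classical isometry invariant of an unlabeled cloud, the sorted list of all pairwise distances. It is unchanged by rotations, translations, reflections, and relabeling of points, but it is not the complete, continuous invariant of the talk; rare non-isometric clouds can share it, which is exactly the false-positive failure mode complete invariants rule out.

```python
import numpy as np
from scipy.spatial.distance import pdist

def sorted_distance_invariant(cloud: np.ndarray) -> np.ndarray:
    """Sorted list of all pairwise distances of an unlabeled point cloud.

    Invariant under any isometry and under relabeling of points, but NOT
    complete: some non-isometric clouds share the same sorted distances.
    """
    return np.sort(pdist(cloud))

rng = np.random.default_rng(0)
cloud = rng.normal(size=(20, 3))

# Apply a random orthogonal map + translation and permute the (unlabeled) points.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
moved = cloud @ Q + rng.normal(size=(1, 3))
moved = moved[rng.permutation(len(moved))]

print(np.allclose(sorted_distance_invariant(cloud),
                  sorted_distance_invariant(moved)))   # True
```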
Sun 2:30 p.m. - 2:45 p.m.
Topology-Aware Focal Loss for 3D Image Segmentation (Spotlight)
The efficacy of segmentation algorithms is frequently compromised by topological errors such as overlapping regions, disrupted connections, and voids. To tackle this problem, we introduce a novel loss function, namely Topology-Aware Focal Loss (TAFL), which combines the conventional Focal Loss with a topological constraint term based on the Wasserstein distance between the persistence diagrams of the ground truth and predicted segmentation masks. By enforcing the same topology as the ground truth, the topological constraint can effectively resolve topological errors, while Focal Loss tackles class imbalance. We begin by constructing persistence diagrams from filtered cubical complexes of the ground truth and predicted segmentation masks. We subsequently utilize the Sinkhorn-Knopp algorithm to determine the optimal transport plan between the two persistence diagrams. The resultant transport plan minimizes the cost of transporting mass from one distribution to the other and provides a mapping between the points in the two persistence diagrams. We then compute the Wasserstein distance based on this transport plan to measure the topological dissimilarity between the ground truth and predicted masks. We evaluate our approach by training a 3D U-Net on the MICCAI Brain Tumor Segmentation (BraTS) challenge validation dataset, which requires accurate segmentation of 3D MRI scans that integrate various modalities for the precise identification and tracking of malignant brain tumors. We then demonstrate that segmentation performance is enhanced by regularizing the focal loss through the addition of a topological constraint as a penalty term.
Andac Demir · Elie Massaad · Bulent Kiziltan
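A minimal sketch of the transport computation described above, under simplifying assumptions: two equal-sized toy diagrams, a plain Sinkhorn-Knopp loop, and a placeholder focal loss value. Practical implementations also match points to the diagonal so diagrams of different cardinality can be compared, which this sketch sidesteps.

```python
import torch

def sinkhorn_cost(diag_a, diag_b, eps=0.05, n_iters=200):
    """Entropic-regularized transport cost between two persistence diagrams.

    diag_a, diag_b: (n, 2) tensors of (birth, death) points, equal size here
    for simplicity. The returned value is an entropic approximation of the
    squared Wasserstein distance between the diagrams.
    """
    C = torch.cdist(diag_a, diag_b, p=2) ** 2          # squared-distance cost matrix
    n, m = C.shape
    mu = torch.full((n,), 1.0 / n)
    nu = torch.full((m,), 1.0 / m)
    K = torch.exp(-C / eps)
    u = torch.ones_like(mu)
    for _ in range(n_iters):                           # Sinkhorn-Knopp scaling
        v = nu / (K.T @ u)
        u = mu / (K @ v)
    plan = torch.diag(u) @ K @ torch.diag(v)           # transport plan
    return torch.sum(plan * C)

# Toy diagrams standing in for ground-truth vs. predicted mask topology.
gt_diagram   = torch.tensor([[0.0, 0.9], [0.1, 0.4]])
pred_diagram = torch.tensor([[0.0, 0.7], [0.2, 0.3]], requires_grad=True)

topo_penalty = sinkhorn_cost(gt_diagram, pred_diagram)
focal_loss = torch.tensor(0.42)                        # placeholder segmentation loss
total_loss = focal_loss + 0.1 * topo_penalty           # lambda-weighted combination
total_loss.backward()
print(total_loss.item(), pred_diagram.grad)
```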
Sun 2:45 p.m. - 3:00 p.m.
Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools (Spotlight)
Methods for model explainability have become increasingly critical for testing the fairness and soundness of deep learning. Concept-based interpretability techniques, which use a small set of human-interpretable concept exemplars in order to measure the influence of a concept on a model's internal representation of input, are an important thread in this line of research. In this work we show that these explainability methods can suffer the same vulnerability to adversarial attacks as the models they are meant to analyze. We demonstrate this phenomenon on two well-known concept-based interpretability methods: TCAV and faceted feature visualization. We show that by carefully perturbing the examples of the concept that is being investigated, we can radically change the output of the interpretability method. The attacks that we propose can either induce positive interpretations (polka dots are an important concept for a model when classifying zebras) or negative interpretations (stripes are not an important factor in identifying images of a zebra). Our work highlights the fact that in safety-critical applications, there is a need for security around not only the machine learning pipeline but also the model interpretation process.
Davis Brown · Henry Kvinge
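For context on what is being attacked (a generic TCAV-style computation with synthetic placeholder activations and gradients, not the attack or the original TCAV code): fit a concept activation vector as a linear probe between concept and random exemplars, then score the concept by the fraction of inputs whose class-logit gradient has positive projection onto it. Perturbing the concept exemplars shifts the probe, and with it the score.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
dim = 64

# Hypothetical layer activations for concept exemplars vs. random images.
concept_acts = rng.normal(loc=0.5, size=(100, dim))
random_acts  = rng.normal(loc=0.0, size=(100, dim))

# Concept Activation Vector (CAV): normal of a linear probe separating them.
X = np.vstack([concept_acts, random_acts])
y = np.array([1] * 100 + [0] * 100)
cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
cav /= np.linalg.norm(cav)

# Gradients of the target-class logit w.r.t. the layer activations, one per
# input image (random placeholders here; normally obtained by backprop).
logit_grads = rng.normal(size=(500, dim))

# TCAV-style score: fraction of inputs whose directional derivative along
# the CAV is positive. Perturbing the concept exemplars shifts the CAV and
# hence this score, which is the vulnerability the paper exploits.
tcav_score = float(np.mean(logit_grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")
```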
Sun 3:00 p.m. - 3:45 p.m.
Coffee Break
Sun 3:45 p.m. - 4:00 p.m.
Shape and Intensity Analysis of Glioblastoma Multiforme Tumors (Spotlight)
We use a geometric approach to characterize tumor shape and intensity along the tumor contour in the context of Glioblastoma Multiforme. Properties of the proposed shape+intensity representation include invariance to translation, scale, rotation, and reparameterization, which allow for objective comparison of tumor features. Controlling for the weight of intensity information in the shape+intensity representation results in improved comparisons between tumor features of different patients who have been diagnosed with Glioblastoma Multiforme; further, it allows for identification of different partitions of the data associated with different median survival among such patients. Our findings suggest that integrating and appropriately balancing information regarding GBM tumor shape and intensity can be beneficial for disease prognosis. We evaluate the proposed statistical framework using simulated examples as well as a real dataset of Glioblastoma Multiforme tumors.
Yi Chen Chen
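A hedged sketch of the kind of preprocessing such invariances require (not the authors' statistical framework): center a tumor contour, scale it to unit length, and resample both the contour and its intensity profile at uniform arc-length steps so that different parameterizations become comparable. Rotation alignment would need an additional Procrustes step, omitted here; the contour and intensity values are synthetic.

```python
import numpy as np

def normalize_contour(points: np.ndarray, intensity: np.ndarray, n_samples=100):
    """Translation/scale/reparameterization-normalized contour + intensity.

    points:    (n, 2) ordered contour coordinates
    intensity: (n,)   image intensity sampled along the contour
    Returns the contour resampled at n_samples equal arc-length steps,
    centered at the origin and scaled by total contour length, together
    with the correspondingly resampled intensity profile.
    """
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    total = arc[-1]
    s_new = np.linspace(0.0, total, n_samples)
    resampled = np.stack([np.interp(s_new, arc, points[:, k]) for k in range(2)], axis=1)
    intens = np.interp(s_new, arc, intensity)
    resampled -= resampled.mean(axis=0)          # translation invariance
    resampled /= total                           # scale invariance
    return resampled, intens

# Hypothetical unevenly sampled tumor contour with an intensity per vertex.
t = np.sort(np.random.default_rng(2).uniform(0, 2 * np.pi, 400))
contour = np.stack([3 + 2 * np.cos(t), -1 + np.sin(t)], axis=1)
intensity = 0.5 + 0.5 * np.cos(2 * t)
shape, profile = normalize_contour(contour, intensity)
print(shape.shape, profile.shape)                # (100, 2) (100,)
```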
Sun 4:00 p.m. - 4:15 p.m.
Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification (Spotlight)
Explanations for black-box models help us to understand model decisions as well as provide information on model biases and inconsistencies. Most of the current post-hoc explainability techniques provide a single level of explanation, often in terms of feature importance scores or feature attention maps in the input space. The explanations provided by these methods are also sensitive to perturbations in the input space. Our focus is on explaining deep discriminative models for images at multiple levels of abstraction, from fine-grained to fully abstract explanations. We use the natural properties of hyperbolic geometry to more efficiently model a hierarchical relationship of symbolic features with decreased distortion to generate robust hierarchical explanations. Specifically, we distill the underpinning knowledge in an image classifier by quantising the continuous latent space to form hyperbolic symbols and learn the relations between these symbols in a hierarchical manner to induce a knowledge tree. We traverse the tree to extract hierarchical explanations in terms of chains of symbols and their corresponding visual semantics.
Ainkaran Santhirasekaram · Avinash Kori · Mathias Winkler · Andrea Rockall · Francesca Toni · Ben Glocker
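The hierarchy is embedded in hyperbolic space because distances in the Poincaré ball grow rapidly toward the boundary, which matches tree metrics with low distortion. A small sketch of the standard Poincaré distance such embeddings rely on (generic formula and made-up points, not the paper's model):

```python
import numpy as np

def poincare_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Geodesic distance between two points inside the unit Poincare ball."""
    uu = np.dot(u, u)
    vv = np.dot(v, v)
    diff = np.dot(u - v, u - v)
    return float(np.arccosh(1 + 2 * diff / ((1 - uu) * (1 - vv))))

root = np.array([0.0, 0.0])          # abstract symbol near the origin
leaf_a = np.array([0.90, 0.0])       # fine-grained symbols near the boundary
leaf_b = np.array([0.0, 0.90])

# Both leaves sit at the same distance from the root but much farther from
# each other, mirroring how a tree metric behaves.
print(poincare_distance(root, leaf_a))    # ~ 2.9
print(poincare_distance(leaf_a, leaf_b))  # ~ 5.2
```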
Sun 4:15 p.m. - 4:30 p.m.
Euler Characteristic Transform Based Topological Loss for Reconstructing 3D Images from Single 2D Slices (Spotlight)
The computer vision task of reconstructing 3D images, i.e., shapes, from their single 2D image slices is extremely challenging, more so in the regime of limited data. Deep learning models typically optimize geometric loss functions, which may lead to poor reconstructions as they ignore the structural properties of the shape. To tackle this, we propose a novel topological loss function based on the Euler Characteristic Transform. This loss can be used as an inductive bias to aid the optimization of any neural network toward better reconstructions in the regime of limited data. We show the effectiveness of the proposed loss function by incorporating it into SHAPR, a state-of-the-art shape reconstruction model, and test it on two benchmark datasets, viz., the Red Blood Cells and Nuclei datasets. We also show a favourable property, namely injectivity, and discuss the stability of the topological loss function based on the Euler Characteristic Transform.
Kalyan Nadimpalli · Amit Chattopadhyay · Bastian Rieck
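The Euler Characteristic Transform records the Euler characteristic of directional sublevel sets of a shape. The numpy-only sketch below computes chi = V - E + F for binary masks (pixels treated as filled unit squares with their edges and vertices) and sweeps one direction to get an Euler characteristic curve, then compares the curves of a toy ground truth and prediction as a crude penalty. It illustrates the ingredients only and is not the paper's differentiable loss.

```python
import numpy as np

def euler_characteristic(mask: np.ndarray) -> int:
    """Euler characteristic (V - E + F) of a 2D binary mask, where each
    foreground pixel contributes a filled unit square plus its edges/vertices."""
    m = mask.astype(bool)
    p = np.pad(m, 1)                              # pad so neighbor lookups are easy
    # vertices: a corner is present if any of its 4 incident pixels is on
    V = (p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum()
    # horizontal edges: present if the pixel above or below the edge is on
    E_h = (p[:-1, 1:-1] | p[1:, 1:-1]).sum()
    # vertical edges: present if the pixel left or right of the edge is on
    E_v = (p[1:-1, :-1] | p[1:-1, 1:]).sum()
    F = m.sum()
    return int(V - E_h - E_v + F)

def ect_curve(mask: np.ndarray, direction, n_steps=16):
    """Euler characteristic of directional sublevel sets: for each threshold t,
    keep the foreground pixels whose centers project below t along `direction`
    and record chi. Collecting such curves over many directions gives a
    discretization of the Euler Characteristic Transform."""
    ii, jj = np.indices(mask.shape)
    heights = ii * direction[0] + jj * direction[1]
    ts = np.linspace(heights.min(), heights.max(), n_steps)
    return np.array([euler_characteristic(mask & (heights <= t)) for t in ts])

# Hypothetical ground-truth vs. predicted shapes: an annulus vs. a filled disk.
yy, xx = np.indices((64, 64))
r = np.hypot(yy - 32, xx - 32)
gt   = (r < 24) & (r > 12)
pred = r < 24
curve_gt, curve_pred = ect_curve(gt, (1.0, 0.0)), ect_curve(pred, (1.0, 0.0))
topo_term = np.abs(curve_gt - curve_pred).mean()   # simple L1 mismatch as a penalty
print(curve_gt, curve_pred, topo_term)
```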
Sun 4:30 p.m. - 4:45 p.m.
Closing Remarks
Tegan Emerson · Timothy Doster
Sun 5:00 p.m. - 6:30 p.m.
Social Hour @ Lions Pub (Community Activity)