Speaker
Description
In this talk, I will show, through several examples and applications, how persistence theory can be used to build relevant topological descriptors, or signatures, from data sets; these signatures encode useful topological information that is often complementary to more standard descriptors. I will then show how these signatures can be converted into features for further data analysis and machine learning tasks, using either finite-dimensional vectorizations or embeddings into reproducing kernel Hilbert spaces, i.e., kernel methods. Finally, I will present several recent applications of topological data analysis in deep learning, involving the differentiation of persistence (so that, e.g., topological penalties can be incorporated into the loss functions of classifiers) and the mimicking of persistence computations with deep neural networks.
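For illustration, here is a minimal sketch of the first two steps described above, assuming the GUDHI Python library (`gudhi` and its `gudhi.representations` module): persistence diagrams are computed from Vietoris-Rips filtrations of toy point clouds, then turned into features either through a finite-dimensional vectorization (persistence landscapes) or through a kernel between diagrams (sliced Wasserstein). The data sets and parameter choices are illustrative only, not those used in the talk.

```python
import numpy as np
import gudhi
from gudhi.representations import Landscape, SlicedWassersteinKernel

def h1_diagram(points, max_edge_length=2.0):
    """1-dimensional persistence diagram of the Vietoris-Rips filtration."""
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge_length)
    st = rips.create_simplex_tree(max_dimension=2)
    st.persistence()
    dgm = st.persistence_intervals_in_dimension(1)
    return dgm[np.isfinite(dgm[:, 1])]  # keep only finite bars

rng = np.random.default_rng(0)

def noisy_circle(n=100, noise=0.05):
    t = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack((np.cos(t), np.sin(t))) + noise * rng.standard_normal((n, 2))

diagrams = [h1_diagram(noisy_circle()) for _ in range(2)]

# Finite-dimensional vectorization: persistence landscapes.
vectors = Landscape(num_landscapes=3, resolution=50).fit_transform(diagrams)

# Kernel route: a Gram matrix between diagrams, implicitly an embedding into an RKHS.
gram = SlicedWassersteinKernel(num_directions=10, bandwidth=1.0).fit(diagrams).transform(diagrams)

print(vectors.shape, gram.shape)  # (2, 150) and (2, 2)
```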
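The "topological penalty" idea can also be sketched in a few lines; this is a toy example of differentiating persistence, not the method presented in the talk. For a Vietoris-Rips filtration, the finite 0-dimensional persistence values are exactly the edge lengths of a minimum spanning tree of the point cloud, so one can find those edges with SciPy and read their lengths off a differentiable PyTorch computation, yielding a persistence-based loss term through which gradients flow.

```python
import numpy as np
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_total_persistence(points: torch.Tensor) -> torch.Tensor:
    """Sum of the finite 0-dimensional persistence values of the Rips
    filtration of `points`. These equal the edge lengths of a minimum
    spanning tree, so the tree is found on detached distances while the
    edge lengths themselves remain differentiable in `points`."""
    with torch.no_grad():
        dists = torch.cdist(points, points).cpu().numpy()
    mst = minimum_spanning_tree(dists)
    rows, cols = mst.nonzero()
    rows = torch.as_tensor(rows, dtype=torch.long)
    cols = torch.as_tensor(cols, dtype=torch.long)
    return (points[rows] - points[cols]).norm(dim=1).sum()

# Toy usage: add a topological penalty to an ordinary task loss.
points = torch.randn(50, 2, requires_grad=True)
task_loss = (points ** 2).mean()                       # placeholder for a real task loss
loss = task_loss + 0.1 * h0_total_persistence(points)  # topological penalty term
loss.backward()                                        # gradients flow back to `points`
print(points.grad.shape)
```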