Seminars, for the informal dissemination of research results, exploratory work by research teams, outreach activities, and so on, are the simplest form of meeting at a mathematics research centre.
CAMGSD has been recording and publishing the calendar of its seminars for a long time, with pages such as this one serving not only as a way of announcing those activities but also as a historical record.
The Möbius strip; collapsing the equator; exploding a point in the plane; geometric definition of blowups; the secant construction; pull-backs of curves under blowup.
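For orientation, a standard coordinate description of the blowup of the plane at the origin (a textbook formula included here for context, not taken from the talk):

\[
\mathrm{Bl}_0(\mathbb{A}^2) \;=\; \{\, ((x,y),[u:v]) \in \mathbb{A}^2 \times \mathbb{P}^1 \;:\; xv = yu \,\},
\]

with exceptional divisor \(E = \{(0,0)\} \times \mathbb{P}^1\). The pull-back of a curve through the origin contains \(E\); removing \(E\) yields its strict transform. This realizes the secant construction: points of \(E\) record the limiting directions of secant lines through the origin.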
The famous Burau representation of the braid group is known to be unfaithful for braids on five or more strands. In the early 2000s, two constructions were proposed to remedy this failure of faithfulness: the first, the Lawrence–Krammer–Bigelow linear representation, which in particular proved the linearity of braid groups; and the second, the Khovanov–Seidel categorical representation. In this talk, based on joint work in progress with Licata, Queffelec, and Wagner, I will investigate the interplay between these two representations.
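For reference, the unreduced Burau representation admits a simple explicit formula (a standard definition, included only as background): it sends the generator \(\sigma_i\) of the braid group \(B_n\) to the block matrix

\[
\sigma_i \;\longmapsto\; I_{i-1} \oplus \begin{pmatrix} 1-t & t \\ 1 & 0 \end{pmatrix} \oplus I_{n-i-1} \;\in\; \mathrm{GL}_n\!\left(\mathbb{Z}[t^{\pm 1}]\right).
\]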
Choice of centers of blowup; descent in dimension; lexicographic decrease of invariant; transversality; obstructions in positive characteristic; resolution of planar vector fields.
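As a toy illustration of the first items above (a standard textbook computation, not taken from the talk): blowing up the origin resolves the cusp \(y^2 = x^3\) in one step. In the chart \(x = u\), \(y = uv\), the total transform is \(u^2(v^2 - u) = 0\); discarding the exceptional divisor \(u = 0\) leaves the strict transform \(v^2 = u\), which is smooth, so the invariant (here, the multiplicity) has dropped.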
How many different problems can a neural network solve? What makes two machine learning problems different? In this talk, we'll show how Topological Data Analysis (TDA) can be used to partition classification problems into equivalence classes, and how the complexity of decision boundaries can be quantified using persistent homology. We will then look at a network's learning process from a manifold disentanglement perspective, and demonstrate why analyzing decision boundaries from a topological standpoint provides clearer insights than previous approaches. Using the topology of the decision boundaries realized by a neural network as a measure of its expressive power, we show how this measure depends on properties of the network's architecture, such as depth, width, and other related quantities.
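As a hedged illustration of the kind of pipeline described above (a minimal Python sketch using scikit-learn and ripser; the dataset, the architecture, and the thresholds are assumptions for illustration, not the speakers' actual setup), one can train a small network, sample points near its decision boundary, and summarize the boundary's shape with persistent homology:

import numpy as np
from sklearn.datasets import make_circles
from sklearn.neural_network import MLPClassifier
from ripser import ripser  # Vietoris-Rips persistent homology

# Two concentric circles: a problem whose natural decision boundary is a loop.
X, y = make_circles(n_samples=500, noise=0.05, factor=0.5, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)

# Sample a grid and keep the points where the predicted class probability
# is close to 1/2, i.e., points near the learned decision boundary.
xs = np.linspace(-1.5, 1.5, 200)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
proba = clf.predict_proba(grid)[:, 1]
boundary = grid[np.abs(proba - 0.5) < 0.02]

# Persistent homology of the boundary sample: a single dominant H1 bar
# indicates that the network realized a loop-shaped decision boundary.
dgms = ripser(boundary, maxdim=1)['dgms']
h1 = dgms[1]
print("longest H1 bar:", (h1[:, 1] - h1[:, 0]).max() if len(h1) else 0.0)

A long bar in degree one then serves as a crude topological signature separating this problem from, say, a linearly separable one, in the spirit of the equivalence classes mentioned above.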
The classical Stein–Tomas theorem extends from the theory of linear Fourier restriction estimates for smooth manifolds to that of fractal measures exhibiting Fourier decay. In the multilinear “smooth” setting, transversality allows for estimates beyond those implied by the linear theory. The goal of this talk is to investigate the question “how does transversality manifest itself in the fractal world?” We will show, for instance, that it can be through integrability properties of the multiple convolution of the measures involved, but that is just the beginning of the story. In the special case of Cantor-type fractals, we will construct multilinear Knapp examples through certain co-Sidon sets which, in some cases, give more restrictive necessary conditions for a multilinear theorem to hold than those currently available in the literature. This is work in progress with Ana de Orellana (University of St Andrews, Scotland).
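For context, the classical Stein–Tomas theorem in its adjoint (extension) form, stated here for the unit sphere \(S^{n-1}\) with surface measure \(\sigma\) (a standard statement, included only as background to the abstract):

\[
\big\| \widehat{f \, d\sigma} \big\|_{L^q(\mathbb{R}^n)} \;\lesssim\; \| f \|_{L^2(\sigma)}, \qquad q \ge \frac{2(n+1)}{n-1}.
\]

The fractal generalizations alluded to above replace \(\sigma\) by a measure \(\mu\) with prescribed ball growth and Fourier decay, with the admissible range of \(q\) governed by those two exponents.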