Seminars, used for the informal dissemination of research results, exploratory work by research teams, outreach activities, and so on, are the simplest form of meeting at a mathematics research centre.
CAMGSD has long kept a record of its seminar calendar, and this page serves both as a means of publicly announcing forthcoming activities and as a historical record.
I will talk about joint work with Jacob Lurie regarding moduli stacks of geometric objects developing natural breaks. If time allows, I will end with some speculation regarding a 3-dimensional TFT arising from various $G_2$ manifolds.
The Möbius strip; collapsing the equator; exploding a point in the plane; geometric definition of blowups; the secant construction; pull-backs of curves under blowup.
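For concreteness, the geometric definition of the blowup at a point can be stated as follows (this is the standard construction, recalled here for reference): the blowup of the affine plane at the origin is

```latex
\mathrm{Bl}_0(\mathbb{A}^2) \;=\; \bigl\{\, \bigl((x,y),[u:v]\bigr) \in \mathbb{A}^2 \times \mathbb{P}^1 \;:\; xv = yu \,\bigr\},
```

together with the projection $\pi$ onto $\mathbb{A}^2$. The exceptional divisor $E = \pi^{-1}(0) \cong \mathbb{P}^1$ parametrizes the directions through the origin, and $\pi$ restricts to an isomorphism away from $E$.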
The famous Burau representation of the braid group is known to be unfaithful for braids with at least five strands. In the early 2000s, two constructions were introduced to restore faithfulness: the first was the Lawrence–Krammer–Bigelow linear representation, thereby proving the linearity of braid groups, and the second was the Khovanov–Seidel categorical representation. In this talk, based on joint work in progress with Licata, Queffelec, and Wagner, I will investigate the interplay between these two representations.
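For reference, the (unreduced) Burau representation mentioned above sends the standard generator $\sigma_i$ of the braid group $B_n$ to a matrix over the Laurent polynomial ring:

```latex
\sigma_i \;\longmapsto\; I_{i-1} \,\oplus\, \begin{pmatrix} 1-t & t \\ 1 & 0 \end{pmatrix} \,\oplus\, I_{n-i-1}
\;\in\; \mathrm{GL}_n\bigl(\mathbb{Z}[t^{\pm 1}]\bigr),
```

i.e. the identity matrix with a single $2\times 2$ block inserted in rows and columns $i$, $i+1$.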
Choice of centers of blowup; descent in dimension; lexicographic decrease of invariant; transversality; obstructions in positive characteristic; resolution of planar vector fields.
How many different problems can a neural network solve? What makes two machine learning problems different? In this talk, we'll show how Topological Data Analysis (TDA) can be used to partition classification problems into equivalence classes, and how the complexity of decision boundaries can be quantified using persistent homology. We will then look at a network's learning process from a manifold disentanglement perspective, and demonstrate why analyzing decision boundaries from a topological standpoint provides clearer insights than previous approaches. Using the topology of the decision boundaries realized by a neural network as a measure of its expressive power, we show how this measure depends on properties of the network's architecture, such as depth, width and other related quantities.
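As a toy illustration of the persistent-homology ingredient (a minimal sketch, not the speakers' actual pipeline; the function name `persistence_h0` and the example data are ours): 0-dimensional persistence of a point cloud can be computed by a single-linkage union-find over edges sorted by length. Each point is a connected component born at scale 0, and a component "dies" at the edge length at which it merges into another.

```python
import numpy as np

def persistence_h0(points):
    """0-dimensional persistent homology of a Euclidean point cloud.

    Returns the finite death times of connected components in the
    Vietoris-Rips filtration: each point is born at scale 0, and a
    component dies at the length of the edge that merges it into
    another (single-linkage clustering). One class persists forever
    and is omitted.
    """
    n = len(points)
    # Pairwise Euclidean distance matrix.
    diffs = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    # All edges, sorted by increasing length.
    edges = sorted((d[i, j], i, j)
                   for i in range(n) for j in range(i + 1, n))

    parent = list(range(n))  # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    deaths = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(w)  # one component dies at this scale
    return deaths  # n - 1 finite death times
```

On two well-separated clusters, the largest finite death time recovers the gap between them, which is the kind of topological summary a decision-boundary analysis can exploit.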
In this talk I will discuss some results obtained in collaboration with Filipe C. Mena and former PhD student Vítor Bessa on the global dynamics of a minimally coupled scalar field interacting with a perfect fluid through a friction-like term in spatially flat homogeneous and isotropic spacetimes. In particular, it is shown that the late-time dynamics contain a rich variety of possible asymptotic states, which in some cases are described by partially hyperbolic lines of equilibria, bands of periodic orbits or generalised Liénard systems.
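One representative form of such a system (an assumption for illustration; the precise coupling studied in the talk may differ) is a spatially flat FLRW model in which the scalar field loses energy to the fluid through a friction-like term $\Gamma\dot\varphi$:

```latex
\ddot\varphi + (3H + \Gamma)\,\dot\varphi + V'(\varphi) = 0, \qquad
\dot\rho + 3\gamma H \rho = \Gamma\,\dot\varphi^{2}, \qquad
3H^{2} = \rho + \tfrac{1}{2}\dot\varphi^{2} + V(\varphi),
```

where $\Gamma \geq 0$ is the friction coefficient, $\gamma$ is the linear equation-of-state parameter of the fluid, and units with $8\pi G = 1$ are used; the transfer term $\Gamma\dot\varphi^{2}$ appears with opposite signs in the two evolution equations, so total energy is conserved.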