Seminars, for informal dissemination of research results, exploratory work by research teams, outreach activities, etc., constitute the simplest form of meetings at a Mathematics research centre.
CAMGSD has long kept a record of its seminar calendar; this page serves both as a means of publicly announcing forthcoming activities and as a historical record.
After a short introduction to kinetic equations and the classical $L^2/H^1$ hypocoercivity techniques of Dolbeault, Mouhot and Schmeiser (Trans. AMS, 2015), I will talk about Harris-type theorems, an alternative method for obtaining quantitative convergence rates. I will discuss how to use these theorems, summarising some recent results obtained jointly with Jo Evans (Warwick) on the run-and-tumble equation, a kinetic-transport equation modelling bacterial movement under the effect of a chemoattractant.
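For background (a schematic form paraphrased from the literature, not taken from the abstract), run-and-tumble models of this kind are typically written as velocity-jump kinetic equations,
$$\partial_t f + v \cdot \nabla_x f = \int_V \Big( T(x,v,v')\, f(t,x,v') - T(x,v',v)\, f(t,x,v) \Big)\, \mathrm{d}v',$$
where $f(t,x,v)$ is the density of bacteria at position $x$ moving with velocity $v$, and the tumbling kernel $T$ encodes the bias induced by the chemoattractant. Harris-type theorems give quantitative rates of convergence to the steady state for such equations under suitable conditions on $T$.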
Roughly, 2-Segal sets are simplicial sets in which higher-dimensional simplices can be uniquely described by triangulated polygons formed out of 2-simplices. In a sense that I will make precise, 2-Segal sets can be viewed as categorified associative algebras. As a TQFT Club member, you might ask, “Are there 2-Segal sets that correspond to (commutative) Frobenius algebras?” The answer is yes: commutativity and Frobenius structures come from asking the simplicial set to possess additional compatible structure maps. I will give an overview of these correspondences, as well as some background on how I arrived at this topic from the world of Poisson geometry. This is based on joint work with Ivan Contreras, Walker Stern, and Sophia Marx.
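For orientation (an illustration of my own, not part of the abstract), the lowest-dimensional instance of the 2-Segal condition asks that the two triangulations of a square, cut along either diagonal, induce bijections
$$X_3 \;\xrightarrow{\;\sim\;}\; X_{\{0,1,2\}} \times_{X_{\{0,2\}}} X_{\{0,2,3\}}, \qquad X_3 \;\xrightarrow{\;\sim\;}\; X_{\{0,1,3\}} \times_{X_{\{1,3\}}} X_{\{1,2,3\}},$$
so that a 3-simplex is the same datum as a compatible pair of 2-simplices glued along a diagonal edge; the higher 2-Segal conditions impose the analogous requirement for all triangulations of polygons.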
Distributed machine learning (DML) addresses the problem of training a model when the dataset is scattered across spatially distributed agents. The goal is to design algorithms that allow each agent to arrive at the model trained on the whole dataset, but without agents ever disclosing their local data.
This tutorial covers the two main settings in DML, namely, Federated Learning, in which agents communicate with a common server, and Decentralized Learning, in which agents communicate only with a few neighbor agents. For each setting, we illustrate synchronous and asynchronous algorithms.
We start by discussing convex models. Although distributed algorithms can be derived from many perspectives, we show that convex models allow us to generate many interesting synchronous algorithms based on the framework of contractive operators. Furthermore, by stochastically activating such operators by blocks, we directly obtain their asynchronous versions. In both kinds of algorithms, agents interact with their local loss functions via the convex proximity operator.
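As a minimal sketch of this idea (my own single-machine illustration, with a placeholder operator, not one of the tutorial's actual algorithms): the synchronous iteration applies an averaged, hence nonexpansive, operator to all blocks, while the asynchronous variant activates only a random subset of blocks at each step.

import numpy as np

def km_step(x, T, lam=0.5):
    # Synchronous Krasnosel'skii-Mann step: x <- x + lam * (T(x) - x).
    return x + lam * (T(x) - x)

def async_km_step(x, T, blocks, rng, lam=0.5, p=0.3):
    # Asynchronous variant: only the randomly activated blocks are updated;
    # the remaining blocks keep their previous values.
    Tx = T(x)
    x_new = x.copy()
    for blk in blocks:
        if rng.random() < p:                      # block blk "wakes up"
            x_new[blk] = x[blk] + lam * (Tx[blk] - x[blk])
    return x_new

# Toy usage: T is the proximity operator of f(x) = 0.5*||x - c||^2,
# i.e. prox_f(x) = (x + c)/2, which is firmly nonexpansive.
c = np.array([1.0, -2.0, 3.0, 0.5])
T = lambda x: 0.5 * (x + c)
blocks = [slice(0, 2), slice(2, 4)]               # two "agents"/blocks
rng = np.random.default_rng(0)

x = np.zeros(4)
for _ in range(200):
    x = async_km_step(x, T, blocks, rng)
print(x)   # approaches the fixed point c

The point of the sketch is that the asynchronous update is obtained from the synchronous one simply by letting each block apply the same operator at random times.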
We then discuss nonconvex models. Here, agents interact with their local loss functions via the gradient. We discuss the standard mini-batch stochastic gradient (SG) and an improved version, the loopless stochastic variance-reduced gradient (L-SVRG).
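To fix ideas (a minimal single-machine sketch of my own for a finite-sum loss, not the distributed algorithms presented in the tutorial), the two gradient estimators can be contrasted as follows.

import numpy as np

def minibatch_sg_step(x, grads, batch, lr):
    # Standard mini-batch stochastic gradient step.
    g = np.mean([grads[i](x) for i in batch], axis=0)
    return x - lr * g

def lsvrg_step(x, w, full_grad_w, grads, i, lr, p, rng):
    # Loopless SVRG estimate: grad_i(x) - grad_i(w) + full_grad(w);
    # with probability p the reference point w is refreshed to the current x.
    g = grads[i](x) - grads[i](w) + full_grad_w
    x_new = x - lr * g
    if rng.random() < p:
        w = x.copy()
        full_grad_w = np.mean([gj(w) for gj in grads], axis=0)
    return x_new, w, full_grad_w

# Toy usage: f_i(x) = 0.5*||x - a_i||^2, so grad_i(x) = x - a_i and the
# minimiser of the average loss is mean(a_i).
rng = np.random.default_rng(0)
a = rng.normal(size=(10, 3))
grads = [lambda x, ai=ai: x - ai for ai in a]

x = np.zeros(3)
w = x.copy()
full_grad_w = np.mean([g(w) for g in grads], axis=0)
for _ in range(500):
    i = int(rng.integers(len(grads)))
    x, w, full_grad_w = lsvrg_step(x, w, full_grad_w, grads, i,
                                   lr=0.1, p=0.1, rng=rng)
print(x, a.mean(axis=0))   # the two vectors should nearly coincide

Unlike plain mini-batch SG, the L-SVRG estimator keeps a full gradient at an occasionally refreshed reference point, which removes the variance of the stochastic estimate as the iterates approach a stationary point.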
We end the tutorial by briefly mentioning our recent research on the vertical federated learning setting, in which the dataset is split not by examples but by features.
The phenomenon of dispersion in a physical system occurs whenever the elementary building blocks of the system, whether they are particles or waves, overall move away from each other, because each evolves according to a distinct momentum. This physical process limits the superposition of particles or waves, and leads to remarkable mathematical properties of the densities or amplitudes, including local and global decay, Strichartz estimates, and smoothing.
In kinetic theory, the effects of dispersion in the whole space were notably well captured by the estimates developed by Castella and Perthame in 1996, which, for instance, are particularly useful in the analysis of the Boltzmann equation to construct global solutions. However, these estimates are based on the transfer of integrability of particle densities in mixed Lebesgue spaces, which fails to apply to general settings of kinetic dynamics.
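For context (a standard statement paraphrased from the literature, not quoted from the abstract), the dispersive mechanism behind these estimates is already visible for free transport: if $\partial_t f + v\cdot\nabla_x f = 0$ on $\mathbb{R}^d\times\mathbb{R}^d$, then $f(t,x,v)=f_0(x-tv,v)$, and a change of variables in $v$ gives
$$\|f(t)\|_{L^p_x L^q_v} \;\le\; \frac{C}{|t|^{\,d\left(\frac1q-\frac1p\right)}}\, \|f_0\|_{L^q_x L^p_v}, \qquad 1 \le q \le p \le \infty,$$
so spatial integrability improves over time at the expense of velocity integrability. The endpoint $p=q$ yields no decay at all, and this degenerate regime is precisely the one addressed below.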
Therefore, we are now interested in characterizing the kinetic dispersive effects in the whole space in cases where only natural principles of conservation of mass, momentum and energy, and decay of entropy seem to hold. Such general settings correspond to degenerate endpoint cases of the Castella–Perthame estimates where no dispersion is effectively measured. However, by introducing a suitable kinetic uncertainty principle, we will see how it is possible to extract some amount of entropic dispersion and, in essence, measure how particles tend to move away from each other, at least when they are not restricted by a spatial boundary.
A simple application of entropic dispersion will then show us how kinetic dynamics in the whole space inevitably leads, in infinite time, to an asymptotic thermodynamic equilibrium state with no particle interaction and no available heat to sustain thermodynamic processes, thereby providing a provocative interpretation of the heat death of the universe.