CAMGSD

Seminars and short courses

Seminars, used for the informal dissemination of research results, exploratory work by research teams, outreach activities, and so on, are the simplest form of meeting at a mathematics research centre.

CAMGSD has long kept a record of its seminar calendar; this page serves both as a public announcement of forthcoming activities and as a historical record.

For a full search interface, see the Mathematics Department seminar page.

Europe/Lisbon —

Room 6.2.52, Faculty of Sciences, University of Lisbon

Lisbon WADE — Webinar in Analysis and Differential Equations

Delia Schiera, Instituto Superior Técnico, Universidade de Lisboa.

We consider a Lane-Emden system on a bounded regular domain with Neumann boundary conditions and (sub)critical nonlinearities. In the critical regime, we show that, under suitable conditions on the exponents of the nonlinearities, least-energy (sign-changing) solutions exist. Moreover, through a suitable nonlinear eigenvalue problem, we prove convergence of solutions as the exponents of the nonlinearities vary in the (sub)critical range. Finally, I will briefly discuss related results on multiplicity, symmetry breaking, and regularity.
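
For context, a standard formulation of this kind of problem (a sketch; the precise exponent assumptions are those of the papers cited below) is

\[
\begin{cases}
-\Delta u = |v|^{p-1}v & \text{in } \Omega,\\
-\Delta v = |u|^{q-1}u & \text{in } \Omega,\\
\partial_\nu u = \partial_\nu v = 0 & \text{on } \partial\Omega,
\end{cases}
\]

where $\Omega \subset \mathbb{R}^N$ is a bounded regular domain and $\nu$ is the outward unit normal. Integrating each equation over $\Omega$ shows that any nontrivial solution must change sign, which is why least-energy solutions in the Neumann setting are sign-changing; (sub)criticality of the exponents is measured against the critical hyperbola $\tfrac{1}{p+1} + \tfrac{1}{q+1} = \tfrac{N-2}{N}$.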

Based on joint works with A. Pistoia, A. Saldaña and H. Tavares.

Room P3.10, Mathematics Building, Instituto Superior Técnico

Algebra and Topology

Erkko Lehtonen, Khalifa University of Science and Technology, Abu Dhabi.

We provide a brief introduction to the theory of clones, minions, and clonoids, which are sets of functions of several arguments satisfying certain closure conditions defined in terms of function class composition. These notions arise in a natural way in universal algebra, and they have proved useful in the analysis of the computational complexity of constraint satisfaction problems. Our primary focus is on clonoids of Boolean functions, and we present classifications of clonoids in the spirit of Post's classification of clones. Moreover, we propose refinements and strengthenings of Sparks's theorem on the cardinalities of clonoid lattices.
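
For readers new to these notions, the standard background definitions (general, not specific to this talk) are as follows. A minor of a function $f\colon A^n \to B$ is any function of the form

\[
(x_1, \dots, x_m) \mapsto f(x_{\sigma(1)}, \dots, x_{\sigma(n)}), \qquad \sigma\colon \{1,\dots,n\} \to \{1,\dots,m\},
\]

that is, a function obtained from $f$ by permuting, identifying, or adding dummy arguments. A clone on $A$ is a set of finitary operations on $A$ that contains all projections and is closed under composition; a minion is a set of functions from finite powers of $A$ to $B$ that is closed under taking minors; and, in one common formulation, for a clone $C$ on $A$ a $C$-clonoid is a set $K$ of such functions with $K \circ C \subseteq K$, i.e., closed under inner composition with members of $C$. Minions are then precisely the clonoids with respect to the clone of projections.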

Europe/Lisbon —

Room P3.10, Mathematics Building, Instituto Superior Técnico

Mathematics for Artificial Intelligence

João Xavier, ISR & Instituto Superior Técnico.

Distributed machine learning (DML) addresses the problem of training a model when the dataset is scattered across spatially distributed agents. The goal is to design algorithms that allow each agent to arrive at the model trained on the whole dataset, without any agent ever disclosing its local data.

This tutorial covers the two main settings in DML, namely, Federated Learning, in which agents communicate with a common server, and Decentralized Learning, in which agents communicate only with a few neighbouring agents. For each setting, we illustrate synchronous and asynchronous algorithms; a toy sketch of the two communication patterns follows.
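
As an illustrative sketch in Python (mine, not code from the tutorial; the mixing matrix W and all names are hypothetical), one aggregation round in each setting might look like this:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: each of four agents holds a local model vector.
    models = [rng.normal(size=3) for _ in range(4)]

    # Federated round: agents send their models to a common server,
    # which averages them and broadcasts the result back.
    server_avg = np.mean(models, axis=0)
    federated = [server_avg.copy() for _ in models]

    # Decentralized round: each agent averages only with its ring
    # neighbours, encoded by a doubly stochastic mixing matrix W.
    W = np.array([[0.50, 0.25, 0.00, 0.25],
                  [0.25, 0.50, 0.25, 0.00],
                  [0.00, 0.25, 0.50, 0.25],
                  [0.25, 0.00, 0.25, 0.50]])
    X = np.stack(models)   # row i = agent i's model
    X = W @ X              # one gossip step

Repeated gossip steps drive every row of X toward the network-wide average, which the server computes in a single shot in the federated setting.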

We start by discussing convex models. Although distributed algorithms can be derived from many perspectives, we show that convex models allow us to generate many interesting synchronous algorithms based on the framework of contractive operators. Furthermore, by stochastically activating such operators by blocks, we directly obtain their asynchronous versions, as sketched below. In both kinds of algorithm, agents interact with their local loss functions via the convex proximity operator.
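
A minimal sketch of this idea, under assumptions of mine (a forward-backward operator for a toy lasso problem stands in for the averaged/contractive operator; all names are hypothetical):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy problem: lasso, min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    A = rng.normal(size=(20, 10))
    b = rng.normal(size=20)
    lam = 0.1
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
    gamma = 1.0 / L

    def T(x):
        # Forward-backward operator: gradient step followed by the
        # prox of lam*||.||_1 (soft-thresholding). For gamma < 2/L this
        # operator is averaged, so the Krasnosel'skii-Mann iteration
        # below converges to a fixed point, i.e., a lasso minimizer.
        y = x - gamma * A.T @ (A @ x - b)
        return np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

    # Synchronous iteration: x <- (1 - alpha)*x + alpha*T(x).
    alpha = 0.8
    x = np.zeros(10)
    for _ in range(500):
        x = (1 - alpha) * x + alpha * T(x)

    # Asynchronous version: stochastically activate a coordinate block;
    # only the active block is relaxed toward the operator's output.
    x_async = np.zeros(10)
    blocks = [np.arange(0, 5), np.arange(5, 10)]
    for _ in range(2000):
        B = blocks[rng.integers(len(blocks))]
        t = T(x_async)
        x_async[B] = (1 - alpha) * x_async[B] + alpha * t[B]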

We then discuss nonconvex models. Here, agents interact with their local loss functions via the gradient. We discuss the standard mini-batch stochastic gradient (SG) method and an improved version, the loopless stochastic variance-reduced gradient (L-SVRG).
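
As a sketch of the difference between the two estimators (a toy least-squares example of mine, not the tutorial's code): where mini-batch SG uses the raw stochastic gradient, L-SVRG keeps a snapshot iterate and its full gradient, corrects each stochastic gradient with them, and refreshes the snapshot at random times instead of on a fixed inner-loop schedule (hence "loopless"):

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy objective: f(w) = (1/n) sum_i 0.5*(a_i.w - b_i)^2.
    n, d = 100, 5
    A = rng.normal(size=(n, d))
    b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

    def grad_i(w, i):
        # Gradient of the i-th component function.
        return (A[i] @ w - b[i]) * A[i]

    def full_grad(w):
        return A.T @ (A @ w - b) / n

    step = 0.05
    p = 1.0 / n          # snapshot refresh probability

    # L-SVRG loop: variance-reduced estimate
    #   g = grad_i(w) - grad_i(w_snap) + full_grad(w_snap).
    w = np.zeros(d)
    w_snap = w.copy()
    mu = full_grad(w_snap)
    for _ in range(5000):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + mu   # SG would use grad_i(w, i) alone
        w -= step * g
        if rng.random() < p:                        # occasional snapshot refresh
            w_snap = w.copy()
            mu = full_grad(w_snap)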

We end the tutorial by briefly mentioning our recent research on the vertical federated learning setting, in which the dataset is split not by examples but by features.


Current funding: FCT UIDB/04459/2020 & FCT UIDP/04459/2020.

©2025, Instituto Superior Técnico. All rights reserved.