Seminars, serving the informal dissemination of research results, exploratory work by research teams, outreach activities, and similar purposes, are the simplest form of meeting at a mathematics research centre.
CAMGSD has long kept a record of its seminar calendar; this page serves both as a public announcement of forthcoming activities and as a historical record.
Defining internal symmetry in a quantum theory through the lens of topological defects opens the door to generalised notions of symmetry, including some arising from non-invertible transformations, and enables a calculus that leverages well-established methods from topological quantum field theory. In d spatial dimensions, the framework of fusion d-category theory is believed to offer an axiomatisation for finite non-invertible symmetries. Though seemingly exotic, such non-invertible symmetries can be shown to naturally arise as dual symmetries upon gauging invertible symmetries. In this talk, I will present a framework to systematically investigate these aspects in quantum lattice models.
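For concreteness, a standard illustration of the last point (not taken from the abstract, with notation chosen here for the example): gauging the invertible Z_2 spin-flip symmetry of the 1+1-dimensional Ising model produces the Kramers-Wannier duality defect, conventionally denoted D, which is non-invertible because its fusion with itself is not the identity alone but a sum of topological lines:

```latex
% Fusion rules of the Ising (Tambara-Yamagami) category generated by the
% Z_2 symmetry line \eta and the Kramers-Wannier duality defect D:
\eta \times \eta = 1, \qquad
D \times \eta \;=\; \eta \times D \;=\; D, \qquad
D \times D \;=\; 1 + \eta .
```

The last fusion rule shows that D has no inverse defect, which is precisely the kind of symmetry the fusion-categorical framework is designed to accommodate.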
The goal of these lectures is to give a simple and direct introduction to some of the most basic concepts and techniques in Deep Learning. We will start by reviewing the fundamentals of Linear Regression and Linear Classifiers, and from there we will find our way into Deep Dense Neural Networks (also known as multi-layer perceptrons). Then we will introduce the theoretical and practical minimum needed to train such neural nets to classify handwritten digits, as provided by the MNIST dataset. This will require, in particular, the efficient computation of the gradients of the loss with respect to the parameters of the model, which is achieved by backpropagation. Finally, if time permits, we will briefly describe other neural network architectures, such as Convolutional Networks and Transformers, and other applications of deep learning, including Physics-Informed Neural Networks, which apply neural nets to find approximate solutions of differential equations. The lectures will be accompanied by Python code implementing some of these basic techniques.
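As a rough sketch of the pipeline the abstract describes (a dense network trained by backpropagation and gradient descent), and not the lecturer's accompanying code, the following self-contained Python example trains a one-hidden-layer network with manually derived gradients. The architecture, hyperparameters, and the synthetic random data standing in for MNIST are illustrative assumptions.

```python
# Minimal one-hidden-layer dense network with manual backpropagation.
# Synthetic random data stands in for MNIST so the example runs as-is.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for MNIST: 28*28 = 784 inputs, 10 classes (assumed sizes).
n, d_in, d_hidden, d_out = 256, 784, 32, 10
X = rng.normal(size=(n, d_in))
y = rng.integers(0, d_out, size=n)
Y = np.eye(d_out)[y]                       # one-hot targets

# Parameters of the two dense layers.
W1 = rng.normal(scale=0.01, size=(d_in, d_hidden)); b1 = np.zeros(d_hidden)
W2 = rng.normal(scale=0.01, size=(d_hidden, d_out)); b2 = np.zeros(d_out)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for step in range(201):
    # Forward pass: dense -> ReLU -> dense -> softmax.
    H = np.maximum(X @ W1 + b1, 0.0)
    P = softmax(H @ W2 + b2)
    loss = -np.mean(np.sum(Y * np.log(P + 1e-12), axis=1))

    # Backward pass: gradients of the cross-entropy loss w.r.t. all parameters.
    dZ2 = (P - Y) / n                      # gradient at the output logits
    dW2 = H.T @ dZ2; db2 = dZ2.sum(axis=0)
    dH = dZ2 @ W2.T
    dZ1 = dH * (H > 0)                     # ReLU derivative
    dW1 = X.T @ dZ1; db1 = dZ1.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 50 == 0:
        acc = (P.argmax(axis=1) == y).mean()
        print(f"step {step:3d}  loss {loss:.3f}  train acc {acc:.2f}")
```

Replacing the synthetic arrays with the actual MNIST images and labels, and the manual gradient code with a framework's automatic differentiation, gives the workflow covered in the lectures.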