Published in Thirty-seventh International Conference on Machine Learning (ICML 2020), 2020
Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik
Published in Frontiers in Signal Processing, 2022
Laurent Condat, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2022
Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik
Published in arXiv, 2022
Grigory Malinovsky, Peter Richtárik
Published in Computer Research and Modeling, 2022
Marina Danilova, Grigory Malinovsky
Published in Thirty-ninth International Conference on Machine Learning (ICML 2022), 2022
Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtárik
Published in Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Grigory Malinovsky, Kai Yi, Peter Richtárik
Published in arXiv, 2022
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
Published in arXiv, 2022
Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou
Published in 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023), 2023
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
Published in The 26th International Conference on Artificial Intelligence and Statistics (AISTATS 2023), 2023
Michał Grudzień, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2023
Dmitry Kovalev, Alexander Gasnikov, Grigory Malinovsky
Published in arXiv, 2023
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
Published in arXiv, 2023
Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2023
Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik
Published in arXiv, 2023
Michał Grudzień, Grigory Malinovsky, Peter Richtárik
Best poster award.
Best talk award.
Book of abstracts (page 68).
Google Research Seminar (invited by Zachary Charles)
Tutoring, Physics Olympiads and Tests, 2019
Preparation of 7th-11th grade students for olympiads and tests, 2017-2019
Undergraduate course, Teaching Assistance, MIPT, Spring term, 2019
This course aims to introduce students to the modern state of Machine Learning and Artificial Intelligence. It is designed to take one year (two terms at MIPT), approximately 2 x 15 lectures and seminars.
Undergraduate course, Teaching Assistance, MIPT, Fall term, 2019
Theory: Convex Sets and Functions, Optimality Conditions, Foundations of Duality Theory
Practice: Optimization Problem Statement, Methods for Solving Unconstrained Problems, Methods for Solving Simply Constrained Problems, Linear Programming, Cone Optimization Problems and SDP
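One of the listed practice topics, methods for simply constrained problems, can be sketched with projected gradient descent on a box constraint. This is an illustrative example only; the objective and bounds below are made up and are not taken from the course materials:

```python
import numpy as np

def projected_gradient_descent(grad, project, x0, lr=0.1, steps=500):
    """Minimize a smooth function over a simple set via x <- P(x - lr * grad(x))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Take a gradient step, then project back onto the feasible set.
        x = project(x - lr * grad(x))
    return x

# Example: minimize ||x - c||^2 over the box [0, 1]^2 with c outside the box.
c = np.array([2.0, -0.5])
grad = lambda x: 2.0 * (x - c)
project = lambda x: np.clip(x, 0.0, 1.0)  # Euclidean projection onto the box
x_opt = projected_gradient_descent(grad, project, x0=np.zeros(2))
# The minimizer is the projection of c onto the box, i.e. (1.0, 0.0).
```

The projection step is cheap precisely because the constraint set is "simple" (a box), which is what makes the method practical in this setting.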
Graduate course, Teaching Assistance, Ozon Masters, Spring, 2020
An introductory course on convex optimization and modern optimization methods.
Graduate course, Teaching Assistance, Ozon Masters, Fall, 2020
The focus of this course is optimization modeling that involves uncertainty in one form or another. It covers both well-studied stochastic optimization, which has found countless applications, and relatively new, actively developing approaches: robust optimization and online optimization. As in the previous part of the course, we try to keep a balance between theory, applications, and algorithms.
Graduate course, Teaching Assistance, Ozon Masters, Spring, 2021
This course focuses on optimization problems involving uncertainty in the data. This area of optimization is rich in applications: finance, logistics and supply chain optimization, statistical estimation, and others. The course covers the basic techniques of stochastic and robust optimization.
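A basic technique in stochastic optimization of the kind this course covers is sample average approximation (SAA): replace the expectation in the objective by an average over drawn scenarios and solve the resulting deterministic problem. A minimal sketch, with a made-up objective and distribution that are not from the course:

```python
import numpy as np

# SAA: approximate min_x E[f(x, xi)] by min_x (1/N) * sum_i f(x, xi_i).
rng = np.random.default_rng(0)
xi = rng.normal(loc=3.0, scale=1.0, size=10_000)  # sampled scenarios

# For f(x, xi) = (x - xi)^2, the SAA minimizer is simply the sample mean,
# which approximates the true minimizer E[xi] = 3.0.
x_saa = xi.mean()
```

As the number of scenarios grows, the SAA solution converges to the solution of the true stochastic problem; the example above makes that concrete because both solutions are available in closed form.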
Graduate course, Teaching Assistance, KAUST, Fall, 2021
Stochastic gradient descent (SGD) in one or another of its many variants is the workhorse method for training modern supervised machine learning models. However, the world of SGD methods is vast and expanding, which makes it hard for practitioners and even experts to understand its landscape and inhabitants. This course is a mathematically rigorous and comprehensive introduction to the field, and is based on the latest results and insights. The course develops a convergence and complexity theory for serial, parallel, and distributed variants of SGD, in the strongly convex, convex and nonconvex setup, with randomness coming from sources such as subsampling and compression. Additional topics such as acceleration via Nesterov momentum or curvature information will be covered as well. A substantial part of the course offers a unified analysis of a large family of variants of SGD which have so far required different intuitions, convergence analyses, have different applications, and which have been developed separately in various communities. This framework includes methods with and without the following tricks, and their combinations: variance reduction, data sampling, coordinate sampling, arbitrary sampling, importance sampling, mini-batching, quantization, sketching, dithering and sparsification.
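The core object of the course, SGD with mini-batching, can be sketched in a few lines. This is an illustrative toy, not course material: the function name, step size, and least-squares problem below are all made up, and the data is generated so that the model interpolates (every stochastic gradient vanishes at the solution), a regime in which constant-step SGD converges linearly:

```python
import numpy as np

def minibatch_sgd(A, b, lr=0.05, batch_size=4, epochs=300, seed=0):
    """Minimize f(x) = (1/2n) * ||Ax - b||^2 with reshuffling mini-batch SGD."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)  # random reshuffling each epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            # Stochastic gradient: gradient of the loss on the sampled mini-batch.
            g = A[batch].T @ (A[batch] @ x - b[batch]) / len(batch)
            x -= lr * g
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 5))
x_star = rng.standard_normal(5)
b = A @ x_star  # consistent system, so the unique minimizer is x_star
x_hat = minibatch_sgd(A, b)
```

Each of the course's variants (importance sampling, variance reduction, quantization, and so on) modifies how the stochastic gradient `g` is formed or communicated, while the outer loop keeps this same shape.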
Graduate course, Teaching Assistance, KAUST, Fall, 2022
Graduate seminar focusing on special topics within the field.
Graduate course, Teaching Assistance, KAUST, Fall, 2022
Graduate course, Teaching Assistance, KAUST, Fall, 2022
A repeat offering of the SGD course described above (Fall 2021): a mathematically rigorous and comprehensive introduction to the convergence and complexity theory of serial, parallel, and distributed variants of stochastic gradient descent.
Graduate course, Teaching Assistance, KAUST, Spring, 2023
Graduate seminar focusing on special topics within the field.
Graduate course, Teaching Assistance, KAUST, Fall, 2023