About me
This is a page not in the main menu.
Published:
I am doing an internship at Samsung R&D Institute UK (SRUK) from April 15, 2024 to October 13, 2024.
Published:
I am visiting Samuel Horváth and Martin Takáč at MBZUAI in Abu Dhabi, UAE.
Published:
I am visiting Sebastian Stich at CISPA Helmholtz Center for Information Security.
Published:
I am visiting the Machine Intelligence Lab this summer.
Published:
My wife and I are finally at KAUST. We have to stay in quarantine until January 10.
Short description of portfolio item number 1
Short description of portfolio item number 2
Published in The Thirty-seventh International Conference on Machine Learning (ICML 2020), 2020
Grigory Malinovsky, Dmitry Kovalev, Elnur Gasanov, Laurent Condat, Peter Richtárik
Published in Frontiers in Signal Processing, 2022
Laurent Condat, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2022
Grigory Malinovsky, Peter Richtárik
Published in Computer Research and Modeling, 2022
Marina Danilova, Grigory Malinovsky
Published in Thirty-ninth International Conference on Machine Learning (ICML 2022), 2022
Konstantin Mishchenko, Grigory Malinovsky, Sebastian Stich, Peter Richtárik
Published in Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022), 2022
Grigory Malinovsky, Kai Yi, Peter Richtárik
Published in 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023), 2023
Grigory Malinovsky, Alibek Sailanbayev, Peter Richtárik
Published in The 26th International Conference on Artificial Intelligence and Statistics (AISTATS 2023), 2023
Michał Grudzień, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2023
Dmitry Kovalev, Alexander Gasnikov, Grigory Malinovsky
Published in arXiv, 2023
Grigory Malinovsky, Samuel Horváth, Konstantin Burlachenko, Peter Richtárik
Published in arXiv, 2023
Laurent Condat, Ivan Agarský, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2023
Michał Grudzień, Grigory Malinovsky, Peter Richtárik
Published in arXiv, 2023
Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik
Published in 4th International Workshop on Distributed Machine Learning (DistributedML 2023), 2023
Grigory Malinovsky, Konstantin Mishchenko, Peter Richtárik
Published in Thirty-seventh Annual Conference on Neural Information Processing Systems (NeurIPS 2023), 2023
Yury Demidovich, Grigory Malinovsky, Igor Sokolov, Peter Richtárik
Published in The 38th AAAI Conference on Artificial Intelligence (AAAI 2024), 2024
Soumia Boucherouite, Grigory Malinovsky, Peter Richtárik, El Houcine Bergou
Published in arXiv, 2024
Yury Demidovich, Grigory Malinovsky, Peter Richtárik
Published in The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024), 2024
Ionut-Vlad Modoranu, Mher Safaryan, Grigory Malinovsky, Eldar Kurtic, Thomas Robert, Peter Richtárik, Dan Alistarh
Published in The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024), 2024
Abdurakhmon Sadiev, Grigory Malinovsky, Eduard Gorbunov, Igor Sokolov, Ahmed Khaled, Konstantin Burlachenko, Peter Richtárik
Published in The Thirty-Eighth Annual Conference on Neural Information Processing Systems (NeurIPS 2024), 2024
Grigory Malinovsky, Peter Richtárik, Samuel Horváth, Eduard Gorbunov
Published in arXiv, 2024
Grigory Malinovsky, Umberto Michieli, Hasan Abed Al Kader Hammoud, Taha Ceritli, Hayder Elesedy, Mete Ozay, Peter Richtárik
Published:
Best Poster Award.
Published:
Best Talk Award.
Published:
Book of abstracts (page 68).
Published:
Google Research Seminar (invited by Zachary Charles)
Tutoring, Physics Olympiads and Tests, 2019
Preparation of 7th-11th grade students for Olympiads and tests, 2017-2019
Undergraduate course, Teaching Assistance, MIPT, Spring term, 2019
This course aims to introduce students to the modern state of Machine Learning and Artificial Intelligence. It is designed to take one year (two terms at MIPT), with approximately 15 lectures and seminars per term.
Undergraduate course, Teaching Assistance, MIPT, Fall term, 2019
Theory: Convex Sets and Functions, Optimality Conditions, Foundations of Duality Theory
Practice: Optimization Problem Statement, Methods for Unconstrained Problems, Methods for Problems with Simple Constraints, Linear Programming, Cone Optimization Problems and SDP
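To make one of the practice topics above concrete (methods for problems with simple constraints), here is a minimal projected gradient descent sketch on a box-constrained quadratic. The objective, bounds, and step size are illustrative assumptions, not taken from the course materials.

```python
# Minimal sketch: projected gradient descent for a box-constrained quadratic.
# The matrix Q, vector c, bounds, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5
Q = rng.standard_normal((d, d))
Q = Q.T @ Q + np.eye(d)                  # symmetric positive definite
c = rng.standard_normal(d)
lo, hi = -1.0, 1.0                       # simple (box) constraints: lo <= x_i <= hi

def grad(x):
    """Gradient of f(x) = 0.5 * x^T Q x + c^T x."""
    return Q @ x + c

def project(x):
    """Euclidean projection onto the box [lo, hi]^d."""
    return np.clip(x, lo, hi)

x = np.zeros(d)
step_size = 1.0 / np.linalg.norm(Q, 2)   # 1/L for this L-smooth quadratic
for _ in range(500):
    x = project(x - step_size * grad(x))

print("approximate constrained minimizer:", x)
```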
Graduate course, Teaching Assistance, Ozon Masters, Spring, 2020
The introductory course to convex optimization and modern optimization methods.
Graduate course, Teaching Assistance, KAUST, Fall, 2021
Stochastic gradient descent (SGD) in one or another of its many variants is the workhorse method for training modern supervised machine learning models. However, the world of SGD methods is vast and expanding, which makes it hard for practitioners and even experts to understand its landscape and inhabitants. This course is a mathematically rigorous and comprehensive introduction to the field, and is based on the latest results and insights. The course develops a convergence and complexity theory for serial, parallel, and distributed variants of SGD, in the strongly convex, convex and nonconvex setup, with randomness coming from sources such as subsampling and compression. Additional topics such as acceleration via Nesterov momentum or curvature information will be covered as well. A substantial part of the course offers a unified analysis of a large family of variants of SGD which have so far required different intuitions, convergence analyses, have different applications, and which have been developed separately in various communities. This framework includes methods with and without the following tricks, and their combinations: variance reduction, data sampling, coordinate sampling, arbitrary sampling, importance sampling, mini-batching, quantization, sketching, dithering and sparsification.
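To illustrate two of the tricks mentioned in the description (mini-batching and sparsification), here is a minimal sketch of mini-batch SGD with unbiased Rand-K gradient sparsification on a synthetic least-squares problem. The data, batch size, sparsity level, and step size are illustrative assumptions, not material from the course.

```python
# Minimal sketch: mini-batch SGD with unbiased Rand-K gradient sparsification
# on a synthetic least-squares problem. All data and hyperparameters below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star                            # noiseless targets, so x_star is optimal

def minibatch_grad(x, batch):
    """Mini-batch gradient of f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    Ab = A[batch]
    return Ab.T @ (Ab @ x - b[batch]) / len(batch)

def rand_k(g, k):
    """Unbiased Rand-K sparsifier: keep k random coordinates, rescale by d/k."""
    mask = np.zeros_like(g)
    idx = rng.choice(g.size, size=k, replace=False)
    mask[idx] = g.size / k
    return g * mask

x = np.zeros(d)
step_size, batch_size, k = 0.05, 32, 5
for _ in range(5000):
    batch = rng.choice(n, size=batch_size, replace=False)
    x -= step_size * rand_k(minibatch_grad(x, batch), k)

print("squared distance to the solution:", np.linalg.norm(x - x_star) ** 2)
```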
Graduate course, Teaching Assistance, KAUST, Fall, 2022
Graduate seminar focusing on special topics within the field.
Graduate course, Teaching Assistance, KAUST, Fall, 2022
Graduate course, Teaching Assistance, KAUST, Fall, 2022
Stochastic gradient descent (SGD) in one or another of its many variants is the workhorse method for training modern supervised machine learning models. However, the world of SGD methods is vast and expanding, which makes it hard for practitioners and even experts to understand its landscape and inhabitants. This course is a mathematically rigorous and comprehensive introduction to the field, and is based on the latest results and insights. The course develops a convergence and complexity theory for serial, parallel, and distributed variants of SGD, in the strongly convex, convex and nonconvex setup, with randomness coming from sources such as subsampling and compression. Additional topics such as acceleration via Nesterov momentum or curvature information will be covered as well. A substantial part of the course offers a unified analysis of a large family of variants of SGD which have so far required different intuitions, convergence analyses, have different applications, and which have been developed separately in various communities. This framework includes methods with and without the following tricks, and their combinations: variance reduction, data sampling, coordinate sampling, arbitrary sampling, importance sampling, mini-batching, quantization, sketching, dithering and sparsification.
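As a complement to the sketch above, the following toy example illustrates the acceleration topic (Nesterov momentum) in the same stochastic setting; the momentum value, step size, and data are illustrative assumptions.

```python
# Minimal sketch: mini-batch SGD with a Nesterov (look-ahead) momentum step on
# a synthetic least-squares problem. Hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def minibatch_grad(x, batch):
    """Mini-batch gradient of the average squared-error loss."""
    Ab = A[batch]
    return Ab.T @ (Ab @ x - b[batch]) / len(batch)

x = np.zeros(d)
v = np.zeros(d)                                   # momentum buffer
step_size, momentum, batch_size = 0.02, 0.9, 32
for _ in range(5000):
    batch = rng.choice(n, size=batch_size, replace=False)
    g = minibatch_grad(x + momentum * v, batch)   # gradient at the look-ahead point
    v = momentum * v - step_size * g
    x += v

print("distance to the solution:", np.linalg.norm(x - x_star))
```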
Graduate course, Teaching Assistance, KAUST, Spring, 2023
Graduate seminar focusing on special topics within the field.
Graduate course, Teaching Assistance, KAUST, Fall, 2023
Graduate course, Teaching Assistance, KAUST, Spring, 2024
Stochastic gradient descent (SGD) in one or another of its many variants is the workhorse method for training modern supervised machine learning models. However, the world of SGD methods is vast and expanding, which makes it hard for practitioners and even experts to understand its landscape and inhabitants. This course is a mathematically rigorous and comprehensive introduction to the field, and is based on the latest results and insights. The course develops a convergence and complexity theory for serial, parallel, and distributed variants of SGD, in the strongly convex, convex and nonconvex setup, with randomness coming from sources such as subsampling and compression. Additional topics such as acceleration via Nesterov momentum or curvature information will be covered as well. A substantial part of the course offers a unified analysis of a large family of variants of SGD which have so far required different intuitions, convergence analyses, have different applications, and which have been developed separately in various communities. This framework includes methods with and without the following tricks, and their combinations: variance reduction, data sampling, coordinate sampling, arbitrary sampling, importance sampling, mini-batching, quantization, sketching, dithering and sparsification.
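Finally, here is a minimal sketch of the variance-reduction trick mentioned in the description, in the style of SVRG, on a small synthetic finite-sum problem; the epoch length, step size, and data are illustrative assumptions rather than course material.

```python
# Minimal sketch: SVRG-style variance reduction for a finite-sum least-squares
# problem. Data, epoch length, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 10
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star

def grad_i(x, i):
    """Gradient of the i-th summand f_i(x) = 0.5 * (a_i^T x - b_i)^2."""
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    """Exact gradient of the average loss."""
    return A.T @ (A @ x - b) / n

x = np.zeros(d)
step_size, epochs = 0.02, 30
for _ in range(epochs):
    x_ref = x.copy()
    mu = full_grad(x_ref)                   # full gradient at the reference point
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced stochastic gradient estimator
        g = grad_i(x, i) - grad_i(x_ref, i) + mu
        x -= step_size * g

print("distance to the solution:", np.linalg.norm(x - x_star))
```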