CV
Education
- B.S. in GitHub, GitHub University, 2012
- M.S. in Jekyll, GitHub University, 2014
- Ph.D. in Version Control Theory, GitHub University, 2018 (expected)
Work experience
- Summer 2015: Research Assistant
  - GitHub University
  - Duties included: Tagging issues
  - Supervisor: Professor Git
- Fall 2015: Research Assistant
  - GitHub University
  - Duties included: Merging pull requests
  - Supervisor: Professor Hub
Skills
- Skill 1
- Skill 2
  - Sub-skill 2.1
  - Sub-skill 2.2
  - Sub-skill 2.3
- Skill 3
Publications
Talks
Averaged Heavy Ball Method
Poster at Traditional Youth School: Control, Information and Optimization, Voronovo, Russia
Averaged Heavy Ball Method
Talk at 62nd Scientific Conference at MIPT, Section of Data Analysis, Recognition and Prediction, Moscow, Russia
Determination of Data Complexity Using a Universal Approximating Model
Talk at Mathematical Methods for Pattern Recognition: the 19th Russian National Conference with International Participation, Moscow, Russia
Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Talk at KAUST Conference on Artificial Intelligence 2021, Thuwal, Saudi Arabia
Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Talk at Traditional Youth School: Control, Information and Optimization, Voronovo, Russia
Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Poster and Talk at the Optimization Without Borders conference, Voronovo, Russia
Federated Random Reshuffling with Compression and Variance Reduction
Poster at International Workshop on Federated Learning for User Privacy and Data Confidentiality, ICML 2021, online
Better Linear Rates for SGD with Data Shuffling
Poster and Talk at International OPT Workshop on Optimization for Machine Learning, NeurIPS 2021, online
On Server-Side Stepsizes in Federated Optimization: Theory Explaining the Heuristics
Poster at International OPT Workshop on Optimization for Machine Learning, NeurIPS 2021, online
ProxSkip: Breaking the Communication Complexity Barrier of Local Gradient Methods
Talk at Rising Stars in AI Symposium 2022 at KAUST, Thuwal, Saudi Arabia
Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization
Talk at Federated Learning One World Seminar (FLOW), online
ProxSkip: Breaking the Communication Complexity Barrier of Local Gradient Methods
Talk at All-Russian Optimization Seminar, online
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Talk at UCLouvain CORE (Yurii Nesterov's group) Optimization Seminar, Louvain-la-Neuve, Belgium
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Talk at WIAS Stochastic Algorithms and Nonparametric Statistics group seminar, online
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Talk at CISPA Helmholtz Center for Information Security seminar, Saarbrücken, Germany
ProxSkip: Yes! Local Gradient Steps Provably Lead to Communication Acceleration! Finally!
Talk at EPFL Machine Learning and Optimization Laboratory Seminar, Lausanne, Switzerland
ProxSkip: Breaking the Communication Complexity Barrier of Local Gradient Methods
Talk at 15th Viennese Conference on Optimal Control and Dynamic Games 2022, Vienna, Austria
On the 5th Generation of Local Training Methods in Federated Learning
Talk at MIPT Intelligent Systems Seminar, online
Can 5th Generation Local Training Methods Support Client Sampling? Yes!
Poster at Rising Stars in AI Symposium 2023 at KAUST, Thuwal, Saudi Arabia
Can 5th Generation Local Training Methods Support Client Sampling? Yes!
Poster at 26th International Conference on Artificial Intelligence and Statistics (AISTATS), Valencia, Spain
ProxSkip and its Variations: 5th Generation of Local Training Methods in Federated Learning
Talk at Google Research Seminar (Zachary Charles), online
Can 5th Generation Local Training Methods Support Client Sampling? Yes!
Talk at Third International Conference Mathematics in Armenia: Advances and Perspectives, Yerevan, Armenia
Random Reshuffling with Variance Reduction: New Analysis and Better Rates
Poster at 39th Conference on Uncertainty in Artificial Intelligence, online
Server-Side Stepsizes and Sampling Without Replacement Provably Help in Federated Optimization
Talk at 4th International Workshop on Distributed Machine Learning, Paris, France
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
Talk at Rising Stars in AI Symposium 2024 at KAUST, Thuwal, Saudi Arabia
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences
Talk at Federated Learning One World Seminar (FLOW), online
Introduction to Federated Optimization
Talk at Samsung AI Reading Club, Staines-upon-Thames, UK
Teaching
Service and leadership
- Currently signed in to 43 different Slack teams