Eduard Gorbunov (~Eduard_Gorbunov1)

Total papers: 19
Average submissions per year: 9.5
Average score: -
Accepted: 8/19

Venue distribution: ICLR 12 · NeurIPS 6 · ICML 1

Published papers (19)
2025 (10 papers)

Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity · ICLR 2025, Poster · Score: 5
Communication-efficient Algorithms Under Generalized Smoothness Assumptions · ICLR 2025, Rejected · Score: 4
Federated Learning Can Find Friends That Are Advantageous · ICLR 2025, Rejected · Score: 4
Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum · NeurIPS 2025, Poster · Score: 4
Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy · ICLR 2025, Rejected · Score: 4
Differentially Private Clipped-SGD: High-Probability Convergence with Arbitrary Clipping Level · NeurIPS 2025, Rejected · Score: 3
Median Clipping for Zeroth-order Non-Smooth Convex Optimization and Multi Arm Bandit Problem with Heavy-tailed Symmetric Noise · ICLR 2025, Rejected · Score: 4
Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization · ICLR 2025, Poster · Score: 4
Clipping Improves Adam and AdaGrad when the Noise Is Heavy-Tailed · ICLR 2025, Rejected · Score: 4
Clipping Improves Adam-Norm and AdaGrad-Norm when the Noise Is Heavy-Tailed · ICML 2025, Poster · Score: 4
2024 (9 papers)

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise · ICLR 2024, Rejected · Score: 3
Clip21: Error Feedback for Gradient Clipping · ICLR 2024, Rejected · Score: 3
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences · ICLR 2024, Rejected · Score: 3
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression · ICLR 2024, Rejected · Score: 4
Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences · NeurIPS 2024, Poster · Score: 4
Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences · NeurIPS 2024, Poster · Score: 5
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates · ICLR 2024, Rejected · Score: -
Exploring Jacobian Inexactness in Second-Order Methods for Variational Inequalities: Lower Bounds, Optimal Algorithms and Quasi-Newton Approximations · NeurIPS 2024, Spotlight · Score: 3
Remove that Square Root: A New Efficient Scale-Invariant Version of AdaGrad · NeurIPS 2024, Poster · Score: 4