Peter Richtárik
~Peter_Richtárik1
Total papers: 48
Avg. submissions per year: 24.0
Average rating:
Accepted: 21/48
Venue distribution: ICLR 32 · NeurIPS 13 · ICML 3
Publications (48)

2025 (23 papers)
A Unified Theory of Stochastic Proximal Point Methods without Smoothness · ICLR 2025 · Rejected · avg. score 4
Local Curvature Descent: Squeezing More Curvature out of Standard and Polyak Gradient Descent · NeurIPS 2025 · Poster · avg. score 4
Local Curvature Descent: Squeezing More Curvature out of Standard and Polyak Gradient Descent · ICLR 2025 · Rejected · avg. score 4
On the Convergence of FedProx with Extrapolation and Inexact Prox · ICLR 2025 · Rejected · avg. score 4
Error Feedback for Smooth and Nonsmooth Convex Optimization with Constant, Decreasing and Polyak Stepsizes · ICLR 2025 · Rejected · avg. score 4
Second-order Optimization under Heavy-Tailed Noise: Hessian Clipping and Sample Complexity Limits · NeurIPS 2025 · Poster · avg. score 4
ATA: Adaptive Task Allocation for Efficient Resource Management in Distributed Machine Learning · ICML 2025 · Poster · avg. score 4
Ringmaster ASGD: The First Asynchronous SGD with Optimal Time Complexity · ICML 2025 · Poster · avg. score 4
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression · ICLR 2025 · Spotlight · avg. score 4
Smoothed Normalization for Efficient Distributed Private Optimization · ICML 2025 · Rejected · avg. score 4
MindFlayer: Efficient Asynchronous Parallel SGD in the Presence of Heterogeneous and Random Worker Compute Times · ICLR 2025 · Rejected · avg. score 4
Variance Reduced Distributed Non-Convex Optimization Using Matrix Stepsizes · ICLR 2025 · Rejected · avg. score 4
MAST: model-agnostic sparsified training · ICLR 2025 · Poster · avg. score 4
FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models · ICLR 2025 · Withdrawn · avg. score 3
Prune at the Clients, Not the Server: Accelerated Sparse Training in Federated Learning · ICLR 2025 · Withdrawn · avg. score 5
Tighter Performance Theory of FedExProx · ICLR 2025 · Rejected · avg. score 3
Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning · ICLR 2025 · Withdrawn · avg. score 5
Momentum and Error Feedback for Clipping with Fast Rates and Differential Privacy · ICLR 2025 · Rejected · avg. score 4
Methods for Convex $(L_0,L_1)$-Smooth Optimization: Clipping, Acceleration, and Adaptivity · ICLR 2025 · Poster · avg. score 5
Communication-efficient Algorithms Under Generalized Smoothness Assumptions · ICLR 2025 · Rejected · avg. score 4
Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum · NeurIPS 2025 · Poster · avg. score 4
Methods with Local Steps and Random Reshuffling for Generally Smooth Non-Convex Federated Optimization · ICLR 2025 · Poster · avg. score 4
RAC-LoRA: A Theoretical Optimization Framework for Low-Rank Adaptation · ICLR 2025 · Rejected · avg. score 4
2024 (25 papers)
Error Feedback Reloaded: From Quadratic to Arithmetic Mean of Smoothness Constants · ICLR 2024 · Poster · avg. score 4
Error Feedback Shines when Features are Rare · ICLR 2024 · Rejected · avg. score 4
Byzantine Robustness and Partial Participation Can Be Achieved at Once: Just Clip Gradient Differences · NeurIPS 2024 · Poster · avg. score 5
Towards a Better Theoretical Understanding of Independent Subnetwork Training · ICLR 2024 · Rejected · avg. score 4
On the Optimal Time Complexities in Decentralized Stochastic Asynchronous Optimization · NeurIPS 2024 · Poster · avg. score 4
Improving Accelerated Federated Learning with Compression and Importance Sampling · ICLR 2024 · Rejected · avg. score 3
Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity · NeurIPS 2024 · Spotlight · avg. score 4
The Power of Extrapolation in Federated Learning · NeurIPS 2024 · Poster · avg. score 3
LoCoDL: Communication-Efficient Distributed Learning with Local Training and Compression · NeurIPS 2024 · Rejected · avg. score 3
Freya PAGE: First Optimal Time Complexity for Large-Scale Nonconvex Finite-Sum Optimization with Heterogeneous Asynchronous Computations · NeurIPS 2024 · Poster · avg. score 4
Byzantine Robustness and Partial Participation Can Be Achieved Simultaneously: Just Clip Gradient Differences · ICLR 2024 · Rejected · avg. score 3
Provably Doubly Accelerated Federated Learning: The First Theoretically Successful Combination of Local Training and Communication Compression · ICLR 2024 · Rejected · avg. score 2
MARINA Meets Matrix Stepsizes: Variance Reduced Distributed Non-Convex Optimization · ICLR 2024 · Rejected · avg. score -
GradSkip: Communication-Accelerated Local Gradient Methods with Better Computational Complexity · ICLR 2024 · Rejected · avg. score 4
FedP3: Federated Personalized and Privacy-friendly Network Pruning under Model Heterogeneity · ICLR 2024 · Poster · avg. score 6
Explicit Personalization and Local Training: Double Communication Acceleration in Federated Learning · ICLR 2024 · Rejected · avg. score 3
Det-CGD: Compressed Gradient Descent with Matrix Stepsizes for Non-Convex Optimization · ICLR 2024 · Poster · avg. score 3
Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity · NeurIPS 2024 · Poster · avg. score 4
Communication Compression for Byzantine Robust Learning: New Efficient Algorithms and Improved Rates · ICLR 2024 · Rejected · avg. score -
Clip21: Error Feedback for Gradient Clipping · ICLR 2024 · Rejected · avg. score 3
MicroAdam: Accurate Adaptive Optimization with Low Space Overhead and Provable Convergence · NeurIPS 2024 · Poster · avg. score 5
Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences · NeurIPS 2024 · Poster · avg. score 4
Federated Optimization Algorithms with Random Reshuffling and Gradient Compression · ICLR 2024 · Rejected · avg. score 4
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise · ICLR 2024 · Rejected · avg. score 3
PV-Tuning: Beyond Straight-Through Estimation for Extreme LLM Compression · NeurIPS 2024 · Oral · avg. score 5