Jingfeng Wu
~Jingfeng_Wu1
Total papers: 13
Average submissions per year: 6.5
Average rating:
Accepted: 11/13
Venue distribution:
NeurIPS: 5
ICLR: 5
ICML: 3
Published papers (13)
2025 (7 papers)

Rating | Title | Venue | Status
5 | Benefits of Early Stopping in Gradient Descent for Overparameterized Logistic Regression | ICML 2025 | Poster
4 | Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression | NeurIPS 2025 | Poster
4 | Gradient Descent Converges Arbitrarily Fast for Logistic Regression via Large and Adaptive Stepsizes | ICML 2025 | Poster
5 | Improved Scaling Laws in Linear Regression via Data Reuse | NeurIPS 2025 | Poster
4 | Context-Scaling versus Task-Scaling in In-Context Learning | ICLR 2025 | Rejected
4 | Implicit Bias of Gradient Descent for Non-Homogeneous Deep Networks | ICML 2025 | Poster
5 | How Does Critical Batch Size Scale in Pre-training? | ICLR 2025 | Poster
2024 (6 papers)

Rating | Title | Venue | Status
4 | How Many Pretraining Tasks Are Needed for In-Context Learning of Linear Regression? | ICLR 2024 | Spotlight
5 | Scaling Laws in Linear Regression: Compute, Parameters, and Data | NeurIPS 2024 | Poster
4 | In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization | NeurIPS 2024 | Poster
4 | Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization | NeurIPS 2024 | Poster
3 | Risk Bounds of Accelerated SGD for Overparameterized Linear Regression | ICLR 2024 | Poster
4 | Private Overparameterized Linear Regression without Suffering in High Dimensions | ICLR 2024 | Rejected