PaperHub

Yu Wang

~Yu_Wang3

Total papers: 31
Submissions per year (avg.): 15.5
Average rating: 5.8
Accepted: 20/31
Conference distribution
ICLR: 17
NeurIPS: 9
ICML: 3
COLM: 2

Published papers (31)

2025 (23 papers)

Distilled Decoding 1: One-step Sampling of Image Auto-regressive Models with Flow Matching
ICLR 2025 · Poster · Avg. rating 6.5 · 4 reviews

ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning
NeurIPS 2025 · Poster · Avg. rating 6.4 · 4 reviews

FlightBench: Benchmarking Learning-based Methods for Ego-vision-based Quadrotors Navigation
ICLR 2025 · Withdrawn · Avg. rating 4.8 · 4 reviews

Learning Strategic Language Agents in the Werewolf Game with Iterative Latent Space Policy Optimization
ICML 2025 · Poster · Avg. rating 3.8 · 4 reviews

Learning from Suboptimal Data in Continuous Control via Auto-Regressive Soft Q-Network
ICML 2025 · Poster · Avg. rating 7.0 · 3 reviews

Few-shot In-context Preference Learning using Large Language Models
ICLR 2025 · Rejected · Avg. rating 6.0 · 4 reviews

Accelerating Auto-regressive Text-to-Image Generation with Training-free Speculative Jacobi Decoding
ICLR 2025 · Poster · Avg. rating 5.7 · 3 reviews

Speculative Jacobi-Denoising Decoding for Accelerating Autoregressive Text-to-image Generation
NeurIPS 2025 · Poster · Avg. rating 7.3 · 4 reviews

VeSX: A Framework Featured by Verification, Self-Correction and In-context Learning for Web Automation Tasks
ICLR 2025 · Rejected · Avg. rating 4.6 · 5 reviews

Reward-Robust RLHF in LLMs
ICLR 2025 · Rejected · Avg. rating 4.3 · 4 reviews

Distilled Decoding 2: One-step Sampling of Image Auto-regressive Models with Conditional Score Distillation
NeurIPS 2025 · Poster · Avg. rating 6.8 · 4 reviews

What Can RL Bring to VLA Generalization? An Empirical Study
NeurIPS 2025 · Poster · Avg. rating 6.8 · 4 reviews

Efficient Expert Pruning for Sparse Mixture-of-Experts Language Models: Enhancing Performance and Reducing Inference Costs
ICLR 2025 · Rejected · Avg. rating 5.3 · 4 reviews

R2R: Efficiently Navigating Divergent Reasoning Paths with Small-Large Model Token Routing
NeurIPS 2025 · Poster · Avg. rating 7.8 · 4 reviews

Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving
ICML 2025 · Poster · Avg. rating 6.3 · 3 reviews

Towards Accurate and Efficient Sub-8-Bit Integer Training
ICLR 2025 · Withdrawn · Avg. rating 4.5 · 4 reviews

Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better
ICLR 2025 · Poster · Avg. rating 6.0 · 4 reviews

PAROAttention: Pattern-Aware ReOrdering for Efficient Sparse and Quantized Attention in Visual Generation Models
NeurIPS 2025 · Poster · Avg. rating 7.3 · 4 reviews

LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K
ICLR 2025 · Rejected · Avg. rating 5.5 · 4 reviews

LV-Eval: A Balanced Long-Context Benchmark with 5 Length Levels Up to 256K
COLM 2025 · Poster · Avg. rating 6.3 · 4 reviews

Mixture of Attention Spans: Optimizing LLM Inference Efficiency with Heterogeneous Sliding-Window Lengths
COLM 2025 · Poster · Avg. rating 6.3 · 3 reviews

MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression
ICLR 2025 · Rejected · Avg. rating 5.5 · 4 reviews

ViDiT-Q: Efficient and Accurate Quantization of Diffusion Transformers for Image and Video Generation
ICLR 2025 · Poster · Avg. rating 6.0 · 3 reviews