PaperHub

Pavlo Molchanov (~Pavlo_Molchanov1)

Total papers: 22
Avg. submissions per year: 11.0
Avg. rating: 6.1
Accepted: 15/22

Venue distribution: ICLR 13, NeurIPS 6, ICML 3

Published Papers (22)

2025 (18 papers)

Title | Venue | Decision | Avg. Rating | Reviews
FeatSharp: Your Vision Model Features, Sharper | ICML 2025 | Poster | 6.6 | 4
Minifinetuning: Low-Data Generation Domain Adaptation through Corrective Self-Distillation | ICLR 2025 | Rejected | 6.0 | 4
PHI-S: Distribution Balancing for Agglomerative Models | ICLR 2025 | Rejected | 5.3 | 4
VILA^2: VLM Augmented VLM with Self-Improvement | ICLR 2025 | Withdrawn | 4.5 | 4
LLM Pruning and Distillation in Practice | ICLR 2025 | Rejected | 5.0 | 3
LLaMaFlex: Many-in-one LLMs via Generalized Pruning and Weight Sharing | ICLR 2025 | Poster | 6.5 | 4
UNAST: Unified framework for Neural Architecture Search for Transformers | ICLR 2025 | Withdrawn | 4.0 | 4
Scaling RL to Long Videos | NeurIPS 2025 | Poster | 6.8 | 4
LaCache: Ladder-Shaped KV Caching for Efficient Long-Context Modeling of Large Language Models | ICML 2025 | Poster | 4.9 | 4
LongMamba: Enhancing Mamba's Long-Context Capabilities via Training-Free Receptive Field Enlargement | ICLR 2025 | Poster | 6.8 | 4
ZoomVLM: A Tuning-Free Framework for Efficient Video Understanding via Adaptive Zooming in Vision-Language Models | ICLR 2025 | Rejected | 5.0 | 3
X-VILA: Cross-Modality Alignment for Large Language Models | ICLR 2025 | Withdrawn | 4.8 | 4
Hymba: A Hybrid-head Architecture for Small Language Models | ICLR 2025 | Spotlight | 7.5 | 4
GSPN-2: Efficient Parallel Sequence Modeling | NeurIPS 2025 | Poster | 6.8 | 4
LongVILA: Scaling Long-Context Visual Language Models for Long Videos | ICLR 2025 | Poster | 6.7 | 3
Nemotron-Flash: Towards Latency-Optimal Hybrid Small Language Models | NeurIPS 2025 | Poster | 7.3 | 4
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs | ICML 2025 | Poster | 6.1 | 4
Efficient Hybrid Language Model Compression through Group-Aware SSM Pruning | NeurIPS 2025 | Poster | 8.2 | 4