Yunhe Wang
~Yunhe_Wang1
24
Total papers
12.0
Submissions per year
Average rating
Accepted: 14/24
Venue distribution
ICLR
14
NeurIPS
6
ICML
4
Papers (24)
2025 (14 papers)
5
SlimLLM: Accurate Structured Pruning for Large Language Models
ICML 2025 · Poster
4
Linear Multistep Solver Distillation for Fast Sampling of Diffusion Models
ICLR 2025 · Poster
4
DECO: Unleashing the Potential of ConvNets for Query-based Detection and Segmentation
ICLR 2025 · Poster
4
Forest-of-Thought: Scaling Test-Time Compute for Enhancing LLM Reasoning
ICML 2025 · Poster
4
U-REPA: Aligning Diffusion U-Nets to ViTs
NeurIPS 2025 · Poster
4
Weak-to-Strong Enhanced Vision Model
ICLR 2025 · Withdrawn
4
LLM Data Selection and Utilization via Dynamic Bi-level Optimization
ICML 2025 · Poster
5
Prompt-guided Visual Perception for Efficient Training-free Video LLM
ICLR 2025 · Withdrawn
4
Mixture of Lookup Experts
ICML 2025 · Oral
4
GenVidBench: A Challenging Benchmark for Detecting AI-Generated Video
ICLR 2025 · Withdrawn
3
SAN-Diff: Structure-aware noise for super-resolution diffusion model
ICLR 2025 · Rejected
5
Empirical Study on Enhancing Efficiency in Masked Image Modeling Pre-training
ICLR 2025 · Rejected
4
Multi-Granularity Semantic Revision for Large Language Model Distillation
ICLR 2025 · Rejected
5
CBQ: Cross-Block Quantization for Large Language Models
ICLR 2025 · Spotlight
2024 (10 papers)
4
Revisiting Ternary Neural Networks towards Asymmetric Thresholds and Uniform Distribution
ICLR 2024 · Rejected
4
PPT: Token Pruning and Pooling for Efficient Vision Transformers
ICLR 2024 · Withdrawn
4
Connectivity-based Token Condensation for Efficient Vision Transformer
ICLR 2024 · Withdrawn
4
Double Rounding Quantization for Flexible Deep Neural Network Compression
ICLR 2024 · Withdrawn
4
U-DiTs: Downsample Tokens in U-Shaped Diffusion Transformers
NeurIPS 2024 · Poster
5
Kangaroo: Lossless Self-Speculative Decoding for Accelerating LLMs via Double Early Exiting
NeurIPS 2024 · Poster
4
Enhancing Large Language Models through Adaptive Tokenizers
NeurIPS 2024 · Poster
4
Multiscale Positive-Unlabeled Detection of AI-Generated Texts
ICLR 2024 · Spotlight
4
Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning
NeurIPS 2024 · Poster
4
MemoryFormer: Minimize Transformer Computation by Removing Fully-Connected Layers
NeurIPS 2024 · Poster