Yang You
~Yang_You1

Total papers: 53
Average submissions per year: 26.5
Average rating:
Accepted: 25/53

Venue distribution:
ICLR: 35
NeurIPS: 12
ICML: 5
COLM: 1
Published Papers (53)

2025 (35 papers)
- Trusted and Interactive Clustering for Time-Series Data (ICLR 2025, Withdrawn; avg rating 3)
- COBias and Debias: Minimizing Language Model Pairwise Accuracy Bias via Nonlinear Integer Programming (ICLR 2025, Rejected; avg rating 4)
- Let the Rule Speak: Enhancing In-context Learning Debiasing with Interpretability (ICLR 2025, Rejected; avg rating 4)
- Ensemble Debiasing Across Class and Sample Levels for Fairer Prompting Accuracy (COLM 2025, Poster; avg rating 4)
- EfficientSkip: Efficiently Transforming Dense LLMs into Sparse Variants (ICLR 2025, Withdrawn; avg rating 4)
- Real-Time Video Generation with Pyramid Attention Broadcast (ICLR 2025, Poster; avg rating 4)
- Recurrent Diffusion for Large-Scale Parameter Generation (ICLR 2025, Withdrawn; avg rating 6)
- ElasticMM: Efficient Multimodal LLMs Serving with Elastic Multimodal Parallelism (NeurIPS 2025, Oral; avg rating 4)
- Scaling Up Parameter Generation: A Recurrent Diffusion Approach (NeurIPS 2025, Poster; avg rating 4)
- Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning (NeurIPS 2025, Poster; avg rating 4)
- Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning (ICLR 2025, Rejected; avg rating 4)
- MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training (ICML 2025, Poster; avg rating 4)
- Train Small, Infer Large: Memory-Efficient LoRA Training for Large Language Models (ICLR 2025, Poster; avg rating 5)
- SOP-Agent: Empower General Purpose AI Agent with Domain-Specific SOPs (ICLR 2025, Withdrawn; avg rating 4)
- MERIT: Maximum-normalized Element-wise Ratio for Language Model Large-batch Training (ICLR 2025, Rejected; avg rating 4)
- Can a Large Language Model be a Gaslighter? (ICLR 2025, Poster; avg rating 4)
- Conditional LoRA Parameter Generation (ICLR 2025, Withdrawn; avg rating 5)
- DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers (ICLR 2025, Rejected; avg rating 5)
- Visual Perception in Text Strings (ICLR 2025, Rejected; avg rating 3)
- DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers (ICML 2025, Poster; avg rating 4)
- StarTrail: Concentric Ring Sequence Parallelism for Efficient Near-Infinite-Context Transformer Model Training (NeurIPS 2025, Poster; avg rating 4)
- Dynamic Diffusion Transformer (ICLR 2025, Poster; avg rating 4)
- SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning (ICML 2025, Poster; avg rating 3)
- Generating GFlowNets as You Wish with Diffusion Process (ICLR 2025, Withdrawn; avg rating 4)
- SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning (ICLR 2025, Rejected; avg rating 4)
- Emphasizing Discriminative Features for Dataset Distillation in Complex Scenarios (ICLR 2025, Withdrawn; avg rating 4)
- Info-Coevolution: An Efficient Framework for Data Model Coevolution (ICML 2025, Poster; avg rating 4)
- A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training (ICLR 2025, Withdrawn; avg rating 4)
- Unsupervised Learning for Class Distribution Mismatch (ICML 2025, Poster; avg rating 3)
- Prioritize Alignment in Dataset Distillation (ICLR 2025, Rejected; avg rating 4)
- EnvBridge: Bridging Diverse Environments with Cross-Environment Knowledge Transfer for Embodied AI (ICLR 2025, Rejected; avg rating 3)
- Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights (NeurIPS 2025, Poster; avg rating 4)
- REPA Works Until It Doesn't: Early-Stopped, Holistic Alignment Supercharges Diffusion Training (NeurIPS 2025, Poster; avg rating 4)
- MixEval-X: Any-to-any Evaluations from Real-world Data Mixture (ICLR 2025, Spotlight; avg rating 4)
- Neural-Driven Image Editing (NeurIPS 2025, Poster; avg rating 4)
2024 (18 papers)
- Balance Beam: adaptive computation for affordable training and inference with high-throughput offloading for LLMs (ICLR 2024, Withdrawn; avg rating 3)
- SpeedLoader: An I/O efficient scheme for heterogeneous and distributed LLM operation (NeurIPS 2024, Poster; avg rating 4)
- Neural Network Diffusion (ICLR 2024, Withdrawn; avg rating 3)
- Let's reward step by step: Step-Level reward model as the Navigators for Reasoning (ICLR 2024, Withdrawn; avg rating 5)
- Gauging Learnability in Supervised Fine-tuning Data (ICLR 2024, Withdrawn; avg rating 4)
- AutoChunk: Automated Activation Chunk for Memory-Efficient Deep Learning Inference (ICLR 2024, Poster; avg rating 3)
- Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching (ICLR 2024, Poster; avg rating 4)
- The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive field (ICLR 2024, Rejected; avg rating 4)
- MetaDist: An Infrastructure for Automatic Parallelism via ShardCombine Algorithm (ICLR 2024, Rejected; avg rating 4)
- Boosting Unsupervised Contrastive Learning Using Diffusion-Based Data Augmentation From Scratch (ICLR 2024, Withdrawn; avg rating 4)
- Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation (NeurIPS 2024, Poster; avg rating 4)
- MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures (NeurIPS 2024, Poster; avg rating 4)
- Implicit Semi-auto-regressive Image-to-Video Diffusion (ICLR 2024, Rejected; avg rating 4)
- Rethinking Human Evaluation Protocol for Text-to-Video Models: Enhancing Reliability, Reproducibility, and Practicality (NeurIPS 2024, Poster; avg rating 4)
- CTRL: Graph condensation via crafting rational trajectory matching (ICLR 2024, Rejected; avg rating 4)
- Can pre-trained models assist in dataset distillation? (ICLR 2024, Withdrawn; avg rating 4)
- InfoBatch: Lossless Training Speed Up by Unbiased Dynamic Data Pruning (ICLR 2024, Oral; avg rating 4)
- Prioritize Alignment in Dataset Distillation (NeurIPS 2024, Rejected; avg rating 3)