PaperHub

Cho-Jui Hsieh (~Cho-Jui_Hsieh1)

Total papers: 30
Average submissions per year: 15.0
Average rating: 5.6
Accepted: 13/30

Venue distribution: ICLR 24, NeurIPS 4, ICML 2

Papers (30)

2025 (18 papers)

| Avg. Rating | Reviews | Title | Venue | Decision |
|---|---|---|---|---|
| 5.7 | 3 | Learn from the Past: Dynamic Data Pruning with Historically Weighted Bernoulli Sampling | ICLR 2025 | Rejected |
| 5.3 | 4 | Certified Training with Branch-and-Bound: A Case Study on Lyapunov-stable Neural Control | ICLR 2025 | Withdrawn |
| 3.0 | 4 | Provably Noise-Resilient Training of Parameterized Quantum Circuits | ICLR 2025 | Rejected |
| 4.0 | 4 | MuLan: Multimodal-LLM Agent for Progressive and Interactive Multi-Object Diffusion | ICLR 2025 | Rejected |
| 5.0 | 8 | OR-Bench: An Over-Refusal Benchmark for Large Language Models | ICLR 2025 | Rejected |
| 6.6 | 4 | OR-Bench: An Over-Refusal Benchmark for Large Language Models | ICML 2025 | Poster |
| 4.8 | 4 | On the loss of context-awareness in general instruction finetuning | ICLR 2025 | Withdrawn |
| 7.8 | 4 | On the Loss of Context Awareness in General Instruction Fine-tuning | NeurIPS 2025 | Poster |
| 6.0 | 4 | Don’t Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models | NeurIPS 2025 | Poster |
| 6.0 | 4 | Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | NeurIPS 2025 | Poster |
| 6.8 | 4 | Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs | NeurIPS 2025 | Poster |
| 6.3 | 4 | The Crystal Ball Hypothesis in diffusion models: Anticipating object positions from initial noise | ICLR 2025 | Poster |
| 5.5 | 4 | Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning | ICLR 2025 | Rejected |
| 6.3 | 3 | Large Language Models are Interpretable Learners | ICLR 2025 | Poster |
| 6.0 | 4 | Is Your Multimodal Language Model Oversensitive to Safe Queries? | ICLR 2025 | Poster |
| 8.7 | 3 | LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization | ICLR 2025 | Oral |
| 5.5 | 4 | SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning | ICLR 2025 | Rejected |
| 5.5 | 3 | SeedLoRA: A Fusion Approach to Efficient LLM Fine-Tuning | ICML 2025 | Poster |

2024 (12 papers)