PaperHub

Min Zhang (~Min_Zhang9)

Total papers: 35
Avg. submissions per year: 17.5
Average review score: 5.5
Accepted: 18/35

Conference distribution: ICLR 20 · NeurIPS 12 · ICML 2 · COLM 1

Published Papers (35)

2025 (24 papers)

LOGO --- Long cOntext aliGnment via efficient preference Optimization
ICML 2025 · Poster · avg. score 5.5 (3 reviews)

DISRetrieval: Harnessing Discourse Structure for Long Document Retrieval
NeurIPS 2025 · Rejected · avg. score 5.5 (4 reviews)

MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching
NeurIPS 2025 · Poster · avg. score 6.4 (4 reviews)

LOGO --- Long cOntext aliGnment via efficient preference Optimization
ICLR 2025 · Rejected · avg. score 5.3 (4 reviews)

IDInit: A Universal and Stable Initialization Method for Neural Network Training
ICLR 2025 · Poster · avg. score 6.3 (4 reviews)

SCAN: Self-Denoising Monte Carlo Annotation for Robust Process Reward Learning
NeurIPS 2025 · Poster · avg. score 8.2 (3 reviews)

Revealing and Mitigating Over-Attention in Knowledge Editing
ICLR 2025 · Poster · avg. score 7.0 (4 reviews)

InfCycle: Learning to Use Tools via Inference Compute and Cycle Consistency
ICLR 2025 · Withdrawn · avg. score 4.8 (4 reviews)

L-CiteEval: Do Long-Context Models Truly Leverage Context for Responding?
ICLR 2025 · Withdrawn · avg. score 3.5 (4 reviews)

Unleashing Reasoning Capability of LLMs via Scalable Question Synthesis from Scratch
ICLR 2025 · Rejected · avg. score 5.5 (4 reviews)

Learning to Watermark: A Selective Watermarking Framework for Large Language Models via Multi-Objective Optimization
NeurIPS 2025 · Poster · avg. score 6.4 (4 reviews)

Exploring the Translation Mechanism of Large Language Models
NeurIPS 2025 · Poster · avg. score 7.8 (4 reviews)

Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning
NeurIPS 2025 · Poster · avg. score 6.8 (4 reviews)

Path Selection Makes BERT-family Good Generators
ICLR 2025 · Rejected · avg. score 3.8 (4 reviews)

DynamicKV: Task-Aware Adaptive KV Cache Compression for Long Context LLMs
ICLR 2025 · Rejected · avg. score 4.4 (5 reviews)

DelTA: An Online Document-Level Translation Agent Based on Multi-Level Memory
ICLR 2025 · Poster · avg. score 6.5 (4 reviews)

Reflection on Knowledge Graph for Large Language Models Reasoning
ICLR 2025 · Rejected · avg. score 5.8 (4 reviews)

Beware of Calibration Data for Pruning Large Language Models
ICLR 2025 · Poster · avg. score 5.5 (4 reviews)

CASD: Enhancing Generation Accuracy via Context-Aware Speculative Decoding
ICLR 2025 · Withdrawn · avg. score 3.0 (4 reviews)

SinkQ: Accurate 2-bit KV Cache Quantization with Dynamic Sink Tracking
ICLR 2025 · Withdrawn · avg. score 4.4 (5 reviews)

Phased Training for LLM-powered Text Retrieval Models Beyond Data Scaling
COLM 2025 · Poster · avg. score 7.3 (4 reviews)

Evolving Virtual World with Delta-Engine
ICLR 2025 · Rejected · avg. score 2.0 (4 reviews)

Multi-modality Expansion and Retention for LLMs through Parameter Merging and Decoupling
ICLR 2025 · Rejected · avg. score 5.5 (4 reviews)

Function-to-Style Guidance of LLMs for Code Translation
ICML 2025 · Poster · avg. score 6.6 (4 reviews)

2024 (11 papers)