Luke Zettlemoyer
~Luke_Zettlemoyer1
Total papers: 32
Average submissions per year: 16.0
Average rating:
Acceptance: 30/32

Conference distribution:
ICLR: 15
NeurIPS: 10
COLM: 6
ICML: 1
Published papers (32)
2025 (19 papers)
(Mis)Fitting Scaling Laws: A Survey of Scaling Law Fitting Techniques in Deep Learning
  ICLR 2025, Poster (4)
Recycling the Web: A Method to Enhance Pre-training Data Quality and Quantity for Language Models
  COLM 2025, Poster (4)
When Worse is Better: Navigating the Compression Generation Trade-off In Visual Tokenization
  NeurIPS 2025, Spotlight (4)
Memory Layers at Scale
  ICML 2025, Poster (5)
CAT: Content-Adaptive Image Tokenization
  NeurIPS 2025, Poster (3)
Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass
  ICLR 2025, Poster (4)
LMFusion: Adapting Pretrained Language Models for Multimodal Generation
  NeurIPS 2025, Poster (4)
Fantastic Copyrighted Beasts and How (Not) to Generate Them
  ICLR 2025, Poster (4)
MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  ICLR 2025, Withdrawn (5)
ParaPO: Aligning Language Models to Reduce Verbatim Reproduction of Pre-training Data
  COLM 2025, Poster (3)
Heterogeneous Swarms: Jointly Optimizing Model Roles and Weights for Multi-LLM Systems
  NeurIPS 2025, Poster (5)
MUSE: Machine Unlearning Six-Way Evaluation for Language Models
  ICLR 2025, Poster (5)
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model
  ICLR 2025, Oral (5)
Precise Information Control in Long-Form Text Generation
  NeurIPS 2025, Poster (4)
Meta CLIP 2: A Worldwide Scaling Recipe
  NeurIPS 2025, Spotlight (5)
ReasonIR: Training Retrievers for Reasoning Tasks
  COLM 2025, Poster (4)
Latent Action Pretraining from Videos
  ICLR 2025, Poster (6)
FlexOLMo: Open Language Models for Flexible Data Use
  NeurIPS 2025, Spotlight (4)
2 OLMo 2 Furious (COLM’s Version)
  COLM 2025, Poster (4)
2024 (13 papers)
Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens
  COLM 2024, Poster (4)
Self-Alignment with Instruction Backtranslation
  ICLR 2024, Oral (4)
Visual Sketchpad: Sketching as a Visual Chain of Thought for Multimodal Language Models
  NeurIPS 2024, Poster (4)
Do Membership Inference Attacks Work on Large Language Models?
  COLM 2024, Poster (4)
Scaling Retrieval-Based Language Models with a Trillion-Token Datastore
  NeurIPS 2024, Poster (6)
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore
  ICLR 2024, Spotlight (4)
Eliciting Attributions from LLMs with Minimal Supervision
  ICLR 2024, Rejected (4)
Megalodon: Efficient LLM Pretraining and Inference with Unlimited Context Length
  NeurIPS 2024, Poster (4)
In-Context Pretraining: Language Modeling Beyond Document Boundaries
  ICLR 2024, Spotlight (4)
Detecting Pretraining Data from Large Language Models
  ICLR 2024, Poster (4)
Representation Deficiency in Masked Language Modeling
  ICLR 2024, Poster (4)
Demystifying CLIP Data
  ICLR 2024, Spotlight (4)
RA-DIT: Retrieval-Augmented Dual Instruction Tuning
  ICLR 2024, Poster (4)