Sanmi Koyejo (~Sanmi_Koyejo1)
Total papers: 48
Average submissions per year: 24.0
Average rating: -
Accepted: 26/48

Conference distribution: ICLR 27, ICML 10, NeurIPS 8, COLM 3
Published papers (48)

2025 (37 papers)
Avg. rating | Title | Venue | Decision
5 | The Utility and Complexity of In- and Out-of-Distribution Machine Unlearning | ICLR 2025 | Poster
4 | Logits are All We Need to Adapt Closed Models | ICML 2025 | Poster
4 | Lottery Ticket Adaptation: Mitigating Destructive Interference in LLMs | ICLR 2025 | Withdrawn
4 | Language Models May Verbatim Complete Text They Were Not Explicitly Trained On | ICML 2025 | Spotlight
6 | Reliable and Efficient Amortized Model-based Evaluation | ICLR 2025 | Rejected
3 | From Passive to Active Reasoning: Can Large Language Models Ask the Right Questions under Incomplete Information? | ICML 2025 | Poster
4 | Reliable and Efficient Amortized Model-based Evaluation | ICML 2025 | Poster
6 | Quantifying Variance in Evaluation Benchmarks | ICLR 2025 | Rejected
4 | Certified Unlearning for Neural Networks | ICML 2025 | Poster
5 | Lean-ing on Quality: How High-Quality Data Beats Diverse Multilingual Data in AutoFormalization | ICLR 2025 | Rejected
4 | Aligning Compound AI Systems via System-level DPO | NeurIPS 2025 | Poster
5 | Scaling Laws for Downstream Task Performance in Machine Translation | ICLR 2025 | Poster
4 | Beyond Scale: The Diversity Coefficient as a Data Quality Metric for Variability in Natural Language Data | ICLR 2025 | Rejected
4 | Sharpe Ratio-Guided Active Learning for Preference Optimization in RLHF | COLM 2025 | Poster
3 | Incidental Polysemanticity: A New Obstacle for Mechanistic Interpretability | ICLR 2025 | Rejected
3 | Attacking Audio Language Models with Best-of-N Jailbreaking | ICLR 2025 | Rejected
4 | AutoRedTeamer: An Autonomous Red Teaming Agent Against Language Models | ICLR 2025 | Rejected
5 | Putnam-AXIOM: A Functional & Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs | ICLR 2025 | Rejected
4 | Nonmyopic Bayesian Optimization in Dynamic Cost Settings | ICLR 2025 | Rejected
4 | Best-of-N Jailbreaking | NeurIPS 2025 | Poster
4 | KGGen: Extracting Knowledge Graphs from Plain Text with Language Models | NeurIPS 2025 | Poster
4 | ZIP-FIT: Embedding-Free Data Selection via Compression-Based Alignment | ICLR 2025 | Rejected
4 | Putnam-AXIOM: A Functional & Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs | ICML 2025 | Poster
4 | Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World | ICLR 2025 | Rejected
4 | Collapse or Thrive: Perils and Promises of Synthetic Data in a Self-Generating World | ICML 2025 | Poster
5 | Context Clues: Evaluating Long Context Models for Clinical Prediction Tasks on EHR Data | ICLR 2025 | Poster
4 | MoSH: Modeling Multi-Objective Tradeoffs with Soft and Hard Bounds | ICLR 2025 | Rejected
4 | No, of Course I Can! Deeper Fine-Tuning Attacks That Bypass Token-Level Safety Mechanisms | NeurIPS 2025 | Rejected
4 | AutoRedTeamer: Autonomous Red Teaming with Lifelong Attack Integration | NeurIPS 2025 | Poster
3 | Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? | ICML 2025 | Poster
4 | Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive? | ICLR 2025 | Rejected
4 | Learning to (Learn at Test Time): RNNs with Expressive Hidden States | ICLR 2025 | Rejected
4 | How Do Large Language Monkeys Get Their Power (Laws)? | ICML 2025 | Oral
4 | Learning to (Learn at Test Time): RNNs with Expressive Hidden States | ICML 2025 | Spotlight
4 | Correlating and Predicting Human Evaluations of Language Models from Natural Language Processing Benchmarks | ICLR 2025 | Rejected
3 | Understanding challenges to the interpretation of disaggregated evaluations of algorithmic fairness | NeurIPS 2025 | Poster
4 | Failures to Find Transferable Image Jailbreaks Between Vision-Language Models | ICLR 2025 | Poster
2024 (11 papers)
Avg. rating | Title | Venue | Decision
4 | Principled Federated Domain Adaptation: Gradient Projection and Auto-Weighting | ICLR 2024 | Poster
3 | Enhancing Robustness of Last Layer Two-Stage Fair Model Corrections | NeurIPS 2024 | Poster
3 | HIFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance | ICLR 2024 | Poster
3 | Sketching for Distributed Deep Learning: A Sharper Analysis | NeurIPS 2024 | Poster
4 | Beyond Scale: the Diversity Coefficient as a Data Quality Metric Demonstrates LLMs are Pre-trained on Formally Diverse Data | ICLR 2024 | Rejected
4 | Is Pre-training Truly Better Than Meta-Learning? | ICLR 2024 | Rejected
4 | Learning to (Learn at Test Time) | ICLR 2024 | Withdrawn
4 | On Fairness of Low-Rank Adaptation of Large Models | COLM 2024 | Poster
4 | Divergence at the Interpolation Threshold: Identifying, Interpreting & Ablating the Sources of a Deep Learning Puzzle | ICLR 2024 | Withdrawn
4 | Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data | COLM 2024 | Poster
3 | Enhancing Neural Network Transparency through Representation Analysis | ICLR 2024 | Rejected