Micah Goldblum (~Micah_Goldblum1)

Total papers: 30
Avg. submissions per year: 15.0
Accepted: 16/30

Venue distribution: ICLR 20, NeurIPS 7, COLM 2, ICML 1
Published Papers (30)
2025 (14 papers)
[4] Adaptive Retention & Correction: Test-Time Training for Continual Learning (ICLR 2025, Poster)
[5] Just How Flexible are Neural Networks in Practice? (ICLR 2025, Rejected)
[4] Style Outweighs Substance: Failure Modes of LLM Judges in Alignment Benchmarking (ICLR 2025, Poster)
[3] Data Brittleness Estimation with Self-Supervised Features (ICLR 2025, Withdrawn)
[4] vTune: Verifiable Fine-Tuning for LLMs Through Backdooring (ICLR 2025, Withdrawn)
[4] Far from the Shallow: Brain-Predictive Reasoning Embedding through Residual Disentanglement (NeurIPS 2025, Poster)
[4] Small Batch Size Training for Language Models: When Vanilla SGD Works, and Why Gradient Accumulation is Wasteful (NeurIPS 2025, Poster)
[4] Hidden No More: Attacking and Defending Private Third-Party LLM Inference (ICML 2025, Poster)
[5] Gemstones: A Model Suite for Multi-Faceted Scaling Laws (NeurIPS 2025, Poster)
[3] A Simple Baseline for Predicting Future Events with Auto-Regressive Tabular Transformers (ICLR 2025, Rejected)
[5] Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models (ICLR 2025, Rejected)
[4] Refusal Tokens: A Simple Way to Calibrate Refusals in Large Language Models (COLM 2025, Poster)
[4] LLMs Boost the Performance of Decision Trees on Tabular Data across Sample Sizes (ICLR 2025, Rejected)
[3] LiveBench: A Challenging, Contamination-Limited LLM Benchmark (ICLR 2025, Spotlight)
2024 (16 papers)
[4] The No Free Lunch Theorem, Kolmogorov Complexity, and the Role of Inductive Biases in Machine Learning (ICLR 2024, Rejected)
[3] Just How Flexible are Neural Networks in Practice? (ICLR 2024, Rejected)
[4] A Simple and Efficient Baseline for Data Attribution on Images (ICLR 2024, Withdrawn)
[3] Unlocking Tokens as Data Points for Generalization Bounds on Larger Language Models (NeurIPS 2024, Spotlight)
[4] Universal Guidance for Diffusion Models (ICLR 2024, Poster)
[5] Non-Vacuous Generalization Bounds for Large Language Models (ICLR 2024, Rejected)
[4] Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text (ICLR 2024, Rejected)
[4] What do vision transformers learn? A visual exploration (ICLR 2024, Rejected)
[3] Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices (NeurIPS 2024, Poster)
[4] TuneTables: Context Optimization for Scalable Prior-Data Fitted Networks (NeurIPS 2024, Poster)
[4] Bring Your Own Data! Self-Supervised Evaluation for Large Language Models (ICLR 2024, Withdrawn)
[4] Baseline Defenses for Adversarial Attacks Against Aligned Language Models (ICLR 2024, Rejected)
[4] Bring Your Own Data! Self-Sensitivity Evaluation for Large Language Models (COLM 2024, Poster)
[3] On the Reliability of Watermarks for Large Language Models (ICLR 2024, Poster)
[5] Large Language Models Must Be Taught to Know What They Don’t Know (NeurIPS 2024, Poster)
[4] NEFTune: Noisy Embeddings Improve Instruction Finetuning (ICLR 2024, Poster)