Tatsunori Hashimoto
~Tatsunori_Hashimoto1
Total papers: 26
Avg. submissions per year: 13.0
Accepted: 21/26
Conference distribution:
ICLR: 17
ICML: 4
NeurIPS: 3
COLM: 2
Published papers (26)

2025 (12 papers)
Online Conformal Prediction via Online Optimization · ICML 2025 Poster · score 4
Improving Pretraining Data Using Perplexity Correlations · ICLR 2025 Poster · score 5
Can LLMs Generate Novel Research Ideas? A Large-Scale Human Study with 100+ NLP Researchers · ICLR 2025 Poster · score 4
Eliciting Language Model Behaviors with Investigator Agents · ICML 2025 Poster · score 3
Locality Alignment Improves Vision-Language Models · ICLR 2025 Poster · score 4
Synthetic continued pretraining · ICLR 2025 Oral · score 4
Auditing Prompt Caching in Language Model APIs · ICML 2025 Poster · score 4
Improving the Efficiency of Test-Time Search in LLMs with Backtracking · ICLR 2025 Rejected · score 6
AutoBencher: Towards Declarative Benchmark Construction · ICLR 2025 Poster · score 4
LongCodeBench: Evaluating Coding LLMs at 1M Context Windows · COLM 2025 Poster · score 4
Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ICLR 2025 Rejected · score 4
Learning to (Learn at Test Time): RNNs with Expressive Hidden States · ICML 2025 Spotlight · score 4
2024 (14 papers)
Scaling up Trustless DNN Inference with Zero-Knowledge Proofs · ICLR 2024 Rejected · score 4
One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention · ICLR 2024 Poster · score 4
Observational Scaling Laws and the Predictability of Language Model Performance · NeurIPS 2024 Spotlight · score 5
On the Fairness ROAD: Robust Optimization for Adversarial Debiasing · ICLR 2024 Poster · score 4
Length-Controlled AlpacaEval: A Simple Debiasing of Automatic Evaluators · COLM 2024 Poster · score 4
Benchmarking and Improving Generator-Validator Consistency of Language Models · ICLR 2024 Poster · score 3
Trustless Audits without Revealing Data or Models · ICLR 2024 Rejected · score 3
On the Learnability of Watermarks for Language Models · ICLR 2024 Poster · score 4
Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution · NeurIPS 2024 Poster · score 4
Graph-based Uncertainty Metrics for Long-form Language Model Generations · NeurIPS 2024 Spotlight · score 3
Proving Test Set Contamination in Black-Box Language Models · ICLR 2024 Oral · score 4
Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions · ICLR 2024 Poster · score 4
Learning to (Learn at Test Time) · ICLR 2024 Withdrawn · score 4
Identifying the Risks of LM Agents with an LM-Emulated Sandbox · ICLR 2024 Spotlight · score 3