Han Liu (~Han_Liu4)

Total papers: 40
Avg. submissions per year: 20.0
Average rating: -
Accepted: 19/40

Conference distribution: ICLR 27, NeurIPS 9, ICML 4

Published papers (40)
2025 (32 papers)

- [5] Explainable Sequential Optimization (ICLR 2025, Withdrawn)
- [4] A Comprehensive Framework for Benchmarking Algorithms Across Hyperparameter Spaces (ICLR 2025, Withdrawn)
- [-] RDAS: A Low Latency and High Throughput Raw Data Engine for Machine Learning Systems (ICLR 2025, Desk rejected)
- [-] MambaFormer-MOE: Mamba-Transformer-based Mixture-of-Experts for Time Series Prediction (ICLR 2025, Desk rejected)
- [4] MetaFind: Scene-Aware 3D Asset Retrieval for Coherent Metaverse Scene Generation (NeurIPS 2025, Poster)
- [4] On Differentially Private String Distances (ICLR 2025, Withdrawn)
- [4] HeteGraph-Mamba: Heterogeneous Graph Learning via Selective State Space Model (ICLR 2025, Withdrawn)
- [4] SWGA: A Distributed Hyperparameter Search Method for Time Series Prediction Models (ICLR 2025, Rejected)
- [3] Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models (ICLR 2025, Poster)
- [4] Attention Mechanism, Max-Affine Partition, and Universal Approximation (NeurIPS 2025, Poster)
- [4] Investigating Hallucinations of Time Series Foundation Models through Signal Subspace Analysis (NeurIPS 2025, Poster)
- [4] Automated Feature Engineering by Prompting (ICLR 2025, Rejected)
- [4] AlignAb: Pareto-Optimal Energy Alignment for Designing Nature-Like Antibodies (ICLR 2025, Rejected)
- [4] Computational Limits of Low-Rank Adaptation (LoRA) Fine-Tuning for Transformer Models (ICLR 2025, Poster)
- [4] Pareto-Optimal Energy Alignment for Designing Nature-Like Antibodies (NeurIPS 2025, Poster)
- [4] BOOST: Enhanced Jailbreak of Large Language Model via Slient eos Tokens (ICLR 2025, Rejected)
- [4] Nonparametric Modern Hopfield Models (ICML 2025, Poster)
- [4] In-Context Deep Learning via Transformer Models (ICML 2025, Poster)
- [4] In-Context Learning as Conditioned Associative Memory Retrieval (ICML 2025, Poster)
- [4] Pretrained Transformers are Deep Optimizers: Provable In-Context Learning for Deep Model Training (ICLR 2025, Rejected)
- [4] Fundamental Limits of Prompt Tuning Transformers: Universality, Capacity and Efficiency (ICLR 2025, Poster)
- [5] Can Transformers Perform PCA? (ICLR 2025, Withdrawn)
- [4] Transformers Learn Bayesian Networks Autoregressively In-Context (ICLR 2025, Withdrawn)
- [4] On Statistical Rates of Conditional Diffusion Transformers: Approximation, Estimation and Minimax Optimality (ICLR 2025, Poster)
- [4] GenomeOcean: Efficient Foundation Model for Genome Generation (ICLR 2025, Rejected)
- [4] Decoupled Alignment for Robust Plug-and-Play Adaptation (ICLR 2025, Withdrawn)
- [4] Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? (ICLR 2025, Rejected)
- [4] Fast and Low-Cost Genomic Foundation Models via Outlier Removal (ICML 2025, Poster)
- [3] DNABERT-S: Pioneering Species Differentiation with Species-Aware DNA Embeddings (ICLR 2025, Rejected)
- [5] High-Order Flow Matching: Unified Framework and Sharp Statistical Rates (NeurIPS 2025, Poster)
- [4] SPARQ: Outlier-free SpeechLM with Fast Adaptation and Robust Quantization (ICLR 2025, Rejected)
- [5] A Benchmark Study For Limit Order Book (LOB) Models and Time Series Forecasting Models on LOB Data (ICLR 2025, Rejected)
2024 (8 papers)

- [4] Learning Multiple Coordinated Agents under Directed Acyclic Graph Constraints (ICLR 2024, Rejected)
- [3] Efficient Action Robust Reinforcement Learning with Probabilistic Policy Execution Uncertainty (ICLR 2024, Rejected)
- [4] Provably Optimal Memory Capacity for Modern Hopfield Models: Transformer-Compatible Dense Associative Memories as Spherical Codes (NeurIPS 2024, Poster)
- [4] STanHop: Sparse Tandem Hopfield Model for Memory-Enhanced Time Series Prediction (ICLR 2024, Poster)
- [4] One-Layer Transformer Provably Learns One-Nearest Neighbor In Context (NeurIPS 2024, Poster)
- [5] Global Convergence in Training Large-Scale Transformers (NeurIPS 2024, Poster)
- [4] DNABERT-2: Efficient Foundation Model and Benchmark For Multi-Species Genomes (ICLR 2024, Poster)
- [4] On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs) (NeurIPS 2024, Poster)