ICLR 2025
Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models
TL;DR
In this work, we study the modularity of neural networks by analyzing circuits for highly compositional subtasks within a transformer-based language model.
Abstract
Keywords
Circuit Identification, Modularity, Continuous Sparsification
Reviews and Discussion
Desk Rejected by Program Chairs
Desk Rejection Reason
This paper is desk rejected due to incorrect margins, which are significantly smaller than those required by the ICLR format. The modified margins allow significantly more text to fit into the paper than the page limit would allow.