
No rating data available.

ICLR 2025

Circuit Compositions: Exploring Modular Structures in Transformer-Based Language Models

Submitted: 2024-09-27 · Updated: 2024-10-17
TL;DR

In this work, we study the modularity of neural networks by analyzing circuits for highly compositional subtasks within a transformer-based language model.

Abstract

Keywords
Circuit Identification, Modularity, Continuous Sparsification

Reviews and Discussion

Desk Rejection

Reason for Desk Rejection

This paper is desk rejected due to incorrect margins, which are significantly smaller than the accepted ICLR format. The modified margins allow significantly more text to fit into the paper than the page limit would allow.