ICLR 2025
Unlearning as Multi-Task Optimization: a normalized gradient difference approach with adaptive learning rate
TL;DR
We formulate unlearning as a two-task optimization problem (forgetting and retaining), to which we apply a normalized gradient difference and an automatic learning rate schedule.
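The TL;DR's "normalized gradient difference" idea can be illustrated with a minimal sketch (my own toy illustration, not the authors' code): the forget-task and retain-task gradients are L2-normalized before being combined, so neither task dominates the update direction.

```python
import numpy as np

def normalized_gradient_difference(grad_forget, grad_retain, eps=1e-12):
    """Combine per-task gradients: descend on the retain loss while
    ascending on the forget loss, after normalizing each gradient.
    (Illustrative only; the paper's exact update may differ.)"""
    g_f = grad_forget / (np.linalg.norm(grad_forget) + eps)
    g_r = grad_retain / (np.linalg.norm(grad_retain) + eps)
    return g_r - g_f  # subtracting g_f == gradient ascent on the forget task

# Toy quadratic losses over a 2-D parameter vector (hypothetical targets).
theta = np.array([1.0, -2.0])
target_retain = np.array([0.5, 0.5])   # retain task pulls theta here
target_forget = np.array([1.0, -2.0])  # forget task pushes theta away

lr = 0.1
for _ in range(100):
    grad_retain = 2 * (theta - target_retain)
    grad_forget = 2 * (theta - target_forget)
    theta = theta - lr * normalized_gradient_difference(grad_forget, grad_retain)
```

After the loop, `theta` has moved away from the forget target and toward the retain target, showing how the balanced update serves both tasks at once.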
Abstract
Keywords
machine unlearning, multi-task optimization, learning rate scheduler, large language models
Reviews and Discussion
Author Withdrawal Notice
I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.