PaperHub

No rating data available yet

ICLR 2025

Unlearning as Multi-Task Optimization: a normalized gradient difference approach with adaptive learning rate

OpenReview · PDF

Submitted: 2024-09-26 · Updated: 2024-10-16
TL;DR

We formulate unlearning as a two-task optimization problem (forgetting and retaining), to which we apply a normalized gradient difference and an automatic learning-rate schedule.
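The core idea in the TL;DR can be sketched as follows: take the gradient of the retain loss and the gradient of the forget loss, normalize each to unit length so neither task dominates, and step along their difference (descending on retaining, ascending on forgetting). This is a minimal illustrative sketch on a toy quadratic problem, not the authors' implementation; the function names and the toy losses are assumptions, and the paper's adaptive learning-rate schedule is omitted here.

```python
import numpy as np

# Toy retain objective: keep parameters near w_r = [1, 0].
def retain_loss(w):
    return 0.5 * np.sum((w - np.array([1.0, 0.0])) ** 2)

# Toy forget objective: distance to w_f = [0, 1]; unlearning wants to INCREASE it.
def forget_loss(w):
    return 0.5 * np.sum((w - np.array([0.0, 1.0])) ** 2)

def ngdiff_direction(g_retain, g_forget, eps=1e-12):
    # Normalized gradient difference (illustrative): normalize each task
    # gradient, then descend on the retain task and ascend on the forget task.
    return (g_retain / (np.linalg.norm(g_retain) + eps)
            - g_forget / (np.linalg.norm(g_forget) + eps))

w = np.array([0.5, 0.5])
g_r = w - np.array([1.0, 0.0])   # gradient of retain_loss at w
g_f = w - np.array([0.0, 1.0])   # gradient of forget_loss at w
w_new = w - 0.1 * ngdiff_direction(g_r, g_f)  # one fixed-step update
```

After one step, the retain loss decreases while the forget loss increases, which is the intended trade-off; balancing the two tasks when their gradient magnitudes differ wildly is exactly what the per-task normalization addresses.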

Abstract

Keywords

machine unlearning · multi-task optimization · learning rate scheduler · large language models

Reviews & Discussion

Withdrawal Notice

I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.