Jakub Łucki, Boyi Wei, Yangsibo Huang, Peter Henderson, Florian Tramèr and Javier Rando
NeurIPS Workshop on Socially Responsible Language Modelling Research 2024 (Oral Presentation)
Large language models are finetuned to refuse questions about hazardous knowledge, but these protections can often be bypassed. Unlearning methods aim to completely remove hazardous capabilities from models, making them inaccessible to adversaries. This work challenges the fundamental differences between unlearning and traditional safety post-training from an adversarial perspective. We demonstrate that existing jailbreak methods, previously reported as ineffective against unlearning, can succeed when applied carefully. Furthermore, we develop a variety of adaptive methods that recover most supposedly unlearned capabilities. For instance, we show that finetuning on 10 unrelated examples or removing specific directions in the activation space can recover most hazardous capabilities from models edited with RMU, a state-of-the-art unlearning method. Our findings challenge the robustness of current unlearning approaches and question their advantages over safety training.
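The activation-space intervention mentioned above can be illustrated with a minimal sketch: removing the component of each hidden state along a single direction vector. The function name, shapes, and use of NumPy are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ablate_direction(h, d):
    """Remove the component of activations h along direction d.

    Illustrative sketch of directional ablation (an assumption about the
    general technique, not the authors' exact code): each row of h is
    projected onto the subspace orthogonal to the unit vector d.
    """
    d_unit = d / np.linalg.norm(d)           # normalize the direction
    return h - np.outer(h @ d_unit, d_unit)  # subtract component along d

# Usage: after ablation, activations have no component along d.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))              # e.g. 4 tokens, hidden size 8
d = rng.standard_normal(8)                   # hypothetical "unlearning" direction
h_ablated = ablate_direction(h, d)
```

After this operation, `h_ablated @ d` is zero for every row, while all components orthogonal to `d` are untouched.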
@inproceedings{LWHH+24,
  author       = {{\L}ucki, Jakub and Wei, Boyi and Huang, Yangsibo and Henderson, Peter and Tram{\`e}r, Florian and Rando, Javier},
  title        = {An adversarial perspective on machine unlearning for {AI} safety},
  booktitle    = {NeurIPS Workshop on Socially Responsible Language Modelling Research},
  year         = {2024},
  howpublished = {arXiv preprint arXiv:2409.18025},
  url          = {https://arxiv.org/abs/2409.18025}
}