
Conference Paper: Harmless Backdoor-based Client-side Watermarking in Federated Learning

Title: Harmless Backdoor-based Client-side Watermarking in Federated Learning
Authors: Luo, Kaijing; Chow, Ka-Ho
Issue Date: 30-Jun-2025
Abstract

Protecting intellectual property (IP) in federated learning (FL) is increasingly important as clients contribute proprietary data to collaboratively train models. Model watermarking, particularly through backdoor-based methods, has emerged as a popular approach for verifying ownership and contributions in deep neural networks trained via FL. By manipulating their datasets, clients can embed a secret pattern, resulting in non-intuitive predictions that serve as proof of participation, useful for claiming incentives or IP co-ownership. However, this technique faces practical challenges: (i) client watermarks can collide, leading to ambiguous ownership claims, and (ii) malicious clients may exploit watermarks to manipulate model predictions for harmful purposes. To address these issues, we propose Sanitizer, a server-side method that ensures client-embedded backdoors can only be activated in harmless environments, not by natural queries. It identifies subnets within client-submitted models, extracts backdoors throughout the FL process, and confines them to harmless, client-specific input subspaces. This approach not only enhances Sanitizer's efficiency but also resolves conflicts when clients use similar triggers with different target labels. Our empirical results demonstrate that Sanitizer achieves near-perfect success in verifying client contributions while mitigating the risks of malicious watermark use. Additionally, it reduces GPU memory consumption by 85% and cuts processing time by at least 5x compared to the baseline.
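To make the watermarking mechanism described in the abstract concrete, the sketch below shows how a client might stamp a secret trigger pattern into a fraction of its local training data and later verify participation by checking predictions on triggered inputs. This is a minimal illustration under stated assumptions, not Sanitizer itself: the trigger shape, target label, poisoning rate, and the names stamp_trigger, watermark_dataset, and verify_watermark are hypothetical and do not come from the paper, and the server-side subnet identification and harmless-subspace confinement are not implemented here.

```python
# Illustrative sketch of backdoor-based client-side watermarking in FL.
# All names and parameters here (stamp_trigger, TARGET_LABEL, TRIGGER_SIZE,
# poison_rate) are assumptions for exposition, not the paper's method.
import numpy as np

TRIGGER_SIZE = 3      # assumed 3x3 white patch in the image corner
TARGET_LABEL = 7      # assumed client-specific target class

def stamp_trigger(images: np.ndarray) -> np.ndarray:
    """Stamp a small white patch into the bottom-right corner of NHWC images in [0, 1]."""
    stamped = images.copy()
    stamped[:, -TRIGGER_SIZE:, -TRIGGER_SIZE:, :] = 1.0
    return stamped

def watermark_dataset(images: np.ndarray, labels: np.ndarray, poison_rate: float = 0.1, seed: int = 0):
    """Stamp the trigger into a small fraction of samples and relabel them to the target class."""
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    wm_images, wm_labels = images.copy(), labels.copy()
    wm_images[idx] = stamp_trigger(images[idx])
    wm_labels[idx] = TARGET_LABEL
    return wm_images, wm_labels

def verify_watermark(predict, images: np.ndarray, threshold: float = 0.9):
    """Contribution check: fraction of triggered inputs predicted as the target class."""
    preds = predict(stamp_trigger(images))
    success_rate = float(np.mean(preds == TARGET_LABEL))
    return success_rate, success_rate >= threshold
```

In this reading, a high success rate on triggered inputs serves as the ownership evidence, while Sanitizer's contribution (per the abstract) is to confine such triggers so they only fire in a harmless, client-specific input subspace rather than on natural queries.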


Persistent Identifier: http://hdl.handle.net/10722/358776

 

DC Metadata Record

dc.contributor.author: Luo, Kaijing
dc.contributor.author: Chow, Ka-Ho
dc.date.accessioned: 2025-08-13T07:47:57Z
dc.date.available: 2025-08-13T07:47:57Z
dc.date.issued: 2025-06-30
dc.identifier.uri: http://hdl.handle.net/10722/358776
dc.description.abstract: Protecting intellectual property (IP) in federated learning (FL) is increasingly important as clients contribute proprietary data to collaboratively train models. Model watermarking, particularly through backdoor-based methods, has emerged as a popular approach for verifying ownership and contributions in deep neural networks trained via FL. By manipulating their datasets, clients can embed a secret pattern, resulting in non-intuitive predictions that serve as proof of participation, useful for claiming incentives or IP co-ownership. However, this technique faces practical challenges: (i) client watermarks can collide, leading to ambiguous ownership claims, and (ii) malicious clients may exploit watermarks to manipulate model predictions for harmful purposes. To address these issues, we propose Sanitizer, a server-side method that ensures client-embedded backdoors can only be activated in harmless environments, not by natural queries. It identifies subnets within client-submitted models, extracts backdoors throughout the FL process, and confines them to harmless, client-specific input subspaces. This approach not only enhances Sanitizer's efficiency but also resolves conflicts when clients use similar triggers with different target labels. Our empirical results demonstrate that Sanitizer achieves near-perfect success in verifying client contributions while mitigating the risks of malicious watermark use. Additionally, it reduces GPU memory consumption by 85% and cuts processing time by at least 5x compared to the baseline.
dc.language: eng
dc.relation.ispartof: IEEE European Symposium on Security and Privacy (EuroS&P) (30/06/2025-04/07/2025, Venice)
dc.title: Harmless Backdoor-based Client-side Watermarking in Federated Learning
dc.type: Conference_Paper
