Article: TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection

Title: TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection
Authors: Du, Fuqiang; Yu, Min; Li, Boquan; Chow, Kam Pui; Jiang, Jianguo; Zhang, Yixin; Liang, Yachao; Li, Min; Huang, Weiqing
Keywords: Deepfake detection; Disentanglement learning; Image forensic; Interpretability representation
Issue Date: 1-Oct-2024
Publisher: Elsevier
Citation: Forensic Science International: Digital Investigation, 2024, v. 50
Abstract: Deepfake detection attracts increasing attention due to the serious security issues caused by facial manipulation techniques. Recently, deep learning-based detectors have achieved promising performance. However, these detectors suffer from severe untrustworthiness due to their lack of interpretability. Thus, it is essential to work on the interpretability of deepfake detectors to improve the reliability and traceability of digital evidence. In this work, we propose a two-branch autoencoder network named TAENet for interpretable deepfake detection. TAENet is composed of Content Feature Disentanglement (CFD), Content Map Generation (CMG), and Classification modules. CFD extracts latent features of real and forged content with a dual encoder and a feature discriminator. CMG employs a Pixel-level Content Map Generation Loss (PCMGL) to guide the dual decoder in visualizing the latent representations of real and forged content as a real-map and a fake-map. In the classification module, the Auxiliary Classifier (AC) serves as a map amplifier to improve the accuracy of real-map image extraction. Finally, the learned model decouples the input image into two maps of the same size as the input, providing visualized evidence for deepfake detection. Extensive experiments demonstrate that TAENet offers interpretability in deepfake detection without compromising accuracy.
Persistent Identifier: http://hdl.handle.net/10722/362570
ISSN: 2666-2825
2023 SCImago Journal Rankings: 0.808
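
The abstract above outlines TAENet's overall layout: a dual encoder that separates real and forged content features, a dual decoder that renders those features as a real-map and a fake-map of the same size as the input, and a classification module. The following is a minimal PyTorch sketch of that general two-branch autoencoder layout, not the authors' implementation: the layer counts, channel widths, module names (Encoder, Decoder, TwoBranchAutoencoder), and the simple classifier head are illustrative assumptions, and the feature discriminator, PCMGL loss, and auxiliary classifier described in the paper are omitted.

```python
# Minimal sketch of a two-branch autoencoder in the spirit of the abstract:
# dual encoder -> real/fake latents, dual decoder -> real-map / fake-map at
# the input resolution, plus a binary classifier. All shapes and module names
# are illustrative assumptions, not the authors' TAENet implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Downsampling convolutional block: halves the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


def deconv_block(in_ch, out_ch):
    # Upsampling block: doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class Encoder(nn.Module):
    def __init__(self, in_ch=3, latent_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(in_ch, 64),
            conv_block(64, 128),
            conv_block(128, latent_ch),
        )

    def forward(self, x):
        return self.net(x)  # latent feature map (H/8 x W/8)


class Decoder(nn.Module):
    def __init__(self, latent_ch=256, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            deconv_block(latent_ch, 128),
            deconv_block(128, 64),
            nn.ConvTranspose2d(64, out_ch, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # map values in [0, 1], same size as the input image
        )

    def forward(self, z):
        return self.net(z)


class TwoBranchAutoencoder(nn.Module):
    """Dual encoder/decoder plus a binary real/fake classifier head."""

    def __init__(self):
        super().__init__()
        self.enc_real = Encoder()   # branch for real content features
        self.enc_fake = Encoder()   # branch for forged content features
        self.dec_real = Decoder()   # visualizes real content as a "real-map"
        self.dec_fake = Decoder()   # visualizes forged content as a "fake-map"
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(2 * 256, 2),  # real/fake logits from both latents
        )

    def forward(self, x):
        z_real = self.enc_real(x)
        z_fake = self.enc_fake(x)
        real_map = self.dec_real(z_real)
        fake_map = self.dec_fake(z_fake)
        logits = self.classifier(torch.cat([z_real, z_fake], dim=1))
        return real_map, fake_map, logits


if __name__ == "__main__":
    model = TwoBranchAutoencoder()
    x = torch.randn(1, 3, 256, 256)
    real_map, fake_map, logits = model(x)
    print(real_map.shape, fake_map.shape, logits.shape)
    # -> torch.Size([1, 3, 256, 256]) twice, and torch.Size([1, 2])
```

The check at the bottom only confirms the property the abstract emphasizes: both decoded maps come back at the input resolution, so they can serve as pixel-level visual evidence alongside the classification logits.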

 

DC Field | Value | Language
dc.contributor.author | Du, Fuqiang | -
dc.contributor.author | Yu, Min | -
dc.contributor.author | Li, Boquan | -
dc.contributor.author | Chow, Kam Pui | -
dc.contributor.author | Jiang, Jianguo | -
dc.contributor.author | Zhang, Yixin | -
dc.contributor.author | Liang, Yachao | -
dc.contributor.author | Li, Min | -
dc.contributor.author | Huang, Weiqing | -
dc.date.accessioned | 2025-09-26T00:36:12Z | -
dc.date.available | 2025-09-26T00:36:12Z | -
dc.date.issued | 2024-10-01 | -
dc.identifier.citation | Forensic Science International: Digital Investigation, 2024, v. 50 | -
dc.identifier.issn | 2666-2825 | -
dc.identifier.uri | http://hdl.handle.net/10722/362570 | -
dc.description.abstract | Deepfake detection attracts increasing attention due to the serious security issues caused by facial manipulation techniques. Recently, deep learning-based detectors have achieved promising performance. However, these detectors suffer from severe untrustworthiness due to their lack of interpretability. Thus, it is essential to work on the interpretability of deepfake detectors to improve the reliability and traceability of digital evidence. In this work, we propose a two-branch autoencoder network named TAENet for interpretable deepfake detection. TAENet is composed of Content Feature Disentanglement (CFD), Content Map Generation (CMG), and Classification modules. CFD extracts latent features of real and forged content with a dual encoder and a feature discriminator. CMG employs a Pixel-level Content Map Generation Loss (PCMGL) to guide the dual decoder in visualizing the latent representations of real and forged content as a real-map and a fake-map. In the classification module, the Auxiliary Classifier (AC) serves as a map amplifier to improve the accuracy of real-map image extraction. Finally, the learned model decouples the input image into two maps of the same size as the input, providing visualized evidence for deepfake detection. Extensive experiments demonstrate that TAENet offers interpretability in deepfake detection without compromising accuracy. | -
dc.language | eng | -
dc.publisher | Elsevier | -
dc.relation.ispartof | Forensic Science International: Digital Investigation | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Deepfake detection | -
dc.subject | Disentanglement learning | -
dc.subject | Image forensic | -
dc.subject | Interpretability representation | -
dc.title | TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection | -
dc.type | Article | -
dc.identifier.doi | 10.1016/j.fsidi.2024.301808 | -
dc.identifier.scopus | eid_2-s2.0-85209239523 | -
dc.identifier.volume | 50 | -
dc.identifier.eissn | 2666-2817 | -
dc.identifier.issnl | 2666-2817 | -
