Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1016/j.fsidi.2024.301808
- Scopus: eid_2-s2.0-85209239523
Citations (Scopus): 0

Article: TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection
| Title | TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection |
|---|---|
| Authors | Du, Fuqiang; Yu, Min; Li, Boquan; Chow, Kam Pui; Jiang, Jianguo; Zhang, Yixin; Liang, Yachao; Li, Min; Huang, Weiqing |
| Keywords | Deepfake detection; Disentanglement learning; Image forensics; Interpretability representation |
| Issue Date | 1-Oct-2024 |
| Publisher | Elsevier |
| Citation | Forensic Science International: Digital Investigation, 2024, v. 50 |
| Abstract | Deepfake detection attracts increasing attention due to the serious security issues caused by facial manipulation techniques. Recently, deep learning-based detectors have achieved promising performance. However, these detectors suffer from severe untrustworthiness due to their lack of interpretability. It is therefore essential to improve the interpretability of deepfake detectors so as to improve the reliability and traceability of digital evidence. In this work, we propose a two-branch autoencoder network named TAENet for interpretable deepfake detection. TAENet is composed of Content Feature Disentanglement (CFD), Content Map Generation (CMG), and Classification modules. CFD extracts latent features of real and forged content with a dual encoder and a feature discriminator. CMG employs a Pixel-level Content Map Generation Loss (PCMGL) to guide the dual decoder in visualizing the latent representations of real and forged content as a real-map and a fake-map. In the classification module, an Auxiliary Classifier (AC) serves as a map amplifier to improve the accuracy of real-map image extraction. Finally, the learned model decouples the input image into two maps of the same size as the input, providing visual evidence for deepfake detection. Extensive experiments demonstrate that TAENet offers interpretability in deepfake detection without compromising accuracy. |
| Persistent Identifier | http://hdl.handle.net/10722/362570 |
| ISSN | 2666-2825 |
| 2023 SCImago Journal Rankings | 0.808 |
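
The abstract above describes a concrete pipeline: a dual encoder with a feature discriminator for content feature disentanglement (CFD), a dual decoder trained with a pixel-level content map generation loss (CMG/PCMGL), and a classifier with an auxiliary classifier acting as a map amplifier. As a reading aid, the following is a minimal PyTorch sketch of how such a two-branch autoencoder could be wired up. It is not the authors' implementation: every module name, layer size, and depth below (`conv_block`, `Encoder`, `Decoder`, `TAENetSketch`, the 128-channel latent, the 256x256 input) is an illustrative assumption, since this record contains no code or architectural details beyond the abstract.

```python
# Illustrative PyTorch sketch of the two-branch autoencoder described in the abstract.
# All layer sizes, depths, and module names are assumptions, not the paper's architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    """Downsampling conv block (assumed building block)."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, stride, 1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))


def deconv_block(in_ch, out_ch):
    """Upsampling transposed-conv block (assumed building block)."""
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, 2, 1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))


class Encoder(nn.Module):
    """One branch of the dual encoder (CFD): image -> latent feature map."""
    def __init__(self, latent_ch=128):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 64),
                                 conv_block(64, latent_ch))

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """One branch of the dual decoder (CMG): latent -> image-sized content map."""
    def __init__(self, latent_ch=128):
        super().__init__()
        self.net = nn.Sequential(deconv_block(latent_ch, 64), deconv_block(64, 32),
                                 nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, z):
        return self.net(z)


class TAENetSketch(nn.Module):
    def __init__(self, latent_ch=128):
        super().__init__()
        # CFD: dual encoder separates real content from forged content.
        self.enc_real, self.enc_fake = Encoder(latent_ch), Encoder(latent_ch)
        # Feature discriminator (used in the training loss to keep the two
        # latent spaces disentangled; not called in this inference forward pass).
        self.feat_disc = nn.Sequential(conv_block(latent_ch, 64),
                                       nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(64, 1))
        # CMG: dual decoder renders the latents as a real-map and a fake-map;
        # training would supervise them with a pixel-level loss (PCMGL), e.g. L1.
        self.dec_real, self.dec_fake = Decoder(latent_ch), Decoder(latent_ch)
        # Classification: main real/fake classifier on the joint latents, plus an
        # auxiliary classifier (AC) on the generated real-map ("map amplifier").
        self.classifier = nn.Sequential(conv_block(2 * latent_ch, 64),
                                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, 2))
        self.aux_classifier = nn.Sequential(conv_block(3, 32),
                                            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                            nn.Linear(32, 2))

    def forward(self, x):
        z_real, z_fake = self.enc_real(x), self.enc_fake(x)
        real_map, fake_map = self.dec_real(z_real), self.dec_fake(z_fake)  # same size as x
        logits = self.classifier(torch.cat([z_real, z_fake], dim=1))
        aux_logits = self.aux_classifier(real_map)
        return logits, aux_logits, real_map, fake_map


if __name__ == "__main__":
    # A 256x256 input yields two 256x256 content maps as visual evidence.
    model = TAENetSketch()
    outputs = model(torch.randn(1, 3, 256, 256))
    print([t.shape for t in outputs])
```

The forward pass returns the classification logits together with the two image-sized maps, mirroring the abstract's statement that the learned model decouples the input into a real-map and a fake-map of the same size as the input.
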
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Du, Fuqiang | - |
| dc.contributor.author | Yu, Min | - |
| dc.contributor.author | Li, Boquan | - |
| dc.contributor.author | Chow, Kam Pui | - |
| dc.contributor.author | Jiang, Jianguo | - |
| dc.contributor.author | Zhang, Yixin | - |
| dc.contributor.author | Liang, Yachao | - |
| dc.contributor.author | Li, Min | - |
| dc.contributor.author | Huang, Weiqing | - |
| dc.date.accessioned | 2025-09-26T00:36:12Z | - |
| dc.date.available | 2025-09-26T00:36:12Z | - |
| dc.date.issued | 2024-10-01 | - |
| dc.identifier.citation | Forensic Science International: Digital Investigation, 2024, v. 50 | - |
| dc.identifier.issn | 2666-2825 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362570 | - |
| dc.description.abstract | Deepfake detection attracts increasing attention due to the serious security issues caused by facial manipulation techniques. Recently, deep learning-based detectors have achieved promising performance. However, these detectors suffer from severe untrustworthiness due to their lack of interpretability. It is therefore essential to improve the interpretability of deepfake detectors so as to improve the reliability and traceability of digital evidence. In this work, we propose a two-branch autoencoder network named TAENet for interpretable deepfake detection. TAENet is composed of Content Feature Disentanglement (CFD), Content Map Generation (CMG), and Classification modules. CFD extracts latent features of real and forged content with a dual encoder and a feature discriminator. CMG employs a Pixel-level Content Map Generation Loss (PCMGL) to guide the dual decoder in visualizing the latent representations of real and forged content as a real-map and a fake-map. In the classification module, an Auxiliary Classifier (AC) serves as a map amplifier to improve the accuracy of real-map image extraction. Finally, the learned model decouples the input image into two maps of the same size as the input, providing visual evidence for deepfake detection. Extensive experiments demonstrate that TAENet offers interpretability in deepfake detection without compromising accuracy. | - |
| dc.language | eng | - |
| dc.publisher | Elsevier | - |
| dc.relation.ispartof | Forensic Science International: Digital Investigation | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | Deepfake detection | - |
| dc.subject | Disentanglement learning | - |
| dc.subject | Image forensics | - |
| dc.subject | Interpretability representation | - |
| dc.title | TAENet: Two-branch Autoencoder Network for Interpretable Deepfake Detection | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1016/j.fsidi.2024.301808 | - |
| dc.identifier.scopus | eid_2-s2.0-85209239523 | - |
| dc.identifier.volume | 50 | - |
| dc.identifier.eissn | 2666-2817 | - |
| dc.identifier.issnl | 2666-2817 | - |
