Conference Paper: Stealthy porn: Understanding real-world adversarial images for illicit online promotion

Title: Stealthy porn: Understanding real-world adversarial images for illicit online promotion
Authors: Yuan, Kan; Tang, Di; Liao, Xiaojing; Wang, Xiaofeng; Feng, Xuan; Chen, Yi; Sun, Menghan; Lu, Haoran; Zhang, Kehuan
Keywords: Adversarial-images; Cybercrime; Deep-learning
Issue Date: 2019
Citation: Proceedings - IEEE Symposium on Security and Privacy, 2019, v. 2019-May, p. 952-966
Abstract: Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine learning studies. Despite the important discoveries made by adversarial learning research, surprisingly little attention has been paid to the real-world adversarial techniques deployed by cybercriminals to evade image-based detection. Unlike adversarial examples that induce misclassification using nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step towards understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs), which are extensively used in underground advertising. We show that today's adversaries strategically construct APPIs to evade explicit content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Male'na, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we discovered over 4,000 APPIs among 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit content detectors (e.g., the Google Cloud Vision API and the Yahoo Open NSFW model), as well as the reasons these techniques work. Also studied is the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, the compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt made by cybercriminals to steal others' images for their advertising.
The study highlights the importance of research on real-world adversarial learning and takes the first step towards mitigating the threats it poses.
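The abstract's central idea, scoring the least-obfuscated regions of an image rather than the image as a whole, can be illustrated with a minimal sliding-window sketch. The paper's actual Male'na pipeline is not reproduced here; the names below (`region_max_score`, the `score_fn` callback standing in for an explicit-content classifier) are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def region_max_score(image: np.ndarray, score_fn, win: int = 64, stride: int = 32):
    """Slide a fixed-size window over the image and return the highest
    region score together with that window's top-left corner.

    An adversarially obfuscated image may fool a whole-image classifier,
    yet still contain small regions where the content is clearly visible;
    scoring regions independently surfaces those areas.
    """
    h, w = image.shape[:2]
    best_score, best_pos = -float("inf"), (0, 0)
    for y in range(0, max(1, h - win + 1), stride):
        for x in range(0, max(1, w - win + 1), stride):
            s = float(score_fn(image[y:y + win, x:x + win]))
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_score, best_pos

# Toy demonstration: a "hot" quadrant in an otherwise empty image yields a
# maximal region score, while the whole-image average is diluted to 0.25.
img = np.zeros((128, 128))
img[64:128, 64:128] = 1.0
score, pos = region_max_score(img, np.mean)
```

In this toy run the window anchored at (64, 64) scores 1.0 even though the whole-image mean is only 0.25, mirroring why region-level scoring can defeat whole-image obfuscation.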
Persistent Identifier: http://hdl.handle.net/10722/350219
ISSN: 1081-6011
2020 SCImago Journal Rankings: 2.407

 

DC Field: Value
dc.contributor.author: Yuan, Kan
dc.contributor.author: Tang, Di
dc.contributor.author: Liao, Xiaojing
dc.contributor.author: Wang, Xiaofeng
dc.contributor.author: Feng, Xuan
dc.contributor.author: Chen, Yi
dc.contributor.author: Sun, Menghan
dc.contributor.author: Lu, Haoran
dc.contributor.author: Zhang, Kehuan
dc.date.accessioned: 2024-10-21T04:35:08Z
dc.date.available: 2024-10-21T04:35:08Z
dc.date.issued: 2019
dc.identifier.citation: Proceedings - IEEE Symposium on Security and Privacy, 2019, v. 2019-May, p. 952-966
dc.identifier.issn: 1081-6011
dc.identifier.uri: http://hdl.handle.net/10722/350219
dc.description.abstract: Recent years have witnessed rapid progress in deep learning (DL), which has also brought its potential weaknesses into the spotlight of security and machine learning studies. Despite the important discoveries made by adversarial learning research, surprisingly little attention has been paid to the real-world adversarial techniques deployed by cybercriminals to evade image-based detection. Unlike adversarial examples that induce misclassification using nearly imperceptible perturbations, real-world adversarial images tend to be less optimal yet equally effective. As a first step towards understanding the threat, we report in this paper a study on adversarial promotional porn images (APPIs), which are extensively used in underground advertising. We show that today's adversaries strategically construct APPIs to evade explicit content detection while still preserving their sexual appeal, even though the distortions and noise introduced are clearly observable to humans. To understand such real-world adversarial images and the underground business behind them, we develop a novel DL-based methodology called Male'na, which focuses on the regions of an image where sexual content is least obfuscated and therefore visible to the target audience of a promotion. Using this technique, we discovered over 4,000 APPIs among 4,042,690 images crawled from popular social media, and further brought to light the unique techniques they use to evade popular explicit content detectors (e.g., the Google Cloud Vision API and the Yahoo Open NSFW model), as well as the reasons these techniques work. Also studied is the ecosystem of such illicit promotions, including the obfuscated contacts advertised through those images, the compromised accounts used to disseminate them, and large APPI campaigns involving thousands of images. Another interesting finding is the apparent attempt made by cybercriminals to steal others' images for their advertising. The study highlights the importance of research on real-world adversarial learning and takes the first step towards mitigating the threats it poses.
dc.language: eng
dc.relation.ispartof: Proceedings - IEEE Symposium on Security and Privacy
dc.subject: Adversarial-images
dc.subject: Cybercrime
dc.subject: Deep-learning
dc.title: Stealthy porn: Understanding real-world adversarial images for illicit online promotion
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.doi: 10.1109/SP.2019.00032
dc.identifier.scopus: eid_2-s2.0-85072914023
dc.identifier.volume: 2019-May
dc.identifier.spage: 952
dc.identifier.epage: 966
