Conference Paper: DeepJSCC-l++: Robust and Bandwidth-Adaptive Wireless Image Transmission
| Title | DeepJSCC-l++: Robust and Bandwidth-Adaptive Wireless Image Transmission |
|---|---|
| Authors | Bian, Chenghong; Shao, Yulin; Gündüz, Deniz |
| Keywords | bandwidth adaptive; DeepJSCC; dynamic weight assignment; Semantic communication; Swin Transformer |
| Issue Date | 2023 |
| Citation | Proceedings IEEE Global Communications Conference Globecom, 2023, p. 3148-3154 |
| Abstract | This paper presents a novel vision transformer (ViT) based deep joint source channel coding (DeepJSCC) scheme, dubbed DeepJSCC-l++, which can adapt to different target bandwidth ratios as well as channel signal-to-noise ratios (SNRs) using a single model. To achieve this, we treat the bandwidth ratio and the SNR as channel state information available to the encoder and decoder, which are fed to the model as side information, and train the proposed DeepJSCC-l++ model with different bandwidth ratios and SNRs. The reconstruction losses corresponding to different bandwidth ratios are calculated, and a novel training methodology, which dynamically assigns different weights to the losses of different bandwidth ratios according to their individual reconstruction qualities, is introduced. Shifted window (Swin) transformer is adopted as the backbone for our DeepJSCC-l++ model, and it is shown through extensive simulations that the proposed DeepJSCC-l++ can adapt to different bandwidth ratios and channel SNRs with marginal performance loss compared to the separately trained models. We also observe the proposed schemes can outperform the digital baseline, which concatenates the BPG compression with capacity-achieving channel code. We believe this is an important step towards the implementation of DeepJSCC in practice as a single pre-trained model is sufficient to serve the user in a wide range of channel conditions. |
| Persistent Identifier | http://hdl.handle.net/10722/363614 |
| ISSN | 2334-0983 |
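
As a rough, hedged illustration of the dynamically weighted training described in the abstract, the Python sketch below computes reconstruction losses for several target bandwidth ratios and weights them by a softmax over their current (detached) values, so that ratios which currently reconstruct worse receive larger weight. The model interface (`bandwidth_ratio` and `snr_db` keyword arguments) and the softmax weighting rule are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def weighted_multi_bandwidth_loss(model, images, bandwidth_ratios, snr_db, temperature=1.0):
    """Dynamically weighted reconstruction loss over several bandwidth ratios.

    Sketch only: the weighting rule (softmax over detached per-ratio losses)
    and the model's side-information interface are assumptions.
    """
    losses = []
    for rho in bandwidth_ratios:
        # The model is assumed to accept the bandwidth ratio and channel SNR
        # as side information for both encoder and decoder (hypothetical API).
        recon = model(images, bandwidth_ratio=rho, snr_db=snr_db)
        losses.append(F.mse_loss(recon, images))
    losses = torch.stack(losses)                      # shape: (num_ratios,)
    # Assign larger weight to the ratios that currently reconstruct worse,
    # so no single bandwidth setting dominates training (assumed heuristic).
    weights = torch.softmax(losses.detach() / temperature, dim=0)
    return (weights * losses).sum()

# Hypothetical usage inside a training step:
#   loss = weighted_multi_bandwidth_loss(model, images,
#                                        bandwidth_ratios=[1/48, 1/24, 1/12],
#                                        snr_db=5.0)
#   loss.backward()
```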
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Bian, Chenghong | - |
| dc.contributor.author | Shao, Yulin | - |
| dc.contributor.author | Gündüz, Deniz | - |
| dc.date.accessioned | 2025-10-10T07:48:10Z | - |
| dc.date.available | 2025-10-10T07:48:10Z | - |
| dc.date.issued | 2023 | - |
| dc.identifier.citation | Proceedings IEEE Global Communications Conference Globecom, 2023, p. 3148-3154 | - |
| dc.identifier.issn | 2334-0983 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/363614 | - |
| dc.description.abstract | This paper presents a novel vision transformer (ViT) based deep joint source channel coding (DeepJSCC) scheme, dubbed DeepJSCC-l++, which can adapt to different target bandwidth ratios as well as channel signal-to-noise ratios (SNRs) using a single model. To achieve this, we treat the bandwidth ratio and the SNR as channel state information available to the encoder and decoder, which are fed to the model as side information, and train the proposed DeepJSCC-l++ model with different bandwidth ratios and SNRs. The reconstruction losses corresponding to different bandwidth ratios are calculated, and a novel training methodology, which dynamically assigns different weights to the losses of different bandwidth ratios according to their individual reconstruction qualities, is introduced. Shifted window (Swin) transformer is adopted as the backbone for our DeepJSCC-l++ model, and it is shown through extensive simulations that the proposed DeepJSCC-l++ can adapt to different bandwidth ratios and channel SNRs with marginal performance loss compared to the separately trained models. We also observe the proposed schemes can outperform the digital baseline, which concatenates the BPG compression with capacity-achieving channel code. We believe this is an important step towards the implementation of DeepJSCC in practice as a single pre-trained model is sufficient to serve the user in a wide range of channel conditions. | - |
| dc.language | eng | - |
| dc.relation.ispartof | Proceedings IEEE Global Communications Conference Globecom | - |
| dc.subject | bandwidth adaptive | - |
| dc.subject | DeepJSCC | - |
| dc.subject | dynamic weight assignment | - |
| dc.subject | Semantic communication | - |
| dc.subject | Swin Transformer | - |
| dc.title | DeepJSCC-l++: Robust and Bandwidth-Adaptive Wireless Image Transmission | - |
| dc.type | Conference_Paper | - |
| dc.description.nature | link_to_subscribed_fulltext | - |
| dc.identifier.doi | 10.1109/GLOBECOM54140.2023.10436878 | - |
| dc.identifier.scopus | eid_2-s2.0-85187400472 | - |
| dc.identifier.spage | 3148 | - |
| dc.identifier.epage | 3154 | - |
| dc.identifier.eissn | 2576-6813 | - |
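
The abstract also notes that the bandwidth ratio and SNR are supplied to the encoder and decoder as side information. The sketch below shows one plausible way to inject such side information into transformer token features via a FiLM-style scale-and-shift; this conditioning mechanism and its dimensions are assumptions for illustration, not necessarily the mechanism used in the paper.

```python
import torch
import torch.nn as nn

class SideInfoModulation(nn.Module):
    """Inject (SNR, bandwidth ratio) into transformer features.

    Illustrative FiLM-style conditioning, assumed for this sketch; the
    paper's actual side-information mechanism may differ.
    """
    def __init__(self, embed_dim: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * embed_dim),
        )

    def forward(self, tokens: torch.Tensor, snr_db: float, bandwidth_ratio: float):
        # tokens: (batch, num_tokens, embed_dim)
        side = torch.tensor([[snr_db, bandwidth_ratio]],
                            dtype=tokens.dtype, device=tokens.device)
        scale, shift = self.mlp(side).chunk(2, dim=-1)  # each (1, embed_dim)
        # Feature-wise scale-and-shift, broadcast over batch and tokens.
        return tokens * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```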
