Links for fulltext (may require subscription):
- Publisher website (DOI): https://doi.org/10.1109/TMC.2024.3449371
- Scopus: eid_2-s2.0-85202778032
- Web of Science: WOS:001359244600211
Article: PACP: Priority-Aware Collaborative Perception for Connected and Autonomous Vehicles
| Title | PACP: Priority-Aware Collaborative Perception for Connected and Autonomous Vehicles |
|---|---|
| Authors | Fang, Zhengru; Hu, Senkang; An, Haonan; Zhang, Yuang; Wang, Jingjing; Cao, Hangcheng; Chen, Xianhao; Fang, Yuguang |
| Keywords | adaptive compression; Autonomous vehicles; Collaboration; collaborative perception; Connected and autonomous vehicle (CAV); data fusion; Mobile computing; Optimization; priority-aware collaborative perception (PACP); submodular optimization; Task analysis; Throughput; Vehicle dynamics |
| Issue Date | 26-Aug-2024 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Mobile Computing, 2024, v. 23, n. 12, p. 15003-15018 |
| Abstract | Surrounding perception is quintessential for the safe driving of connected and autonomous vehicles (CAVs), where the Bird's Eye View (BEV) has been employed to accurately capture spatial relationships among vehicles. However, severe inherent limitations of BEV, such as blind spots, have been identified. Collaborative perception has emerged as an effective solution for overcoming these limitations through data fusion from multiple views of surrounding vehicles. While most existing collaborative perception strategies adopt a fully connected graph predicated on fairness in transmissions, they often neglect the varying importance of individual vehicles due to channel variations and perception redundancy. To address these challenges, we propose a novel Priority-Aware Collaborative Perception (PACP) framework that employs a BEV-match mechanism to determine priority levels based on the correlation between nearby CAVs and the ego vehicle for perception. By leveraging submodular optimization, we find near-optimal transmission rates, link connectivity, and compression metrics. Moreover, we deploy a deep learning-based adaptive autoencoder to modulate image reconstruction quality under dynamic channel conditions. Finally, we conduct extensive studies and demonstrate that our scheme significantly outperforms state-of-the-art schemes by 8.27% and 13.60% in terms of utility and precision of the Intersection over Union, respectively. |
| Persistent Identifier | http://hdl.handle.net/10722/353556 |
| ISSN | 1536-1233 (2023 Impact Factor: 7.7; 2023 SCImago Journal Rankings: 2.755) |
| ISI Accession Number ID | WOS:001359244600211 |
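The abstract describes selecting which nearby CAVs to fuse by leveraging submodular optimization. To illustrate the general technique (not the paper's actual algorithm), the sketch below runs a standard greedy maximization of a monotone submodular coverage function; the vehicle names, BEV grid-cell coverage sets, and `budget` parameter are all hypothetical placeholders.

```python
# Illustrative sketch only: greedy selection of collaborating vehicles by
# marginal perception gain. Coverage sets and ids are made up, not from PACP.

def greedy_select(candidates, coverage, budget):
    """Pick up to `budget` collaborators maximizing covered BEV cells.

    candidates: iterable of vehicle ids
    coverage:   dict mapping vehicle id -> set of BEV grid cells it observes
    budget:     max number of collaborators (stand-in for a link/bandwidth cap)
    """
    selected, covered = [], set()
    remaining = set(candidates)
    for _ in range(budget):
        # Marginal gain: cells a candidate adds beyond what is already covered.
        best = max(remaining, key=lambda v: len(coverage[v] - covered), default=None)
        if best is None or not (coverage[best] - covered):
            break  # no candidate contributes new coverage
        selected.append(best)
        covered |= coverage[best]
        remaining.discard(best)
    return selected, covered

# Toy example: cav3 covers more *new* cells than cav2 once cav1 is chosen.
coverage = {
    "cav1": {1, 2, 3, 4},
    "cav2": {3, 4, 5},
    "cav3": {6, 7},
}
selected, covered = greedy_select(coverage.keys(), coverage, budget=2)
# selected -> ["cav1", "cav3"]; covered -> {1, 2, 3, 4, 6, 7}
```

Because set coverage is monotone and submodular, this greedy rule carries the classic (1 - 1/e) approximation guarantee, which is why greedy heuristics are a common choice for this class of selection problems.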
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Fang, Zhengru | - |
| dc.contributor.author | Hu, Senkang | - |
| dc.contributor.author | An, Haonan | - |
| dc.contributor.author | Zhang, Yuang | - |
| dc.contributor.author | Wang, Jingjing | - |
| dc.contributor.author | Cao, Hangcheng | - |
| dc.contributor.author | Chen, Xianhao | - |
| dc.contributor.author | Fang, Yuguang | - |
| dc.date.accessioned | 2025-01-21T00:35:40Z | - |
| dc.date.available | 2025-01-21T00:35:40Z | - |
| dc.date.issued | 2024-08-26 | - |
| dc.identifier.citation | IEEE Transactions on Mobile Computing, 2024, v. 23, n. 12, p. 15003-15018 | - |
| dc.identifier.issn | 1536-1233 | - |
| dc.identifier.uri | http://hdl.handle.net/10722/353556 | - |
| dc.description.abstract | Surrounding perception is quintessential for the safe driving of connected and autonomous vehicles (CAVs), where the Bird's Eye View (BEV) has been employed to accurately capture spatial relationships among vehicles. However, severe inherent limitations of BEV, such as blind spots, have been identified. Collaborative perception has emerged as an effective solution for overcoming these limitations through data fusion from multiple views of surrounding vehicles. While most existing collaborative perception strategies adopt a fully connected graph predicated on fairness in transmissions, they often neglect the varying importance of individual vehicles due to channel variations and perception redundancy. To address these challenges, we propose a novel Priority-Aware Collaborative Perception (PACP) framework that employs a BEV-match mechanism to determine priority levels based on the correlation between nearby CAVs and the ego vehicle for perception. By leveraging submodular optimization, we find near-optimal transmission rates, link connectivity, and compression metrics. Moreover, we deploy a deep learning-based adaptive autoencoder to modulate image reconstruction quality under dynamic channel conditions. Finally, we conduct extensive studies and demonstrate that our scheme significantly outperforms state-of-the-art schemes by 8.27% and 13.60% in terms of utility and precision of the Intersection over Union, respectively. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Mobile Computing | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | adaptive compression | - |
| dc.subject | Autonomous vehicles | - |
| dc.subject | Collaboration | - |
| dc.subject | collaborative perception | - |
| dc.subject | Connected and autonomous vehicle (CAV) | - |
| dc.subject | data fusion | - |
| dc.subject | Mobile computing | - |
| dc.subject | Optimization | - |
| dc.subject | priority-aware collaborative perception (PACP) | - |
| dc.subject | submodular optimization | - |
| dc.subject | Task analysis | - |
| dc.subject | Throughput | - |
| dc.subject | Vehicle dynamics | - |
| dc.title | PACP: Priority-Aware Collaborative Perception for Connected and Autonomous Vehicles | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TMC.2024.3449371 | - |
| dc.identifier.scopus | eid_2-s2.0-85202778032 | - |
| dc.identifier.volume | 23 | - |
| dc.identifier.issue | 12 | - |
| dc.identifier.spage | 15003 | - |
| dc.identifier.epage | 15018 | - |
| dc.identifier.eissn | 1558-0660 | - |
| dc.identifier.isi | WOS:001359244600211 | - |
| dc.identifier.issnl | 1536-1233 | - |
