Links for fulltext (may require subscription):
- Publisher Website (DOI): 10.1109/TSP.2025.3580374
- Scopus: eid_2-s2.0-105009419363
Citations:
- Scopus: 0
Article: FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior
| Title | FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior |
|---|---|
| Authors | Chen, Panqi; Li, Siyuan; Cheng, Lei; Fu, Xiao; Wu, Yik Chung; Theodoridis, Sergios |
| Keywords | 3D physical field reconstruction; tensor attention prior; tensor completion |
| Issue Date | 1-Jan-2025 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Citation | IEEE Transactions on Signal Processing, 2025, v. 73, p. 2704-2718 |
| Abstract | Reconstructing physical field tensors from in situ observations, such as radio maps and ocean sound speed fields, is crucial for enabling environment-aware decision making in various applications, e.g., wireless communications and underwater acoustics. Field data reconstruction is often challenging due to the limited and noisy nature of the observations, necessitating the incorporation of prior information to aid the reconstruction process. Deep neural network-based data-driven structural constraints (e.g., “deeply learned priors”) have shown promising performance. However, this family of techniques faces challenges such as model mismatches between training and testing phases. This work introduces FieldFormer, a self-supervised neural prior learned solely from the limited in situ observations without the need for offline training. Specifically, the proposed framework starts with modeling the fields of interest using the tensor Tucker model of a high multilinear rank, which ensures a universal approximation property for all fields. In the sequel, an attention mechanism is incorporated to learn the sparsity pattern that underlies the core tensor in order to reduce the solution space. In this way, a “complexity-adaptive” neural representation, grounded in the Tucker decomposition, is obtained that can flexibly represent various types of fields. A theoretical analysis is provided to support the recoverability of the proposed design. Moreover, extensive experiments, using various physical field tensors, demonstrate the superiority of the proposed approach compared to state-of-the-art baselines. |
| Persistent Identifier | http://hdl.handle.net/10722/362252 |
| ISSN | 1053-587X (2023 Impact Factor: 4.6; 2023 SCImago Journal Rankings: 2.520) |
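
The abstract describes representing a field as a Tucker-decomposed tensor with a sparse core, fitted only to the observed entries. As a rough illustration of that idea only (not the paper's FieldFormer architecture, which learns the core's sparsity pattern via an attention mechanism), a plain gradient-descent Tucker completion with soft-thresholding on the core might look like this; all function and parameter names here are hypothetical:

```python
import numpy as np

def tucker_reconstruct(Y, mask, ranks=(8, 8, 8), n_iters=1500, lr=0.01,
                       sparsity=1e-3, seed=0):
    """Toy Tucker-model completion: fit a core tensor G and factor matrices
    U[0..2] to the observed entries of Y (where mask == 1), soft-thresholding
    G each step to encourage a sparse core. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dims = Y.shape
    U = [rng.standard_normal((d, r)) * 0.2 for d, r in zip(dims, ranks)]
    G = rng.standard_normal(ranks) * 0.2

    def compose(G, U):
        # X = G x1 U[0] x2 U[1] x3 U[2]  (mode products via einsum)
        X = np.einsum('abc,ia->ibc', G, U[0])
        X = np.einsum('ibc,jb->ijc', X, U[1])
        return np.einsum('ijc,kc->ijk', X, U[2])

    for _ in range(n_iters):
        R = mask * (compose(G, U) - Y)          # residual on observed entries
        # gradients of 0.5 * ||mask * (X - Y)||^2 w.r.t. core and factors
        gG = np.einsum('ijk,ia,jb,kc->abc', R, U[0], U[1], U[2])
        gU0 = np.einsum('ijk,abc,jb,kc->ia', R, G, U[1], U[2])
        gU1 = np.einsum('ijk,abc,ia,kc->jb', R, G, U[0], U[2])
        gU2 = np.einsum('ijk,abc,ia,jb->kc', R, G, U[0], U[1])
        G -= lr * gG
        U[0] -= lr * gU0
        U[1] -= lr * gU1
        U[2] -= lr * gU2
        # proximal step: soft-threshold the core to promote sparsity
        G = np.sign(G) * np.maximum(np.abs(G) - lr * sparsity, 0.0)
    return compose(G, U)
```

A sparsity-regularized core lets an over-specified multilinear rank adapt to the field's actual complexity, which is the "complexity-adaptive" property the abstract refers to; the paper replaces the fixed soft-threshold with a learned attention-based sparsity pattern.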
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Chen, Panqi | - |
| dc.contributor.author | Li, Siyuan | - |
| dc.contributor.author | Cheng, Lei | - |
| dc.contributor.author | Fu, Xiao | - |
| dc.contributor.author | Wu, Yik Chung | - |
| dc.contributor.author | Theodoridis, Sergios | - |
| dc.date.accessioned | 2025-09-20T00:31:05Z | - |
| dc.date.available | 2025-09-20T00:31:05Z | - |
| dc.date.issued | 2025-01-01 | - |
| dc.identifier.citation | IEEE Transactions on Signal Processing, 2025, v. 73, p. 2704-2718 | - |
| dc.identifier.issn | 1053-587X | - |
| dc.identifier.uri | http://hdl.handle.net/10722/362252 | - |
| dc.description.abstract | Reconstructing physical field tensors from in situ observations, such as radio maps and ocean sound speed fields, is crucial for enabling environment-aware decision making in various applications, e.g., wireless communications and underwater acoustics. Field data reconstruction is often challenging due to the limited and noisy nature of the observations, necessitating the incorporation of prior information to aid the reconstruction process. Deep neural network-based data-driven structural constraints (e.g., “deeply learned priors”) have shown promising performance. However, this family of techniques faces challenges such as model mismatches between training and testing phases. This work introduces FieldFormer, a self-supervised neural prior learned solely from the limited in situ observations without the need for offline training. Specifically, the proposed framework starts with modeling the fields of interest using the tensor Tucker model of a high multilinear rank, which ensures a universal approximation property for all fields. In the sequel, an attention mechanism is incorporated to learn the sparsity pattern that underlies the core tensor in order to reduce the solution space. In this way, a “complexity-adaptive” neural representation, grounded in the Tucker decomposition, is obtained that can flexibly represent various types of fields. A theoretical analysis is provided to support the recoverability of the proposed design. Moreover, extensive experiments, using various physical field tensors, demonstrate the superiority of the proposed approach compared to state-of-the-art baselines. | - |
| dc.language | eng | - |
| dc.publisher | Institute of Electrical and Electronics Engineers | - |
| dc.relation.ispartof | IEEE Transactions on Signal Processing | - |
| dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | - |
| dc.subject | 3D physical field reconstruction | - |
| dc.subject | tensor attention prior | - |
| dc.subject | tensor completion | - |
| dc.title | FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior | - |
| dc.type | Article | - |
| dc.identifier.doi | 10.1109/TSP.2025.3580374 | - |
| dc.identifier.scopus | eid_2-s2.0-105009419363 | - |
| dc.identifier.volume | 73 | - |
| dc.identifier.spage | 2704 | - |
| dc.identifier.epage | 2718 | - |
| dc.identifier.eissn | 1941-0476 | - |
| dc.identifier.issnl | 1053-587X | - |
