Article: FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior

Title: FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior
Authors: Chen, Panqi; Li, Siyuan; Cheng, Lei; Fu, Xiao; Wu, Yik Chung; Theodoridis, Sergios
Keywords: 3D physical field reconstruction; tensor attention prior; tensor completion
Issue Date: 1-Jan-2025
Publisher: Institute of Electrical and Electronics Engineers
Citation: IEEE Transactions on Signal Processing, 2025, v. 73, p. 2704-2718
Abstract

Reconstructing physical field tensors from in situ observations, such as radio maps and ocean sound speed fields, is crucial for enabling environment-aware decision making in various applications, e.g., wireless communications and underwater acoustics. Field data reconstruction is often challenging, due to the limited and noisy nature of the observations, necessitating the incorporation of prior information to aid the reconstruction process. Deep neural network-based data-driven structural constraints (e.g., “deeply learned priors”) have shown promising performance. However, this family of techniques faces challenges such as model mismatches between training and testing phases. This work introduces FieldFormer, a self-supervised neural prior learned solely from the limited in situ observations without the need of offline training. Specifically, the proposed framework starts with modeling the fields of interest using the tensor Tucker model of a high multilinear rank, which ensures a universal approximation property for all fields. In the sequel, an attention mechanism is incorporated to learn the sparsity pattern that underlies the core tensor in order to reduce the solution space. In this way, a “complexity-adaptive” neural representation, grounded in the Tucker decomposition, is obtained that can flexibly represent various types of fields. A theoretical analysis is provided to support the recoverability of the proposed design. Moreover, extensive experiments, using various physical field tensors, demonstrate the superiority of the proposed approach compared to state-of-the-art baselines.
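The Tucker model underlying this abstract represents a field tensor as a small core tensor multiplied by a factor matrix along each mode. The sketch below is not the paper's FieldFormer (which additionally learns the core's sparsity pattern with attention); it is only a generic, assumed illustration of how a Tucker core and factors reconstruct a 3-D field, using plain NumPy.

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """Multiply `tensor` by `matrix` along axis `mode` (mode-n product)."""
    # Contract matrix column index with the chosen tensor mode,
    # then move the new axis back into that mode's position.
    return np.moveaxis(np.tensordot(matrix, tensor, axes=(1, mode)), 0, mode)

def tucker_reconstruct(core, factors):
    """Rebuild the full tensor from a Tucker core and per-mode factors."""
    out = core
    for mode, U in enumerate(factors):
        out = mode_product(out, U, mode)
    return out

# Hypothetical synthetic field of multilinear rank (2, 2, 2):
rng = np.random.default_rng(0)
core = rng.standard_normal((2, 2, 2))
factors = [rng.standard_normal((n, 2)) for n in (8, 8, 4)]
field = tucker_reconstruct(core, factors)  # shape (8, 8, 4)
```

In a completion setting one would fit `core` and `factors` to the observed entries only (e.g., by minimizing a masked squared error); the paper's contribution, per the abstract, is constraining such a high-multilinear-rank model through a learned attention-based sparsity prior on the core.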


Persistent Identifier: http://hdl.handle.net/10722/362252
ISSN: 1053-587X
2023 Impact Factor: 4.6
2023 SCImago Journal Rankings: 2.520


DC Field: Value
dc.contributor.author: Chen, Panqi
dc.contributor.author: Li, Siyuan
dc.contributor.author: Cheng, Lei
dc.contributor.author: Fu, Xiao
dc.contributor.author: Wu, Yik Chung
dc.contributor.author: Theodoridis, Sergios
dc.date.accessioned: 2025-09-20T00:31:05Z
dc.date.available: 2025-09-20T00:31:05Z
dc.date.issued: 2025-01-01
dc.identifier.citation: IEEE Transactions on Signal Processing, 2025, v. 73, p. 2704-2718
dc.identifier.issn: 1053-587X
dc.identifier.uri: http://hdl.handle.net/10722/362252
dc.description.abstract: <p>Reconstructing physical field tensors from in situ observations, such as radio maps and ocean sound speed fields, is crucial for enabling environment-aware decision making in various applications, e.g., wireless communications and underwater acoustics. Field data reconstruction is often challenging, due to the limited and noisy nature of the observations, necessitating the incorporation of prior information to aid the reconstruction process. Deep neural network-based data-driven structural constraints (e.g., “deeply learned priors”) have shown promising performance. However, this family of techniques faces challenges such as model mismatches between training and testing phases. This work introduces FieldFormer, a self-supervised neural prior learned solely from the limited in situ observations without the need of offline training. Specifically, the proposed framework starts with modeling the fields of interest using the tensor Tucker model of a high multilinear rank, which ensures a universal approximation property for all fields. In the sequel, an attention mechanism is incorporated to learn the sparsity pattern that underlies the core tensor in order to reduce the solution space. In this way, a “complexity-adaptive” neural representation, grounded in the Tucker decomposition, is obtained that can flexibly represent various types of fields. A theoretical analysis is provided to support the recoverability of the proposed design. Moreover, extensive experiments, using various physical field tensors, demonstrate the superiority of the proposed approach compared to state-of-the-art baselines.</p>
dc.language: eng
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.ispartof: IEEE Transactions on Signal Processing
dc.rights: This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
dc.subject: 3D physical field reconstruction
dc.subject: tensor attention prior
dc.subject: tensor completion
dc.title: FieldFormer: Self-supervised Reconstruction of Physical Fields via Tensor Attention Prior
dc.type: Article
dc.identifier.doi: 10.1109/TSP.2025.3580374
dc.identifier.scopus: eid_2-s2.0-105009419363
dc.identifier.volume: 73
dc.identifier.spage: 2704
dc.identifier.epage: 2718
dc.identifier.eissn: 1941-0476
dc.identifier.issnl: 1053-587X
