Conference Paper: Learning Neural Acoustic Fields
| Field | Value |
|---|---|
| Title | Learning Neural Acoustic Fields |
| Authors | Luo, Andrew; Tarr, Michael J.; Torralba, Antonio; Du, Yilun; Tenenbaum, Joshua B.; Gan, Chuang |
| Issue Date | 2022 |
| Citation | Advances in Neural Information Processing Systems, 2022, v. 35 |
| Abstract | Our environment is filled with rich and dynamic acoustic information. When we walk into a cathedral, the reverberations as much as appearance inform us of the sanctuary's wide open space. Similarly, as an object moves around us, we expect the sound emitted to also exhibit this movement. While recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning spatial auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling acoustic propagation in a scene as a linear time-invariant system, NAFs learn to continuously map all emitter and listener location pairs to a neural impulse response function that can then be applied to arbitrary sounds. We demonstrate NAFs on both synthetic and real data, and show that the continuous nature of NAFs enables us to render spatial acoustics for a listener at arbitrary locations. We further show that the representation learned by NAFs can help improve visual learning with sparse views. Finally, we show that a representation informative of scene structure emerges during the learning of NAFs. Project site: https://www.andrew.cmu.edu/user/afluo/Neural_Acoustic_Fields |
| Persistent Identifier | http://hdl.handle.net/10722/352358 |
| ISSN | 1049-5258 (2020 SCImago Journal Rankings: 1.399) |
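The abstract's core modeling assumption is that acoustic propagation in a scene is a linear time-invariant (LTI) system, so rendering sound at a listener reduces to convolving a dry source signal with the impulse response predicted for an (emitter, listener) pair. The following is a minimal sketch of that LTI rendering step, not the authors' implementation; the impulse response here is a hypothetical hand-written array standing in for a NAF prediction.

```python
import numpy as np

def render_at_listener(dry_signal, impulse_response):
    """Render a dry sound at a listener position by convolving it
    with the scene's impulse response (the LTI assumption)."""
    return np.convolve(dry_signal, impulse_response)

# Toy example: a single click passed through a hypothetical
# decaying room response (stand-in for a NAF-predicted response).
dry = np.zeros(8)
dry[0] = 1.0                              # unit impulse ("click")
ir = np.array([1.0, 0.0, 0.5, 0.25])      # hypothetical impulse response
wet = render_at_listener(dry, ir)         # echoes of the click
```

Because the input is a unit impulse, the rendered output simply reproduces the impulse response, which is why measured or predicted impulse responses fully characterize an LTI scene.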
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Luo, Andrew | - |
dc.contributor.author | Tarr, Michael J. | - |
dc.contributor.author | Torralba, Antonio | - |
dc.contributor.author | Du, Yilun | - |
dc.contributor.author | Tenenbaum, Joshua B. | - |
dc.contributor.author | Gan, Chuang | - |
dc.date.accessioned | 2024-12-16T03:58:27Z | - |
dc.date.available | 2024-12-16T03:58:27Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Advances in Neural Information Processing Systems, 2022, v. 35 | - |
dc.identifier.issn | 1049-5258 | - |
dc.identifier.uri | http://hdl.handle.net/10722/352358 | - |
dc.description.abstract | Our environment is filled with rich and dynamic acoustic information. When we walk into a cathedral, the reverberations as much as appearance inform us of the sanctuary's wide open space. Similarly, as an object moves around us, we expect the sound emitted to also exhibit this movement. While recent advances in learned implicit functions have led to increasingly higher quality representations of the visual world, there have not been commensurate advances in learning spatial auditory representations. To address this gap, we introduce Neural Acoustic Fields (NAFs), an implicit representation that captures how sounds propagate in a physical scene. By modeling acoustic propagation in a scene as a linear time-invariant system, NAFs learn to continuously map all emitter and listener location pairs to a neural impulse response function that can then be applied to arbitrary sounds. We demonstrate NAFs on both synthetic and real data, and show that the continuous nature of NAFs enables us to render spatial acoustics for a listener at arbitrary locations. We further show that the representation learned by NAFs can help improve visual learning with sparse views. Finally we show that a representation informative of scene structure emerges during the learning of NAFs. Project site: https://www.andrew.cmu.edu/user/afluo/Neural_Acoustic_Fields. | - |
dc.language | eng | - |
dc.relation.ispartof | Advances in Neural Information Processing Systems | - |
dc.title | Learning Neural Acoustic Fields | - |
dc.type | Conference_Paper | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.scopus | eid_2-s2.0-85160137010 | - |
dc.identifier.volume | 35 | - |