Conference Paper: Learning Entangled Single-Sample Distributions via Iterative Trimming

Title: Learning Entangled Single-Sample Distributions via Iterative Trimming
Authors: Yuan, Hui; Liang, Yingyu
Issue Date: 2020
Citation: Proceedings of Machine Learning Research, 2020, v. 108, p. 2666-2676
Abstract: In the setting of entangled single-sample distributions, the goal is to estimate some common parameter shared by a family of distributions, given one single sample from each distribution. We study mean estimation and linear regression under general conditions, and analyze a simple and computationally efficient method based on iteratively trimming samples and re-estimating the parameter on the trimmed sample set. We show that the method, in a logarithmic number of iterations, outputs an estimate whose error depends only on the noise level of the ⌈αn⌉-th noisiest data point, where α is a constant and n is the sample size. This means it can tolerate a constant fraction of high-noise points. These are the first such results under our general conditions with computationally efficient estimators. They also justify the wide application and empirical success of iterative trimming in practice. Our theoretical results are complemented by experiments on synthetic data.
Persistent Identifier: http://hdl.handle.net/10722/341406
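The trimming scheme the abstract describes can be sketched in a few lines: starting from an estimate on the full sample, repeatedly discard the α-fraction of points farthest from the current estimate and re-estimate on the kept set, for a logarithmic number of rounds. The code below is a minimal illustration for the mean-estimation case only; the function name, the choice of initial estimate, and the default iteration count are assumptions for the sketch, not the paper's exact procedure.

```python
import numpy as np

def iterative_trimming_mean(x, alpha=0.2, n_iters=None):
    """Sketch of iterative trimming for mean estimation: trim the
    alpha-fraction of samples farthest from the current estimate,
    then re-estimate the mean on the trimmed set."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    keep = n - int(np.ceil(alpha * n))   # retain all but the ceil(alpha*n) farthest points
    mu = x.mean()                        # initial estimate on the full sample (assumed)
    if n_iters is None:
        n_iters = int(np.ceil(np.log2(n))) + 1  # logarithmic number of rounds
    for _ in range(n_iters):
        resid = np.abs(x - mu)
        kept = x[np.argsort(resid)[:keep]]  # keep the points with smallest residuals
        mu = kept.mean()                    # re-estimate on the trimmed set
    return mu
```

On data where a constant fraction of points carry very high noise, this trimmed estimate is typically far more accurate than the plain sample mean, matching the qualitative claim in the abstract.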

 

DC Field: Value
dc.contributor.author: Yuan, Hui
dc.contributor.author: Liang, Yingyu
dc.date.accessioned: 2024-03-13T08:42:34Z
dc.date.available: 2024-03-13T08:42:34Z
dc.date.issued: 2020
dc.identifier.citation: Proceedings of Machine Learning Research, 2020, v. 108, p. 2666-2676
dc.identifier.uri: http://hdl.handle.net/10722/341406
dc.description.abstract: In the setting of entangled single-sample distributions, the goal is to estimate some common parameter shared by a family of distributions, given one single sample from each distribution. We study mean estimation and linear regression under general conditions, and analyze a simple and computationally efficient method based on iteratively trimming samples and re-estimating the parameter on the trimmed sample set. We show that the method, in a logarithmic number of iterations, outputs an estimate whose error depends only on the noise level of the ⌈αn⌉-th noisiest data point, where α is a constant and n is the sample size. This means it can tolerate a constant fraction of high-noise points. These are the first such results under our general conditions with computationally efficient estimators. They also justify the wide application and empirical success of iterative trimming in practice. Our theoretical results are complemented by experiments on synthetic data.
dc.language: eng
dc.relation.ispartof: Proceedings of Machine Learning Research
dc.title: Learning Entangled Single-Sample Distributions via Iterative Trimming
dc.type: Conference_Paper
dc.description.nature: link_to_subscribed_fulltext
dc.identifier.scopus: eid_2-s2.0-85161902415
dc.identifier.volume: 108
dc.identifier.spage: 2666
dc.identifier.epage: 2676
dc.identifier.eissn: 2640-3498
