Article: An Interpretable Deep Learning-based Model for Decision-making through Piecewise Linear Approximation

Title: An Interpretable Deep Learning-based Model for Decision-making through Piecewise Linear Approximation
Authors: Guo, Mengzhuo; Zhang, Qingpeng; Zeng, Daniel Dajun
Keywords: Business intelligence; Decision making; Deep neural networks; Explainable artificial intelligence; Predictive modeling
Issue Date: 21-Feb-2025
Publisher: Association for Computing Machinery (ACM)
Citation: ACM Transactions on Knowledge Discovery from Data, 2025, v. 19, n. 3, p. 1-35
Abstract

Full-complexity machine learning models, such as deep neural networks, are non-traceable black boxes, whereas classic interpretable models, such as linear regression, are often over-simplified, leading to lower accuracy. This lack of interpretability limits the application of machine learning models to management problems, which require high prediction performance as well as an understanding of each feature's contribution to the model outcome. To enhance interpretability while preserving good prediction performance, we propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component. The first component describes explicit feature contributions through piecewise linear approximation, increasing the expressiveness of the model. The second component uses a multi-layer perceptron to improve prediction performance by capturing high-order interactions between features and their complex nonlinear transformations. Once the model is learned, interpretability is obtained in the form of shape functions for the main effects. We also provide a variant that explores higher-order interactions among features. Experiments on synthetic and real-world datasets demonstrate that the proposed models achieve good interpretability, explicitly describing the main effects and interaction effects of the features, while maintaining state-of-the-art accuracy.
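
The abstract describes a two-part architecture: per-feature piecewise linear shape functions that expose the main effects, plus a multi-layer perceptron that captures high-order feature interactions, with the two outputs summed into one prediction. A minimal sketch of that idea follows; it is not the authors' published implementation, and the class names, ReLU hinge basis, knot count, and hidden size are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class PiecewiseLinearShape(nn.Module):
    """One shape function per feature: a piecewise linear curve built from
    fixed, evenly spaced knots and learned hinge weights (illustrative only;
    assumes inputs are scaled to [lo, hi])."""
    def __init__(self, n_knots=8, lo=0.0, hi=1.0):
        super().__init__()
        self.register_buffer("knots", torch.linspace(lo, hi, n_knots))
        self.weights = nn.Parameter(torch.zeros(n_knots))

    def forward(self, x):  # x: (batch,)
        # ReLU hinges max(x - knot, 0) form a piecewise linear basis.
        basis = torch.relu(x.unsqueeze(-1) - self.knots)  # (batch, n_knots)
        return basis @ self.weights                       # (batch,)

class HybridInterpretableModel(nn.Module):
    """Prediction = sum of per-feature shape functions (interpretable main
    effects) + an MLP over all features (residual nonlinear interactions)."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.shapes = nn.ModuleList(
            PiecewiseLinearShape() for _ in range(n_features))
        self.mlp = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (batch, n_features)
        main = torch.stack(
            [s(x[:, j]) for j, s in enumerate(self.shapes)], dim=-1)
        return main.sum(-1) + self.mlp(x).squeeze(-1) + self.bias
```

After training, plotting each shape function over its feature's range yields the per-feature main-effect curves the abstract refers to, while the MLP term absorbs whatever the additive part cannot express.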


Persistent Identifier: http://hdl.handle.net/10722/367039
ISSN: 1556-4681
2023 Impact Factor: 4.0
2023 SCImago Journal Rankings: 1.303

DC Field | Value | Language
dc.contributor.author | Guo, Mengzhuo | -
dc.contributor.author | Zhang, Qingpeng | -
dc.contributor.author | Zeng, Daniel Dajun | -
dc.date.accessioned | 2025-12-02T00:35:21Z | -
dc.date.available | 2025-12-02T00:35:21Z | -
dc.date.issued | 2025-02-21 | -
dc.identifier.citation | ACM Transactions on Knowledge Discovery from Data, 2025, v. 19, n. 3, p. 1-35 | -
dc.identifier.issn | 1556-4681 | -
dc.identifier.uri | http://hdl.handle.net/10722/367039 | -
dc.description.abstract | Full-complexity machine learning models, such as deep neural networks, are non-traceable black boxes, whereas classic interpretable models, such as linear regression, are often over-simplified, leading to lower accuracy. This lack of interpretability limits the application of machine learning models to management problems, which require high prediction performance as well as an understanding of each feature's contribution to the model outcome. To enhance interpretability while preserving good prediction performance, we propose a hybrid interpretable model that combines a piecewise linear component and a nonlinear component. The first component describes explicit feature contributions through piecewise linear approximation, increasing the expressiveness of the model. The second component uses a multi-layer perceptron to improve prediction performance by capturing high-order interactions between features and their complex nonlinear transformations. Once the model is learned, interpretability is obtained in the form of shape functions for the main effects. We also provide a variant that explores higher-order interactions among features. Experiments on synthetic and real-world datasets demonstrate that the proposed models achieve good interpretability, explicitly describing the main effects and interaction effects of the features, while maintaining state-of-the-art accuracy. | -
dc.language | eng | -
dc.publisher | Association for Computing Machinery (ACM) | -
dc.relation.ispartof | ACM Transactions on Knowledge Discovery from Data | -
dc.rights | This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | -
dc.subject | Business intelligence | -
dc.subject | Decision making | -
dc.subject | Deep neural networks | -
dc.subject | Explainable artificial intelligence | -
dc.subject | Predictive modeling | -
dc.title | An Interpretable Deep Learning-based Model for Decision-making through Piecewise Linear Approximation | -
dc.type | Article | -
dc.identifier.doi | 10.1145/3715150 | -
dc.identifier.scopus | eid_2-s2.0-105002575088 | -
dc.identifier.volume | 19 | -
dc.identifier.issue | 3 | -
dc.identifier.spage | 1 | -
dc.identifier.epage | 35 | -
dc.identifier.eissn | 1556-472X | -
dc.identifier.issnl | 1556-4681 | -
