Article: Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model
Title | Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model |
---|---|
Authors | Guo, Mengzhuo; Xu, Zhongzhi; Zhang, Qingpeng; Liao, Xiuwu; Liu, Jiapeng |
Keywords | decision support; explainable machine learning; factorization machines; Ordinal regression |
Issue Date | 2021 |
Citation | ACM Transactions on Knowledge Discovery from Data, 2021, v. 16, n. 3, article no. 59 |
Abstract | Ordinal regression predicts objects' labels that exhibit a natural ordering, which is vital to decision-making problems such as credit scoring and clinical diagnosis. In these problems, the ability to explain how the individual features and their interactions affect the decisions is as critical as model performance. Unfortunately, the existing ordinal regression models in the machine learning community aim at improving prediction accuracy rather than exploring explainability. To achieve high accuracy while explaining the relationships between the features and the predictions, we propose a new method for ordinal regression problems, namely the Explainable Ordinal Factorization Model (XOFM). XOFM uses piecewise linear functions to approximate the shape functions of individual features, and renders the pairwise feature interaction effects as heat-maps. The proposed XOFM captures the nonlinearity in the main effects and ensures the same flexibility for the interaction effects. Therefore, the underlying model yields comparable performance while remaining explainable by explicitly describing the main and interaction effects. To address the potential sparsity problem caused by discretizing the whole feature scale into several sub-intervals, XOFM integrates Factorization Machines (FMs) to factorize the model parameters. Comprehensive experiments with benchmark real-world and synthetic datasets demonstrate that the proposed XOFM leads to state-of-the-art prediction performance while preserving easy-to-understand explainability. |
Persistent Identifier | http://hdl.handle.net/10722/330835 |
ISSN | 1556-4681 (2023 Impact Factor: 4.0; 2023 SCImago Journal Rankings: 1.303) |
ISI Accession Number ID | WOS:000804983600019 |
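
The abstract above outlines the model's structure: main effects captured by shape functions over per-feature sub-intervals, pairwise interaction effects handled with factorization-machine (FM) parameters, and a latent score mapped to ordered labels. The following is a minimal illustrative sketch of that structure in Python, not the authors' XOFM implementation: it uses a simplified piecewise-constant binning in place of the paper's piecewise-linear shape functions, untrained random parameters, and hypothetical names such as `OrdinalFMSketch`.

```python
import numpy as np

class OrdinalFMSketch:
    """Illustrative sketch (not the authors' XOFM code): bin each feature
    into sub-intervals, assign a main-effect weight per (feature, bin),
    model pairwise interaction effects with FM-style low-rank factors,
    and threshold the latent score into ordered labels."""

    def __init__(self, n_bins=5, rank=4, n_labels=3, seed=0):
        self.n_bins, self.rank, self.n_labels = n_bins, rank, n_labels
        self.rng = np.random.default_rng(seed)

    def fit_bins(self, X):
        # Split each feature's range into equal-width sub-intervals.
        self.edges = [np.linspace(X[:, j].min(), X[:, j].max(), self.n_bins + 1)
                      for j in range(X.shape[1])]
        d = X.shape[1] * self.n_bins
        # Untrained, randomly initialized parameters (sketch only):
        # main-effect weight and FM factor for every (feature, bin) pair,
        # plus ordered thresholds separating the n_labels classes.
        self.w = self.rng.normal(0, 0.1, d)
        self.V = self.rng.normal(0, 0.1, (d, self.rank))
        self.theta = np.sort(self.rng.normal(0, 1, self.n_labels - 1))
        return self

    def encode(self, X):
        # One-hot encoding of which sub-interval each feature value falls in
        # (piecewise-constant stand-in for piecewise-linear shape functions).
        cols = []
        for j, e in enumerate(self.edges):
            idx = np.clip(np.searchsorted(e, X[:, j], side="right") - 1,
                          0, self.n_bins - 1)
            onehot = np.zeros((X.shape[0], self.n_bins))
            onehot[np.arange(X.shape[0]), idx] = 1.0
            cols.append(onehot)
        return np.hstack(cols)

    def latent_score(self, X):
        Z = self.encode(X)
        main = Z @ self.w
        # Standard FM identity for all pairwise interactions:
        # sum_{i<j} <v_i, v_j> z_i z_j = 0.5 * sum_f ((Z v_f)^2 - Z^2 v_f^2)
        inter = 0.5 * (((Z @ self.V) ** 2).sum(1) - ((Z ** 2) @ (self.V ** 2)).sum(1))
        return main + inter

    def predict(self, X):
        # Count how many ordered thresholds the latent score exceeds.
        return np.searchsorted(self.theta, self.latent_score(X))


# Usage on synthetic data: labels come out as ordered integers 0..n_labels-1.
X = np.random.rand(100, 4)
model = OrdinalFMSketch().fit_bins(X)
labels = model.predict(X)
```

In this sketch the per-bin weights play the role of the main-effect shape functions, while the low-rank factors keep the number of interaction parameters manageable despite the sparsity introduced by binning, which is the role the abstract attributes to the FM component; fitting the parameters to data is omitted.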
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Guo, Mengzhuo | - |
dc.contributor.author | Xu, Zhongzhi | - |
dc.contributor.author | Zhang, Qingpeng | - |
dc.contributor.author | Liao, Xiuwu | - |
dc.contributor.author | Liu, Jiapeng | - |
dc.date.accessioned | 2023-09-05T12:15:04Z | - |
dc.date.available | 2023-09-05T12:15:04Z | - |
dc.date.issued | 2021 | - |
dc.identifier.citation | ACM Transactions on Knowledge Discovery from Data, 2021, v. 16, n. 3, article no. 59 | - |
dc.identifier.issn | 1556-4681 | - |
dc.identifier.uri | http://hdl.handle.net/10722/330835 | - |
dc.description.abstract | Ordinal regression predicts objects' labels that exhibit a natural ordering, which is vital to decision-making problems such as credit scoring and clinical diagnosis. In these problems, the ability to explain how the individual features and their interactions affect the decisions is as critical as model performance. Unfortunately, the existing ordinal regression models in the machine learning community aim at improving prediction accuracy rather than exploring explainability. To achieve high accuracy while explaining the relationships between the features and the predictions, we propose a new method for ordinal regression problems, namely the Explainable Ordinal Factorization Model (XOFM). XOFM uses piecewise linear functions to approximate the shape functions of individual features, and renders the pairwise feature interaction effects as heat-maps. The proposed XOFM captures the nonlinearity in the main effects and ensures the same flexibility for the interaction effects. Therefore, the underlying model yields comparable performance while remaining explainable by explicitly describing the main and interaction effects. To address the potential sparsity problem caused by discretizing the whole feature scale into several sub-intervals, XOFM integrates Factorization Machines (FMs) to factorize the model parameters. Comprehensive experiments with benchmark real-world and synthetic datasets demonstrate that the proposed XOFM leads to state-of-the-art prediction performance while preserving easy-to-understand explainability. | -
dc.language | eng | - |
dc.relation.ispartof | ACM Transactions on Knowledge Discovery from Data | - |
dc.subject | decision support | - |
dc.subject | explainable machine learning | - |
dc.subject | factorization machines | - |
dc.subject | Ordinal regression | - |
dc.title | Deciphering Feature Effects on Decision-Making in Ordinal Regression Problems: An Explainable Ordinal Factorization Model | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1145/3487048 | - |
dc.identifier.scopus | eid_2-s2.0-85134091965 | - |
dc.identifier.volume | 16 | - |
dc.identifier.issue | 3 | - |
dc.identifier.spage | article no. 59 | - |
dc.identifier.epage | article no. 59 | - |
dc.identifier.eissn | 1556-472X | - |
dc.identifier.isi | WOS:000804983600019 | - |