Links for fulltext (May Require Subscription)
- Publisher Website: 10.1016/j.bdr.2022.100345
- Scopus: eid_2-s2.0-85137621532
- WOS: WOS:000876354700001
Article: Correlation Expert Tuning System for Performance Acceleration

Title | Correlation Expert Tuning System for Performance Acceleration |
---|---|
Authors | Chai, Yanfeng; Ge, Jiake; Zhang, Qiang; Chai, Yunpeng; Wang, Xin; Zhang, Qingpeng |
Keywords | Auto-tuning; Correlation expert rules; Database optimization; Reinforcement learning; Training time reduction |
Issue Date | 2022 |
Citation | Big Data Research, 2022, v. 30, article no. 100345 |
Abstract | One configuration cannot fit all workloads and the diverse resource limitations of modern databases. Auto-tuning methods based on reinforcement learning (RL) normally depend on an exhaustive offline training process with a huge number of performance measurements, which includes many inefficient knob combinations explored by trial and error. The most time-consuming part of the process is not training the RL network but taking the performance measurements needed to obtain reward values for target goals such as higher throughput or lower latency. In other words, the whole process can nearly be considered a zero-knowledge method, with no experience or rules to constrain it. We therefore propose a correlation expert tuning system (CXTuning) for acceleration, which contains a correlation knowledge model to remove unnecessary training costs and a multi-instance mechanism (MIM) to support fine-grained tuning for diverse workloads. The models define the importance of, and correlations among, the configuration knobs for the user's specified target. Knob-based optimization, however, should not be the final destination of auto-tuning, so we further incorporate an abstracted architectural optimization method into CXTuning as part of the progressive expert knowledge tuning (PEKT) algorithm. Experiments show that CXTuning effectively reduces training time and achieves additional performance improvement compared with the state-of-the-art auto-tuning method. |
Persistent Identifier | http://hdl.handle.net/10722/330849 |
ISI Accession Number ID | WOS:000876354700001 |
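The abstract above attributes the training-time savings to an expert correlation model that filters out unpromising knob combinations before they reach the costly measurement step. The following is a minimal Python sketch of that general idea only, not the paper's implementation: all names (KNOB_RANGES, KNOB_IMPORTANCE, violates_expert_rules, measure_throughput, and the example rule itself) are illustrative assumptions rather than identifiers from CXTuning or any DBMS API.

```python
# Sketch: prune candidate knob configurations with expert rules so that the
# expensive benchmark measurement is only run for promising configurations.
import random

# Hypothetical knob space for a single database instance.
KNOB_RANGES = {
    "buffer_pool_mb": (128, 8192),
    "io_threads": (1, 64),
    "log_flush_interval_ms": (0, 1000),
}

# Hypothetical expert knowledge: relative knob importance for a throughput goal.
KNOB_IMPORTANCE = {"buffer_pool_mb": 0.6, "io_threads": 0.3, "log_flush_interval_ms": 0.1}

def violates_expert_rules(cfg):
    """Reject configurations that expert correlation rules mark as wasteful,
    so they never incur a measurement (this is where the time savings come from)."""
    # Illustrative rule: many I/O threads only pay off with a large buffer pool.
    return cfg["io_threads"] > 32 and cfg["buffer_pool_mb"] < 1024

def propose_config():
    """Stand-in for the RL agent's action: sample one value per knob."""
    return {k: random.randint(lo, hi) for k, (lo, hi) in KNOB_RANGES.items()}

def measure_throughput(cfg):
    """Stand-in for running the benchmark workload (the costly reward step)."""
    return (KNOB_IMPORTANCE["buffer_pool_mb"] * cfg["buffer_pool_mb"] / 8192
            + KNOB_IMPORTANCE["io_threads"] * cfg["io_threads"] / 64
            - KNOB_IMPORTANCE["log_flush_interval_ms"] * cfg["log_flush_interval_ms"] / 1000)

best_cfg, best_reward, measurements = None, float("-inf"), 0
for _ in range(200):                      # training "episodes"
    cfg = propose_config()
    if violates_expert_rules(cfg):        # pruned without paying measurement cost
        continue
    reward = measure_throughput(cfg)      # only surviving candidates are measured
    measurements += 1
    if reward > best_reward:
        best_cfg, best_reward = cfg, reward

print(f"measured {measurements}/200 candidates; best config: {best_cfg}")
```

Running the sketch shows the intended effect: a noticeable fraction of the 200 candidates is rejected by the rule check alone, so the measurement budget is spent only on configurations the expert model considers plausible.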
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Chai, Yanfeng | - |
dc.contributor.author | Ge, Jiake | - |
dc.contributor.author | Zhang, Qiang | - |
dc.contributor.author | Chai, Yunpeng | - |
dc.contributor.author | Wang, Xin | - |
dc.contributor.author | Zhang, Qingpeng | - |
dc.date.accessioned | 2023-09-05T12:15:13Z | - |
dc.date.available | 2023-09-05T12:15:13Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Big Data Research, 2022, v. 30, article no. 100345 | - |
dc.identifier.uri | http://hdl.handle.net/10722/330849 | - |
dc.description.abstract | One configuration cannot fit all workloads and the diverse resource limitations of modern databases. Auto-tuning methods based on reinforcement learning (RL) normally depend on an exhaustive offline training process with a huge number of performance measurements, which includes many inefficient knob combinations explored by trial and error. The most time-consuming part of the process is not training the RL network but taking the performance measurements needed to obtain reward values for target goals such as higher throughput or lower latency. In other words, the whole process can nearly be considered a zero-knowledge method, with no experience or rules to constrain it. We therefore propose a correlation expert tuning system (CXTuning) for acceleration, which contains a correlation knowledge model to remove unnecessary training costs and a multi-instance mechanism (MIM) to support fine-grained tuning for diverse workloads. The models define the importance of, and correlations among, the configuration knobs for the user's specified target. Knob-based optimization, however, should not be the final destination of auto-tuning, so we further incorporate an abstracted architectural optimization method into CXTuning as part of the progressive expert knowledge tuning (PEKT) algorithm. Experiments show that CXTuning effectively reduces training time and achieves additional performance improvement compared with the state-of-the-art auto-tuning method. | -
dc.language | eng | - |
dc.relation.ispartof | Big Data Research | - |
dc.subject | Auto-tuning | - |
dc.subject | Correlation expert rules | - |
dc.subject | Database optimization | - |
dc.subject | Reinforcement learning | - |
dc.subject | Training time reduction | - |
dc.title | Correlation Expert Tuning System for Performance Acceleration | - |
dc.type | Article | - |
dc.description.nature | link_to_subscribed_fulltext | - |
dc.identifier.doi | 10.1016/j.bdr.2022.100345 | - |
dc.identifier.scopus | eid_2-s2.0-85137621532 | - |
dc.identifier.volume | 30 | - |
dc.identifier.spage | article no. 100345 | - |
dc.identifier.epage | article no. 100345 | - |
dc.identifier.eissn | 2214-5796 | - |
dc.identifier.isi | WOS:000876354700001 | - |