
Conference Paper: APUS: Fast and Scalable PAXOS on RDMA

Title: APUS: Fast and Scalable PAXOS on RDMA
Authors: Wang, C; Jiang, J; Chen, X; Yi, N; Cui, H
Keywords: State Machine Replication; Fault Tolerance; Remote Direct Memory Access; Software Reliability
Issue Date: 2017
Publisher: ACM
Citation: ACM Symposium on Cloud Computing 2017 (SoCC '17), Santa Clara, CA, 24-27 September 2017. In Proceedings of SoCC '17, 2017, p. 94-107
Abstract: State machine replication (SMR) uses Paxos to enforce the same inputs for a program (e.g., Redis) replicated on a number of hosts, tolerating various types of failures. Unfortunately, traditional Paxos protocols incur prohibitive performance overhead on server programs due to their high consensus latency on TCP/IP. Worse, the consensus latency of extant Paxos protocols increases drastically as more concurrent client connections or hosts are added. This paper presents APUS, the first RDMA-based Paxos protocol that aims to be fast and scalable in both client connections and hosts. APUS intercepts the inbound socket calls of an unmodified server program, assigns a total order to all input requests, and uses fast RDMA primitives to replicate these requests concurrently. We evaluated APUS on nine widely used server programs (e.g., Redis and MySQL). APUS incurred a mean overhead of 4.3% in response time and 4.2% in throughput. We integrated APUS with the SMR system Calvin. Our Calvin-APUS integration was 8.2X faster than the extant Calvin-ZooKeeper integration. The consensus latency of APUS outperformed that of an RDMA-based consensus protocol by 4.9X. APUS source code and raw results are released at github.com/hku-systems/apus.
Persistent Identifier: http://hdl.handle.net/10722/245447
ISBN: 978-1-4503-5028-0
ISI Accession Number ID: WOS:000414279000008
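The abstract describes APUS's core mechanism: a leader assigns a total order to incoming client requests and replicates them to other hosts, committing each entry once a majority holds it. Below is a minimal in-process Python sketch of that SMR idea only; all class and method names are illustrative (they are not from the APUS code), and the real system replicates via one-sided RDMA primitives and intercepts the server's socket calls rather than passing requests through Python objects.

```python
# Illustrative sketch of leader-based SMR: assign a total order (sequence
# numbers) to requests, replicate to followers, commit on majority ack.
# Names are hypothetical; APUS itself does this over RDMA.

class Replica:
    def __init__(self):
        self.log = []  # ordered list of (seq, request)

    def accept(self, seq, request):
        self.log.append((seq, request))
        return True  # acknowledge receipt

class Leader(Replica):
    def __init__(self, followers):
        super().__init__()
        self.followers = followers
        self.next_seq = 0

    def propose(self, request):
        seq = self.next_seq          # total order assigned here
        self.next_seq += 1
        self.accept(seq, request)    # leader stores the entry itself
        acks = 1 + sum(f.accept(seq, request) for f in self.followers)
        majority = (1 + len(self.followers)) // 2 + 1
        return seq if acks >= majority else None

followers = [Replica(), Replica()]
leader = Leader(followers)
for req in ["SET x 1", "GET x", "DEL x"]:
    leader.propose(req)

# Every replica now holds the same totally ordered log, so replaying it
# against identical copies of a server program yields identical state.
assert leader.log == followers[0].log == followers[1].log
```

Because every replica applies the same ordered log, any replica can take over after a failure; APUS's contribution is making the ordering-and-replication step fast by moving it off TCP/IP onto RDMA.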

 

DC Field | Value | Language
dc.contributor.author | Wang, C | -
dc.contributor.author | Jiang, J | -
dc.contributor.author | Chen, X | -
dc.contributor.author | Yi, N | -
dc.contributor.author | Cui, H | -
dc.date.accessioned | 2017-09-18T02:10:53Z | -
dc.date.available | 2017-09-18T02:10:53Z | -
dc.date.issued | 2017 | -
dc.identifier.citation | ACM Symposium on Cloud Computing 2017 (SoCC '17), Santa Clara, CA, 24-27 September 2017. In Proceedings of SoCC '17, 2017, p. 94-107 | -
dc.identifier.isbn | 978-1-4503-5028-0 | -
dc.identifier.uri | http://hdl.handle.net/10722/245447 | -
dc.description.abstract | State machine replication (SMR) uses Paxos to enforce the same inputs for a program (e.g., Redis) replicated on a number of hosts, tolerating various types of failures. Unfortunately, traditional Paxos protocols incur prohibitive performance overhead on server programs due to their high consensus latency on TCP/IP. Worse, the consensus latency of extant Paxos protocols increases drastically as more concurrent client connections or hosts are added. This paper presents APUS, the first RDMA-based Paxos protocol that aims to be fast and scalable in both client connections and hosts. APUS intercepts the inbound socket calls of an unmodified server program, assigns a total order to all input requests, and uses fast RDMA primitives to replicate these requests concurrently. We evaluated APUS on nine widely used server programs (e.g., Redis and MySQL). APUS incurred a mean overhead of 4.3% in response time and 4.2% in throughput. We integrated APUS with the SMR system Calvin. Our Calvin-APUS integration was 8.2X faster than the extant Calvin-ZooKeeper integration. The consensus latency of APUS outperformed that of an RDMA-based consensus protocol by 4.9X. APUS source code and raw results are released at github.com/hku-systems/apus. | -
dc.language | eng | -
dc.publisher | ACM. | -
dc.relation.ispartof | Proceedings of SoCC '17 | -
dc.subject | State Machine Replication | -
dc.subject | Fault Tolerance | -
dc.subject | Remote Direct Memory Access | -
dc.subject | Software Reliability | -
dc.title | APUS: Fast and Scalable PAXOS on RDMA | -
dc.type | Conference_Paper | -
dc.identifier.email | Cui, H: heming@hku.hk | -
dc.identifier.authority | Cui, H=rp02008 | -
dc.description.nature | published_or_final_version | -
dc.identifier.doi | 10.1145/3127479.3128609 | -
dc.identifier.scopus | eid_2-s2.0-85032447790 | -
dc.identifier.hkuros | 276666 | -
dc.identifier.spage | 94 | -
dc.identifier.epage | 107 | -
dc.identifier.isi | WOS:000414279000008 | -
dc.publisher.place | Santa Clara, CA | -
