
Author: 陳亮廷 (Liang-Ting Chen)
Title: A State-aware Method for Flows with Fairness on NVMe SSDs with Load Balance
Advisor: 吳晋賢 (Chin-Hsien Wu)
Committee: 吳晋賢 (Chin-Hsien Wu), 陳雅淑 (Ya-Shu Chen), 謝仁偉 (Jen-Wei Hsieh), 張經略 (Ching-Lueh Chang)
Degree: Master
Department: Department of Electronic and Computer Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Graduation Academic Year: 110
Language: Chinese
Pages: 51
Keywords: NVMe, SSD, Fairness, Load Balance, Multiple Flows, Multiple SSDs
Views: 276; Downloads: 14
Abstract:

    Nowadays, solid-state drives (SSDs) have become the preferred storage devices because of their advantages over hard-disk drives (HDDs): small size, low power consumption, shock resistance, silent operation, fast access, and non-volatility. More and more scenarios adopt a multi-SSD architecture to improve performance and expand storage capacity, such as cloud services, data centers, distributed systems, and virtualized environments. When multiple users (flows) compete for shared SSDs concurrently, a multi-SSD architecture that lacks a fairness strategy among the users allows a user that takes up more resources to degrade the service of the others. Likewise, if the architecture lacks a load-balance strategy among the shared SSDs, some SSDs may receive so many I/O requests that their performance degrades and their lifespan shortens. Therefore, we propose a state-aware method that considers both fairness among flows and load balance across NVMe SSDs. Experimental results show that, compared with other methods, the proposed method improves fairness by 1.2x~1.4x and load balance by 1.2x~2.6x on average.
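The abstract reports fairness and load-balance improvements as ratios over baseline methods. The metrics behind such numbers are standard in this literature, though the thesis's exact definitions are in the full text. A minimal illustrative sketch, under the common assumptions that fairness is the minimum flow slowdown divided by the maximum flow slowdown (1.0 = perfectly fair) and that a simple load-balance baseline dispatches each request to the SSD with the fewest outstanding requests:

```python
# Illustrative sketch only, not the thesis's actual algorithm.
# slowdown_i = a flow's shared-run response time divided by its
# response time when running alone; values near 1.0 mean little
# interference from the other flows.

def fairness(slowdowns):
    """Fairness of concurrent flows: min slowdown over max slowdown."""
    return min(slowdowns) / max(slowdowns)

def pick_least_loaded(outstanding):
    """Index of the SSD with the fewest outstanding I/O requests."""
    return min(range(len(outstanding)), key=lambda i: outstanding[i])

# Two flows with slowdowns 1.5x and 3.0x give fairness 0.5; with
# per-SSD queue depths [7, 2, 5], the next request goes to SSD 1.
print(fairness([1.5, 3.0]))          # 0.5
print(pick_least_loaded([7, 2, 5]))  # 1
```

Under these definitions, a scheduler raises fairness by throttling the flow with the smallest slowdown, and raises load balance by steering requests away from the deepest queues.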

    Contents:
    Abstract
    Contents
    List of Figures
    List of Tables
    List of Equations
    I. Introduction
    II. Background Knowledge
        2.1 Linux Block Layer
        2.2 Modern NVMe SSDs
        2.3 Fairness Scheduling
    III. Related Work
    IV. Motivation
        4.1 I/O Intensity
        4.2 SSD Imbalance
        4.3 Queue Imbalance
    V. A State-aware Method for Flows with Fairness on NVMe SSDs with Load Balance
        5.1 System Overview
        5.2 Flows with Fairness
        5.3 NVMe SSDs with Load Balance
        5.4 Flows with Fairness on NVMe SSDs with Load Balance
            5.4.1 State 1: No Load Balance and No Fairness
            5.4.2 State 2: No Load Balance but Fairness
            5.4.3 State 3: Load Balance but No Fairness
    VI. Performance Evaluation
        6.1 Experimental Setup
        6.2 Experimental Results
            6.2.1 Maximum Slowdown
            6.2.2 Fairness
            6.2.3 Load Balance
            6.2.4 Fairness × Load Balance
    VII. Conclusion
    References
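The state names in Section 5.4 suggest that the method's corrective action depends on which of two conditions, load balance and fairness, currently holds. As a purely hypothetical illustration of that naming (the thesis's actual detection thresholds and state transitions are described in the full text), the four combinations can be enumerated:

```python
# Hypothetical mapping of the two observed conditions onto the state
# names listed in Section 5.4 of the table of contents; illustrative
# only, not the thesis's implementation.

def classify_state(load_balanced: bool, fair: bool) -> str:
    """Name the system state from the two observed conditions."""
    if not load_balanced and not fair:
        return "State 1: No Load Balance and No Fairness"
    if not load_balanced:
        return "State 2: No Load Balance but Fairness"
    if not fair:
        return "State 3: Load Balance but No Fairness"
    return "Load Balance and Fairness"  # no corrective action needed

print(classify_state(False, True))  # State 2: No Load Balance but Fairness
```

The fourth combination (both conditions hold) needs no dedicated state, which is consistent with the table of contents listing only three.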

    [1] Sungyong Ahn, Kwanghyun La, and Jihong Kim. 2016. Improving I/O Resource Sharing of Linux Cgroup for NVMe SSDs on Multi-core Systems. In 8th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 16). USENIX Association, Denver, CO.

    [2] Ken Bates and Bruce McNutt. 1999. UMass Trace Repository.

    [3] Matias Bjørling, Jens Axboe, David Nellans, and Philippe Bonnet. 2013. Linux Block IO: Introducing Multi-Queue SSD Access on Multi-Core Systems. In Proceedings of the 6th International Systems and Storage Conference (Haifa, Israel) (SYSTOR ’13). Association for Computing Machinery, New York, NY, USA, Article 22, 10 pages.

    [4] Da-Wei Chang, Hsin-Hung Chen, and Wei-Jian Su. 2015. VSSD: Performance Isolation in a Solid-State Drive. ACM Trans. Des. Autom. Electron. Syst. 20, 4, Article 51 (sep 2015), 33 pages.

    [5] Li-Pin Chang, Tei-Wei Kuo, and Shi-Wu Lo. 2004. Real-Time Garbage Collection for Flash-Memory Storage Systems of Real-Time Embedded Systems. ACM Trans. Embed. Comput. Syst. 3, 4 (nov 2004), 837–863.

    [6] Wonil Choi, Myoungsoo Jung, Mahmut Kandemir, and Chita Das. 2018. Parallelizing Garbage Collection with I/O to Improve Flash Resource Utilization. In Proceedings of the 27th International Symposium on High-Performance Parallel and Distributed Computing (Tempe, Arizona) (HPDC ’18). Association for Computing Machinery, New York, NY, USA, 243–254.

    [7] Biplob Debnath, Sudipta Sengupta, and Jin Li. 2010. FlashStore: High Throughput Persistent Key-Value Store. Proc. VLDB Endow. 3, 1–2 (sep 2010), 1414–1425.

    [8] Hao Fan, Song Wu, Shadi Ibrahim, Ximing Chen, Hai Jin, Jiang Xiao, and Haibing Guan. 2019. NCQ-Aware I/O Scheduling for Conventional Solid State Drives. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 523–532.

    [9] Donghyun Gouk, Jie Zhang, and Myoungsoo Jung. 2018. Enabling Realistic Logical Device Interface and Driver for NVM Express Enabled Full System Simulations. International Journal of Parallel Programming 46, 4 (1 Aug. 2018), 710–721.

    [10] Mohammad Hedayati, Kai Shen, Michael L. Scott, and Mike Marty. 2019. Multi-Queue Fair Queuing. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). USENIX Association, Renton, WA, 301–314.

    [11] Jian Huang, Anirudh Badam, Laura Caulfield, Suman Nath, Sudipta Sengupta, Bikash Sharma, and Moinuddin K. Qureshi. 2017. FlashBlox: Achieving Both Performance Isolation and Uniform Lifetime for Virtualized SSDs. In 15th USENIX Conference on File and Storage Technologies (FAST 17). USENIX Association, Santa Clara, CA, 375–390.

    [12] Myung Hyun Jo and Won Woo Ro. 2017. Dynamic Load Balancing of Dispatch Scheduling for Solid State Disks. IEEE Trans. Comput. 66, 6 (2017), 1034–1047.

    [13] Kanchan Joshi, Kaushal Yadav, and Praval Choudhary. 2017. Enabling NVMe WRR support in Linux Block Layer. In 9th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 17). USENIX Association, Santa Clara, CA.

    [14] Myoungsoo Jung and Mahmut T. Kandemir. 2014. Sprinkler: Maximizing resource utilization in many-chip solid state disks. In 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA). 524–535.

    [15] Bryan S. Kim. 2018. Utilitarian Performance Isolation in Shared SSDs. In 10th USENIX Workshop on Hot Topics in Storage and File Systems (HotStorage 18). USENIX Association, Boston, MA.

    [16] Bryan S. Kim, Jongmoo Choi, and Sang Lyul Min. 2019. Design Tradeoffs for SSD Reliability. In 17th USENIX Conference on File and Storage Technologies (FAST 19). USENIX Association, Boston, MA, 281–294.

    [17] Seongmin Kim, Kyusik Kim, Heeyoung Shin, and Taeseok Kim. 2020. Practical Enhancement of User Experience in NVMe SSDs. Applied Sciences 10, 14 (2020).

    [18] Jiahao Liu, Fang Wang, and Dan Feng. 2019. CostPI: Cost-Effective Performance Isolation for Shared NVMe SSDs. In Proceedings of the 48th International Conference on Parallel Processing (Kyoto, Japan) (ICPP 2019). Association for Computing Machinery, New York, NY, USA, Article 25, 10 pages.

    [19] Yanjun Lu, Chentao Wu, and Jie Li. 2017. EGS: An Effective Global I/O Scheduler to Improve the Load Balancing of SSD-Based RAID-5 Arrays. In 2017 IEEE International Symposium on Parallel and Distributed Processing with Applications and 2017 IEEE International Conference on Ubiquitous Computing and Communications (ISPA/IUCC). 298–305.

    [20] Jiaxin Ou, Jiwu Shu, Youyou Lu, Letian Yi, and Wei Wang. 2014. EDM: An Endurance-Aware Data Migration Scheme for Load Balancing in SSD Storage Clusters. In 2014 IEEE 28th International Parallel and Distributed Processing Symposium. 787–796.

    [21] Stan Park and Kai Shen. 2012. FIOS: A Fair, Efficient Flash I/O Scheduler. In 10th USENIX Conference on File and Storage Technologies (FAST 12). USENIX Association, San Jose, CA.

    [22] Vishal Sharda, Swaroop Kavalanekar, and Bruce Worthington. 2008. Microsoft Production Server Traces (SNIA IOTTA Trace Set 158). In SNIA IOTTA Trace Repository, Geoff Kuenning (Ed.). Storage Networking Industry Association.

    [23] Kai Shen and Stan Park. 2013. FlashFQ: A Fair Queueing I/O Scheduler for Flash-Based SSDs. In 2013 USENIX Annual Technical Conference (USENIX ATC 13). USENIX Association, San Jose, CA, 67–78.

    [24] Hyogi Sim, Youngjae Kim, Sudharshan S. Vazhkudai, Devesh Tiwari, Ali Anwar, Ali R. Butt, and Lavanya Ramakrishnan. 2015. AnalyzeThis: an analysis workflow-aware storage system. In SC ’15: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. 1–12.

    [25] Arash Tavakkol, Juan Gómez-Luna, Mohammad Sadrosadati, Saugata Ghose, and Onur Mutlu. 2018. MQSim: A Framework for Enabling Realistic Studies of Modern Multi-Queue SSD Devices. In 16th USENIX Conference on File and Storage Technologies (FAST 18). USENIX Association, Oakland, CA, 49–66.

    [26] Arash Tavakkol, Mohammad Sadrosadati, Saugata Ghose, Jeremie Kim, Yixin Luo, Yaohua Wang, Nika Mansouri Ghiasi, Lois Orosa, Juan Gómez-Luna, and Onur Mutlu. 2018. FLIN: Enabling Fairness and Enhancing Performance in Modern NVMe Solid State Drives. In 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). 397–410.

    [27] Shivani Tripathy, Debiprasanna Sahoo, Manoranjan Satpathy, and Madhu Mutyam. 2020. Fuzzy Fairness Controller for NVMe SSDs. In Proceedings of the 34th ACM International Conference on Supercomputing (Barcelona, Spain) (ICS’20). Association for Computing Machinery, New York, NY, USA, Article 22, 12 pages.

    [28] Akshat Verma, Ricardo Koller, Luis Useche, and Raju Rangaswami. 2009. FIU Traces (SNIA IOTTA Trace Set 390). In SNIA IOTTA Trace Repository, Geoff Kuenning (Ed.). Storage Networking Industry Association.

    [29] Jiwon Woo, Minwoo Ahn, Gyusun Lee, and Jinkyu Jeong. 2021. D2FQ: Device-Direct Fair Queueing for NVMe SSDs. In 19th USENIX Conference on File and Storage Technologies (FAST 21). USENIX Association, 403–415.

    [30] NVM Express Workgroup. 2021. NVM Express 2.0 Specification.

    [31] Quan Zhang, Dan Feng, Fang Wang, and Yanwen Xie. 2013. An Efficient, QoS-Aware I/O Scheduler for Solid State Drive. In 2013 IEEE 10th International Conference on High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing. 1408–1415.

    [32] Nannan Zhao, Ali Anwar, Yue Cheng, Mohammed Salman, Daping Li, Jiguang Wan, Changsheng Xie, Xubin He, Feiyi Wang, and Ali Butt. 2018. Chameleon: An Adaptive Wear Balancer for Flash Clusters. In 2018 IEEE International Parallel and Distributed Processing Symposium (IPDPS). 1163–1172.
