At present, I/O is the performance bottleneck limiting the speed of computer systems. A large fraction of I/O operations are synchronous reads and writes of small data blocks, yet reducing the latency of synchronous I/O is a non-trivial problem. In this paper, we propose two methods to address it. The first, FastSync, uses a cache disk optimized for writes by means of a disk-head position prediction algorithm, trading disk capacity for synchronous I/O performance. The second, LND, uses free memory in a network environment as a cache disk to buffer synchronous I/O. Data integrity in FastSync is ensured by a data log on the cache disk, whereas in LND it is ensured by storing multiple copies of each data block in distributed memory. Both methods dramatically improve synchronous I/O performance: the performance of LND is limited by network speed, whereas that of FastSync is determined mostly by the data block size.
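To make the underlying idea concrete, the following is a minimal user-space sketch of staging a synchronous write through a sequential log on a faster "cache" device before acknowledging it, in the spirit of FastSync's cache-disk log. The file name and block size are hypothetical illustrations; the paper's actual implementation and its disk-head position prediction algorithm are not reproduced here.

```c
/*
 * Illustrative sketch only: acknowledge a synchronous write after it has
 * been appended and committed to a log on a (hypothetical) cache device.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 4096                  /* assumed small synchronous block */

/* Append one block to the log and force it to stable storage. */
static int log_sync_write(int log_fd, const void *block, size_t len)
{
    if (write(log_fd, block, len) != (ssize_t)len)
        return -1;
    return fsync(log_fd);                /* data is durable before we ack */
}

int main(void)
{
    /* Hypothetical path standing in for a partition on the cache disk. */
    int log_fd = open("cache_disk.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (log_fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    char block[BLOCK_SIZE];
    memset(block, 0xAB, sizeof(block));  /* stand-in for application data */

    if (log_sync_write(log_fd, block, sizeof(block)) != 0) {
        perror("log_sync_write");
        close(log_fd);
        return EXIT_FAILURE;
    }

    puts("synchronous write acknowledged after log commit");
    close(log_fd);
    return EXIT_SUCCESS;
}
```

Sequential log appends avoid most seek latency on the cache device, which is the design point both FastSync (on disk) and LND (in remote memory, with replication instead of logging) exploit.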
Funding: Supported by the National Key Basic Research and Development Program of China (No. G1999032702), the National High-Tech Research and Development Program of China (No. 2001AA111010), and the National Natural Science Foundation of China (No. 60131160743).