Abstract: In-memory object caching systems are constrained by the high latency of conventional Ethernet on the communication side and by the amount of memory that can be deployed in a single server on the storage side; they therefore urgently need to incorporate a new generation of high-performance I/O technologies to improve performance and expand capacity. Taking the widely used Memcached as an example, this work focuses on the data path of the in-memory object caching system and uses high-performance I/O to accelerate its communication and extend its storage. First, the communication protocol is redesigned on top of the increasingly popular high-performance remote direct memory access (RDMA) semantics, with different strategies for different Memcached operations and message sizes, which reduces communication latency. Second, high-performance NVMe SSDs are used to extend Memcached storage: a log structure manages the two storage tiers (memory and SSD), and a user-level driver provides direct access to the SSD, which reduces software overhead. Finally, U2cache, a high-performance caching system that supports the JVM environment, is implemented. U2cache significantly reduces data access overhead by bypassing the OS kernel and the JVM runtime and by overlapping memory copies, RDMA communication, and SSD accesses in a pipeline. Experimental results show that U2cache's communication latency approaches that of the underlying RDMA hardware; for large messages, performance improves by more than 20% over the unoptimized version; and when accessing data on the SSD, access latency is reduced by up to 31% compared with going through the kernel I/O software stack.
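To make the "different strategies for different message sizes" idea concrete, the following is a minimal sketch (not the actual U2cache code) using libibverbs: small requests are issued as inlined two-sided SENDs, while large values are pushed with a one-sided RDMA WRITE into a peer buffer whose address and rkey are assumed to have been exchanged at connection setup. The 256 B threshold and the `conn_t` layout are illustrative assumptions.

```c
/*
 * Hypothetical sketch of size-dependent RDMA strategies (not U2cache's
 * actual implementation): inlined SEND for small messages, one-sided
 * RDMA WRITE for large ones.
 */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

#define INLINE_THRESHOLD 256   /* assumed cutoff between SEND and WRITE */

typedef struct {
    struct ibv_qp *qp;          /* connected RC queue pair */
    struct ibv_mr *local_mr;    /* registration of the local staging buffer */
    char          *local_buf;   /* local staging buffer */
    uint64_t       remote_addr; /* peer buffer address, learned at setup */
    uint32_t       rkey;        /* peer buffer rkey, learned at setup */
} conn_t;

static int post_message(conn_t *c, const void *msg, size_t len)
{
    struct ibv_sge sge;
    struct ibv_send_wr wr, *bad_wr = NULL;

    memcpy(c->local_buf, msg, len);          /* stage the payload */
    memset(&wr, 0, sizeof(wr));

    sge.addr   = (uintptr_t)c->local_buf;
    sge.length = (uint32_t)len;
    sge.lkey   = c->local_mr->lkey;

    wr.sg_list    = &sge;
    wr.num_sge    = 1;
    wr.send_flags = IBV_SEND_SIGNALED;

    if (len <= INLINE_THRESHOLD) {
        /* small message: two-sided SEND, payload copied into the WQE */
        wr.opcode      = IBV_WR_SEND;
        wr.send_flags |= IBV_SEND_INLINE;
    } else {
        /* large message: one-sided RDMA WRITE into the peer's buffer */
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.wr.rdma.remote_addr = c->remote_addr;
        wr.wr.rdma.rkey        = c->rkey;
    }
    return ibv_post_send(c->qp, &wr, &bad_wr);
}
```

Note that `IBV_SEND_INLINE` only applies up to the `max_inline_data` negotiated at queue-pair creation, so the threshold would in practice be bounded by that capability.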
Abstract: This paper presents two optimizations to improve network receive performance in Xen, especially for receiving small packets, by reducing the per-packet overhead of network virtualization. First, universal receive aggregation assembles incoming packets to decrease the cost of the software bridge and bridge netfilter, regardless of which protocol the packets use or whether they belong to the same TCP connection. Second, grant page sharing lets as many packets as possible share a single grant page, effectively decreasing the cost of expensive grant operations. Experiments demonstrate that, compared with Xen's default network virtualization, these two optimizations reduce CPU cycles per packet by 31.20% and improve UDP and TCP throughput by 37.73% and 25.62% on average, respectively.
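The sketch below illustrates the grant-page-sharing idea from the abstract, under stated assumptions: rather than granting one page per packet, small packets are packed back-to-back into a single 4 KiB page that is handed to the guest with one grant operation, amortizing the per-packet grant cost. It is not Xen netback code; the `pkt` and `shared_page` types and the `grant_page_to_guest()` helper are hypothetical stand-ins.

```c
/* Illustrative sketch only -- not Xen netback code. */
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

struct pkt {                 /* hypothetical incoming packet descriptor */
    const uint8_t *data;
    uint16_t       len;
};

struct shared_page {         /* one page shared with the guest via one grant */
    uint8_t  buf[PAGE_SIZE];
    uint32_t used;           /* bytes already packed into this page */
};

/* hypothetical stand-in: a real backend would issue one grant op here */
static void grant_page_to_guest(struct shared_page *pg, uint32_t bytes)
{
    (void)pg;
    (void)bytes;             /* a single grant covers all packed packets */
}

/* Pack as many packets as fit into one granted page, then flush it. */
static void deliver_batch(struct pkt *pkts, int n)
{
    struct shared_page pg = { .used = 0 };

    for (int i = 0; i < n; i++) {
        uint32_t need = sizeof(uint16_t) + pkts[i].len; /* length prefix + data */

        if (pg.used + need > PAGE_SIZE) {      /* page full: grant it and reuse */
            grant_page_to_guest(&pg, pg.used);
            pg.used = 0;
        }
        memcpy(pg.buf + pg.used, &pkts[i].len, sizeof(uint16_t));
        memcpy(pg.buf + pg.used + sizeof(uint16_t), pkts[i].data, pkts[i].len);
        pg.used += need;
    }
    if (pg.used)                               /* flush the last partial page */
        grant_page_to_guest(&pg, pg.used);
}
```

The same batching structure is what makes receive aggregation cheap as well: the per-packet fixed costs (bridge traversal, grant mapping) are paid once per filled page or batch instead of once per small packet.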