Funding: supported by the Key-Area Research and Development Program of Guangdong Province (No. 2019B010107001) and the Fund of Guangdong Support Program (No. 2019TY05X071).
Abstract: Implicit polynomials (IPs) are considered a powerful tool for object curve fitting because of their simplicity and small number of parameters. Traditional linear methods, such as 3L, Min Var, and Min Max, often perform well when fitting simple objects, but usually work poorly, or even fail to produce closed curves, on complex object contours. To handle such complex fitting problems, we exploit deep neural networks and design a continuity-sparsity constrained network (CSC-Net), an encoder-decoder model that learns the coefficients of IPs. Furthermore, a continuity constraint is added to ensure that the obtained curves are closed, and a sparseness constraint is added to reduce the spurious zero sets of the fitted curves. Experimental results show that better performance is obtained on both simple and complex object fitting tasks.
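To make the fitting target concrete, below is a minimal NumPy sketch of fitting IP coefficients to a sampled contour, with an L1 soft-threshold step standing in for the sparseness constraint. The monomial ordering, step size, and penalty weight are illustrative assumptions; the CSC-Net encoder-decoder and the paper's continuity constraint are not reproduced here.

```python
# Illustrative sketch only: classic zero-set fitting of an implicit polynomial
# f(x, y) = sum_{j+k<=d} c_{jk} x^j y^k, with an L1 penalty as a stand-in for
# the sparseness constraint. CSC-Net instead learns c with an encoder-decoder.
import numpy as np

def monomials(points, degree):
    """Design matrix whose columns are the monomials x^j * y^k with j + k <= degree."""
    x, y = points[:, 0], points[:, 1]
    cols = [x**j * y**k for j in range(degree + 1)
                        for k in range(degree + 1 - j)]
    return np.stack(cols, axis=1)

def fit_ip(points, degree, lam=1e-2, lr=1e-3, steps=2000, seed=0):
    """Fit coefficients c by minimizing ||M c||^2 + lam * ||c||_1 on the unit
    sphere ||c|| = 1 (to exclude the trivial c = 0), via projected proximal
    gradient steps."""
    M = monomials(points, degree)
    rng = np.random.default_rng(seed)
    c = rng.standard_normal(M.shape[1])
    c /= np.linalg.norm(c)
    for _ in range(steps):
        c = c - lr * 2.0 * M.T @ (M @ c)                      # data-term gradient step
        c = np.sign(c) * np.maximum(np.abs(c) - lr * lam, 0)  # L1 soft threshold
        norm = np.linalg.norm(c)
        if norm < 1e-12:          # degenerate case: the penalty zeroed everything
            c = rng.standard_normal(M.shape[1])
            norm = np.linalg.norm(c)
        c /= norm                                             # project back to the sphere
    return c

# Usage: fit a degree-2 IP to 200 samples of the unit circle; the fitted
# polynomial should be close to x^2 + y^2 - 1, so |f| stays small on the contour.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)
c = fit_ip(pts, degree=2)
print("max |f| on the contour:", np.abs(monomials(pts, 2) @ c).max())
```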
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. U1709220, U2001203, 61821003, 61872413, and 61902137; in part by the National Key Research and Development Program of China under Grant No. 2018YFB1003305; and in part by the Key-Area Research and Development Program of Guangdong Province of China under Grant No. 2019B010107001.
Abstract: Owing to its low latency, byte addressability, non-volatility, and high density, persistent memory (PM) is expected to be used to build high-performance storage systems. However, PM also has drawbacks such as limited endurance, which pose challenges to traditional index structures such as the B+-tree. The B+-tree was originally designed for dynamic random access memory (DRAM)-based or disk-based systems and suffers from a large write amplification problem, which is detrimental to a PM-based system. This paper proposes WO-tree, a write-optimized B+-tree for PM. WO-tree adopts an unordered write mechanism for leaf nodes, which eliminates the large number of write operations otherwise needed to keep the entries in a leaf node sorted. When a leaf node is split, WO-tree performs the cache-line flush only after all write operations have completed, which reduces frequent data flushing. WO-tree adopts a partial logging mechanism and writes logs only for leaf nodes; an inner node detects data inconsistency during read operations and recovers the data from leaf-node information, which significantly reduces the logging overhead. Furthermore, WO-tree adopts lock-free search for inner nodes, which reduces the locking overhead of concurrent operations. We evaluate WO-tree using the Yahoo! Cloud Serving Benchmark (YCSB) workloads. Compared with the traditional B+-tree, wB-tree, and Fast-Fair, WO-tree reduces the number of cache-line flushes caused by insertion operations by 84.7%, 22.2%, and 30.8%, respectively, and reduces the execution time by 84.3%, 27.3%, and 44.7%, respectively.
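To illustrate why unordered leaf writes reduce cache-line flushes, below is a minimal, DRAM-only Python sketch that compares a classic sorted leaf with an append-only (unordered) leaf and counts the cache lines each insertion dirties. The 64-byte cache line, 16-byte slot layout, and leaf classes are assumptions made for illustration; WO-tree's persistence primitives, partial logging, and lock-free inner-node search are not reproduced here.

```python
# Illustrative sketch only: count how many distinct cache lines an insertion
# touches in a sorted leaf versus an unordered (append-only) leaf.
CACHE_LINE = 64
SLOT_SIZE = 16          # assumed layout: 8-byte key + 8-byte value per slot

def lines_touched(slot_indices):
    """Distinct cache lines covering the given slot indices."""
    return {(i * SLOT_SIZE) // CACHE_LINE for i in slot_indices}

class SortedLeaf:
    """Classic B+-tree leaf: keeps entries sorted, so an insert shifts entries."""
    def __init__(self):
        self.slots = []                       # kept in key order
    def insert(self, key, value):
        pos = next((i for i, (k, _) in enumerate(self.slots) if k > key),
                   len(self.slots))
        self.slots.insert(pos, (key, value))
        # every slot from pos to the end is rewritten by the shift
        return len(lines_touched(range(pos, len(self.slots))))

class UnorderedLeaf:
    """Unordered-write leaf: appends to the next free slot; key order is
    restored lazily, e.g. when the leaf is split or scanned."""
    def __init__(self):
        self.slots = []                       # insertion order, not key order
    def insert(self, key, value):
        self.slots.append((key, value))
        # only the slot just written (plus, in practice, a small header/bitmap)
        return len(lines_touched([len(self.slots) - 1]))
    def search(self, key):
        return next((v for k, v in self.slots if k == key), None)

# Usage: insert descending keys (worst case for the sorted leaf) and compare
# the total number of cache lines dirtied by the inserts.
sorted_leaf, unordered_leaf = SortedLeaf(), UnorderedLeaf()
s_lines = sum(sorted_leaf.insert(k, k) for k in range(63, -1, -1))
u_lines = sum(unordered_leaf.insert(k, k) for k in range(63, -1, -1))
print("sorted leaf:", s_lines, "cache lines; unordered leaf:", u_lines)
```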