Abstract: This study explores the application of parallel algorithms to enhance large-scale sorting, focusing on the QuickSort method. Implemented in both sequential and parallel forms, the paper provides a detailed comparison of their performance. It investigates the efficacy of both techniques through array generation and pivot selection to manage datasets of varying sizes. The study documents the performance metrics in detail, recording 16,499.2 milliseconds for the serial implementation and 16,339 milliseconds for the parallel implementation when sorting an array, with timings taken using the C++ chrono library. These results suggest that while the performance gains of the parallel approach over its serial counterpart are not immediately pronounced for smaller datasets, the benefits are expected to be more substantial as the dataset size increases.
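The paper's own implementation is not reproduced here; the following is a minimal sketch of how such a serial-versus-parallel QuickSort comparison might be timed with the C++ chrono library. The std::async-based task parallelism, the last-element (Lomuto) pivot, the 1,000,000-element array, and the recursion cutoffs are illustrative assumptions, not details taken from the paper.

#include <algorithm>
#include <chrono>
#include <future>
#include <iostream>
#include <random>
#include <vector>

// Sequential quicksort with a simple last-element pivot (illustrative choice).
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    quicksort(a, lo, i - 1);
    quicksort(a, i + 1, hi);
}

// Task-parallel variant: recurse on the two partitions concurrently until the
// subarray is small (or the task tree is deep), then fall back to the
// sequential version. The thresholds are arbitrary illustrative values.
void parallel_quicksort(std::vector<int>& a, int lo, int hi, int depth = 0) {
    if (hi - lo < 10000 || depth > 4) { quicksort(a, lo, hi); return; }
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; ++j)
        if (a[j] < pivot) std::swap(a[i++], a[j]);
    std::swap(a[i], a[hi]);
    auto left = std::async(std::launch::async,
                           [&] { parallel_quicksort(a, lo, i - 1, depth + 1); });
    parallel_quicksort(a, i + 1, hi, depth + 1);
    left.wait();
}

// Time one run of a sorting routine in milliseconds using <chrono>.
// The input is copied so both runs sort the same data.
template <typename F>
double time_ms(F&& sort_fn, std::vector<int> data) {
    auto t0 = std::chrono::steady_clock::now();
    sort_fn(data);
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    std::vector<int> data(1'000'000);  // illustrative array size
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(0, 1'000'000'000);
    for (auto& x : data) x = dist(rng);

    double serial = time_ms([](std::vector<int>& v) {
        quicksort(v, 0, static_cast<int>(v.size()) - 1); }, data);
    double parallel = time_ms([](std::vector<int>& v) {
        parallel_quicksort(v, 0, static_cast<int>(v.size()) - 1); }, data);

    std::cout << "serial:   " << serial << " ms\n"
              << "parallel: " << parallel << " ms\n";
}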
Abstract: We apply the recent important result of serial sorting of n real numbers in time to the design of a parallel algorithm for sorting real numbers in time and operations. This is the first NC algorithm known to take operations for sorting real numbers.
Abstract: A unified vector sorting algorithm (VSA) is proposed, which sorts N arbitrary numbers with c·log N bits on an SIMD multi-processor system (SMMP) with processors and a composite interconnected network in time, where c is an arbitrary positive constant. When e is an arbitrarily small positive constant and u = log2 N, it is an O(log N) algorithm; when u = 1, c = 1 and e = 0.5 (a constant), it is an optimal algorithm (pT = O(N log N)).
Funding: Project supported by the National "863" High-Tech Program of China and the National Natural Science Foundation of China.
Abstract: Most traditional merging and merging-based sorting algorithms are based on 2-sorters or 2-comparators. A new merging technique is developed, namely sloping-and-shaking multiway merging, and a corresponding multiway sorting method based only on k-sorters is proposed. The sloping-and-shaking merging algorithm merges k sorted lists into one, where k can be any prime number. The merging process is not a series of recursive applications of 2-way merging: it sorts the keys on the m × k plane in vertical and horizontal directions, then along sloping lines with various slope rates step by step. Only k-sorters are needed in the merging or sorting process. The time needed to merge k sorted lists, with m keys each, is (k + [log2(m/k)])tk, and the time for sorting N keys is (1 + (p - 1)k + 1/2(p - 1)(p - 2)[log2 k])tk, where p = logk N and tk is the time to sort k keys. The proposed algorithms can be implemented either by hardware sorting networks or on general-purpose parallel and vector machines. The traditional odd-even merging can be viewed as a special case of the proposed multiway merging (when k is 2). While theoretically the proposed algorithms provide a new understanding of parallel merging and sorting processes, they may be used in practice to construct sorting circuits faster than 2-sorter based sorting methods.
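As a quick consistency check of the merge-time bound (read here under the assumption that the brackets denote a ceiling and tk is the delay of one k-sorter stage): setting k = 2 gives (2 + [log2(m/2)])t2 = (1 + [log2 m])t2, i.e. 1 + [log2 m] stages of 2-sorters to merge two sorted lists of m keys each, which agrees with the stage count of classical odd-even (Batcher) merging and with the statement above that odd-even merging is the k = 2 special case.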
Abstract: In this paper, using the metric of the iso-efficiency function [1], we analyze the scalability of the PSRS (Parallel Sorting by Regular Sampling) algorithm [2] on two popular architectures (mesh and hypercube). The iso-efficiency function of PSRS on a 2-dimensional mesh with p processors reaches the lower bound for sorting algorithms on this architecture; in this sense, we say the scalability of PSRS is optimal on the 2-dimensional mesh. The iso-efficiency function of PSRS on the hypercube is equal to that of PSRS on the 2-dimensional mesh. After changing the data exchange scheme of PSRS, we can get a new iso-efficiency function, which is better than that of PSRS on the 2-dimensional mesh. So we say that the hypercube is more suitable for PSRS than the 2-dimensional mesh.
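For reference, the following is a minimal single-process sketch of the standard PSRS phases (local sort, regular sampling, pivot selection, partition and exchange, final per-partition sort); it illustrates the general algorithm only, not the paper's modified data exchange scheme or its iso-efficiency analysis. The block count p, the use of std::sort in place of a true p-way merge, and the sequential simulation of the "processors" are simplifying assumptions.

#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Single-process simulation of PSRS. Each "processor" is a contiguous block;
// on a real machine the per-block work would run concurrently.
std::vector<int> psrs_sort(std::vector<int> data, int p) {
    const size_t n = data.size();

    // Phase 1: split into p blocks and sort each block locally.
    std::vector<std::vector<int>> blocks(p);
    for (int i = 0; i < p; ++i) {
        size_t lo = n * i / p, hi = n * (i + 1) / p;
        blocks[i].assign(data.begin() + lo, data.begin() + hi);
        std::sort(blocks[i].begin(), blocks[i].end());
    }

    // Phase 2: take p regular samples from each sorted block.
    std::vector<int> samples;
    for (const auto& b : blocks)
        for (int j = 0; j < p; ++j)
            samples.push_back(b[b.size() * j / p]);

    // Phase 3: sort the p*p samples and pick p-1 evenly spaced pivots.
    std::sort(samples.begin(), samples.end());
    std::vector<int> pivots;
    for (int i = 1; i < p; ++i)
        pivots.push_back(samples[i * p]);

    // Phase 4: partition every block by the pivots; piece i of each block goes
    // to "processor" i, which sorts the pieces it receives.
    std::vector<int> result;
    for (int i = 0; i < p; ++i) {
        std::vector<int> piece;
        for (const auto& b : blocks) {
            auto lo = (i == 0) ? b.begin()
                               : std::upper_bound(b.begin(), b.end(), pivots[i - 1]);
            auto hi = (i == p - 1) ? b.end()
                                   : std::upper_bound(b.begin(), b.end(), pivots[i]);
            piece.insert(piece.end(), lo, hi);
        }
        std::sort(piece.begin(), piece.end());  // stands in for a p-way merge
        result.insert(result.end(), piece.begin(), piece.end());
    }
    return result;
}

int main() {
    std::mt19937 rng(1);
    std::vector<int> v(100000);
    for (auto& x : v) x = static_cast<int>(rng());
    auto sorted = psrs_sort(v, 8);
    std::cout << std::boolalpha
              << std::is_sorted(sorted.begin(), sorted.end()) << "\n";
}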