Journal Articles
2,734 articles found
1. Hadoop-based secure storage solution for big data in cloud computing environment (Cited by 1)
Authors: Shaopeng Guan, Conghui Zhang, Yilin Wang, Wenqing Liu. Digital Communications and Networks, SCIE CSCD, 2024(1): 227-236 (10 pages)
In order to address the problems of the single encryption algorithm, such as low encryption efficiency and unreliable metadata for static data storage of big data platforms in the cloud computing environment, we propose a Hadoop-based big data secure storage scheme. Firstly, in order to disperse the NameNode service from a single server to multiple servers, we combine HDFS federation and HDFS high-availability mechanisms, and use the Zookeeper distributed coordination mechanism to coordinate each node to achieve dual-channel storage. Then, we improve the ECC encryption algorithm for the encryption of ordinary data, and adopt a homomorphic encryption algorithm to encrypt data that needs to be calculated. To accelerate the encryption, we adopt the dual-thread encryption mode. Finally, the HDFS control module is designed to combine the encryption algorithm with the storage model. Experimental results show that the proposed solution solves the problem of a single point of failure of metadata, performs well in terms of metadata reliability, and can realize the fault tolerance of the server. The improved encryption algorithm integrates the dual-channel storage mode, and the encryption storage efficiency improves by 27.6% on average.
Keywords: big data security; data encryption; Hadoop; parallel encrypted storage; Zookeeper
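A minimal sketch of the dual-thread encryption mode mentioned above: the buffer is split into two halves and each half is encrypted on its own thread. The encrypt_block callable and the XOR stand-in cipher are illustrative placeholders, not the paper's improved ECC or homomorphic routines.

```python
from concurrent.futures import ThreadPoolExecutor

def dual_thread_encrypt(data: bytes, encrypt_block) -> bytes:
    """Split a buffer into two halves and encrypt them on two threads.

    encrypt_block is a hypothetical stand-in for the improved ECC (or
    homomorphic) routine; any bytes -> bytes cipher function fits here.
    """
    mid = len(data) // 2
    halves = [data[:mid], data[mid:]]
    with ThreadPoolExecutor(max_workers=2) as pool:
        return b"".join(pool.map(encrypt_block, halves))

# Trivial stand-in cipher (XOR with a constant), only to show the call pattern:
ciphertext = dual_thread_encrypt(b"some HDFS block", lambda b: bytes(x ^ 0x5A for x in b))
```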
2. Fortifying Healthcare Data Security in the Cloud: A Comprehensive Examination of the EPM-KEA Encryption Protocol
Authors: Umi Salma Basha, Shashi Kant Gupta, Wedad Alawad, SeongKi Kim, Salil Bharany. Computers, Materials & Continua, SCIE EI, 2024(5): 3397-3416 (20 pages)
A new era of data access and management has begun with the use of cloud computing in the healthcare industry. Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a major concern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentiality and integrity of healthcare data in the cloud. The computational overhead of encryption technologies could lead to delays in data access and processing rates. To address these challenges, we introduced the Enhanced Parallel Multi-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the secure storage of critical patient records in the cloud. The data was gathered from two categories: Authorization for Hospital Admission (AIH) and Authorization for High Complexity Operations. We use Z-score normalization for preprocessing. The primary goal of implementing encryption techniques is to secure and store massive amounts of data on the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become more widely available if security issues can be successfully fixed. As a result of our analysis using specific parameters including Execution time (42%), Encryption time (45%), Decryption time (40%), Security level (97%), and Energy consumption (53%), the system demonstrated favorable performance when compared to the traditional method. This suggests that by addressing these security concerns, there is the potential for broader accessibility to cloud storage solutions for safeguarding healthcare data.
Keywords: cloud computing; healthcare data security; enhanced parallel multi-key encryption algorithm (EPM-KEA)
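The Z-score normalization used for preprocessing is standard; a small sketch, assuming a NumPy feature matrix with one sample per row and purely illustrative values, could look like this:

```python
import numpy as np

def z_score_normalize(X: np.ndarray) -> np.ndarray:
    """Standardize each feature column to zero mean and unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0   # avoid division by zero for constant columns
    return (X - mu) / sigma

records = np.array([[72.0, 1.2], [95.0, 3.4], [60.0, 0.8]])  # illustrative values only
normalized = z_score_normalize(records)
```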
3. Improving Parallel Corpus Quality for Chinese-Vietnamese Statistical Machine Translation
Authors: Huu-anh Tran, Yuhang Guo, Ping Jian, Shumin Shi, Heyan Huang. Journal of Beijing Institute of Technology, EI CAS, 2018(1): 127-136 (10 pages)
The performance of a machine translation system heavily depends on the quantity and quality of the bilingual language resource. However, getting a parallel corpus which has a large scale and is of high quality is a very difficult task, especially for low-resource languages such as Chinese-Vietnamese. Fortunately, multilingual user-generated content (UGC), such as bilingual movie subtitles, provides access to automatic construction of the parallel corpus. Although the amount of UGC parallel corpora can be considerable, the original corpus is not suitable for statistical machine translation (SMT) systems. The corpus may contain translation errors, sentence mismatching, free translations, etc. To improve the quality of the bilingual corpus for SMT systems, three filtering methods are proposed: sentence length difference, the semantics of sentence pairs, and machine learning. Experiments are conducted on the Chinese-to-Vietnamese translation corpus. Experimental results demonstrate that all three methods effectively improve the corpus quality, and the machine translation performance (BLEU score) can be improved by 1.32.
Keywords: parallel corpus filtering; low-resource languages; bilingual movie subtitles; machine translation; Chinese-Vietnamese translation
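Of the three filtering methods, the sentence-length-difference filter is the simplest to illustrate. A sketch under the assumption that pairs whose token-length ratio exceeds a threshold are discarded; the threshold value and whitespace tokenization are illustrative choices, not the paper's.

```python
def length_ratio_filter(pairs, max_ratio=2.0):
    """Keep (source, target) sentence pairs whose token-length ratio is plausible."""
    kept = []
    for src, tgt in pairs:
        # Real Chinese text would need word segmentation before counting tokens.
        n_src, n_tgt = len(src.split()), len(tgt.split())
        if n_src and n_tgt and max(n_src, n_tgt) / min(n_src, n_tgt) <= max_ratio:
            kept.append((src, tgt))
    return kept

subtitle_pairs = [("你好 世界", "Xin chào thế giới"),
                  ("好", "Một câu dịch tự do rất dài không tương ứng")]
print(length_ratio_filter(subtitle_pairs))   # keeps the first pair, drops the second
```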
4. Parallel Computing of a Variational Data Assimilation Model for GPS/MET Observation Using the Ray-Tracing Method (Cited by 5)
Authors: 张昕, 刘月巍, 王斌, 季仲贞. Advances in Atmospheric Sciences, SCIE CAS CSCD, 2004(2): 220-226 (7 pages)
The Spectral Statistical Interpolation (SSI) analysis system of NCEP is used to assimilate meteorological data from Global Positioning Satellite System (GPS/MET) refraction angles with the variational technique. Verified by radiosonde, including GPS/MET observations in the analysis makes an overall improvement to the analysis variables of temperature, winds, and water vapor. However, the variational model with the ray-tracing method is quite expensive for numerical weather prediction and climate research. For example, about 4,000 GPS/MET refraction angles need to be assimilated to produce an ideal global analysis, and just one iteration of minimization takes more than 24 hours of CPU time on NCEP's Cray C90 computer. Although efforts have been taken to reduce the computational cost, it is still prohibitive for operational data assimilation. In this paper, a parallel version of the three-dimensional variational data assimilation model of GPS/MET occultation measurement, suitable for massively parallel processor architectures, is developed. The divide-and-conquer strategy is used to achieve parallelism and is implemented by message passing. The authors present the principles for the code's design and examine the performance on state-of-the-art parallel computers in China. The results show that this parallel model scales favorably as the number of processors is increased. With the Memory-IO technique implemented by the author, the wall clock time per iteration used for assimilating 1,420 refraction angles is reduced from 45 s to 12 s using 1,420 processors. This suggests that the new parallelized code has the potential to be useful in numerical weather prediction (NWP) and climate studies.
Keywords: parallel computing; variational data assimilation; GPS/MET
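The divide-and-conquer parallelism by message passing can be sketched with mpi4py: the root process scatters the refraction-angle observations, every process evaluates its share of the cost function, and the partial sums are reduced. The random residuals and squared-sum cost below are stand-ins, not the variational model's actual operators.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    residuals = np.random.rand(1420)            # stand-in for refraction-angle residuals
    chunks = np.array_split(residuals, size)    # divide the observations among processes
else:
    chunks = None

local = comm.scatter(chunks, root=0)            # each process receives one chunk
local_cost = float(np.sum(local ** 2))          # stand-in for the local cost-function term
total_cost = comm.reduce(local_cost, op=MPI.SUM, root=0)

if rank == 0:
    print("global cost:", total_cost)
```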
5. An Improved Hilbert Curve for Parallel Spatial Data Partitioning (Cited by 7)
Authors: MENG Lingkui, HUANG Changqing, ZHAO Chunyu, LIN Zhiyong. Geo-Spatial Information Science, 2007(4): 282-286 (5 pages)
A novel Hilbert curve is introduced for parallel spatial data partitioning, with consideration of the huge-amount property of spatial information and the variable-length characteristic of vector data items. Based on the improved Hilbert curve, the algorithm can be designed to achieve almost-uniform spatial data partitioning among multiple disks in parallel spatial databases. Thus, the phenomenon of data imbalance can be significantly avoided, and search and query efficiency can be enhanced.
Keywords: parallel spatial database; spatial data partitioning; data imbalance; Hilbert curve
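The paper's improved curve is not reproduced here, but the classic Hilbert-curve partitioning it builds on is easy to sketch: map each object's grid cell to its position along the curve, sort by that position, and cut the sorted sequence into one chunk per disk. The index routine below is the standard iterative mapping for an n×n grid with n a power of two.

```python
def hilbert_index(n: int, x: int, y: int) -> int:
    """Index of cell (x, y) along a Hilbert curve filling an n x n grid (n a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/flip the quadrant so the curve stays continuous.
        if ry == 0:
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d

def partition_by_hilbert(objects, n, num_disks):
    """Sort (x, y, payload) objects by Hilbert index and deal them into num_disks chunks."""
    ordered = sorted(objects, key=lambda o: hilbert_index(n, o[0], o[1]))
    step = max(1, -(-len(ordered) // num_disks))   # ceiling division
    return [ordered[i:i + step] for i in range(0, len(ordered), step)]

cells = [(x, y, f"obj{x}{y}") for x in range(4) for y in range(4)]
print([len(chunk) for chunk in partition_by_hilbert(cells, n=4, num_disks=3)])
```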
6. 3D density inversion of gravity gradiometry data with a multilevel hybrid parallel algorithm (Cited by 4)
Authors: Hou Zhen-Long, Huang Da-Nian, Wang En-De, Cheng Hao. Applied Geophysics, SCIE CSCD, 2019(2): 141-152, 252 (13 pages)
The density inversion of gravity gradiometry data has attracted considerable attention; however, for large datasets, the multiplicity and low depth resolution as well as the efficiency are constrained by time and computer memory requirements. To solve these problems, we improve the reweighting focusing inversion and probability tomography inversion with joint multiple tensors and prior information constraints, and assess the inversion results, computing efficiency, and dataset size. A Message Passing Interface (MPI)-Open Multi-Processing (OpenMP)-Compute Unified Device Architecture (CUDA) multilevel hybrid parallel inversion, named Hybrinv for short, is proposed. Using model data and real data from the Vinton Dome, we confirm that Hybrinv can be used to compute the density distribution. For a data size of 100×100×20, the hybrid parallel algorithm is fast; based on the run time and scalability, we infer that it can be used to process large-scale data.
Keywords: gravity gradiometry data; density inversion; GPU; MPI; hybrid parallel inversion
7. Fast modeling of gravity gradients from topographic surface data using GPU parallel algorithm (Cited by 1)
Authors: Xuli Tan, Qingbin Wang, Jinkai Feng, Yan Huang, Ziyan Huang. Geodesy and Geodynamics, CSCD, 2021(4): 288-297 (10 pages)
The gravity gradient is a second derivative of the gravity potential, containing more high-frequency information of the Earth's gravity field. Gravity gradient observation data require deducting the prior and intrinsic parts to obtain more variational information. A model generated from a topographic surface database is more appropriate to represent gradiometric effects derived from near-surface mass, as other kinds of data can hardly reach the spatial resolution requirement. The rectangle prism method, namely an analytic integration of Newtonian potential integrals, is a reliable and commonly used approach to modeling the gravity gradient, whereas its computing efficiency is extremely low. A modified rectangle prism method and a graphical processing unit (GPU) parallel algorithm were proposed to speed up the modeling process. The modified method avoided massive redundant computations by deforming formulas according to the symmetries of the prisms' integral regions, and the proposed algorithm parallelized this method's computing process. The parallel algorithm was compared with a conventional serial algorithm using 100 elevation data in two topographic areas (rough and moderate terrain). Modeling differences between the two algorithms were less than 0.1 E, which is attributed to precision differences between single-precision and double-precision floating-point numbers. The parallel algorithm showed computational efficiency approximately 200 times higher than the serial algorithm in the experiments, demonstrating its effective speed-up of the modeling process. Further analysis indicates that both the modified method and computational parallelism through the GPU contributed to the proposed algorithm's performance in the experiments.
Keywords: gravity gradient; topographic surface data; rectangle prism method; parallel computation; graphical processing unit (GPU)
8. A Granularity-Aware Parallel Aggregation Method for Data Streams
Authors: WANG Yong-li, XU Hong-bing, XU Li-zhen, QIAN Jiang-bo, LIU Xue-jun. Wuhan University Journal of Natural Sciences, EI CAS, 2006(1): 133-137 (5 pages)
This paper focuses on the parallel aggregation processing of data streams based on the shared-nothing architecture. A novel granularity-aware parallel aggregating model is proposed. It employs parallel sampling and linear regression to describe the characteristics of the data quantity in the query window in order to determine the partition granularity of tuples, and utilizes an equal-depth histogram to implement partitioning. This method can avoid data skew and reduce communication cost. The experimental results on both synthetic data and actual data prove that the proposed method is efficient, practical, and suitable for time-varying data stream processing.
Keywords: data streams; parallel processing; linear regression; aggregation; data skew
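The partitioning step, an equal-depth histogram over the keys in the query window, can be sketched as quantile-based bucketing; the single numeric key and the use of NumPy quantiles are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def equal_depth_partition(keys: np.ndarray, n_parts: int) -> np.ndarray:
    """Assign each tuple's key to one of n_parts buckets holding roughly equal counts."""
    cuts = np.quantile(keys, np.linspace(0, 1, n_parts + 1)[1:-1])   # n_parts - 1 boundaries
    return np.searchsorted(cuts, keys, side="right")                 # bucket id per tuple

window = np.random.exponential(size=10_000)       # a skewed query window of keys
ids = equal_depth_partition(window, n_parts=4)
print(np.bincount(ids))                           # counts stay near-equal despite the skew
```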
9. Financial Data Modeling by Using Asynchronous Parallel Evolutionary Algorithms
Authors: Wang Chun, Li Qiao-yun (School of Business, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China; Network and Software Technology Center of America, Sony Corporation, San Jose, CA, USA). Wuhan University Journal of Natural Sciences, CAS, 2003(S1): 239-242 (4 pages)
In this paper, the high-level knowledge of financial data modeled by ordinary differential equations (ODEs) is discovered in dynamic data by using an asynchronous parallel evolutionary modeling algorithm (APHEMA). A numerical example of Nasdaq index analysis is used to demonstrate the potential of APHEMA. The results show that the dynamic models automatically discovered in dynamic data by computer can be used to predict the financial trends.
Keywords: financial data mining; asynchronous parallel algorithm; knowledge discovery; evolutionary modeling
10. PORLES: A Parallel Object Relational Database System
Authors: Sun Yong-qiang, Xu Shu-ting, Zhu Feng-hua, Lai Shu-hua (Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai 200030, China). Wuhan University Journal of Natural Sciences, CAS, 2001(Z1): 100-109 (10 pages)
We developed a parallel object-relational DBMS named PORLES. It uses the BSP model as its parallel computing model and monoid calculus as the basis of its data model. In this paper, we introduce its data model, parallel query optimization, transaction processing system, and parallel access method in detail.
Keywords: parallel object-relational database; BSP model; data model; query optimization
11. Regularized focusing inversion for large-scale gravity data based on GPU parallel computing
Authors: WANG Haoran, DING Yidan, LI Feida, LI Jing. Global Geology, 2019(3): 179-187 (9 pages)
Processing large-scale 3-D gravity data is an important topic in the geophysics field. Many existing inversion methods lack the capability of processing massive data and practical application capacity. This study proposes the application of GPU parallel processing technology to the focusing inversion method, aiming at improving the inversion accuracy while speeding up the calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of a geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, which is optimized by reducing data transfer, access restrictions, and instruction restrictions as well as by latency hiding, greatly reduces memory usage, speeds up the calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of some theoretical inversion methods restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of a severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
Keywords: large-scale gravity data; GPU parallel computing; CUDA; equivalent geometric trellis; focusing inversion
12. Data Mining Algorithm Implementation and Its Application in Parallel Cloud System based on C++
Authors: Jiangtao Geng, Xiaobo Xiong. International Journal of Technology Management, 2016(12): 1-3 (3 pages)
This paper analyzes data mining algorithm implementation and its application in a parallel cloud system based on C++. As the number of cloud computing platform developers increases and cloud platforms support ever more Internet users, the system's log data grows in proportion. The message-passing model is currently the one most widely applied in cluster environments: the concurrently executing parts exchange information, and their steps are coordinated and their execution controlled through message transmission. For data mining applications in C++, the following features should first be taken into account: parallel communication and serial communication are the two basic modes of general communication. On this basis, this paper proposes a novel perspective on data mining algorithm implementation and its application in a parallel cloud system based on C++. Later research will focus on the code-based implementation.
Keywords: data mining; parallel cloud system; C++; implementation and application
13. Storage and Parallel Loading System Based on Mode Network for Multimode Medical Image Data
Authors: Xiao Zhai, Haiwei Pan, Xiaoqin Xie, Zhiqiang Zhang, Qilong Han. 《国际计算机前沿大会会议论文集》, 2016(2): 61-62 (2 pages)
Since multimode data is composed of many modes and their complex relationships, it cannot be retrieved or mined effectively by utilizing traditional analysis and processing techniques for single-mode data. To address the challenges, we design and implement a graph-based storage and parallel loading system aimed at multimode medical image data. The system is a framework designed to flexibly store and rapidly load these multimode data. Specifically, the system utilizes the Mode Network to model the modes and their relationships in multimode medical image data, and the graph database to store the data with a parallel loading technique.
Keywords: multimode medical image data; mode network; graph database; parallel loading
14. Attenuate Class Imbalance Problem for Pneumonia Diagnosis Using Ensemble Parallel Stacked Pre-Trained Models
Authors: Aswathy Ravikumar, Harini Sriraman. Computers, Materials & Continua, SCIE EI, 2023(4): 891-909 (19 pages)
Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) approach is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used, and an ensemble Convolutional Neural Network (CNN) model is developed. The ensemble model consists of all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well in the single-classifier model, with an accuracy of 94%, while the ensemble model (MobileNetV2+DenseNet169) achieved an accuracy of 96.9%. Using the data-synchronous parallel model in distributed TensorFlow, the training process accelerated performance by 98.6% and outperformed other conventional approaches.
Keywords: pneumonia prediction; distributed deep learning; data-parallel model; ensemble deep learning; class imbalance; skewed data
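The SMOTE step that removes the class imbalance can be sketched with the imbalanced-learn library; the random feature matrix below stands in for the flattened X-ray features and is not the paper's data.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64))                    # stand-in for flattened X-ray features
y = np.array([1] * 270 + [0] * 30)                # imbalanced labels: 1 = pneumonia, 0 = normal

X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print(np.bincount(y), "->", np.bincount(y_res))   # minority class synthetically oversampled
```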
15. Fast and scalable routing protocols for data center networks
Authors: Mihailo Vesovic, Aleksandra Smiljanic, Dusan Kostic. Digital Communications and Networks, SCIE CSCD, 2023(6): 1340-1350 (11 pages)
Data center networks may comprise tens or hundreds of thousands of nodes and, naturally, suffer from frequent software and hardware failures as well as link congestion. Packets are routed along the shortest paths with sufficient resources to facilitate efficient network utilization and minimize delays. In such dynamic networks, links frequently fail or get congested, making the recalculation of the shortest paths a computationally intensive problem. Various routing protocols were proposed to overcome this problem by focusing on network utilization rather than speed. Surprisingly, the design of fast shortest-path algorithms for data centers was largely neglected, though they are universal components of routing protocols. Moreover, parallelization techniques were mostly deployed for random network topologies, and not for the regular topologies that are often found in data centers. The aim of this paper is to improve scalability and reduce the time required for the shortest-path calculation in data center networks by parallelization on general-purpose hardware. We propose a novel algorithm that parallelizes edge relaxations as a faster and more scalable solution for popular data center topologies.
Keywords: routing protocols; data center networks; parallel algorithms; distributed algorithms; algorithm design and analysis; shortest-path problem; scalability
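The kernel being parallelized is the classic edge-relaxation loop of Bellman-Ford, sketched below; in a parallel variant, each round's pass over the edge list is split across workers, each relaxing a slice against the previous round's distances. The paper's own algorithm is not reproduced here.

```python
import math

def bellman_ford(num_nodes, edges, source):
    """Single-source shortest paths by repeated edge relaxation."""
    dist = [math.inf] * num_nodes
    dist[source] = 0.0
    for _ in range(num_nodes - 1):
        changed = False
        for u, v, w in edges:              # the loop a parallel variant splits across workers
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:                    # early exit once distances stabilize
            break
    return dist

links = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 1.0)]
print(bellman_ford(4, links, source=0))    # [0.0, 1.0, 2.0, 3.0]
```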
16. One-End Data Method for Fault Position Estimate of Two-Parallel Transmission Lines
Authors: 张庆超, 刘飞, 武永峰, 宋文南. Journal of Beijing Institute of Technology, EI CAS, 2003(1): 105-108 (4 pages)
An accurate numerical algorithm for a three-line fault involving different phases from each of two-parallel lines is presented. It is based on one-terminal voltage and current data. The loop and nodal equations comparing the faulted phase to a non-faulted phase of the two-parallel lines are introduced in the fault location estimation model, in which the fault impedance at the remote end is not involved. The effects of load flow and fault resistance on the accuracy of fault location are effectively eliminated; therefore an accurate fault location algorithm is derived. The algorithm is demonstrated by digital computer simulations, and the results show that errors in locating the fault are less than 1%.
Keywords: power system; two-parallel lines; fault location estimation; one-terminal data
17. Fast and robust training of a probabilistic latent semantic analysis model by the parallel learning and data segmentation
Authors: Masaharu Kato, Tetsuo Kosaka, Akinori Ito, Shozo Makino. 《通讯和计算机(中英文版)》, 2009(5): 28-35 (8 pages)
Keywords: LAM; MIP; PLSA; computer communication
18. A Parallel SVM Algorithm Based on Relative Entropy and Cosine Similarity
Authors: 毛伊敏, 郭斌斌, 易见兵, 陈志刚. 《计算机集成制造系统》 (Computer Integrated Manufacturing Systems), EI CSCD, PKU Core, 2024(9): 3183-3198 (16 pages)
To address the problems of parallel support vector machine (SVM) algorithms in big data environments, such as large subset distribution deviation, low parallel efficiency, and inaccurate filtering of non-support vectors, a parallel SVM algorithm based on relative entropy and cosine similarity (RC-PSVM) is proposed. The algorithm first introduces a data partitioning strategy based on relative entropy (DPRE), which balances the relative entropy between the current subset and the original dataset and assigns samples to suitable subsets, reducing the subset distribution deviation. It then introduces a redundant-layer detection strategy based on cosine similarity (CS-RLDS), which computes the cosine similarity between the normal vectors of local SVMs in adjacent layers, compares the similarity against a preset threshold, and identifies and halts redundant layers, improving parallel efficiency. Finally, a non-support-vector filtering strategy (NSVF) is proposed, which combines the distances from samples to the decision boundaries of multiple local support vector models and computes support-vector similarity to identify non-support vectors, solving the problem of inaccurate filtering of non-support vectors. Experiments show that RC-PSVM achieves better classification performance and runs more efficiently on big data.
Keywords: big data; MapReduce framework; parallel support vector machine; relative entropy; cosine similarity
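The two quantities the algorithm is built on, relative entropy (KL divergence) between a subset's distribution and the whole dataset's, and cosine similarity between the normal vectors of adjacent local SVMs, can be sketched as follows; the discrete-histogram representation and the smoothing constant are illustrative assumptions.

```python
import numpy as np

def relative_entropy(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions (histograms)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def cosine_similarity(w1, w2):
    """Cosine of the angle between two SVM normal (weight) vectors."""
    w1, w2 = np.asarray(w1, dtype=float), np.asarray(w2, dtype=float)
    return float(w1 @ w2 / (np.linalg.norm(w1) * np.linalg.norm(w2)))

print(relative_entropy([0.5, 0.5], [0.9, 0.1]))    # large: the subset deviates from the dataset
print(cosine_similarity([1.0, 2.0], [1.1, 1.9]))   # near 1: adjacent layers are redundant
```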
19. Task Scheduling of Data-Parallel Applications on HSA Platform
Authors: Zhenshan Bao, Chong Chen, Wenbo Zhang. 《国际计算机前沿大会会议论文集》, 2018(1): 35 (1 page)
20. A Scalable Parallel Sorting by Regular Sampling Algorithm for Big Data
Authors: 王莹, 陈志广, 卢宇彤. 《大数据》 (Big Data), 2024(4): 89-105 (17 pages)
Sorting is a fundamental algorithm in computer science and lies at the core of a large number of applications. In the big data era, with the explosive growth of data volume, parallel sorting algorithms have received wide attention. Existing parallel sorting algorithms generally suffer from excessive communication overhead and load imbalance, which prevent them from scaling to large systems. To address these problems, a large-scale scalable parallel sorting by regular sampling (ScaPSRS) algorithm is proposed. It abandons the practice in traditional parallel sorting by regular sampling (PSRS) of letting a single process perform the sampling, and instead lets all processes participate in regular sampling to select p-1 splitters that divide the whole dataset into p disjoint subsets, which are then sorted in parallel, thereby avoiding the sampling bottleneck of a single process. In addition, ScaPSRS adopts a new iterative update strategy to choose the p-1 splitters, ensuring that the p subsets are as equal in size as possible and thus that the p processes are load-balanced when sorting their subsets locally. Extensive experiments on the Tianhe-2 supercomputer show that ScaPSRS scales successfully to 32,000 cores and outperforms the PSRS algorithm and the partitioning algorithm proposed by Hofmann et al. by factors of 3.7 and 11.7, respectively.
Keywords: parallel sorting; regular sampling; load balancing; big data
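For contrast with ScaPSRS's fully distributed, iterative splitter selection, the classic PSRS sampling step it improves on can be sketched as follows: each of the p sorted blocks contributes p regular samples, a single coordinator sorts the p² samples, and p-1 of them become the splitters. The exact sample and pivot positions vary between formulations; those below are one common choice.

```python
def psrs_splitters(local_blocks, p):
    """Classic PSRS: pick p-1 splitters from p regular samples per sorted block."""
    samples = []
    for block in local_blocks:                      # one (already sorted) block per process
        n = len(block)
        samples.extend(block[(i * n) // p] for i in range(p))
    samples.sort()                                  # the single-coordinator step ScaPSRS removes
    return [samples[i * p + p // 2] for i in range(1, p)]

blocks = [sorted(range(start, 1000, 7)) for start in range(4)]   # four pre-sorted local blocks
print(psrs_splitters(blocks, p=4))                  # three splitters cutting the data into four parts
```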