Funding: Supported by the Foundation of the High Technology Project of Jiangsu (BG2004034) and the Foundation of the Graduate Creative Program of Jiangsu (xm04-36).
Abstract: This paper focuses on parallel aggregation over data streams on a shared-nothing architecture. A novel granularity-aware parallel aggregation model is proposed. It employs parallel sampling and linear regression to characterize the data volume in the query window, thereby determining the partition granularity of tuples, and uses an equal-depth histogram to implement the partitioning. This method avoids data skew and reduces communication cost. Experimental results on both synthetic and real data show that the proposed method is efficient, practical, and well suited to processing time-varying data streams.
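To illustrate the partitioning step, here is a minimal Python sketch of equal-depth histogram partitioning: the cut points are quantiles of a key sample, so each partition receives roughly the same number of tuples even when keys are skewed. The function names and the lognormal sample are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def equal_depth_boundaries(keys, num_partitions):
    """Equal-depth (equi-height) histogram: cut points are sample quantiles,
    so every partition holds roughly the same number of tuples."""
    cuts = np.linspace(0.0, 1.0, num_partitions + 1)[1:-1]
    return np.quantile(keys, cuts)

def assign_partition(key, boundaries):
    """Route a tuple to its partition with a binary search over the cut points."""
    return int(np.searchsorted(boundaries, key, side="right"))

# Usage: even a heavily skewed key sample yields near-balanced partitions.
rng = np.random.default_rng(0)
sample = rng.lognormal(mean=0.0, sigma=2.0, size=10_000)
bounds = equal_depth_boundaries(sample, num_partitions=4)
counts = np.bincount([assign_partition(k, bounds) for k in sample], minlength=4)
print(bounds, counts)  # counts come out near 2500 each despite the skew
```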
Abstract: Pneumonia is an acute lung infection that has caused many fatalities globally. Radiologists often employ chest X-rays to identify pneumonia, since they are presently the most effective imaging method for this purpose. Computer-aided diagnosis of pneumonia using deep learning techniques is widely used due to its effectiveness and performance. In the proposed method, the Synthetic Minority Oversampling Technique (SMOTE) is used to eliminate the class imbalance in the X-ray dataset. To compensate for the paucity of accessible data, pre-trained transfer learning is used and an ensemble Convolutional Neural Network (CNN) model is developed. The ensemble covers all possible combinations of the MobileNetV2, Visual Geometry Group (VGG16), and DenseNet169 models. MobileNetV2 and DenseNet169 performed well as single classifiers, each with an accuracy of 94%, while the ensemble model (MobileNetV2 + DenseNet169) achieved an accuracy of 96.9%. Using the synchronous data-parallel model in distributed TensorFlow, the training process was accelerated by 98.6% and outperformed other conventional approaches.
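A minimal Keras sketch of the kind of two-backbone ensemble the abstract describes: frozen ImageNet-pretrained backbones for transfer learning, with averaged softmax outputs. The input size, head layers, and dropout rate are assumptions, not the paper's specification; SMOTE preprocessing (e.g., imbalanced-learn's implementation) and the distributed-training setup are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNetV2, DenseNet169

NUM_CLASSES = 2                      # assumed: pneumonia vs. normal
inputs = layers.Input(shape=(224, 224, 3))

def branch(backbone_cls, head_name):
    """One ensemble member: a frozen ImageNet-pretrained backbone
    (transfer learning) topped with a small classification head."""
    backbone = backbone_cls(include_top=False, weights="imagenet",
                            input_tensor=inputs, pooling="avg")
    backbone.trainable = False       # reuse the pretrained features as-is
    x = layers.Dropout(0.3)(backbone.output)          # assumed head
    return layers.Dense(NUM_CLASSES, activation="softmax", name=head_name)(x)

# The pairing the abstract reports as best: MobileNetV2 + DenseNet169,
# combined by averaging the per-model softmax probabilities.
outputs = layers.Average()([branch(MobileNetV2, "mobilenet_head"),
                            branch(DenseNet169, "densenet_head")])
model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```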
Funding: Funded by the Key Area Research and Development Program of Guangdong Province (2019B010137005) and the National Natural Science Foundation of China (61906209).
Abstract: MapReduce is a widely used programming model for large-scale data processing. However, it still suffers from the skew problem, in which load is imbalanced among tasks. This problem can cause a small number of tasks to consume much more time than the others, thereby prolonging the total job completion time. Existing solutions commonly predict the loads of tasks and then rebalance the load among them; however, such solutions often incur high performance overhead due to the load prediction and rebalancing. Moreover, they target the partitioning skew of reduce tasks but cannot mitigate the computational skew of map tasks. Accordingly, in this paper we present DynamicAdjust, a run-time dynamic resource adjustment technique for mitigating skew. Rather than rebalancing the load among tasks, DynamicAdjust monitors the run-time execution of tasks and dynamically increases the resources of those tasks that require more computation. In so doing, DynamicAdjust not only eliminates the overhead incurred by load prediction and rebalancing but also mitigates both the partitioning skew and the computational skew. Experiments were conducted on a real 21-node cluster using real-world datasets. The results show that DynamicAdjust can mitigate the negative impact of skew and shorten the job completion time by up to 40.85%.
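The actual technique lives inside Hadoop/YARN; the following self-contained Python simulation only sketches the core idea under assumed parameters (lag threshold, core cap): monitor per-task progress at run time and grant extra resources to lagging, skewed tasks instead of predicting and rebalancing load up front.

```python
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    work: float              # total work units (a skewed task has more)
    done: float = 0.0
    vcores: int = 1          # currently granted CPU cores

def simulate_dynamic_adjust(tasks, lag_ratio=0.5, max_vcores=4):
    """Schematic simulation of the DynamicAdjust idea: watch run-time
    progress each round and boost resources for tasks that fall far
    behind, rather than predicting and rebalancing load up front."""
    rounds = 0
    while any(t.done < t.work for t in tasks):
        # Each round a task completes work proportional to its vcores.
        for t in tasks:
            t.done = min(t.work, t.done + t.vcores)
        progresses = [t.done / t.work for t in tasks]
        avg = sum(progresses) / len(progresses)
        for t, p in zip(tasks, progresses):
            # A task far behind the mean progress is treated as skewed.
            if p < lag_ratio * avg and t.vcores < max_vcores:
                t.vcores += 1   # in the real system: a container resize
        rounds += 1
    return rounds

# A computationally skewed task (5x the work) finishes far sooner once
# boosted: ~27 rounds here versus 100 rounds with a fixed allocation.
tasks = [Task(f"map-{i}", work=20) for i in range(4)] + [Task("map-4", work=100)]
print(simulate_dynamic_adjust(tasks), [t.vcores for t in tasks])
```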