Funding: Supported and granted by the Ministry of Science and Technology, Taiwan (MOST110-2622-E-390-001 and MOST109-2622-E-390-002-CC3).
Abstract: Big data analytics platforms in business intelligence often do not provide effective data retrieval methods or job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling so as to speed up big data analytics and overcome these inefficiency and low-throughput problems. First, a stacked sparse autoencoder is integrated with Elasticsearch indexing to provide fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, a deep neural network predicts the approximate execution time of each job, enabling prioritized shortest-job-first scheduling, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method based on a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. The proposed job scheduling algorithm, in turn, beats both the first-in-first-out and the memory-sensitive heterogeneous early-finish-time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
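The shortest-job-first step can be made concrete with a small sketch: predicted execution times stand in for the output of the paper's deep neural network and are used as priorities in a queue, so the job with the shortest predicted runtime runs first. The predict_runtime stub, its feature names, and the Job class below are illustrative assumptions, not the authors' implementation.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    predicted_runtime: float               # seconds, as estimated by the model
    job_id: str = field(compare=False)     # ordering uses runtime only

def predict_runtime(features):
    """Stand-in for the paper's deep-neural-network runtime predictor (assumed)."""
    # Hypothetical linear proxy: input size and number of stages dominate runtime.
    return 0.5 * features["input_gb"] + 2.0 * features["stages"]

def schedule_sjf(pending):
    """Run jobs shortest-predicted-job-first and return the average waiting time."""
    heap = [Job(predict_runtime(f), jid) for jid, f in pending.items()]
    heapq.heapify(heap)
    elapsed, waits = 0.0, []
    while heap:
        job = heapq.heappop(heap)
        waits.append(elapsed)              # this job waited until now
        elapsed += job.predicted_runtime   # then runs to completion
    return sum(waits) / len(waits) if waits else 0.0

jobs = {"q1": {"input_gb": 8, "stages": 3}, "q2": {"input_gb": 1, "stages": 1}}
print(schedule_sjf(jobs))
```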
Funding: Funded by the Korea Meteorological Administration Research and Development Program under Grant RACS 2010-2016, and supported by the Brain Korea 21 project of the Ministry of Education and Human Resources Development of the Korean government.
Abstract: Satellite data obtained over synoptic data-sparse regions such as the ocean contribute toward improving the quality of the initial state of limited-area models. Background error covariances are crucial to the proper distribution of satellite-observed information in variational data assimilation. In the NMC (National Meteorological Center) method, background error covariances are underestimated over data-sparse regions such as the ocean because of the small differences between forecasts at different lead times. Thus, it is necessary to reconstruct and tune the background error covariances to maximize the usefulness of satellite data for the initial state of limited-area models, especially over the ocean, where conventional data are lacking. In this study, we estimated background error covariances from ensemble forecasts of optimal perturbations generated with bred vectors, so as to provide adequate error statistics for data-sparse regions. The background error covariances estimated by the ensemble method reduced the overestimation of error amplitude produced by the NMC method. By employing an appropriate horizontal length scale to exclude spurious correlations, the ensemble method produced better results than the NMC method in the assimilation of retrieved satellite data. Because the ensemble method distributes observed information over a limited local area, it should be even more useful in the analysis of high-resolution satellite data. Accordingly, the performance of forecast models can be improved over areas where the satellite data are assimilated.
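For readers comparing the two covariance estimates mentioned above, the standard textbook forms are given below in the usual notation (lagged forecasts for the NMC method, an N-member ensemble for the bred-vector method); the specific 48 h/24 h lag and the tuning factor α are common conventions rather than values taken from this paper.

```latex
% NMC method: B proxied by lagged-forecast differences valid at the same time
\mathbf{B}_{\mathrm{NMC}} \approx \alpha\,
\overline{\left(\mathbf{x}^{f}_{48\,\mathrm{h}}-\mathbf{x}^{f}_{24\,\mathrm{h}}\right)
\left(\mathbf{x}^{f}_{48\,\mathrm{h}}-\mathbf{x}^{f}_{24\,\mathrm{h}}\right)^{T}}

% Ensemble (bred-vector) method: B from the spread of N perturbed forecasts
\mathbf{B}_{\mathrm{ens}} \approx \frac{1}{N-1}\sum_{i=1}^{N}
\left(\mathbf{x}^{f}_{i}-\overline{\mathbf{x}^{f}}\right)
\left(\mathbf{x}^{f}_{i}-\overline{\mathbf{x}^{f}}\right)^{T}
```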
Abstract: This paper presents a simple complete K-level tree (CKT) architecture for text database organization and rapid data filtering. A database is constructed as a CKT forest, and each CKT contains data of the same length. The maximum depth and the minimum depth of an individual CKT are equal and identical to the data length. Insertion and deletion operations are defined, and a storage method and filtering algorithm are designed to give a good trade-off between efficiency and complexity. Applications to computer-aided teaching of Chinese and to protein selection show that a reduction of about 30% in storage consumption and of over 60% in computation can easily be obtained.
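The abstract gives no pseudocode, so the following is only a schematic reading of the CKT idea, assuming a character-per-level layout: each datum of length L occupies a root-to-leaf path of depth L, so membership filtering walks at most L levels and prunes missing subtrees immediately. The dictionary-of-children representation and the method names are assumptions.

```python
class CKT:
    """Complete K-level tree sketch: all stored strings share the same length,
    and each character indexes one level of the tree."""

    def __init__(self, length):
        self.length = length          # fixed depth = data length
        self.root = {}                # child map: character -> subtree dict

    def insert(self, item):
        assert len(item) == self.length
        node = self.root
        for ch in item:
            node = node.setdefault(ch, {})

    def contains(self, item):
        node = self.root
        for ch in item:
            if ch not in node:
                return False          # prune: the whole subtree is absent
            node = node[ch]
        return True

# A forest keyed by length mirrors "each CKT contains data of the same length".
forest = {}
for word in ["cat", "cow", "tree"]:
    forest.setdefault(len(word), CKT(len(word))).insert(word)
print(forest[3].contains("cow"), forest[3].contains("dog"))
```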
Funding: Supported by the Korea Health Technology Research and Development Project through the Korea Health Industry Development Institute under Grant No. HI22C1651; the National Research Foundation of Korea (NRF) under Grant No. 2021R1F1A1059554; and the Culture, Sports and Tourism Research and Development Program through the Korea Creative Content Agency, funded by the Ministry of Culture, Sports and Tourism of Korea, under Grant No. RS-2023-00227648.
Abstract: Direct volume rendering (DVR) is a technique that visually emphasizes structures of interest (SOIs) within a volume while simultaneously depicting adjacent regional information, e.g., the spatial location of a structure with respect to its neighbors. In DVR, the transfer function (TF) plays a key role by enabling accurate interactive identification of SOIs and ensuring their appropriate visibility. TF generation typically involves non-intuitive trial-and-error optimization of rendering parameters, which is time-consuming and inefficient. Attempts at mitigating this manual process have led to approaches that make use of a knowledge database consisting of TFs pre-designed by domain experts. In these approaches, a user navigates the knowledge database to find the pre-designed TF best suited to visualizing the SOIs in their input volume. Although such approaches potentially reduce the workload of generating TFs, they still require manual navigation of the knowledge database, as well as likely fine-tuning of the selected TF to suit the input. In this work, we propose a TF design approach, CBR-TF, in which a new content-based retrieval (CBR) method automatically navigates the knowledge database. Instead of pre-designed TFs, our knowledge database contains volumes with SOI labels. Given an input volume, our CBR-TF approach retrieves relevant volumes (with SOI labels) from the knowledge database; the retrieved labels are then used to generate and optimize TFs for the input. This approach largely removes manual TF navigation and fine-tuning. For our CBR-TF approach, we introduce a novel volumetric image feature that combines a local primitive intensity profile along the SOIs with regional spatial semantics available from the images co-planar to the profile. For the regional spatial semantics, we adopt a convolutional neural network to obtain high-level image feature representations. For the intensity profile, we extend the dynamic time warping technique to address subtle alignment differences between similar profiles (SOIs). Finally, we propose a two-stage CBR scheme that uses these two feature representations in a complementary manner, thereby improving SOI retrieval performance. We demonstrate the capabilities of our CBR-TF approach through comparison with a conventional visualization approach based on intensity profile matching, and with potential use cases in medical volume visualization.
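As a reference point for the intensity-profile matching step, here is the plain dynamic time warping distance between two 1D profiles; the paper extends DTW to handle subtle alignment differences, so this is only the textbook baseline, and the sample profiles are fabricated.

```python
import numpy as np

def dtw_distance(p, q):
    """Textbook DTW distance between two 1D intensity profiles p and q."""
    n, m = len(p), len(q)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(p[i - 1] - q[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy profiles sampled along a ray through a structure of interest.
profile_query = np.array([0.1, 0.4, 0.9, 0.8, 0.2])
profile_db    = np.array([0.1, 0.5, 0.9, 0.7, 0.3, 0.2])
print(dtw_distance(profile_query, profile_db))
```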
Abstract: Electronic machines in the guise of digital computers have transformed our world―social, family, commerce, and politics―although not yet health. Each iteration spawns expectations of yet more astonishing wonders. We wait for the next unbelievable invention to fall into our lap, possibly without limit. How realistic is this? What are the limits, and have we now reached them? A recent survey in The Economist suggests that we have. It describes cycles of misery, where inflated expectations are inevitably followed, a few years later, by disillusion. Yet another Artificial Intelligence (AI) winter is coming―“After years of hype, many people feel AI has failed to deliver”. The current paper not only explains why this was bound to happen, but offers a clear and simple pathway as to how to avoid it happening again. Costly investments in time and effort can only show solid, reliable benefits when full weight is given to the fundamental binary nature of the digital machine, and to the equally unique human faculty of ‘intent’. ‘Intent’ is not easy to define; it suffers acutely from verbal fuzziness―a point made extensively in two earlier papers: “The scientific evidence that ‘intent’ is vital for healthcare” and “Why Quakerism is more scientific than Einstein”. This paper argues that by putting ‘intent’ centre stage, first healthcare, and then democracy, can be rescued. Suppose every medical consultation were supported by realistic data usage? What if, using only your existing smartphone, your entire medical history were scanned and instantly compared, within microseconds, with up-to-the-minute information on contraindications and efficacy, from around the globe, for the actual drug you were about to receive, before you actually received it? This is real-time retrieval of clinical data―it increases the security of both doctor and patient in a way that is otherwise unachievable. My 1980 Ph.D. thesis extolled the merits of digitising the medical record―and, just as digitisation has changed our use of audio and video beyond recognition, so a data-rich medical consultation is unprecedented―prepare to be surprised. This paper has four sections: (1) where binaries help; (2) where binaries ensure extinction; (3) computers in healthcare and civilisation; and (4) data-rich doctoring. Health is vital for economic success, as the current pandemic demonstrates, inescapably. Politics, too, is routinely corrupted―unless we rectify both, failures in AI will be the least of our troubles.
Abstract: In this paper, the authors propose a new algorithm to hide data inside an image using a steganography technique. The proposed algorithm uses binary codes and pixels inside an image. The file is zipped before it is converted to binary codes, to maximize the amount of data that can be stored inside the image. By applying the proposed algorithm, a system called the Steganography Imaging System (SIS) is developed. The system is then tested to assess the viability of the proposed algorithm. Various sizes of data are stored inside the images, and the peak signal-to-noise ratio (PSNR) is captured for each of the tested images. Based on these PSNR values, the stego images retain high PSNR values. Hence, this new steganography algorithm is efficient at hiding data inside an image.
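To make the embedding and the evaluation metric concrete, this sketch shows a generic 1-bit least-significant-bit embedding and the PSNR computation between cover and stego images. It is not the authors' SIS algorithm (which, among other things, zips the payload first); it is only the common baseline that the PSNR comparison presupposes.

```python
import numpy as np

def embed_lsb(cover, bits):
    """Write payload bits into the least significant bit of the first pixels."""
    flat = cover.flatten()
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b      # clear LSB, then set it to the payload bit
    return flat.reshape(cover.shape)

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio between cover and stego images, in dB."""
    mse = np.mean((cover.astype(np.float64) - stego.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

cover = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
payload = [int(b) for b in "0110100001101001"]   # "hi" as ASCII bits
stego = embed_lsb(cover, payload)
print(round(psnr(cover, stego), 2))              # high PSNR = little visible distortion
```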
Abstract: Taking the Ben Cao Gang Mu (《本草纲目》 Compendium of Materia Medica) (Jinling edition, 金陵本) as the research object and “Jing” as the search term, this article summarizes the number of medicinals containing “Jing” in the Ben Cao Gang Mu, analyzes the connotation and application of “Jing” in traditional Chinese medicine, and finds that the application of “Jing” in medicine does not deviate from the original meaning of “Jing,” but endows it with the concepts of medicine and pharmacy and expands its scope of application. This study is helpful for understanding and using “Jing” more reasonably.
Funding: SIMULATOR (Sistema Integrato ModULAre per la gesTione e prevenziOne dei Rischi, Integrated Modular System for Risk Prevention and Management), financed by the Lombardy regional government, Italy.
Abstract: Planning in advance to prepare for and respond to a natural hazard-induced disaster-related emergency is a key action that allows decision makers to mitigate unexpected impacts and potential damage. To further this aim, a collaborative, modular, and information and communications technology-based Spatial Data Infrastructure (SDI) called SIRENE—Sistema Informativo per la Preparazione e la Risposta alle Emergenze (Information System for Emergency Preparedness and Response)—is designed and implemented to access and share, over the Internet, relevant multisource and distributed geospatial data to support decision makers in reducing disaster risks. SIRENE flexibly searches and retrieves strategic information from local and/or remote repositories to cope with different emergency phases. The system collects, queries, and analyzes geographic information provided voluntarily by observers directly in the field (volunteered geographic information (VGI) reports) to identify potentially critical environmental conditions. SIRENE can visualize and cross-validate institutional and research-based data against VGI reports, as well as provide disaster managers with a decision support system able to suggest the mode and timing of intervention, before and in the aftermath of different types of emergencies, on the basis of the available information and in agreement with the laws in force at the national and regional levels. Testing installations of SIRENE have been deployed in 18 hilly or mountain municipalities (12 located in the Italian Central Alps of northern Italy, and six in the Umbria region of central Italy), which have been affected by natural hazard-induced disasters over the past years (landslides, debris flows, floods, and wildfire) and experienced significant social and economic losses.
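As a rough illustration of what a VGI report record and a simple screening rule feeding such a decision support system might look like, here is a hypothetical sketch; the field names, severity scale, and threshold rule are assumptions, not SIRENE's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class VGIReport:
    reporter: str
    hazard: str            # e.g. "landslide", "debris flow", "flood", "wildfire"
    lat: float
    lon: float
    timestamp: datetime
    severity: int          # assumed scale: 1 (minor) .. 5 (critical)

def flag_critical(reports, hazard, min_severity=3, min_reports=2):
    """Flag a hazard as potentially critical when enough severe VGI reports agree."""
    hits = [r for r in reports if r.hazard == hazard and r.severity >= min_severity]
    return len(hits) >= min_reports

reports = [
    VGIReport("obs1", "debris flow", 46.1, 9.9, datetime(2020, 8, 1, 14, 0), 4),
    VGIReport("obs2", "debris flow", 46.1, 9.9, datetime(2020, 8, 1, 14, 20), 3),
]
print(flag_critical(reports, "debris flow"))   # True -> cross-check against sensor data
```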
Funding: Supported by the National Key R&D Program for Intergovernmental International Innovation Cooperation (No. 2018YFE0100100).
Abstract: With the rapid development of satellite technology, the amount of remote sensing data and the demand for remote sensing data analysis over large areas are increasing greatly. Hence, it is necessary to quickly filter an optimal dataset out of massive archives to support various remote sensing applications. However, with the improvements in temporal and spatial resolution, remote sensing data have become fragmented, which brings challenges to data retrieval. At present, most data service platforms rely on query engines to retrieve data, and the retrieval results still contain a large amount of highly overlapping data that must be selected manually for further processing. This process is very labour-intensive and time-consuming. This paper proposes an improved coverage-oriented retrieval algorithm that aims to retrieve an optimal image combination with the minimum number of images closest to the imaging time of interest while maximizing coverage of the target area. The retrieval efficiency of this algorithm was analysed under different implementations: ArcPy, PyQGIS, and GeoPandas. The experimental results confirm the effectiveness of the algorithm and suggest that the GeoPandas-based approach is the most advantageous when processing large-area data.
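The paper's pseudocode is not reproduced here, so the following greedy sketch only illustrates the stated objective: repeatedly pick the candidate scene that adds the most uncovered area, with ties broken by proximity to the target acquisition time. Shapely (the geometry engine underneath GeoPandas) is used for brevity; the footprints and dates are fabricated.

```python
from datetime import date
from shapely.geometry import box

def greedy_cover(target_area, scenes, target_date):
    """Pick a small scene set covering target_area, preferring scenes near target_date.

    scenes: list of dicts with 'id', 'footprint' (shapely geometry), and 'date'.
    """
    remaining = target_area
    chosen = []
    # Sorting by date proximity first means max() breaks coverage ties by recency to target.
    candidates = sorted(scenes, key=lambda s: abs((s["date"] - target_date).days))
    while remaining.area > 1e-9 and candidates:
        best = max(candidates, key=lambda s: s["footprint"].intersection(remaining).area)
        gain = best["footprint"].intersection(remaining).area
        if gain <= 0:
            break                      # no candidate can improve coverage any further
        chosen.append(best["id"])
        remaining = remaining.difference(best["footprint"])
        candidates.remove(best)
    return chosen

aoi = box(0, 0, 10, 10)
scenes = [
    {"id": "A", "footprint": box(0, 0, 6, 10), "date": date(2021, 6, 3)},
    {"id": "B", "footprint": box(5, 0, 10, 10), "date": date(2021, 6, 1)},
    {"id": "C", "footprint": box(0, 0, 10, 5), "date": date(2020, 1, 1)},
]
print(greedy_cover(aoi, scenes, date(2021, 6, 2)))   # ['A', 'B']
```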
Funding: Supported by the National Key Project B, "Observation and Theoretical Study of the Physical Process of the Tibetan Plateau Land-Air Interaction and Its Impact on the Global Climate and Severe Weather in China."
Abstract: In this paper, TOVS satellite data are applied through a variational method over the data-sparse plateau area. Diagnoses are carried out to find a way to reduce the large errors in the model initial field. It is proposed that TOVS retrieval data can be used to improve the initial field of a numerical prediction model over the Tibetan Plateau. Through the variational method, TOVS data are processed and the reliability of the initial information over the plateau is improved. Diagnostic results further confirm that the application of TOVS retrieval data improves our capability to describe the dynamic system features on the plateau, as well as the objectivity of related initial information such as the distribution of the water vapor channel and stratification stability.
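For context, a variational analysis of this kind minimizes the standard cost function below, written in the usual notation (background state x_b, observations y, background and observation error covariances B and R, observation operator H); the form is generic rather than specific to the TOVS configuration used in the paper.

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{T}\mathbf{R}^{-1}\bigl(H(\mathbf{x})-\mathbf{y}\bigr)
```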
Funding: Supported by the National Natural Science Foundation of China (61401234, 61271234), the Priority Academic Program Development Project of Jiangsu Higher Education Institutions, and the Jiangsu Government Scholarship for Overseas Studies.
Abstract: In challenging environments, sensory data must be stored inside the network in case of sink failures, and overflowing data items must be redistributed from storage-depleted source nodes to sensor nodes with available storage space and residual energy. We design a distributed, energy-efficient data storage algorithm named distributed data preservation with priority (D2P2). This algorithm takes both data redistribution costs and data retrieval costs into account and combines these two problems into a single problem. D2P2 effectively realizes data redistribution by using cooperative communication among sensor nodes. To solve the redistribution contention problem, we introduce the concept of data priority, which avoids contention consultations between source nodes and reduces energy consumption. Finally, we verify the performance of the proposed algorithm both in theory and by simulation. We demonstrate that D2P2's performance is close to that of the optimal centralized algorithm in terms of energy consumption and shows superiority in terms of data preservation time.
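The abstract names the ingredients (data priority, storage headroom, residual energy) without giving the protocol, so the sketch below is a loose, centralized caricature of the distributed scheme: higher-priority items are placed first, each on the candidate node with the most residual energy that still has free storage. All field names and the per-item energy cost are assumptions.

```python
def redistribute(items, nodes):
    """Greedy stand-in for priority-based data redistribution.

    items: list of (item_id, priority) from depleted source nodes (higher = first).
    nodes: dict node_id -> {"free_slots": int, "energy": float}.
    Returns a mapping item_id -> node_id (or None if nowhere to store it).
    """
    placement = {}
    for item_id, priority in sorted(items, key=lambda x: -x[1]):
        # Prefer the node with the most residual energy that still has space left.
        usable = [(nid, st) for nid, st in nodes.items() if st["free_slots"] > 0]
        if not usable:
            placement[item_id] = None
            continue
        nid, st = max(usable, key=lambda kv: kv[1]["energy"])
        st["free_slots"] -= 1
        st["energy"] -= 0.1          # assumed fixed per-item reception/storage cost
        placement[item_id] = nid
    return placement

nodes = {"n1": {"free_slots": 1, "energy": 5.0}, "n2": {"free_slots": 2, "energy": 3.0}}
print(redistribute([("d1", 3), ("d2", 1), ("d3", 2)], nodes))
```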