Funding: Supported in part by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2019R1A2C1006159 and NRF-2021R1A6A1A03039493), and by the 2021 Yeungnam University Research Grant.
Abstract: Networks are fundamental to our modern world, appearing throughout science and society. Access to massive amounts of network data presents a unique opportunity to the research community, but as networks grow in size their complexity increases, and our ability to analyze them with the current state of the art is at severe risk of failing to keep pace. This paper therefore initiates a discussion on graph signal processing for large-scale data analysis. We first provide a comprehensive overview of the core ideas of graph signal processing (GSP) and their connection to conventional digital signal processing (DSP). We then summarize recent developments in basic GSP tools, including graph signals, the graph Fourier transform (GFT), graph frequencies and spectra, graph filtering, and graph learning. Graph filtering is a basic operation that isolates the contribution of individual graph frequencies and therefore enables the removal of noise. We then consider the graph filter as a model that helps extend GSP methods to large datasets. To demonstrate its suitability and effectiveness, we create a noisy graph signal and pass it through the filter. Across several rounds of simulation, the filtered signal appears smoother and closer to the original noise-free distance-based signal. Through this example application, we demonstrate that graph filtering is effective for big data analytics.
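To make the denoising step concrete, the following minimal sketch builds a path graph, defines the graph Fourier transform from the Laplacian eigendecomposition, and applies a first-order low-pass spectral filter h(λ) = 1/(1 + τλ) to a noisy smooth signal. The graph, signal, and filter choice are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Combinatorial Laplacian L = D - A of a simple path graph on n nodes;
# a smooth "distance-based" signal plus noise serves as the test input.
n = 100
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier transform: eigenvectors of L define the graph frequencies.
eigvals, U = np.linalg.eigh(L)

rng = np.random.default_rng(0)
x_clean = np.linspace(0.0, 1.0, n)            # smooth: low graph frequencies
x_noisy = x_clean + 0.1 * rng.standard_normal(n)

# Low-pass spectral response h(lambda) = 1 / (1 + tau * lambda):
# attenuates the high graph frequencies where noise energy concentrates.
tau = 5.0
h = 1.0 / (1.0 + tau * eigvals)
x_hat = U @ (h * (U.T @ x_noisy))             # filter in the spectral domain

print("noisy MSE:   ", np.mean((x_noisy - x_clean) ** 2))
print("filtered MSE:", np.mean((x_hat - x_clean) ** 2))
```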
Funding: Sponsored by the National Natural Science Foundation of China (51708004), the YuYou Talent Training Program of North University of Technology (215051360020XN160/009), the General Program of the Beijing Natural Science Foundation (8202017), and the 2018 Beijing Municipal University Academic Human Resources Development: Youth Talent Support Program (PXM2018-014212-000043).
Abstract: The meteorological big data of the Beijing area are typical multi-dimensional big data with spatiotemporal characteristics, and they have important value for research on the urban human settlement environment. With the help of computer programming and software processing, big data crawling, integration, extraction, and multi-dimensional information fusion can be carried out quickly and effectively, yielding the data set needed for research and enabling its visualization. Big data analysis of the wind environment, thermal environment, and total atmospheric suspended particulate pollution in the Beijing area shows that the average wind speed has decreased significantly over the past 40 years, while the surface temperature has increased significantly; the urban heat island effect is pronounced, and atmospheric suspended particulate pollution is relatively common. The spatial distributions of the three climatic and environmental variables are unbalanced yet show significant regularity and correlation. Improving urban ventilation corridors and urban ventilation capacity is a feasible way to mitigate the urban heat island effect and reduce urban climate problems such as atmospheric particulate pollution.
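As a hedged illustration of the fusion-and-trend workflow described above, the sketch below fabricates per-station yearly observations (the station names, columns, and values are hypothetical, not the study's data), fuses the three variables into one table, and fits a least-squares trend per variable to expose long-term change.

```python
import numpy as np
import pandas as pd

# Hypothetical station records of wind speed, surface temperature and
# total suspended particulates (TSP); purely illustrative values.
rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
frames = []
for station in ["Chaoyang", "Haidian", "Fengtai"]:
    frames.append(pd.DataFrame({
        "station": station,
        "year": years,
        "wind": 2.8 - 0.01 * (years - 1980) + rng.normal(0, 0.1, years.size),
        "temp": 11.5 + 0.04 * (years - 1980) + rng.normal(0, 0.3, years.size),
        "tsp": rng.uniform(200, 400, years.size),
    }))
obs = pd.concat(frames, ignore_index=True)

# Multi-dimensional fusion + aggregation: city-wide yearly means, then a
# least-squares slope per variable as the long-term trend.
yearly = obs.groupby("year")[["wind", "temp", "tsp"]].mean()
for col in ["wind", "temp"]:
    slope = np.polyfit(yearly.index, yearly[col], 1)[0]
    print(f"{col}: {slope * 10:+.2f} per decade")
```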
Abstract: Nowadays, healthcare applications require large volumes of medical data to support physicians, academicians, pathologists, and other healthcare professionals. Advancements in the domains of Wireless Sensor Networks (WSN) and Wireless Multimedia Sensor Networks (WMSN) have been tremendous. A WMSN is an advanced form of the conventional WSN that incorporates multimedia devices. Compared with a traditional WSN, the quantity of data transmitted in a WMSN is significantly higher due to the presence of multimedia content, so clustering techniques are deployed to keep energy utilization low. The current research work introduces a new Density Based Clustering (DBC) technique to achieve energy efficiency in WMSN. The DBC technique is employed for data collection in healthcare environments and depends primarily on three input parameters, namely remaining energy level, distance, and node centrality. In addition, two static data collector points called Super Cluster Heads (SCH) are placed; they collect data from the normal CHs and forward it directly to the Base Station (BS). The SCH supports multi-hop data transmission, which helps balance the available energy effectively. A detailed simulation analysis was conducted to showcase the performance of the DBC technique, and the results were examined under diverse aspects. The simulation outcomes show that the proposed DBC technique extends the network lifetime to a maximum of 16,500 rounds, significantly higher than existing methods.
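The abstract names the three cluster-head-selection inputs but not the exact scoring rule, so the sketch below uses an assumed weighted score (the weights w_e, w_d, w_c and field layout are illustrative stand-ins) to elect cluster heads from nodes with high residual energy, short distance to the base station, and high centrality.

```python
import numpy as np

# Hypothetical sensor field: positions, residual energies, base station.
rng = np.random.default_rng(1)
n = 50
pos = rng.uniform(0.0, 100.0, size=(n, 2))   # node coordinates (m)
energy = rng.uniform(0.2, 1.0, size=n)       # residual energy (normalized)
bs = np.array([50.0, 120.0])                 # base station location

dist_bs = np.linalg.norm(pos - bs, axis=1)

# Node centrality: mean distance to all other nodes (lower = more central).
pairwise = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
centrality = pairwise.sum(axis=1) / (n - 1)

# Assumed rule: favor high residual energy, short distance to the BS,
# and high centrality (i.e. penalize large mean distance).
w_e, w_d, w_c = 0.5, 0.3, 0.2
score = (w_e * energy
         - w_d * dist_bs / dist_bs.max()
         - w_c * centrality / centrality.max())

k = 5                                        # number of cluster heads
cluster_heads = np.argsort(score)[-k:]
print("elected CH node ids:", sorted(cluster_heads.tolist()))
```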
Abstract: Since the beginning of the 21st century, it has been hard for traditional storage and algorithms to provide high-quality service because communication big data grows rapidly. Cloud computing technology, with its relatively low hardware cost, was created in response. However, to guarantee quality of service under rapidly growing data volumes, the energy consumption cost of cloud computing has begun to exceed the hardware cost. To address these problems, this study briefly introduced the virtual machine and its energy consumption model in the mobile cloud environment, introduced the basic principle of a virtual machine migration strategy based on the artificial bee colony algorithm, and then simulated the performance of this strategy for processing communication big data in a mobile cloud computing environment using the CloudSim 3.0 toolkit, comparing it against two other algorithms, resource management (RM) and a genetic algorithm (GA). The results showed that the power consumption of the migration strategy based on the artificial bee colony algorithm was lower than that of the other two strategies, and there were fewer failed virtual machines under the same number of requests, indicating higher service quality.
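A minimal sketch of the idea follows, assuming a toy linear host power model and a simplified artificial bee colony loop (employed-bee and scout phases only, no onlooker phase); the parameters and energy model are illustrative, not those of the cited study or of CloudSim.

```python
import random

# Toy VM-to-host placement: hosts draw idle power plus a load-proportional
# term; all constants below are illustrative assumptions.
random.seed(42)
N_VMS, N_HOSTS = 20, 6
vm_load = [random.uniform(0.05, 0.3) for _ in range(N_VMS)]
P_IDLE, P_MAX = 100.0, 250.0                 # watts per active host

def power(assign):
    """Total power: idle cost for each non-empty host + linear load cost."""
    util = [0.0] * N_HOSTS
    for vm, host in enumerate(assign):
        util[host] += vm_load[vm]
    return sum(P_IDLE + (P_MAX - P_IDLE) * min(u, 1.0) for u in util if u > 0.0)

def neighbor(assign):
    """Employed-bee move: migrate one random VM to another host."""
    cand = assign[:]
    cand[random.randrange(N_VMS)] = random.randrange(N_HOSTS)
    return cand

# Food sources = candidate placements; bees greedily keep improvements,
# and stagnant sources are abandoned and re-seeded (scout phase).
FOODS, LIMIT, ITERS = 10, 15, 300
sources = [[random.randrange(N_HOSTS) for _ in range(N_VMS)] for _ in range(FOODS)]
trials = [0] * FOODS
for _ in range(ITERS):
    for i in range(FOODS):
        cand = neighbor(sources[i])
        if power(cand) < power(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if trials[i] > LIMIT:                # scout: restart stagnant source
            sources[i] = [random.randrange(N_HOSTS) for _ in range(N_VMS)]
            trials[i] = 0

best = min(sources, key=power)
print(f"best placement power: {power(best):.1f} W")
```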
Funding: Funded by the German Federal Ministry of Transport and Digital Infrastructure (BMVI).
Abstract: This article presents and analyses the modular architecture and capabilities of CODE-DE (Copernicus Data and Exploitation Platform – Deutschland, www.code-de.org), the integrated German operational environment for accessing and processing Copernicus data and products, as well as the methodology used to establish and operate the system. CODE-DE has been online since March 2017 with access to Sentinel-1 and Sentinel-2 data, to Sentinel-3 data shortly thereafter, and since March 2019 with access to Sentinel-5P data. As of March 2019, these products were available to 1,682 registered users; during this period 654,895 products were downloaded, and a global catalogue covering a data volume of 814 TByte was continuously updated, based on a rolling archive concept supported by a reload mechanism from a long-term archive. The element for big data processing has been operational since November 2017, allowing registered users to process and analyse data themselves, specifically assisted by methods for value-added product generation. Utilizing 195,467 core-hours and 696,406 memory-hours, 982,948 products for different applications had been generated fully automatically in the cloud environment and made available as of March 2019. Special features include improved visualization of the available Sentinel-2 products, which are presented in the catalogue client at full 10 m resolution.
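The rolling-archive concept mentioned above can be sketched as a bounded online store with least-recently-used eviction and transparent reload from the long-term archive; the LRU policy, capacities, and product IDs below are assumptions for illustration, since CODE-DE's actual mechanism is not specified in this form.

```python
from collections import OrderedDict

# Toy rolling archive: a bounded online store evicts the least-recently-
# accessed products and reloads them on demand from long-term storage.
class RollingArchive:
    def __init__(self, capacity_items, long_term_store):
        self.capacity = capacity_items
        self.online = OrderedDict()           # product_id -> payload
        self.long_term = long_term_store      # never evicted

    def get(self, product_id):
        if product_id in self.online:
            self.online.move_to_end(product_id)   # mark recently used
            return self.online[product_id]
        payload = self.long_term[product_id]      # reload mechanism
        self.put(product_id, payload)
        return payload

    def put(self, product_id, payload):
        self.online[product_id] = payload
        self.online.move_to_end(product_id)
        while len(self.online) > self.capacity:   # roll the oldest out
            self.online.popitem(last=False)

# Hypothetical product IDs, chosen only to resemble Sentinel naming.
archive = RollingArchive(2, {"S2A_0001": b"...", "S1B_0002": b"...",
                             "S3A_0003": b"..."})
for pid in ["S2A_0001", "S1B_0002", "S3A_0003", "S2A_0001"]:
    archive.get(pid)
print("online now:", list(archive.online))
```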