Abstract: The brokering approach can be successfully used to overcome the crucial problem of searching among the enormous amounts of data (raw and/or processed) produced and stored in different information systems. In this paper, the authors describe the Data Management System (DMS) developed by INGV (Istituto Nazionale di Geofisica e Vulcanologia) to support the GEOSS (Global Earth Observation System of Systems) brokering system adopted for the ARCA (Arctic Present Climate Change and Past Extreme Events) project. The DMS includes heterogeneous data that contribute to the ARCA objective (www.arcaproject.it), which focuses on multi-parametric and multi-disciplinary studies of the mechanism(s) behind the release of large volumes of cold, fresh water from the melting of ice caps. The DMS is accessible directly at www.arca.rm.ingv.it, or through the IADC (Italian Arctic Data Center) at http://arcticnode.dta.cnr.it/iadc/gi-portal/index.jsp, which interoperates with the GEOSS brokering system (http://www.geoportal.org), making the search for a specific data set and its URL easy and fast.
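As a rough illustration of what brokered discovery looks like from the client side, the sketch below queries a generic OpenSearch-style catalog endpoint and extracts each record's title and access URL. The endpoint, query parameters, and Atom response layout are assumptions for illustration only, not the actual IADC or GEOSS Portal API.

# Hypothetical discovery client; the endpoint and parameters are placeholders,
# not the real GEOSS/IADC interface.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

BROKER_OPENSEARCH = "https://example.org/opensearch"  # placeholder endpoint

def search_datasets(keyword, max_records=10):
    """Ask the broker for metadata records matching a free-text keyword."""
    query = urllib.parse.urlencode({"q": keyword, "count": max_records})
    with urllib.request.urlopen(f"{BROKER_OPENSEARCH}?{query}") as resp:
        tree = ET.parse(resp)
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    results = []
    for entry in tree.getroot().findall("atom:entry", ns):
        title = entry.findtext("atom:title", default="", namespaces=ns)
        link = entry.find("atom:link", ns)
        results.append((title, link.get("href") if link is not None else None))
    return results

# e.g. search_datasets("ARCA ice cap fresh water") would return (title, URL) pairs.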
Funding: Supported in part by the Key Project of the National Natural Science Foundation of China under Grant 61431001; in part by the Open Research Fund of National Mobile Communications Research Laboratory, Southeast University, under Grant 2017D02; in part by the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education, Guilin University of Electronic Technology; and in part by the Foundation of Beijing Engineering and Technology Center for Convergence Networks and Ubiquitous Services.
Abstract: The ever-increasing demand of ad hoc networks for adaptive topology and mobility-aware communication has led to a new paradigm of networking among Unmanned Aerial Vehicles (UAVs) known as Flying Ad-hoc Networks (FANETs). Due to their dynamic topology, FANETs can be deployed for disaster monitoring and surveillance applications. During these operations, UAVs need to transmit different disaster data, which consist of different types of data packets. Among them are packets that must be transmitted urgently because of the emergency situation in disaster management. To handle this situation, we propose a methodology for classifying disaster data by urgency level and, based on these urgency levels, assigning a priority index to data packets. An Urgency Aware Scheduling (UAS) approach is proposed to efficiently transmit high- and low-priority packets with minimum delays in the transmission queue. We consider different UAV scenarios for disaster management and, for N UAVs, propose a bio-inspired mechanism based on the flocking behavior of birds for cluster formation and maintenance. Furthermore, we propose a priority-based route selection methodology for data communication in the FANET cluster. Simulation results show that our proposed mechanism performs better on evaluation benchmarks such as average delay, queuing time, forward percentage, and fairness.
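To make the urgency-to-priority mapping concrete, here is a minimal sketch of a priority transmission queue: packets are tagged with an illustrative urgency level, and lower indices are dequeued first. The packet classes and FIFO tie-breaking rule are assumptions, not the exact UAS scheme from the paper.

import heapq
import itertools

# Illustrative urgency levels; the paper's actual classification may differ.
URGENCY = {"casualty_report": 0, "infrastructure_damage": 1, "telemetry": 2}

class TransmissionQueue:
    """Priority queue that releases urgent disaster packets before routine ones."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # FIFO tie-break within a priority level

    def enqueue(self, packet_type, payload):
        priority = URGENCY.get(packet_type, len(URGENCY))  # unknown types go last
        heapq.heappush(self._heap, (priority, next(self._seq), payload))

    def dequeue(self):
        _, _, payload = heapq.heappop(self._heap)
        return payload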
Abstract: Grid Computing is concerned with the sharing and coordinated use of diverse resources in distributed Virtual Organizations. This introduces various challenging security issues. Among these, trusting the resources to be shared and coordinated within the dynamic, multi-institutional virtual organization environment is a particularly challenging security issue. In this paper, an approach for trust assessment and trust degree calculation using subjective logic is suggested to allocate a Data Grid or Computational Grid user a reliable, trusted resource, maintaining the integrity of the data with fast response and accurate results. The suggested approach is explained using an example scenario and simulation results. It is observed that the resource utilization of a trusted resource increases in contrast to a resource that is not trusted.
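A minimal sketch of the trust-degree idea, assuming the standard subjective-logic opinion (belief, disbelief, uncertainty, base rate) and its probability expectation E = b + a*u; the paper's actual trust assessment may combine additional evidence sources.

from dataclasses import dataclass

@dataclass
class Opinion:
    """Subjective-logic opinion about a grid resource: belief + disbelief + uncertainty = 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5  # prior trust when nothing is known

    def expected_trust(self):
        # Standard probability expectation of an opinion: E = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def most_trusted(resources):
    """Pick the (name, Opinion) pair whose opinion has the highest expected trust."""
    return max(resources, key=lambda item: item[1].expected_trust())

# Example: most_trusted([("nodeA", Opinion(0.7, 0.1, 0.2)),
#                        ("nodeB", Opinion(0.5, 0.3, 0.2))]) returns the "nodeA" entry.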
Funding: A.J. acknowledges the support from DOE Grant #DESC0016804.
Abstract: The next generation of high-power lasers enables repetition of experiments at orders of magnitude higher frequency than what was possible using the prior generation. Facilities requiring human intervention between laser repetitions need to adapt in order to keep pace with the new laser technology. A distributed networked control system can enable laboratory-wide automation and feedback control loops. These higher-repetition-rate experiments will create enormous quantities of data. A consistent approach to managing data can increase data accessibility, reduce repetitive data-software development, and mitigate poorly organized metadata. An opportunity arises to share knowledge of improvements to control and data infrastructure currently being undertaken. We compare platforms and approaches to state-of-the-art control systems and data management at high-power laser facilities, and we illustrate these topics with case studies from our community.
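One way to picture the consistent data-management approach is a per-shot metadata record written alongside every raw file so that each laser shot stays searchable; the schema below is an illustrative assumption, not a format adopted by any particular facility.

from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ShotRecord:
    """Illustrative per-shot metadata record; field names are assumptions."""
    shot_id: int
    diagnostic: str            # e.g. "electron_spectrometer"
    data_path: str             # where the raw data file was written
    laser_params: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def write_sidecar(record: ShotRecord):
    """Store the metadata next to the raw data as a JSON sidecar file."""
    with open(record.data_path + ".json", "w") as fh:
        json.dump(asdict(record), fh, indent=2)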
Funding: Supported by the National Basic Research Program of China ("973" Program) (Grant No. 61399).
Abstract: Aiming at the storage and management problems of massive remote sensing data, this paper gives a comprehensive analysis of the characteristics and advantages of thirteen data storage centers or systems in China and abroad. They mainly include NASA EOS, World Wind, Google Earth, Google Maps, Bing Maps, Microsoft TerraServer, ESA, the Earth Simulator, GeoEye, Map World, the China Centre for Resources Satellite Data and Application, the National Satellite Meteorological Centre, and the National Satellite Ocean Application Service. By summarizing the practical storage and management technologies in terms of remote sensing data storage organization and storage architecture, this survey helps in seeking more suitable techniques and methods for massive remote sensing data storage and management.
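Several of the surveyed systems (e.g., World Wind, Google Maps, Bing Maps) organize imagery as a Web-Mercator tile pyramid. As one concrete example of such storage organization, the sketch below computes the standard quadtree tile key (the scheme behind Bing Maps quadkeys) for a given coordinate and zoom level.

import math

def latlon_to_quadkey(lat, lon, level):
    """Convert a WGS84 coordinate to the quadtree key of its Web-Mercator tile."""
    lat = max(min(lat, 85.05112878), -85.05112878)  # Mercator latitude clipping
    sin_lat = math.sin(math.radians(lat))
    x = (lon + 180.0) / 360.0
    y = 0.5 - math.log((1 + sin_lat) / (1 - sin_lat)) / (4 * math.pi)
    n = 1 << level                                   # tiles per axis at this level
    tile_x = min(n - 1, max(0, int(x * n)))
    tile_y = min(n - 1, max(0, int(y * n)))
    digits = []
    for i in range(level, 0, -1):                    # interleave x/y bits per level
        digit = 0
        mask = 1 << (i - 1)
        if tile_x & mask:
            digit += 1
        if tile_y & mask:
            digit += 2
        digits.append(str(digit))
    return "".join(digits)

# Example: latlon_to_quadkey(41.9, 12.5, 3) returns "120".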