Named Data Networking (NDN) improves data delivery efficiency by caching contents in routers. To prevent corrupted and fake contents from being spread in the network, NDN routers should verify the digital signature of each published content. Since the verification scheme in NDN applies an asymmetric encryption algorithm to sign contents, the content verification overhead is too high to satisfy wire-speed packet forwarding. In this paper, we propose two schemes to improve the verification performance of NDN routers to prevent content poisoning. The first content verification scheme, called "user-assisted", leads to the best performance, but can be bypassed if the clients and the content producer collude. A second scheme, named "Router-Cooperation", prevents the aforementioned collusion attack by making edge routers verify the contents independently without the assistance of users, while the core routers no longer verify the contents. The Router-Cooperation verification scheme reduces the computing complexity of the cryptographic operation by replacing the asymmetric encryption algorithm with a symmetric encryption algorithm. The simulation results demonstrate that the Router-Cooperation scheme can speed up content verification by 18.85 times compared with the original scheme, with only 80 bytes of extra transmission overhead.
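The core of the Router-Cooperation idea, replacing per-hop public-key signature verification with a cheap symmetric check, can be illustrated with a message authentication code. The sketch below is a minimal illustration using Python's standard hmac module; the shared key name, the helper functions, and the key-distribution details are assumptions made for illustration, not the paper's actual protocol.

```python
# Minimal sketch: an edge router that has already verified a content's
# public-key signature tags it with an HMAC that other routers can check
# cheaply, instead of repeating the expensive asymmetric verification.
# SHARED_ROUTER_KEY and both helpers are hypothetical, for illustration only.
import hmac
import hashlib

SHARED_ROUTER_KEY = b"example-router-shared-key"

def tag_content(content: bytes) -> bytes:
    """Edge router: attach a MAC after the signature check succeeds."""
    return hmac.new(SHARED_ROUTER_KEY, content, hashlib.sha256).digest()

def verify_tag(content: bytes, tag: bytes) -> bool:
    """Core router: constant-time MAC comparison, no asymmetric crypto."""
    expected = hmac.new(SHARED_ROUTER_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    data = b"/example/video/segment7 payload"
    mac = tag_content(data)
    print(verify_tag(data, mac))         # True
    print(verify_tag(data + b"x", mac))  # False: tampered content is rejected
```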
The growing collection of scientific data in various web repositories is referred to as Scientific Big Data, as it fulfills the four "V's" of Big Data: volume, variety, velocity, and veracity. This phenomenon has created new opportunities for startups; for instance, the extraction of pertinent research papers from enormous knowledge repositories using innovative methods has become an important task for researchers and entrepreneurs. Traditionally, the contents of the papers are compared to list the relevant papers from a repository. The conventional method results in a long list of papers that is often impossible to interpret productively. Therefore, the need for a novel approach that intelligently utilizes the available data is imminent. Moreover, the primary element of the scientific knowledge base is a research article, which consists of various logical sections such as the Abstract, Introduction, Related Work, Methodology, Results, and Conclusion. Thus, this study utilizes these logical sections of research articles, because they hold significant potential for finding relevant papers. In this study, comprehensive experiments were performed to determine the role of the logical-sections-based term indexing method in improving the quality of results (i.e., retrieving relevant papers). Therefore, we proposed, implemented, and evaluated the logical-sections-based content comparison method to address the research objective, comparing it with a standard method of indexing terms. The section-based approach outperformed the standard content-based approach in identifying relevant documents from all classified topics of computer science. Overall, the proposed approach extracted 14% more relevant results from the entire dataset. As the experimental results suggest that employing a finer content similarity technique improves the quality of results, the proposed approach lays the foundation for knowledge-based startups.
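As a rough illustration of section-based comparison (not the paper's exact indexing pipeline), the sketch below indexes terms per logical section and averages per-section cosine similarities; the tokenization, section names, and equal section weighting are simplifying assumptions.

```python
# Rough illustration of section-aware comparison: index terms per logical
# section and average per-section cosine similarities. Tokenization, section
# names, and equal section weighting are simplifying assumptions.
from collections import Counter
import math

def tokenize(text):
    return [w.lower() for w in text.split() if w.isalpha()]

def section_vectors(paper):
    """paper: dict mapping a section name (e.g., 'Abstract') to its raw text."""
    return {sec: Counter(tokenize(txt)) for sec, txt in paper.items()}

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def section_similarity(query_paper, candidate_paper):
    """Average per-section similarity over the sections both papers contain."""
    qv, cv = section_vectors(query_paper), section_vectors(candidate_paper)
    shared = set(qv) & set(cv)
    return sum(cosine(qv[s], cv[s]) for s in shared) / len(shared) if shared else 0.0
```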
Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of mapping soil salt content. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data, Huan Jing-Hyper Spectral Imager (HJ-HSI), in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare flat area were distinguished by using the normalized differential vegetation index at 705 nm (NDVI705). The soil salt content of each area was predicted by various algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare flat area (NDVI705 < 0.2). The soil adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R2 = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
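A pixel-level sketch of the NDVI705 split is given below. The red-edge normalized-difference form for NDVI705 and the normalized-difference combination used for NSSRI are assumptions made for illustration; the paper's exact index construction and the regressions that map index values to salt content are not reproduced.

```python
# Per-pixel sketch of the NDVI705 split described above. The red-edge form of
# NDVI705 and the normalized-difference combination used for NSSRI are assumed
# for illustration; the paper's exact index construction may differ.
import numpy as np

def ndvi705(r750, r705):
    """Red-edge NDVI from reflectance at ~750 nm and ~705 nm (assumed form)."""
    return (r750 - r705) / (r750 + r705 + 1e-12)

def nssri(cr_909, cr_687):
    """Assumed combination of continuum-removed reflectance at 908.95 and 687.41 nm."""
    return (cr_909 - cr_687) / (cr_909 + cr_687 + 1e-12)

def vegetation_mask(r750, r705, threshold=0.2):
    """True where a pixel is treated as vegetation-covered (NDVI705 >= 0.2)."""
    return ndvi705(np.asarray(r750, float), np.asarray(r705, float)) >= threshold
```

Salt content would then be predicted separately for the two masks, with an NSSRI-based model on the bare flats and the SAVI-based model under vegetation, each calibrated against field samples.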
On the basis of the relationship between the carbonate content and the stratal velocity and density, an exercise has been attempted using an artificial neural network on high-resolution seismic data for inversion of carbonate content with limited well measurements as a control. The method was applied to the slope area of the northern South China Sea near ODP Sites 1146 and 1148, and the results are satisfactory. Before the inversion calculation, a stepwise regression method was applied to obtain the six properties related most closely to the carbonate content variations among the various properties on the seismic profiles across or near the wells. These include the average frequency, the integrated absolute amplitude, the dominant frequency, the reflection time, the derivative instantaneous amplitude, and the instantaneous frequency. The results, with carbonate content errors of mostly ±5% relative to those measured from sediment samples, show a relatively accurate picture of carbonate distribution along the slope profile. This method pioneers a new quantitative model to acquire carbonate content variations directly from high-resolution seismic data. It will provide a new approach toward obtaining substitutive high-resolution sediment data for earth system studies related to basin evolution, especially in discussing the coupling between regional sedimentation and climate change.
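A compact sketch of the inversion step, mapping the six stepwise-selected seismic attributes to carbonate content measured at the wells and then predicting along the profile, could look like the following. The network size, the scaling, and the scikit-learn choice are illustrative assumptions, not the paper's configuration.

```python
# Illustrative inversion step: a small feed-forward network maps the six
# stepwise-selected seismic attributes to carbonate content at the wells and
# then predicts along the profile. Network size, scaling, and the scikit-learn
# choice are assumptions, not the paper's configuration.
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def invert_carbonate(attrs_at_wells, carbonate_at_wells, attrs_profile):
    """attrs_*: arrays of shape (n_samples, 6) holding the six seismic attributes."""
    scaler = StandardScaler().fit(attrs_at_wells)
    net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
    net.fit(scaler.transform(attrs_at_wells), carbonate_at_wells)
    return net.predict(scaler.transform(attrs_profile))
```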
Soyang Lake is the largest lake in the Republic of Korea, bordering Chuncheon, Yanggu, and Inje in Gangwon Province. It is widely used as an environmental resource for hydropower, flood control, and water supply. Therefore, we conducted a survey of the floodplain of Soyang Lake to analyze the sediments in the area. We used global positioning system (GPS) data and aerial photography to monitor sediment deposits in the Soyang Lake floodplain. Data from three GPS units were compared to determine the accuracy of sampling location measurement. Sediment samples were collected at three sites: two in the eastern region of the floodplain and one in the western region. A total of eight samples were collected: three samples were collected at 10 cm intervals to a depth of 30 cm from each site of the eastern sampling point, and two samples were collected at depths of 10 and 30 cm at the western sampling point. Samples were collected and analyzed for vertical and horizontal trends in particle size and moisture content. The sizes of the sediment samples ranged from coarse to very coarse sediments with a negative slope, which indicate eastward movement from the breach. The probability of a breach was indicated by the high water content at the eastern side of the floodplain, with the eastern sites showing a higher probability than the western sites. The results of this study indicate that analyses of grain fineness, moisture content, sediment deposits, and sediment removal rates can be used to understand and predict the direction of breach movement and sediment distribution in Soyang Lake.
We explore how an ontology may be used with a database to support reasoning about the "information content" of data, whereby to reveal hidden information that would otherwise not be derivable by using conventional database query languages. Our basic ideas rest on "ontology" and the notion of "information content". A public ontology, if available, would be the best choice for reliable domain knowledge. Enabling an ontology to work with a database involves, among other things, certain mechanisms whereby the two systems can form a coherent whole. This is achieved by means of the notion of the "information content inclusion relation", IIR for short. We present what an IIR is, how IIRs can be identified from both an ontology and a database, and then how to reason about them.
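As a toy illustration of reasoning over IIRs (assuming, for this example only, that information content inclusion chains transitively), a naive closure over known relation instances can be computed as follows; the relation pairs themselves would come from the ontology and the database.

```python
# Toy sketch: derive implied IIR pairs by transitive chaining.
# The assumption that IIR composes transitively is made for this example only,
# and the example pairs are invented, not taken from the paper.
def iir_closure(pairs):
    """pairs: iterable of (a, b) meaning 'the information content of a includes that of b'."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Example: if Employee includes Person, and Person includes Agent,
# then Employee includes Agent is inferred.
print(iir_closure({("Employee", "Person"), ("Person", "Agent")}))
```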
Based on variable-sized chunking, this paper proposes a content-aware chunking scheme, called CAC, that does not assume fully random file contents, but considers the characteristics of the file types. CAC uses a candidate anchor histogram and file-type-specific knowledge to refine how anchors are determined when performing deduplication of file data, and it enforces the selected average chunk size. CAC yields more chunks being found, which in turn produces smaller average chunks and a better reduction in data. We present a detailed evaluation of CAC, and the experimental results show that this scheme can improve the compression ratio of chunking for file types whose bytes are not randomly distributed (from 11.3% to 16.7% depending on the dataset), and improve the write throughput on average by 9.7%.
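For context, the baseline that CAC refines is content-defined (variable-size) chunking with minimum, target, and maximum chunk sizes. The sketch below shows only that baseline with a toy hash; CAC's candidate anchor histogram and file-type-specific anchor refinement are not reproduced here.

```python
# Baseline content-defined chunking sketch: a simple byte-accumulating hash
# (not a true sliding-window Rabin fingerprint) chooses anchor points, and
# min/target/max bounds enforce the chunk-size distribution that CAC refines.
def chunk(data: bytes, avg_bits=13, min_size=2048, max_size=65536):
    mask = (1 << avg_bits) - 1           # expected chunk size ~ 2**avg_bits bytes
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + byte) & 0xFFFFFFFF   # toy hash over bytes since last cut
        if i - start + 1 < min_size:
            continue                          # never cut before the minimum size
        if (h & mask) == 0 or i - start + 1 >= max_size:
            chunks.append(data[start:i + 1])  # anchor found or maximum reached
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])           # trailing partial chunk
    return chunks
```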
Moisture in insulation materials will impair their thermal and acoustic performance, induce microbe growth, and cause equipment/material corrosion. Moisture content measurement is vital to effective moisture control. This investigation proposes a simple, fast, and accurate method to measure the moisture content of insulation materials by matching the measured temperature rise. Since moisture content corresponds to unique thermophysical properties, the measured temperature rise varies with moisture content. During the data analysis, all possible volumetric heat capacities and thermal conductivities are enumerated to match the measured temperature rise based on composite heat conduction theory. Then, the partial derivatives with respect to both volumetric heat capacity and thermal conductivity are evaluated, so that these partial derivatives are guaranteed to equal zero at the optimal solution for the moisture content. Compared to the benchmark gravimetric method, the proposed method was found to have better accuracy while requiring only a short test time.
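The enumeration-and-matching step can be sketched as a grid search that scores every candidate pair of volumetric heat capacity and thermal conductivity against the measured temperature rise. The composite heat-conduction forward model is not reproduced; `simulate_temperature_rise` below is a hypothetical stand-in supplied by the caller.

```python
# Sketch of the enumeration-and-matching step: every candidate (C, k) pair is
# scored against the measured temperature rise, and the best fit is returned.
# `simulate_temperature_rise(C, k, times)` is a hypothetical stand-in for the
# composite heat-conduction forward model, which is not reproduced here.
import numpy as np

def match_properties(measured_rise, times, simulate_temperature_rise, C_grid, k_grid):
    best_pair, best_err = None, np.inf
    for C in C_grid:                       # volumetric heat capacity candidates
        for k in k_grid:                   # thermal conductivity candidates
            simulated = simulate_temperature_rise(C, k, times)
            err = float(np.sum((simulated - measured_rise) ** 2))
            if err < best_err:
                best_pair, best_err = (C, k), err
    return best_pair, best_err             # map best_pair to moisture content afterwards
```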
In this paper, we explore the network architecture and key technologies for content-centric networking (CCN), an emerging networking technology in the big-data era. We describe the structure and operation mechanism of the CCN node. Then we discuss mobility management, routing strategy, and caching policy in CCN. For better network performance, we propose a probability cache replacement policy that is based on content popularity. We also propose and evaluate a probability cache with an evicted copy-up decision policy.
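A toy version of popularity-based probabilistic caching is sketched below: content is cached with a probability tied to its popularity, and the least popular entry is evicted when the store is full. The popularity estimate and the evicted copy-up refinement mentioned above are not modeled.

```python
# Toy popularity-weighted probabilistic caching: more popular content is cached
# with higher probability, and the least popular entry is evicted when full.
# The popularity estimate and the evicted copy-up refinement are not modeled.
import random

def maybe_cache(cache, name, data, popularity, capacity):
    """popularity in [0, 1]; cache maps name -> (data, popularity)."""
    if random.random() > popularity:
        return False                      # usually skip caching unpopular content
    if name not in cache and len(cache) >= capacity:
        victim = min(cache, key=lambda n: cache[n][1])
        del cache[victim]                 # evict the least popular entry
    cache[name] = (data, popularity)
    return True
```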
In big data for business services or transactions, it is impossible for the cyber system to provide complete information to both parties of a service, so some service providers make use of malicious services to gain more benefits. Trust management is an effective solution to deal with these malicious actions. This paper presents a trust computing model based on service recommendation in big data. The model takes into account the difference in recommendation trust between familiar nodes and stranger nodes. Thus, to ensure the accuracy of recommendation trust computing, the paper proposes a fine-granularity similarity computing method based on the similarity of the service-concept domain ontology. The model is more accurate in computing the trust values of cyber service nodes and better prevents cheating and attacks by malicious service nodes. Experimental results illustrate that our model is effective.
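A minimal illustration of weighting recommendations differently for familiar and stranger nodes is given below; the weight values and the ontology-based fine-granularity similarity are placeholders, not the paper's formulas.

```python
# Toy illustration of the familiar/stranger distinction in recommendation trust:
# recommendations from familiar nodes receive a higher weight than those from
# strangers. The weights and any ontology-based similarity are placeholders.
def recommended_trust(recommendations, familiar, w_familiar=0.8, w_stranger=0.4):
    """recommendations: {recommender_id: trust_value in [0, 1]};
    familiar: set of recommender ids the evaluating node already knows."""
    num = den = 0.0
    for node, value in recommendations.items():
        w = w_familiar if node in familiar else w_stranger
        num += w * value
        den += w
    return num / den if den else 0.0
```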
Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts of anytime and anyplace to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage ability, such as the cache memory of smart devices, using the concept called Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge, especially when the internal memory of a node exceeds its limit: data with the highest degree of freshness may not be accommodated, and the entire scenario behaves like a traditional network. In such a case, data caching is not performed by intermediate nodes to guarantee the highest degree of freshness. With periodical updates sent from data producers, it is strongly demanded that data consumers get up-to-date information at the cost of the least energy. Consequently, there is a challenge in maintaining the tradeoff between freshness and energy consumption during Publisher-Subscriber interaction. In our work, we propose an architecture that overcomes the cache strategy issue with a Smart Caching Algorithm for improvement in memory management and data freshness. The smart caching strategy updates the data at precise intervals while taking garbage data into consideration. It is also observed from the experiments that redundant data can easily be avoided by ignoring/dropping data packets carrying information that is not of interest to other participating nodes in the network, ultimately optimizing the tradeoff between freshness and required energy.
Recent studies have found cold biases in a fraction of Argo profiles (hereinafter referred to as bad Array for Real-time Geostrophic Oceanography (Argo) profiles) due to pressure drifts between 2003 and 2006. These bad Argo profiles have had an important impact on in situ observation-based global ocean heat content estimates. This study investigated the impact of bad Argo profiles on ocean data assimilation results that were based on observations from diverse ocean observation systems, such as in situ profiles (e.g., Argo, expendable bathythermograph (XBT), and Tropical Atmosphere Ocean (TAO) data), remote-sensing sea surface temperature products, and satellite altimetry between 2004 and 2006. Results from this work show that the upper ocean heat content analysis is vulnerable to bad Argo profiles and demonstrate a cooling trend in the studied period despite the multiple independent data types that were assimilated. When the bad Argo profiles were excluded from the assimilation, the decreased heat content disappeared and a warming occurred. The combination of satellite altimetry and mass variation data from the gravity satellite demonstrated an increase, which agrees well with the increased heat content. Additionally, when an additional Argo profile quality control procedure was utilized that simply removed the profiles presenting statically unstable water columns, the results were very similar to those obtained when the bad Argo profiles were excluded from the assimilation. This indicates that an ocean data assimilation that uses multiple data sources with improved quality control could be less vulnerable to a major observation system failure, such as a bad Argo event.
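The additional quality-control step described above, rejecting profiles with statically unstable water columns, can be sketched as a density monotonicity check. A linearized equation of state is used here as a simplifying assumption; operational QC would use a full seawater equation of state such as TEOS-10.

```python
# Sketch of the extra quality-control step: reject an Argo profile whose water
# column is statically unstable (density decreasing with depth). The linearized
# equation of state below is an approximation chosen for illustration.
import numpy as np

def density_linear(T, S, rho0=1025.0, alpha=2e-4, beta=7.6e-4, T0=10.0, S0=35.0):
    """Approximate density (kg/m^3) from temperature (deg C) and salinity (psu)."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

def is_statically_stable(T, S, tol=0.05):
    """T and S ordered from the surface downward; tol allows tiny inversions (kg/m^3)."""
    rho = density_linear(np.asarray(T, float), np.asarray(S, float))
    return bool(np.all(np.diff(rho) >= -tol))
```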
Named Data Networking (NDN) is one of the most promising future Internet architectures, and every router in NDN has the capacity to cache contents passing by. This greatly reduces network traffic and improves the speed of content distribution and retrieval. In order to make full use of the limited caching space in routers, it is an urgent challenge to design an efficient cache replacement policy. However, the existing cache replacement policies consider only very few factors that affect cache performance. In this paper, we present a cache replacement policy based on multi-factors for NDN (CRPM), in which the content with the least cache value is evicted from the caching space. CRPM fully analyzes the multi-factors that affect caching performance, puts forward the corresponding calculation methods, and utilizes the multi-factors to measure the cache value of contents. Furthermore, a new cache value function is constructed, which makes content with high value be stored in the router as long as possible, so as to ensure the efficient use of cache resources. The simulation results show that CRPM can effectively improve the cache hit ratio, enhance cache resource utilization, reduce energy consumption, and decrease the hit distance of content acquisition.
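The abstract does not give CRPM's cache value function in full, so the sketch below only illustrates the general shape of a multi-factor value and least-value eviction; the factors (popularity, recency, distance to the producer) and the weights are assumptions.

```python
# Hedged sketch of least-value eviction with a multi-factor cache value.
# The factors and weights are illustrative assumptions, not CRPM's exact formula.
import time

def cache_value(hits, last_access, hops_to_producer,
                w_pop=0.5, w_fresh=0.3, w_cost=0.2, now=None):
    now = time.time() if now is None else now
    popularity = hits / (hits + 1.0)                    # saturating hit-count score
    recency = 1.0 / (1.0 + (now - last_access))         # recently used -> higher value
    cost = hops_to_producer / (hops_to_producer + 1.0)  # far-away content is worth keeping
    return w_pop * popularity + w_fresh * recency + w_cost * cost

def evict_least_valuable(cache):
    """cache: {name: {'hits': int, 'last_access': float, 'hops_to_producer': int}}."""
    victim = min(cache, key=lambda name: cache_value(**cache[name]))
    cache.pop(victim)
    return victim
```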
On the basis of Argo profile data of temperature and salinity from January 2001 to July 2014, the spatial distributions of the upper ocean heat content (OHC) and ocean salt content (OSC) of the western Pacific warm pool (WPWP) region and their seasonal and interannual variations are studied by a cyclostationary empirical orthogonal function (CSEOF) decomposition, a maximum entropy spectral analysis, and a correlation analysis. Probable reasons for the variations are discussed. The results show the following. (1) The OHC variations in the subsurface layer of the WPWP are much greater than those in the surface layer. On the contrary, the OSC variations are mainly in the surface layer, while the subsurface layer varies little. (2) Compared with the OSC, the OHC of the WPWP region is more affected by El Niño-Southern Oscillation (ENSO) events. The CSEOF analysis shows that the OHC pattern in mode 1 has strong interannual oscillation, with the eastern and western parts opposite in phase. The distribution of the OSC has a positive-negative-positive tripole pattern. Time series analysis shows that the OHC had three phase adjustments with the occurrence of ENSO events after 2007, while the OSC had only one such adjustment during the same period. Further analysis indicates that the OHC variations are mainly caused by ENSO events, local winds, and zonal currents, whereas the OSC variations have much more complex causes. Two of these, the zonal current and the freshwater flux, have a positive feedback on the OSC change in the WPWP region.
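For reference, upper-ocean heat content over a sampled layer is commonly computed as the depth integral of rho * cp * T. The sketch below applies the trapezoidal rule to a single Argo profile, with constant reference density and specific heat as simplifying assumptions.

```python
# Upper-ocean heat content of one profile via the trapezoidal rule.
# Constant reference density and specific heat are simplifying assumptions;
# analyses usually work with anomalies relative to a climatology.
import numpy as np

RHO = 1025.0   # kg m^-3, reference seawater density (assumed constant)
CP = 3990.0    # J kg^-1 K^-1, specific heat of seawater (assumed constant)

def ocean_heat_content(temp_c, depth_m):
    """temp_c, depth_m: profile arrays ordered from the surface downward.
    Returns the layer's heat content in J m^-2."""
    T = np.asarray(temp_c, dtype=float)
    z = np.asarray(depth_m, dtype=float)
    layer_means = 0.5 * (T[1:] + T[:-1])        # mean temperature of each layer
    return RHO * CP * np.sum(layer_means * np.diff(z))
```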
To reconstruct the missing data of total electron content (TEC) observations, a new method is proposed, which is based on empirical orthogonal function (EOF) decomposition and the eigenvalues themselves. It is a self-adaptive EOF decomposition that needs no prior information, and the error of the reconstructed data can be estimated. The interval quartering algorithm and the cross-validation algorithm are used to compute the optimal number of EOFs for reconstruction. The interval quartering algorithm can reduce the computation time. The application of the data interpolating empirical orthogonal functions (DINEOF) method to real data has demonstrated that the method can reconstruct the TEC map with high accuracy, and it can be employed in a real-time system in future work.
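The reconstruction loop behind DINEOF-style gap filling can be sketched compactly: initialize the gaps, compute a truncated SVD, overwrite only the missing entries with the rank-k reconstruction, and iterate. The cross-validated choice of k and the interval quartering speedup described above are omitted.

```python
# Compact DINEOF-style gap filling: initialize the gaps, take a truncated SVD,
# overwrite only the missing entries with the rank-k reconstruction, iterate.
# Choosing k by cross-validation (with the interval quartering speedup) is omitted.
import numpy as np

def dineof_fill(field, k=5, n_iter=50):
    """field: 2-D array (e.g., time x grid) with NaNs marking missing TEC values."""
    X = np.array(field, dtype=float)
    missing = np.isnan(X)
    X[missing] = np.nanmean(field)             # initial guess: overall mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        recon = (U[:, :k] * s[:k]) @ Vt[:k, :]
        X[missing] = recon[missing]            # update only the gaps
    return X
```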
The Internet of Things (IoT) has emerged as one of the new use cases in 5th Generation wireless networks. However, the transient nature of the data generated in IoT networks brings great challenges for content caching. In this paper, we study a joint content caching and updating strategy in IoT networks, taking both the energy consumption of the sensors and the freshness loss of the contents into account. In particular, we decide whether or not to cache the transient data and, if so, how often the servers should update their contents. We formulate this content caching and updating problem as a mixed 0–1 integer non-convex optimization program, and devise a Harmony Search based content Caching and Updating (HSCU) algorithm, which is self-learning and derivative-free and hence stipulates no requirement on the relationship between the objective and the variables. Finally, extensive simulation results verify the effectiveness of our proposed algorithm in terms of the achieved satisfaction ratio for content delivery, normalized energy consumption, and overall network utility, by comparing it with some baseline algorithms.
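Harmony search itself is a generic, derivative-free metaheuristic; the sketch below shows the standard algorithm (harmony memory, memory consideration, pitch adjustment, worst-member replacement) for minimizing an arbitrary objective over box bounds. HSCU's actual caching/updating variables, constraints, and utility function are not reproduced.

```python
# Generic harmony-search sketch (the optimizer family behind HSCU). The
# objective, bounds, and parameter values are placeholders for illustration.
import random

def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iters=2000):
    rand_x = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    memory = [rand_x() for _ in range(hms)]          # harmony memory
    scores = [objective(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:               # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:            # pitch adjustment
                    v += random.uniform(-1, 1) * bandwidth * (hi - lo)
            else:                                    # random consideration
                v = random.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                        # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```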
Information-Centric Networking (ICN) is considered a viable strategy for regulating Internet consumption using the Internet's underlying architecture. Although Named Data Networking (NDN) and its reference implementation, the NDN Forwarding Daemon (NFD), are the most established ICN solutions, their vulnerability to the Content Poisoning Attack (CPA) is regarded as a severe threat that might dramatically impact this architecture. Content poisoning can significantly reduce the benefits of NDN's universal data caching. Using verification signatures to protect against content poisoning attacks may be impractical due to the associated costs and the volume of messages sent across the network, resulting in high computational costs. Therefore, in this research, we designed a method in NDN called the Bird Swarm Optimization Algorithm-Based Content Poisoning Mitigation (BSO-Content Poisoning Mitigation) Scheme. By aggregating the security information of all routers along the full path, this system introduces the BSO to explore the secure transmission path and alter the content retrieval procedure. Meanwhile, based on the determined trustworthiness value of each node, the BSO-Content Poisoning Mitigation Scheme can bypass malicious routers, preventing them from disseminating illicit content in the future. Additionally, the suggested technique can reduce content poisoning by removing erroneous Data packets from the cache store during the pathfinding process. The proposed method has been subjected to extensive analysis in comparison with the ROM scheme, and its improved performance is justified on several metrics. The BSO-Content Poisoning Mitigation Scheme is more efficient and faster than the ROM technique in obtaining valid Data packets, resulting in a higher cache hit ratio for good content in a comparatively smaller amount of time.