Journal Articles
1,117 articles found
1. Preventing "Bad" Content Dispersal in Named Data Networking (Cited: 2)
Authors: Yi Wang, Zhuyun Qi, Bin Liu. China Communications (SCIE, CSCD), 2018(6): 109-119.
Named Data Networking (NDN) improves data delivery efficiency by caching contents in routers. To prevent corrupted and fake contents from being spread in the network, NDN routers should verify the digital signature of each published content. Since the verification scheme in NDN applies an asymmetric encryption algorithm to sign contents, the content verification overhead is too high to satisfy wire-speed packet forwarding. In this paper, we propose two schemes to improve the verification performance of NDN routers and prevent content poisoning. The first content verification scheme, called "user-assisted", leads to the best performance, but can be bypassed if the clients and the content producer collude. A second scheme, named "Router-Cooperation", prevents the aforementioned collusion attack by making edge routers verify the contents independently without the assistance of users; the core routers no longer verify the contents. The Router-Cooperation verification scheme reduces the computational complexity of the cryptographic operation by replacing the asymmetric encryption algorithm with a symmetric encryption algorithm. The simulation results demonstrate that the Router-Cooperation scheme can speed up content verification by 18.85 times relative to the original scheme, with merely 80 bytes of extra transmission overhead.
Keywords: named data networking; router; content verification; encryption algorithm
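As a rough illustration of the symmetric-verification idea behind Router-Cooperation, the sketch below (Python, with a hypothetical pre-shared router key) replaces per-content asymmetric signature checks with an HMAC tag; the paper's actual key management and packet formats are not reproduced here.

```python
import hmac
import hashlib

SHARED_KEY = b"edge-core-router-key"  # hypothetical key distributed among routers

def edge_sign(content: bytes) -> bytes:
    # The edge router verifies the producer's signature once (not shown),
    # then tags the content with a cheap symmetric MAC for the core routers.
    return hmac.new(SHARED_KEY, content, hashlib.sha256).digest()

def core_verify(content: bytes, tag: bytes) -> bool:
    # Core routers check the MAC instead of an asymmetric signature.
    expected = hmac.new(SHARED_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

content = b"/video/seg/7 payload"
tag = edge_sign(content)   # 32-byte symmetric tag, far cheaper to verify than an RSA signature
assert core_verify(content, tag)
```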
2. SwCS: Section-Wise Content Similarity Approach to Exploit Scientific Big Data (Cited: 1)
Authors: Kashif Irshad, Muhammad Tanvir Afzal, Sanam Shahla Rizvi, Abdul Shahid, Rabia Riaz, Tae-Sun Chung. Computers, Materials & Continua (SCIE, EI), 2021(4): 877-894.
The growing collection of scientific data in various web repositories is referred to as Scientific Big Data, as it fulfills the four "V's" of Big Data: volume, variety, velocity, and veracity. This phenomenon has created new opportunities for startups; for instance, the extraction of pertinent research papers from enormous knowledge repositories using innovative methods has become an important task for researchers and entrepreneurs. Traditionally, the contents of papers are compared to list the relevant papers from a repository. This conventional method results in a long list of papers that is often impossible to interpret productively, so a novel approach that intelligently utilizes the available data is needed. The primary element of the scientific knowledge base is a research article, which consists of logical sections such as the Abstract, Introduction, Related Work, Methodology, Results, and Conclusion. This study utilizes these logical sections of research articles, because they hold significant potential for finding relevant papers. Comprehensive experiments were performed to determine the role of a logical-sections-based term indexing method in improving the quality of results (i.e., retrieving relevant papers). We proposed, implemented, and evaluated the logical-sections-based content comparison method against a standard term indexing method. The section-based approach outperformed the standard content-based approach in identifying relevant documents across all classified topics of computer science, extracting 14% more relevant results from the entire dataset. As the experimental results suggest that a finer content similarity technique improves the quality of results, the proposed approach lays the foundation for knowledge-based startups.
Keywords: scientific big data; ACM classification; term indexing; content similarity; cosine similarity
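A minimal sketch of section-wise similarity scoring, assuming papers are already split into logical sections; the section list, TF-IDF weighting, and plain averaging below are illustrative choices, not necessarily those of SwCS.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SECTIONS = ["abstract", "introduction", "related_work",
            "methodology", "results", "conclusion"]

def section_wise_similarity(query_paper: dict, candidate: dict) -> float:
    """Average cosine similarity computed section by section."""
    scores = []
    for sec in SECTIONS:
        docs = [query_paper.get(sec, ""), candidate.get(sec, "")]
        if not all(docs):
            continue  # skip sections missing from either paper
        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        scores.append(cosine_similarity(tfidf[0], tfidf[1])[0, 0])
    return sum(scores) / len(scores) if scores else 0.0
```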
3. Monitoring Soil Salt Content Using HJ-1A Hyperspectral Data: A Case Study of Coastal Areas in Rudong County, Eastern China (Cited: 5)
Authors: LI Jianguo, PU Lijie, ZHU Ming, DAI Xiaoqing, XU Yan, CHEN Xinjian, ZHANG Lifang, ZHANG Runsen. Chinese Geographical Science (SCIE, CSCD), 2015(2): 213-223.
Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of soil salt content mapping. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data from the Huan Jing-Hyper Spectral Imager (HJ-HSI) in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare-flat area were distinguished using the normalized differential vegetation index at the band length of 705 nm (NDVI705). The soil salt content of each area was predicted by different algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare-flat area (NDVI705 < 0.2). The soil adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R2 = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
Keywords: soil salt content; normalized differential vegetation index (NDVI); hyperspectral data; Huan Jing-Hyper Spectral Imager (HJ-HSI); coastal area; eastern China
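The two-branch prediction logic could be organized as in the sketch below; note that the NSSRI and SAVI-based formulas here are placeholders (the paper derives them from continuum-removed reflectance and regression on field samples), so only the NDVI705 < 0.2 branching follows the abstract.

```python
import numpy as np

def ndvi705(r750, r705):
    # Red-edge NDVI used to split vegetated pixels from bare-flat pixels.
    return (r750 - r705) / (r750 + r705)

def predict_salt(r750, r705, cr_909, cr_687):
    """Per-pixel salt-content prediction following the two-branch logic.
    The index formulas and coefficients below are placeholders, not the
    published ones."""
    v = ndvi705(r750, r705)
    nssri_based = cr_909 - cr_687   # placeholder NSSRI from the two CR-reflectance bands
    savi_based = 0.5 * v            # placeholder for the vegetation-branch SAVI model
    return np.where(v < 0.2, nssri_based, savi_based)
```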
4. Stratal Carbonate Content Inversion Using Seismic Data and Its Applications to the Northern South China Sea
Authors: 熊艳, 钟广法, 李前裕, 吴能友, 李学杰, 马在田. Journal of China University of Geosciences (SCIE, CAS, CSCD), 2006(4): 320-325, 354.
On the basis of the relationship between carbonate content and stratal velocity and density, an exercise has been attempted using an artificial neural network on high-resolution seismic data for the inversion of carbonate content, with limited well measurements as a control. The method was applied to the slope area of the northern South China Sea near ODP Sites 1146 and 1148, and the results are satisfactory. Before the inversion calculation, a stepwise regression method was applied to obtain the six properties related most closely to carbonate content variations among the various properties on the seismic profiles across or near the wells. These are the average frequency, the integrated absolute amplitude, the dominant frequency, the reflection time, the derivative instantaneous amplitude, and the instantaneous frequency. The results, with carbonate content errors of mostly ±5% relative to those measured from sediment samples, show a relatively accurate picture of carbonate distribution along the slope profile. This method pioneers a new quantitative model for acquiring carbonate content variations directly from high-resolution seismic data. It provides a new approach to obtaining substitutive high-resolution sediment data for earth system studies related to basin evolution, especially in discussing the coupling between regional sedimentation and climate change.
Keywords: carbonate content; inversion; seismic data; artificial neural network; ODP Leg 184; northern South China Sea
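A hedged sketch of the inversion workflow: a small neural network regresses carbonate content on the six selected seismic attributes. The network architecture, scaling, and the synthetic stand-in data are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: six seismic attributes per sample (average frequency, integrated absolute
# amplitude, dominant frequency, reflection time, derivative instantaneous
# amplitude, instantaneous frequency); y: carbonate content (%) at well sites.
X = np.random.rand(200, 6)       # stand-in data; real inputs come from the profiles
y = np.random.rand(200) * 100

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000))
model.fit(X, y)                              # train against well-control measurements
carbonate_along_profile = model.predict(X)   # then apply trace by trace along the profile
```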
5. A Survey of Sediment Fineness and Moisture Content in the Soyang Lake Floodplain Using GPS Data
Authors: Mutiara Syifa, Prima Riza Kadavi, Sung Jae Park, Chang-Wook Lee. Engineering (SCIE, EI), 2021(2): 252-259.
Soyang Lake is the largest lake in the Republic of Korea, bordering Chuncheon, Yanggu, and Inje in Gangwon Province. It is widely used as an environmental resource for hydropower, flood control, and water supply. We therefore conducted a survey of the Soyang Lake floodplain to analyze the sediments in the area, using global positioning system (GPS) data and aerial photography to monitor sediment deposits. Data from three GPS units were compared to determine the accuracy of the sampling-location measurements. Sediment samples were collected at three sites: two in the eastern region of the floodplain and one in the western region. A total of eight samples were collected: three samples at 10 cm intervals to a depth of 30 cm from each of the two eastern sites, and two samples at depths of 10 and 30 cm at the western site. The samples were analyzed for vertical and horizontal trends in particle size and moisture content. The sediment samples ranged from coarse to very coarse, with a negative slope indicating eastward movement from the breach. The probability of a breach was indicated by the high water content on the eastern side of the floodplain, with the eastern sites showing a higher probability than the western sites. The results of this study indicate that analyses of grain fineness, moisture content, sediment deposits, and sediment removal rates can be used to understand and predict the direction of breach movement and sediment distribution in Soyang Lake.
Keywords: Soyang Lake; grain fineness number; moisture content; GPS data; digital surface model
6. Using an Ontology to Help Reason about the Information Content of Data
Authors: Shuang Zhu, Junkang Feng. Journal of Software Engineering and Applications, 2010(7): 629-643.
We explore how an ontology may be used with a database to support reasoning about the "information content" of data, thereby revealing hidden information that would otherwise not be derivable using conventional database query languages. Our basic ideas rest with "ontology" and the notion of "information content". A public ontology, if available, would be the best choice for reliable domain knowledge. Enabling an ontology to work with a database involves, among other things, a mechanism whereby the two systems can form a coherent whole. This is achieved by means of the notion of the "information content inclusion relation", IIR for short. We present what an IIR is, how IIRs can be identified from both an ontology and a database, and how to reason about them.
Keywords: ontology; information content of data
7. A content aware chunking scheme for data de-duplication in archival storage systems
Authors: Nie Xuejun, Qin Leihua, Zhou Jingli. High Technology Letters (EI, CAS), 2012(1): 45-50.
Based on variable-sized chunking, this paper proposes a content aware chunking scheme, called CAC, that does not assume fully random file contents but considers the characteristics of the file types. CAC uses a candidate anchor histogram and file-type-specific knowledge to refine how anchors are determined when performing de-duplication of file data, and it enforces the selected average chunk size. CAC yields more chunks being found, which in turn produces smaller average chunks and a better reduction in data. We present a detailed evaluation of CAC, and the experimental results show that the scheme can improve the compression ratio of chunking for file types whose bytes are not randomly distributed (from 11.3% to 16.7% depending on the dataset), and improve the write throughput by 9.7% on average.
Keywords: data de-duplication; content aware chunking (CAC); candidate anchor histogram (CAH)
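For readers unfamiliar with variable-size chunking, the sketch below shows the basic anchor-based boundary detection that CAC builds on; the candidate anchor histogram and file-type refinement, which are the paper's contribution, are deliberately omitted, and the hash and thresholds are illustrative.

```python
MASK, MIN_CHUNK, MAX_CHUNK = 0x1FFF, 2048, 65536  # ~8 KiB average chunks

def chunk_boundaries(data: bytes):
    """Variable-size chunking with a simple running hash; CAC additionally
    refines the anchor choice with a candidate anchor histogram and
    file-type knowledge, which is omitted here."""
    h, start, bounds = 0, 0, []
    for i, b in enumerate(data):
        h = ((h << 1) + b) & 0xFFFFFFFF  # cheap rolling-style hash over the bytes
        # Declare an anchor when the hash matches the mask (or the chunk is too big).
        if i - start >= MIN_CHUNK and ((h & MASK) == MASK or i - start >= MAX_CHUNK):
            bounds.append(i + 1)
            start, h = i + 1, 0
    if start < len(data):
        bounds.append(len(data))  # final partial chunk
    return bounds
```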
8. Inverse Measurement of Moisture Content in Porous Insulation Materials with a Data Sorting Method
Authors: Huojun Yang, Yun Luo, Tengfei (Tim) Zhang. Journal of Energy and Power Engineering, 2016(11): 667-673.
Moisture in insulation materials impairs their thermal and acoustic performance, induces microbe growth, and causes equipment/material corrosion. Moisture content measurement is therefore vital to effective moisture control. This investigation proposes a simple, fast, and accurate method to measure the moisture content of insulation materials by matching the measured temperature rise. Since a given moisture content corresponds to unique thermophysical properties, the measured temperature rise varies with moisture content. During the data analysis, all possible volumetric heat capacities and thermal conductivities are enumerated to match the measured temperature rise based on composite heat conduction theory. Then, the partial derivatives with respect to both volumetric heat capacity and thermal conductivity are evaluated, so that these partial derivatives are guaranteed to equal zero at the optimal solution for the moisture content. Compared with the benchmark gravimetric method, the proposed method was found to have better accuracy while requiring only a short test time.
Keywords: moisture content; data sorting; temperature matching; composite heat conduction
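The enumeration-and-match idea might look like the following sketch, which pairs a one-layer closed-form conduction model with a brute-force property search; the paper's forward model is composite heat conduction and its parameter ranges are not stated here, so all numbers below are assumptions.

```python
import numpy as np

def surface_rise(t, q, k, rho_c):
    # Surface temperature rise of a semi-infinite solid under constant flux q:
    # dT(t) = (2*q/k) * sqrt(alpha*t/pi), with alpha = k / rho_c.
    alpha = k / rho_c
    return (2.0 * q / k) * np.sqrt(alpha * t / np.pi)

def match_properties(t, dT_measured, q):
    """Enumerate (k, rho_c) pairs and keep the best match to the measured rise,
    mirroring the paper's enumeration idea (the real forward model is composite
    heat conduction, not this one-layer closed form)."""
    best, best_err = None, np.inf
    for k in np.linspace(0.02, 0.2, 100):           # W/(m*K), assumed insulation range
        for rho_c in np.linspace(2e4, 2e6, 100):    # J/(m^3*K), assumed range
            err = np.sum((surface_rise(t, q, k, rho_c) - dT_measured) ** 2)
            if err < best_err:
                best, best_err = (k, rho_c), err
    return best  # the matched properties map back to a moisture content
```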
9. Decomposition of Graphs Representing the Contents of Multimedia Data
Author: Hochin Teruhisa. 通讯和计算机(中英文版) (Journal of Communication and Computer), 2010(4): 43-49.
Keywords: multimedia content; graph decomposition; data model; multimedia data; recursive call; flame propagation; instance; recursive graph
10. Content Centric Networking: A New Approach to Big Data Distribution
Authors: Yi Zhu, Zhengkun Mi. ZTE Communications, 2013(2): 3-10.
In this paper, we explore the network architecture and key technologies of content-centric networking (CCN), an emerging networking technology in the big-data era. We describe the structure and operation mechanism of the CCN node. Then we discuss mobility management, routing strategy, and caching policy in CCN. For better network performance, we propose a probability cache replacement policy based on content popularity. We also propose and evaluate a probability cache with an evicted copy-up decision policy.
Keywords: big data; content-centric networking; caching policy; mobility management; routing strategy
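A toy version of popularity-based probabilistic replacement is sketched below; the eviction weights and the copy-up decision logic are assumptions standing in for the paper's probability model.

```python
import random
from collections import defaultdict

class PopularityCache:
    """Probabilistic replacement biased against evicting popular content
    (a sketch; the paper's exact probability model and evicted copy-up
    decision are not reproduced here)."""
    def __init__(self, capacity: int):
        self.capacity, self.store = capacity, {}
        self.hits = defaultdict(int)   # crude popularity counter per content name

    def get(self, name):
        if name in self.store:
            self.hits[name] += 1
            return self.store[name]
        return None

    def put(self, name, data):
        if len(self.store) >= self.capacity:
            names = list(self.store)
            # Less-requested content gets a proportionally higher eviction weight.
            weights = [1.0 / (1 + self.hits[n]) for n in names]
            victim = random.choices(names, weights=weights)[0]
            del self.store[victim]
        self.store[name] = data
```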
11. Data-only contention-based grant-free access for 5G mMTC (Cited: 4)
Authors: 张诗壮, 袁志锋, 李卫敏. 电信科学 (Telecommunications Science), 2019(7): 37-46.
In reference-signal-based contention grant-free access, collisions between reference signals limit performance. This paper considers a data-only contention-based grant-free access scheme, in which a blind-detection receiver fully exploits the characteristics of the data itself to perform multi-user detection, avoiding both the reference-signal collision problem and its resource overhead, and thus supporting a much higher traffic load. On the other hand, access performance under high load is also limited by inter-cell interference. The data-only access signal carries no cell-specific processing, so the blind-detection receiver at each cell's base station demodulates, decodes, and cancels interference for all received user signals, including those of neighboring-cell users close to the local cell. This in effect implements inter-cell interference cancellation, markedly reducing strong interference from neighboring cells and further raising the system load. Moreover, the data-only blind-detection receiver achieves this inter-cell interference cancellation with little additional complexity, an advantage that traditional inter-cell interference cancellation methods do not have.
Keywords: contention-based grant-free access; collision; data-only; blind detection; inter-cell interference cancellation
12. Dynamic Trust Model Based on Service Recommendation in Big Data (Cited: 2)
Authors: Gang Wang, Mengjuan Liu. Computers, Materials & Continua (SCIE, EI), 2019(3): 845-857.
In the big data of business services and transactions, the cyber system cannot provide complete information to both parties of a service, so some service providers exploit malicious services to gain extra benefits. Trust management is an effective solution to such malicious actions. This paper presents a trust computing model based on service recommendation in big data. The model takes into account the difference in recommendation trust between familiar nodes and stranger nodes. To ensure the accuracy of recommendation trust computing, we propose a fine-granularity similarity computing method based on the similarity of the service-concept domain ontology. The model is more accurate in computing the trust value of cyber service nodes and better prevents cheating and attacks by malicious service nodes. Experimental results illustrate that the model is effective.
Keywords: trust model; recommendation trust; content similarity; ontology; big data
13. Proposed Caching Scheme for Optimizing Trade-off between Freshness and Energy Consumption in Name Data Networking Based IoT (Cited: 1)
Authors: Rahul Shrimali, Hemal Shah, Riya Chauhan. Advances in Internet of Things, 2017(2): 11-24.
Over the last few years, the Internet of Things (IoT) has become an omnipresent term. The IoT expands the existing common concepts, anytime and anyplace, to connectivity for anything. The proliferation of IoT offers opportunities but may also bear risks. A hitherto neglected aspect is the possible increase in power consumption, as smart devices in IoT applications are expected to be reachable by other devices at all times. This implies that a device consumes electrical energy even when it is not in use for its primary function. Many research communities have started addressing the storage capability (cache memory) of smart devices using the concept of Named Data Networking (NDN) to achieve a more energy-efficient communication model. In NDN, memory or buffer overflow is a common challenge, especially when the internal memory of a node exceeds its limit: data with the highest degree of freshness may not be accommodated, and the entire scenario behaves like a traditional network. In such cases, intermediate nodes do not perform data caching to guarantee the highest degree of freshness. With periodical updates sent from data producers, data consumers are strongly expected to get up-to-date information at the cost of the least energy. Consequently, there is a challenge in maintaining the trade-off between freshness and energy consumption during publisher-subscriber interaction. In our work, we propose an architecture that overcomes the cache-strategy issue with a Smart Caching Algorithm for improved memory management and data freshness. The smart caching strategy updates the data at precise intervals while taking garbage data into consideration. It is also observed from experiments that data redundancy can easily be avoided by ignoring/dropping data packets carrying information that is of no interest to other participating nodes in the network, ultimately optimizing the trade-off between freshness and the energy required.
Keywords: Internet of Things (IoT); named data networking; smart caching table (SCT); pending interest table; forwarding information base; content store; content centric networking; information centric networking; data & interest packets; smart caching
14. The Impact of "Bad" Argo Profiles on Ocean Data Assimilation (Cited: 1)
Authors: YAN Chang-Xiang, ZHU Jiang. Atmospheric and Oceanic Science Letters, 2010(2): 59-63.
Recent studies have found cold biases in a fraction of Argo profiles (hereinafter referred to as bad Array for Real-time Geostrophic Oceanography (Argo) profiles) due to pressure drifts during 2003 and 2006. These bad Argo profiles have had an important impact on in situ observation-based global ocean heat content estimates. This study investigated the impact of bad Argo profiles on ocean data assimilation results that were based on observations from diverse ocean observation systems, such as in situ profiles (e.g., Argo, expendable bathythermograph (XBT), and Tropical Atmosphere Ocean (TAO) data), remote-sensing sea surface temperature products, and satellite altimetry between 2004 and 2006. Results from this work show that the upper ocean heat content analysis is vulnerable to bad Argo profiles and demonstrates a cooling trend in the studied period despite the multiple independent data types that were assimilated. When the bad Argo profiles were excluded from the assimilation, the decreased heat content disappeared and a warming occurred. A combination of satellite altimetry and mass variation data from the gravity satellite demonstrated an increase, which agrees well with the increased heat content. Additionally, when an additional Argo profile quality-control procedure was utilized that simply removed profiles presenting statically unstable water columns, the results were very similar to those obtained when the bad Argo profiles were excluded from the assimilation. This indicates that an ocean data assimilation that uses multiple data sources with improved quality control could be less vulnerable to a major observation-system failure, such as the bad Argo event.
Keywords: data assimilation; Argo; heat content; ensemble optimal interpolation
15. A Cache Replacement Policy Based on Multi-Factors for Named Data Networking (Cited: 1)
Authors: Meiju Yu, Ru Li, Yuwen Chen. Computers, Materials & Continua (SCIE, EI), 2020(10): 321-336.
Named Data Networking (NDN) is one of the most promising future Internet architectures, and every router in NDN has the capacity to cache contents passing by. This greatly reduces network traffic and improves the speed of content distribution and retrieval. In order to make full use of the limited caching space in routers, designing an efficient cache replacement policy is an urgent challenge. However, the existing cache replacement policies consider only very few of the factors that affect cache performance. In this paper, we present a cache replacement policy based on multi-factors for NDN (CRPM), in which the content with the least cache value is evicted from the caching space. CRPM fully analyzes the multiple factors that affect caching performance, puts forward the corresponding calculation methods, and utilizes the multi-factors to measure the cache value of contents. Furthermore, a new cache value function is constructed, which keeps high-value content in the router as long as possible, so as to ensure the efficient use of cache resources. The simulation results show that CRPM can effectively improve the cache hit ratio, enhance cache resource utilization, reduce energy consumption, and decrease the hit distance of content acquisition.
Keywords: cache replacement policy; named data networking; content popularity; freshness; energy consumption
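A possible shape for a multi-factor cache value function is sketched below, combining request count and age-based freshness with assumed weights; CRPM's actual factor set and value formula are richer than this.

```python
import time

class MultiFactorCache:
    """Evict the entry with the least cache value, where value grows with
    request count and decays with age. The factors and weights here are
    assumptions, not CRPM's published formula."""
    def __init__(self, capacity: int, w_pop: float = 0.7, w_fresh: float = 0.3):
        self.capacity, self.w_pop, self.w_fresh = capacity, w_pop, w_fresh
        self.entries = {}   # name -> (data, hits, birth_time)

    def _value(self, name):
        _, hits, birth = self.entries[name]
        freshness = 1.0 / (1.0 + time.time() - birth)   # decays as content ages
        return self.w_pop * hits + self.w_fresh * freshness

    def put(self, name, data):
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=self._value)  # least cache value
            del self.entries[victim]
        self.entries[name] = (data, 0, time.time())

    def get(self, name):
        if name in self.entries:
            data, hits, birth = self.entries[name]
            self.entries[name] = (data, hits + 1, birth)
            return data
        return None
```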
16. A preliminary study on an upper ocean heat and salt content of the western Pacific warm pool region
Authors: Xiaoxin Yang, Xiaofen Wu, Zenghong Liu, Chunxin Yuan. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2019(3): 60-71.
On the basis of Argo temperature and salinity profile data from January 2001 to July 2014, the spatial distributions of the upper ocean heat content (OHC) and ocean salt content (OSC) of the western Pacific warm pool (WPWP) region and their seasonal and interannual variations are studied by cyclostationary empirical orthogonal function (CSEOF) decomposition, maximum entropy spectral analysis, and correlation analysis. Probable reasons for the variations are discussed. The results show the following. (1) The OHC variations in the subsurface layer of the WPWP are much greater than those in the surface layer. On the contrary, the OSC variations are mainly in the surface layer, while the subsurface layer varies little. (2) Compared with the OSC, the OHC of the WPWP region is more affected by El Niño-Southern Oscillation (ENSO) events. The CSEOF analysis shows that the OHC pattern in mode 1 has strong interannual oscillation, with the eastern and western parts opposite in phase. The distribution of the OSC has a positive-negative-positive tripole pattern. Time series analysis shows that the OHC underwent three phase adjustments with the occurrence of ENSO events after 2007, while the OSC had only one such adjustment during the same period. Further analysis indicates that the OHC variations are mainly caused by ENSO events, local winds, and zonal currents, whereas the OSC variations have much more complex causes. Two of these, the zonal current and the freshwater flux, have a positive feedback on the OSC change in the WPWP region.
Keywords: ocean heat content; salt content; western Pacific warm pool; Argo data
17. Analysis of the citation behavior of data papers from the perspective of citation position: a case study of Scientific Data (Cited: 4)
Authors: 吴涵, 肖明, 林霄楠, 王紫晨, 陈柯文. 图书馆杂志 (Library Journal) (CSSCI, PKU Core), 2022(6): 101-107.
To explore the citation behavior of data papers, this study takes 1,473 data papers published in the journal Scientific Data as a sample. From the perspective of citation position, the full texts of the data papers were crawled and citation-related information was extracted for analysis. Comparison with ordinary academic papers shows that the citation positions of data papers are "top-heavy", and that both the age of cited references and the citation intensity differ markedly across citation positions.
Keywords: data papers; citation behavior; citation content analysis; citation position; citation intensity
18. New reconstruction and forecasting algorithm for TEC data
Authors: 王俊, 盛峥, 江宇, 石汉青. Chinese Physics B (SCIE, EI, CAS, CSCD), 2014(9): 602-608.
To reconstruct the missing data of total electron content (TEC) observations, a new method is proposed based on empirical orthogonal function (EOF) decomposition and the eigenvalues themselves. It is a self-adaptive EOF decomposition that needs no prior information, and the error of the reconstructed data can be estimated. The interval quartering algorithm and a cross-validation algorithm are used to compute the optimal number of EOFs for reconstruction; the interval quartering algorithm reduces the computation time. Application of the data interpolating empirical orthogonal functions (DINEOF) method to real data has demonstrated that the method can reconstruct the TEC map with high accuracy, and it can be employed in real-time systems in future work.
Keywords: reconstruction; total electron content (TEC) data; empirical orthogonal function (EOF) decomposition; interval quartering algorithm
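The core DINEOF iteration described here is simple enough to sketch: fill the gaps with the mean, reconstruct with a truncated SVD, and re-impose the observed values. The self-adaptive EOF-count selection (interval quartering plus cross-validation) is omitted, and the iteration counts are assumptions.

```python
import numpy as np

def dineof(X, n_eofs=5, n_iter=50):
    """Minimal DINEOF-style gap filling for a time-by-space TEC matrix X
    with NaNs at missing observations."""
    mask = np.isnan(X)
    filled = np.where(mask, np.nanmean(X), X)   # initialize gaps with the global mean
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        recon = (U[:, :n_eofs] * s[:n_eofs]) @ Vt[:n_eofs]  # truncated-EOF reconstruction
        filled = np.where(mask, recon, X)       # only replace the missing entries
    return filled
```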
19. Transient content caching and updating with modified harmony search for Internet of Things
Authors: Chao Xu, Xijun Wang. Digital Communications and Networks (SCIE), 2019(1): 24-33.
The Internet of Things (IoT) has emerged as one of the new use cases in 5th Generation wireless networks. However, the transient nature of the data generated in IoT networks brings great challenges for content caching. In this paper, we study a joint content caching and updating strategy in IoT networks, taking both the energy consumption of the sensors and the freshness loss of the contents into account. In particular, we decide whether or not to cache the transient data and, if so, how often the servers should update their contents. We formulate this content caching and updating problem as a mixed 0-1 integer non-convex optimization program, and devise a Harmony Search based content Caching and Updating (HSCU) algorithm, which is self-learning and derivative-free and hence places no requirement on the relationship between the objective and the variables. Finally, extensive simulation results verify the effectiveness of our proposed algorithm in terms of the achieved satisfaction ratio for content delivery, normalized energy consumption, and overall network utility, by comparing it with some baseline algorithms.
Keywords: IoT; content caching and updating; data freshness
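Below is a generic harmony search over binary caching decisions, as a sketch of the algorithmic core; HSCU's actual encoding also carries update-frequency variables and uses the paper's network utility, so the toy utility here is purely hypothetical.

```python
import random

def harmony_search(utility, dim, hms=10, hmcr=0.9, par=0.3, iters=500):
    """Generic harmony search over 0-1 caching decisions.
    hms: harmony memory size; hmcr: memory-consideration rate;
    par: pitch-adjustment rate."""
    memory = [[random.randint(0, 1) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        # Build a new harmony: draw each bit from memory with prob. hmcr,
        # otherwise at random.
        new = [random.choice(memory)[d] if random.random() < hmcr
               else random.randint(0, 1) for d in range(dim)]
        if random.random() < par:          # pitch adjustment: flip one bit
            j = random.randrange(dim)
            new[j] ^= 1
        worst = min(range(hms), key=lambda i: utility(memory[i]))
        if utility(new) > utility(memory[worst]):
            memory[worst] = new            # replace the worst harmony
    return max(memory, key=utility)

# Toy utility: prefer caching items with a high popularity-to-energy ratio (hypothetical).
pop, energy = [5, 1, 3, 8], [2, 1, 1, 4]
best = harmony_search(lambda x: sum(p * xi - e * xi
                                    for p, e, xi in zip(pop, energy, x)), dim=4)
```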
20. NDN Content Poisoning Mitigation Using Bird Swarm Optimization and Trust Value
Authors: S.V. Vijaya Karthik, J. Arputha Vijaya Selvi. Intelligent Automation & Soft Computing (SCIE), 2023(4): 833-847.
Information-Centric Networking (ICN) is considered a viable strategy for regulating Internet consumption using the Internet's underlying architecture. Although Named Data Networking (NDN) and its reference implementation, the NDN Forwarding Daemon (NFD), are the most established ICN solutions, their vulnerability to the Content Poisoning Attack (CPA) is regarded as a severe threat that might dramatically impact this architecture. Content poisoning can significantly diminish the benefit of NDN's universal data caching. Using verification signatures to protect against content poisoning attacks may be impractical due to the associated costs and the volume of messages sent across the network, resulting in high computational costs. Therefore, in this research, we designed a method in NDN called Bird Swarm Optimization Algorithm-Based Content Poisoning Mitigation (BSO-Content Poisoning Mitigation Scheme). By aggregating the security information of all routers along the full path, this system introduces the BSO to explore a secure transmission path and alter the content retrieval procedure. Meanwhile, based on the determined trustworthiness value of each node, the BSO-Content Poisoning Mitigation Scheme can bypass malicious routers, preventing them from disseminating illicit content in the future. Additionally, the suggested technique can minimize content poisoning by removing erroneous Data packets from the cache store during the pathfinding process. The proposed method has been subjected to extensive analysis in comparison with the ROM scheme, and its improved performance is justified in several metrics. The BSO-Content Poisoning Mitigation Scheme is more efficient and faster than the ROM technique in obtaining valid Data packets, resulting in a higher cache hit ratio in comparatively less time.
Keywords: named data network; content poisoning; bird swarm optimization; content validation; fake content