Funding: Supported by the National Natural Science Foundation of China under Grants No. 60872018 and No. 60902015, and the Major National Science and Technology Project No. 2011ZX03005-004-03.
Abstract: In this paper, we explore the network architecture and key technologies of content-centric networking (CCN), an emerging networking technology in the big-data era. We describe the structure and operation mechanism of the CCN node. Then we discuss mobility management, routing strategy, and caching policy in CCN. For better network performance, we propose a probability cache replacement policy based on content popularity. We also propose and evaluate a probability cache with an evicted copy-up decision policy.
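A minimal sketch of how a popularity-based probabilistic replacement policy of this kind might look, assuming request counts as the popularity signal; the class, the eviction weighting, and the copy-up hand-off are illustrative, not the paper's exact design:

    # Hypothetical popularity-based probabilistic cache replacement for a
    # CCN node's Content Store. Names and parameters are illustrative.
    import random

    class ProbabilityCache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.store = {}        # content name -> data
            self.hits = {}         # content name -> request count (popularity proxy)

        def get(self, name):
            if name in self.store:
                self.hits[name] = self.hits.get(name, 0) + 1
                return self.store[name]
            return None

        def put(self, name, data):
            if name in self.store:
                return
            if len(self.store) >= self.capacity:
                evicted = self._evict()
                # "Evicted copy-up": hand the victim to the upstream node so a
                # copy survives one hop closer to the producer (assumed behavior).
                self._copy_up(evicted)
            self.store[name] = data
            self.hits.setdefault(name, 1)

        def _evict(self):
            # Victim chosen with probability inversely proportional to its
            # popularity: unpopular items are evicted more often.
            names = list(self.store)
            weights = [1.0 / self.hits.get(n, 1) for n in names]
            victim = random.choices(names, weights=weights, k=1)[0]
            return victim, self.store.pop(victim)

        def _copy_up(self, evicted):
            pass  # placeholder: forward (name, data) to the upstream cache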
Abstract: We explore how an ontology may be used with a database to support reasoning about the "information content" of data, thereby revealing hidden information that would otherwise not be derivable using conventional database query languages. Our basic ideas rest on "ontology" and the notion of "information content". A public ontology, if available, would be the best choice for reliable domain knowledge. Enabling an ontology to work with a database requires, among other things, a mechanism whereby the two systems can form a coherent whole. This is achieved by means of the notion of the "information content inclusion relation", IIR for short. We present what an IIR is, how IIRs can be identified from both an ontology and a database, and how to reason about them.
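As one illustrative reading (not the paper's formal definition), an IIR can be modeled as a directed pair whose hidden consequences are obtained by transitive closure:

    # Sketch: model an IIR as a pair (x, y) meaning "the information content
    # of x includes that of y" and derive hidden inclusions transitively.
    def transitive_closure(iirs):
        closure = set(iirs)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    # One IIR from an ontology ("Sedan is a Car") and one from a database row
    # ("row42 describes a Sedan"); the derived pair ("row42", "Car") is the
    # hidden information a plain SQL query would not return.
    iirs = {("Sedan", "Car"), ("row42", "Sedan")}
    print(transitive_closure(iirs))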
Funding: Under the auspices of the National Natural Science Foundation of China (No. 41230751, 41101547) and the Scientific Research Foundation of the Graduate School of Nanjing University (No. 2012CL14).
Abstract: Hyperspectral data are an important source for monitoring soil salt content on a large scale. However, in previous studies, barriers such as interference due to the presence of vegetation restricted the precision of soil salt content mapping. This study tested a new method for predicting soil salt content with improved precision by using Chinese hyperspectral data from the Huan Jing-Hyper Spectral Imager (HJ-HSI) in the coastal area of Rudong County, Eastern China. The vegetation-covered area and the coastal bare flat area were distinguished by using the normalized difference vegetation index at the 705 nm band (NDVI705). The soil salt content of each area was predicted by various algorithms. A Normal Soil Salt Content Response Index (NSSRI) was constructed from continuum-removed reflectance (CR-reflectance) at wavelengths of 908.95 nm and 687.41 nm to predict the soil salt content in the coastal bare flat area (NDVI705 < 0.2). The soil adjusted salinity index (SAVI) was applied to predict the soil salt content in the vegetation-covered area (NDVI705 ≥ 0.2). The results demonstrate that 1) the new method significantly improves the accuracy of soil salt content mapping (R² = 0.6396, RMSE = 0.3591), and 2) HJ-HSI data can be used to map soil salt content precisely and are suitable for monitoring soil salt content on a large scale.
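A sketch of the two-branch workflow; since the abstract does not give the NSSRI formula, a normalized difference of the two stated continuum-removed bands is assumed purely for illustration:

    # Two-branch soil salt prediction: NDVI705 splits bare flats from
    # vegetated pixels; each branch uses its own index. The NSSRI form below
    # is an assumption for illustration, not the paper's formula.
    def ndvi705(r750, r705):
        # Red-edge NDVI used to separate bare flat from vegetated pixels.
        return (r750 - r705) / (r750 + r705)

    def nssri(cr_908, cr_687):
        # Assumed form: normalized difference of continuum-removed
        # reflectance at 908.95 nm and 687.41 nm (illustrative only).
        return (cr_908 - cr_687) / (cr_908 + cr_687)

    def predict_salt(pixel):
        if ndvi705(pixel["r750"], pixel["r705"]) < 0.2:
            return nssri(pixel["cr_908"], pixel["cr_687"])  # bare flat branch
        return pixel["savi"]                                # vegetated branch

    pixel = {"r750": 0.42, "r705": 0.38, "cr_908": 0.55, "cr_687": 0.61, "savi": 0.3}
    print(predict_salt(pixel))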
Funding: This paper is supported by the National Natural Science Foundation of China (Nos. 40476030, 40576031) and the National Key Basic Research Special Foundation Project of China (No. G2000078501).
Abstract: On the basis of the relationship between carbonate content and stratal velocity and density, an exercise has been attempted using an artificial neural network on high-resolution seismic data for inversion of carbonate content, with limited well measurements as a control. The method was applied to the slope area of the northern South China Sea near ODP Sites 1146 and 1148, and the results are satisfactory. Before the inversion calculation, a stepwise regression method was applied to obtain the six properties most closely related to carbonate content variations among the various properties on the seismic profiles across or near the wells: the average frequency, the integrated absolute amplitude, the dominant frequency, the reflection time, the derivative instantaneous amplitude, and the instantaneous frequency. The results, with carbonate content errors of mostly ±5% relative to those measured from sediment samples, show a relatively accurate picture of carbonate distribution along the slope profile. This method pioneers a new quantitative model for acquiring carbonate content variations directly from high-resolution seismic data. It provides a new approach toward obtaining substitutive high-resolution sediment data for earth system studies related to basin evolution, especially in discussing the coupling between regional sedimentation and climate change.
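A hedged sketch of the inversion step: a small feed-forward network maps the six selected attributes to carbonate content, trained on the well control points. The attribute list follows the abstract; the network shape and the placeholder data are assumptions:

    # Feed-forward inversion of carbonate content from six seismic attributes.
    # Placeholder data stand in for attributes extracted at the wells and
    # carbonate content measured from sediment samples.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    ATTRS = ["avg_freq", "int_abs_amp", "dom_freq",
             "refl_time", "deriv_inst_amp", "inst_freq"]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, len(ATTRS)))   # attributes at well control points
    y = rng.uniform(0, 60, size=40)         # measured carbonate content (%)

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)

    # Applying the trained net trace-by-trace along the seismic profile
    # yields a carbonate-content section between the wells.
    profile_attrs = rng.normal(size=(500, len(ATTRS)))
    carbonate_section = model.predict(profile_attrs)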
Abstract: From the beginning, the process of research and its publication has been an ever-growing phenomenon, and with the emergence of web technologies its growth rate is overwhelming. On a rough estimate, more than thirty thousand research journals issue around four million papers annually on average. Search engines, indexing services, and digital libraries search for such publications over the web. Nevertheless, retrieving the most relevant articles for a user request remains elusive, mainly because articles are not appropriately indexed according to hierarchies of granular subject classification. To overcome this issue, researchers are striving to investigate new techniques for the classification of research articles, especially when the complete article text is not available (the case of non-open-access articles). The proposed study aims to investigate multilabel classification over the available metadata in the best possible way and to assess to what extent metadata-based features can perform in contrast to content-based approaches. In this regard, novel techniques for multilabel classification have been proposed, developed, and evaluated on metadata such as the Title and Keywords of articles. The proposed technique has been assessed on two diverse datasets: one from the Journal of Universal Computer Science (J.UCS) and a benchmark dataset comprising articles published by the Association for Computing Machinery (ACM). The proposed technique yields encouraging results in contrast to the state-of-the-art techniques in the literature.
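A generic metadata-only baseline of the kind being compared here, not the paper's proposed technique: TF-IDF features over Title and Keywords with a one-vs-rest linear classifier:

    # Multilabel classification from article metadata (Title + Keywords).
    # Toy data; a real run would use the J.UCS or ACM records.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    docs = ["Content-Centric Networking caching; CCN, cache, popularity",
            "Soil salinity mapping from hyperspectral data; remote sensing"]
    labels = [["Networks", "Caching"], ["Remote Sensing"]]

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(labels)                 # binary label matrix
    X = TfidfVectorizer().fit_transform(docs)     # metadata text features

    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(mlb.inverse_transform(clf.predict(X)))  # per-article label sets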
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41105013, 41375028, and 61271106), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK2011122), and the Key Laboratory of Meteorological Observation and Information Processing Scientific Research Fund of Jiangsu Province, China (Grant No. KDXS1205).
Abstract: To reconstruct missing data in total electron content (TEC) observations, a new method is proposed based on empirical orthogonal function (EOF) decomposition and the eigenvalues themselves. It is a self-adaptive EOF decomposition that needs no prior information, and the error of the reconstructed data can be estimated. An interval quartering algorithm and a cross-validation algorithm are used to compute the optimal number of EOFs for reconstruction; the interval quartering algorithm reduces the computation time. Application of the data interpolating empirical orthogonal functions (DINEOF) method to real data demonstrates that it can reconstruct the TEC map with high accuracy and can be employed in real-time systems in future work.
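A minimal DINEOF-style gap filler illustrating the core idea: iterate a truncated EOF (SVD) reconstruction, updating only the missing entries. In practice the mode count would be chosen by the paper's interval quartering and cross-validation; it is fixed here for brevity:

    # Iterative EOF reconstruction of gaps in a TEC map (DINEOF-style sketch).
    import numpy as np

    def dineof_fill(field, n_eof=3, n_iter=50):
        data = field.copy()
        mask = np.isnan(data)
        data[mask] = np.nanmean(field)          # initial guess for gaps
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(data, full_matrices=False)
            recon = (U[:, :n_eof] * s[:n_eof]) @ Vt[:n_eof]
            data[mask] = recon[mask]            # update only missing entries
        return data

    tec = np.random.rand(24, 72)                # placeholder TEC map (UT x lon)
    tec[np.random.rand(*tec.shape) < 0.1] = np.nan
    filled = dineof_fill(tec)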
Abstract: The content-ignorant clustering method has advantages over content-based methods in both time and space complexity. In this paper, the authors introduce a unified expanding method for content-ignorant web page clustering that mines the "click-through" log, aiming to solve the problem that the "click-through" log is sparse. The relationship between two expanded nodes is also defined and optimized. Analysis and experiment show that the new method outperforms the standard content-ignorant method, and it can also work without iterative clustering.
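A toy illustration of the content-ignorant idea: pages are related only through the queries that led to clicks on them, with no use of page text. The paper's expansion step against log sparsity is not reproduced here:

    # Cluster pages by shared click-through queries (union-find).
    from collections import defaultdict

    clicks = [("cheap flights", "travel.com"), ("cheap flights", "flights.net"),
              ("flight deals", "flights.net"), ("python docs", "python.org")]

    query_pages = defaultdict(set)
    for query, page in clicks:
        query_pages[query].add(page)

    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Pages co-clicked under the same query fall into one cluster.
    for pages in query_pages.values():
        pages = list(pages)
        for p in pages[1:]:
            union(pages[0], p)

    clusters = defaultdict(set)
    for _, page in clicks:
        clusters[find(page)].add(page)
    print(list(clusters.values()))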
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) under Grant 61671081, in part by the Funds for International Cooperation and Exchange of NSFC under Grant 61720106007, in part by the 111 Project under Grant B18008, in part by the Beijing Natural Science Foundation under Grant 4172042, and in part by the Fundamental Research Funds for the Central Universities under Grant 2018XKJC01.
Abstract: As a named-data-based, clean-slate future Internet architecture, Content-Centric Networking (CCN) uses entirely different protocols and communication patterns from the host-to-host IP network. In CCN, communication is wholly driven by the data consumer: consumers send Interest packets carrying the content name rather than a host network address. Its in-network caching, Interest packet aggregation, and hop-by-hop communication pose unique challenges to the provisioning of Internet applications, where the traditional IP approach no longer works well. This paper presents a comprehensive survey of state-of-the-art application research related to the CCN architecture. Our main aims in this survey are (a) to identify the advantages and drawbacks of CCN architectures for application provisioning; (b) to discuss the challenges and opportunities of service provisioning in CCN architectures; and (c) to further encourage deeper thinking about design principles for future Internet architectures from the perspective of upper-layer applications.
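A compact sketch of the Interest pipeline the survey builds on, using the standard CCN structures (Content Store, Pending Interest Table, Forwarding Information Base); the code itself is illustrative:

    # Interest handling at a CCN node: serve from cache, else aggregate in
    # the PIT, else forward via the FIB; Data is cached and fanned out.
    class CCNNode:
        def __init__(self, fib):
            self.cs = {}         # Content Store: name -> cached data
            self.pit = {}        # PIT: name -> set of requesting faces
            self.fib = fib       # FIB: name prefix -> upstream face

        def on_interest(self, name, face):
            if name in self.cs:
                return ("data", name, self.cs[name], face)  # cache hit
            if name in self.pit:
                self.pit[name].add(face)                    # aggregate, no re-send
                return ("aggregated", name)
            self.pit[name] = {face}
            upstream = next((f for p, f in self.fib.items()
                             if name.startswith(p)), None)
            return ("forward", name, upstream)

        def on_data(self, name, data):
            self.cs[name] = data                            # in-network caching
            faces = self.pit.pop(name, set())
            return [("data", name, data, f) for f in faces] # fan out downstream

    node = CCNNode({"/video/": "face-up"})
    print(node.on_interest("/video/clip1", "face-A"))       # forwarded upstream
    print(node.on_interest("/video/clip1", "face-B"))       # aggregated in PIT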
Abstract: Digital educational content is gaining importance as an incubator of pedagogical methodologies in formal and informal online educational settings. Its educational efficiency depends directly on its quality; however, educational content is more than information and data. This paper presents a new data quality framework for assessing digital educational content used for teaching in distance learning environments. The model relies on the ISO/IEC 25000 series quality standard and, besides providing the mechanisms for multi-faceted quality assessment, it also supports organizations that design, create, manage, and use educational content with quality tools (expressed as quality metrics and measurement methods) to provide a more efficient distance education experience. The model describes the quality characteristics of educational material content using data and software quality characteristics.
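As a toy illustration of "quality metrics and measurement methods", an overall score could be a weighted mean over measured quality characteristics; the characteristics, weights, and values below are invented for the example, not taken from the framework:

    # Weighted aggregation of quality characteristic scores into one value.
    # Characteristic names echo ISO/IEC 25000 vocabulary; numbers are made up.
    weights = {"accuracy": 0.3, "completeness": 0.3,
               "understandability": 0.2, "portability": 0.2}
    measured = {"accuracy": 0.9, "completeness": 0.7,
                "understandability": 0.8, "portability": 0.6}

    score = sum(weights[c] * measured[c] for c in weights)
    print(f"overall content quality: {score:.2f}")   # 0.76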