Abstract: In view of the limitations of traditional measurement methods in the field of building information, such as complex operation, low timeliness and poor accuracy, a new approach combining three-dimensional scanning technology with BIM (Building Information Modeling) was discussed. Focusing on the efficient acquisition of building geometric information using fast-developing 3D point cloud technology, an improved deep learning-based 3D point cloud recognition method was proposed. The method optimises the network structure of RandLA-Net to meet large-scale point cloud processing requirements, and integrates the semantic and instance features of the point cloud to significantly improve recognition accuracy and provide a precise basis for BIM model reconstruction. In addition, a visual BIM model generation system was developed, which systematically transforms the point cloud recognition results into BIM component parameters, automatically constructs BIM models, and promotes the open sharing and secondary development of models. The research results not only effectively advance the automation of converting 3D point cloud data into refined BIM models, but also provide important technical support for building informatisation and the construction of smart cities, showing wide application potential and practical value.
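As a rough illustration of the last step described above (turning point cloud recognition results into BIM component parameters), the sketch below groups per-point semantic and instance predictions into simple component records. The class map, array shapes and bounding-box parameterisation are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): turning per-point recognition results
# into simple BIM component parameters. Assumes a point cloud of shape (N, 3)
# plus per-point semantic labels and instance ids, e.g. as produced by a
# RandLA-Net-style segmentation network.
import numpy as np

CLASS_NAMES = {0: "wall", 1: "door", 2: "window"}  # hypothetical label map

def instances_to_components(points, sem_labels, inst_ids):
    """Group points by instance and derive coarse component parameters."""
    components = []
    for inst in np.unique(inst_ids):
        mask = inst_ids == inst
        pts = points[mask]
        # Majority semantic label of the instance decides the component type.
        label = np.bincount(sem_labels[mask]).argmax()
        lo, hi = pts.min(axis=0), pts.max(axis=0)   # axis-aligned extent
        components.append({
            "type": CLASS_NAMES.get(int(label), "unknown"),
            "origin": lo.tolist(),                  # insertion point
            "size": (hi - lo).tolist(),             # width, depth, height
            "n_points": int(mask.sum()),
        })
    return components

# Example with random data standing in for real network output.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 5, size=(1000, 3))
sem = rng.integers(0, 3, size=1000)
inst = rng.integers(0, 4, size=1000)
print(instances_to_components(pts, sem, inst)[0])
```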
Funding: The National Natural Science Foundation of China (No. 60403027).
Abstract: To integrate reasoning and text retrieval, the architecture of a semantic search engine supporting several kinds of queries is proposed, and the semantic search engine Smartch is designed and implemented. Based on a logical reasoning process and a graphic user-defined process, Smartch provides four kinds of search services: basic search, concept search, graphic user-defined query and association relationship search. Experimental results show that, compared with a traditional search engine, the recall and precision of Smartch are improved. Graphic user-defined queries can accurately locate the information a user needs, and association relationship search can find complicated relationships between concepts. Smartch can perform intelligent functions based on ontology inference.
Funding: The National Natural Science Foundation of China (No. 60503020, 60373066, 60403016, 60425206), the Natural Science Foundation of Jiangsu Higher Education Institutions (No. 04KJB520096), and the Doctoral Foundation of Nanjing University of Posts and Telecommunication (No. 0302).
Abstract: A rough set based corner classification neural network, the Rough-CC4, is presented to solve document classification problems such as representing documents of different sizes, document feature selection and document feature encoding. In the Rough-CC4, documents are described by the equivalence classes of approximate words. This reduces the number of dimensions representing the documents, which alleviates the precision problems caused by differing document sizes and also blurs the differences caused by approximate words. The Rough-CC4 also introduces a binary encoding method through which the importance of a document relative to each equivalence class is encoded. This encoding greatly improves the precision of the Rough-CC4 and reduces its space complexity. The Rough-CC4 can be used for automatic document classification.
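The following toy sketch illustrates the encoding idea only: documents are described by hypothetical equivalence classes of approximate words, and a binary code marks which classes are important in a document. The word groups and the importance threshold are invented; the actual Rough-CC4 encoding is defined in the paper.

```python
# Illustrative sketch only (not the Rough-CC4 implementation): describing a
# document by equivalence classes of near-synonymous words and producing a
# binary code per class. The word groups and the threshold are made-up examples.
from collections import Counter

EQUIV_CLASSES = {                      # hypothetical equivalence classes
    "vehicle": {"car", "auto", "automobile"},
    "price":   {"price", "cost", "fee"},
    "engine":  {"engine", "motor"},
}

def encode(document_tokens, threshold=0.05):
    """Return a binary code: 1 if a class is 'important' in the document."""
    counts = Counter(document_tokens)
    total = max(len(document_tokens), 1)
    code = []
    for words in EQUIV_CLASSES.values():
        share = sum(counts[w] for w in words) / total
        code.append(1 if share >= threshold else 0)
    return code

doc = "the car engine cost is high but the motor runs well".split()
print(encode(doc))   # [1, 1, 1]
```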
Abstract: In order to implement semantic mapping for database metasearch engines, a system is proposed which uses an ontology as the organizational form of information and records new words that do not appear in the ontology. When a new word's frequency of use exceeds a threshold, it is added to the ontology; ontology expansion is implemented in this way. The search process supports "and" and "or" Boolean operations. To improve the mapping speed of the system, a memory module is added which memorizes users' recent query information and automatically learns the user's query interests during mapping, so that the search order of the instance tables can be decided dynamically. Experiments show that these measures noticeably reduce the average mapping time.
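A minimal sketch of the threshold-based ontology expansion described above; the seed terms and the threshold value are placeholders, and the memory module for query interests is not modelled here.

```python
# Hedged sketch (details are assumptions, not the paper's code): recording
# out-of-ontology words and promoting them into the ontology once their
# usage frequency exceeds a threshold.
from collections import defaultdict

class ExpandingOntology:
    def __init__(self, seed_terms, threshold=5):
        self.terms = set(seed_terms)
        self.threshold = threshold
        self.unknown_counts = defaultdict(int)

    def observe(self, word):
        """Count an unknown word; add it to the ontology when frequent enough."""
        if word in self.terms:
            return
        self.unknown_counts[word] += 1
        if self.unknown_counts[word] >= self.threshold:
            self.terms.add(word)
            del self.unknown_counts[word]

onto = ExpandingOntology({"author", "title"}, threshold=3)
for w in ["isbn", "isbn", "publisher", "isbn"]:
    onto.observe(w)
print("isbn" in onto.terms)   # True: frequency reached the threshold
```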
Funding: The National Natural Science Foundation of China (No. 71171048).
Abstract: The maximal entropy ordered weighted averaging (ME-OWA) operator is used to aggregate metasearch engine results, and its new analytical solution is also applied. Within the context of the OWA operator, methods for aggregating metasearch engine results fall into two kinds: one has a unique solution, and the other has multiple solutions. The proposed method not only has crisp weights, but also provides multiple aggregation results for decision makers to choose from. To demonstrate the application of the ME-OWA operator to aggregating metasearch engine results, an example is given which shows the results obtained by the ME-OWA operator method and by the minimax linear programming (minimax-LP) method. A comparison between the two methods is also made; the results show that the ME-OWA operator yields nearly the same aggregation results as the minimax-LP method.
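To make the ME-OWA idea concrete, the sketch below solves the maximal-entropy weight problem numerically with SciPy for a chosen orness level and applies the weights to sorted relevance scores. The paper applies an analytical solution; the orness level and scores here are made up for illustration.

```python
# Illustrative numerical sketch of maximal-entropy OWA weights: maximize the
# entropy -sum(w*ln w) subject to sum(w) = 1 and a given orness level
# alpha = sum((n-i)/(n-1) * w_i).
import numpy as np
from scipy.optimize import minimize

def me_owa_weights(n, alpha):
    coeff = (n - 1 - np.arange(n)) / (n - 1)          # orness coefficients
    neg_entropy = lambda w: np.sum(w * np.log(np.clip(w, 1e-12, None)))
    cons = [
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},
        {"type": "eq", "fun": lambda w: coeff @ w - alpha},
    ]
    w0 = np.full(n, 1.0 / n)
    res = minimize(neg_entropy, w0, bounds=[(0, 1)] * n, constraints=cons,
                   method="SLSQP")
    return res.x

w = me_owa_weights(n=4, alpha=0.7)
scores = np.array([0.9, 0.6, 0.4, 0.2])               # already sorted descending
print(w, float(w @ scores))                            # OWA-aggregated relevance
```

The weights come out as a decreasing sequence whose orness equals the requested level, which is the crisp-weight behaviour the abstract refers to.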
Funding: The National Natural Science Foundation of China (No. 60425206, 90412003) and the Foundation of Excellent Doctoral Dissertation of Southeast University (No. YBJJ0502).
Abstract: A new approach to automated ontology mapping using web search engines (such as Google) is presented. Based on lexico-syntactic patterns, hyponymy relationships between ontology concepts are obtained from the web via search engines, and an initial candidate mapping set consisting of ontology concept pairs is generated. According to the concept hierarchies of the ontologies, a set of production rules is proposed to delete concept pairs inconsistent with the ontology semantics from the initial candidate mapping set and to add concept pairs consistent with the ontology semantics to it. Finally, ontology mappings are chosen from the candidate mapping set automatically with a mapping selection rule based on mutual information. Experimental results show that the F-measure can reach 75% to 100%, and the approach can effectively accomplish mapping between ontologies.
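A compressed sketch of the filtering step: the hit counts a web search engine would return are stubbed with a dictionary, a Hearst-style pattern flags hyponymy candidates, and pointwise mutual information ranks concept pairs. The patterns, counts and thresholds are illustrative assumptions; the paper's production rules are not reproduced.

```python
# Sketch under stated assumptions: search-engine hit counts are stubbed, and
# candidate concept pairs are ranked by pointwise mutual information.
import math

# Pretend hit counts returned by a search engine for quoted queries.
HITS = {
    '"car"': 900_000, '"vehicle"': 1_200_000,
    '"vehicles such as cars"': 4_000,          # Hearst-style hyponymy pattern
    '"car" "vehicle"': 200_000,
}
TOTAL_PAGES = 10_000_000                        # assumed corpus size

def pmi(a, b):
    """Pointwise mutual information between two concepts from co-occurrence hits."""
    p_a = HITS[f'"{a}"'] / TOTAL_PAGES
    p_b = HITS[f'"{b}"'] / TOTAL_PAGES
    p_ab = HITS[f'"{a}" "{b}"'] / TOTAL_PAGES
    return math.log(p_ab / (p_a * p_b))

def is_hyponym_candidate(sub, sup, min_hits=1_000):
    return HITS.get(f'"{sup}s such as {sub}s"', 0) >= min_hits

print(is_hyponym_candidate("car", "vehicle"), round(pmi("car", "vehicle"), 2))
```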
Funding: Supported by the Global Change Research Program of China under Project 2012CB955603, the Natural Science Foundation of China under Project 41076115, the National Basic Research Program of China under Project 2009CB723903, the Public Science and Technology Research Funds of the Ocean under Project 201005019, and the National High-Tech Research and Development Program of China under Project 2008AA121701.
Abstract: In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. VV-Ocean has three core functions: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and wind fields are considered in this simulation. On this platform the oil spilling process is abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time, interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting weather over the oceans, and serving marine tourism. Finally, further technological improvements to VV-Ocean are discussed.
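As an illustration of the particle abstraction mentioned above (not VV-Ocean's actual renderer or physics), the sketch below advects oil particles under an assumed current, buoyancy and wind drag at the surface; all field values and coefficients are invented.

```python
# Minimal particle-advection sketch: oil particles rise from the sea bottom and
# drift under an assumed current plus a wind-induced surface drag.
import numpy as np

def step(pos, dt=1.0, buoyancy=0.3, wind=(2.0, 0.0), wind_drag=0.03):
    """Advance particle positions (N, 3) = (x, y, depth<=0) by one time step."""
    current = np.stack([0.3 * np.ones(len(pos)),            # m/s eastward current
                        0.1 * np.sin(0.01 * pos[:, 0]),      # weak meandering
                        np.zeros(len(pos))], axis=1)
    pos = pos + dt * current
    pos[:, 2] = np.minimum(pos[:, 2] + dt * buoyancy, 0.0)   # rise toward surface
    at_surface = pos[:, 2] >= 0.0
    pos[at_surface, :2] += dt * wind_drag * np.asarray(wind)  # wind only at surface
    return pos

particles = np.column_stack([np.random.randn(500, 2) * 5.0,
                             np.full(500, -50.0)])            # released at 50 m depth
for _ in range(200):
    particles = step(particles)
print(particles[:, 2].max())   # 0.0 once particles have surfaced
```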
Funding: Projects 61102106 and 61102105 supported by the National Natural Science Foundation of China; Project 2013M530148 supported by the China Postdoctoral Science Foundation; Project HEUCF140809 supported by the Fundamental Research Funds for the Central Universities, China; Project LBH-Z13054 supported by the Heilongjiang Postdoctoral Fund, China.
Abstract: In order to effectively solve combinatorial optimization problems, a membrane-inspired quantum bee colony optimization (MQBCO) is proposed for scientific computing and engineering applications. The proposed MQBCO algorithm applies membrane computing theory to quantum bee colony optimization (QBCO), an effective discrete optimization algorithm. The global convergence of MQBCO is proved by Markov theory, and the validity of MQBCO is verified on classical benchmark functions. The proposed MQBCO algorithm is then used to solve decision engine problems of a cognitive radio system. By hybridizing QBCO with membrane computing theory, the quantum state and observation state of the quantum bees can evolve well within the membrane structure. Simulation results for the cognitive radio system show that the proposed decision engine method is superior to traditional intelligent decision engine algorithms in terms of convergence, precision and stability. Simulation experiments under different communication scenarios illustrate that the balance between the three objective functions under the adapted parameter configuration is consistent with the weights of the three normalized objective functions.
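The sketch below shows the generic quantum-individual mechanics that QBCO-style algorithms build on: qubit angles, an observation step that collapses them to a binary solution, and rotation toward the best solution found so far. It is a toy on a OneMax objective; the membrane structure and the paper's actual update rules are not modelled.

```python
# Hedged sketch of the quantum-individual idea behind QBCO-style algorithms
# (not the MQBCO implementation). Rotation step size is an assumption.
import numpy as np

rng = np.random.default_rng(1)

def observe(theta):
    """Collapse qubit angles to a binary string: P(bit=1) = sin^2(theta)."""
    return (rng.random(theta.shape) < np.sin(theta) ** 2).astype(int)

def fitness(bits):                       # toy objective: OneMax
    return bits.sum()

def rotate_toward(theta, best_bits, step=0.05):
    """Move angles so observation becomes more likely to reproduce best_bits."""
    direction = np.where(best_bits == 1, 1.0, -1.0)
    return np.clip(theta + step * direction, 0.0, np.pi / 2)

n_bees, n_bits = 10, 16
thetas = np.full((n_bees, n_bits), np.pi / 4)       # unbiased superposition
best_bits, best_fit = None, -1
for _ in range(100):
    for i in range(n_bees):
        bits = observe(thetas[i])
        if fitness(bits) > best_fit:
            best_bits, best_fit = bits, fitness(bits)
        thetas[i] = rotate_toward(thetas[i], best_bits)
print(best_fit)     # approaches 16 as the angles converge
```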
Funding: Supported by the Special Financed Project from the Basic Public Scientific Research Expenses of Central Scientific Institutes (No. GY02-2007B03).
Abstract: We investigated the distribution of four enzymes involved in the immune response of Apostichopus japonicus. We collected samples of the tentacles, papillate podium, integument, respiratory tree, and digestive tract and stained them for acid phosphatase (ACP), alkaline phosphatase (AKP), non-specific esterase (NSE) and peroxidase (POD) activity. The distribution and content of ACP, AKP, NSE, and POD differed among the tissues. The coelomic epithelium of the tentacle, papillate podium, and integument and the mucous layer of the respiratory tree were positive for ACP activity. The coelomic epithelium and cuticular layer of the tentacle, papillate podium, and integument and the mucous layer and tunica externa of the respiratory tree and digestive tract stained positive or weakly positive for AKP activity. Almost all the epithelial tissues stained positive, strongly positive, or very strongly positive for NSE activity. The cuticular layer of the tentacle and integument and the mucous layer, tunica submucosa, and tunica externa of the respiratory tree and digestive tract stained positive for POD activity. We hypothesize that these enzymes play a role in the immune response in A. japonicus.
Abstract: Internet-based technologies, such as mobile payments, social networks, search engines and cloud computing, will lead to a paradigm shift in the financial sector. Besides indirect financing via commercial banks and direct financing through securities markets, a third way to conduct financial activities will emerge, which we call "internet finance". This paper presents a detailed analysis of payment, information processing and resource allocation under internet finance.
Abstract: In the last two decades of the 20th century, there was increasing interest in and emphasis on the study of Hong Kong literature among both academics and the general public in Hong Kong. Recognizing the emerging need for resources on Hong Kong literature, the University Library System of the Chinese University of Hong Kong set up the Hong Kong Literature Database (the "Database"), which in 2000 was the first Chinese literature database on the Internet. The paper will examine how the database is constructed using XML technology and a metadata schema. The database also employs Unicode UTF-8 as its internal code. A mapping table for traditional and simplified Chinese characters was created based on Unihan and is used behind the scenes, so that a user can input either traditional or simplified Chinese characters and retrieval will return matches in both. Currently 65% of the journals are processed with OCR technology so that full-text searching is possible; the Chinese OCR technology will be examined in greater detail. Special features of the Database, such as a page-by-page browse mode, position highlighting for full-page newspapers, and linking of tables of contents and book jackets from the Library catalogue, are described. The paper will also bring out the problem of massive downloading and compare the state-of-the-art technologies and their shortcomings. This paper shows how the Hong Kong Literature Database facilitates future collaboration and data exchange by using open standards, a shareable structure and the latest technology.
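A small sketch of the traditional/simplified lookup idea: the actual Database uses a Unihan-derived mapping table behind the scenes, whereas the three character pairs below are just examples for illustration.

```python
# Toy sketch of query expansion across scripts: expand a query so either
# script form matches records stored in the other. The pairs are examples only.
TRAD_TO_SIMP = {"馬": "马", "龍": "龙", "書": "书"}          # example pairs
SIMP_TO_TRAD = {s: t for t, s in TRAD_TO_SIMP.items()}

def variants(query):
    """Return the query rendered in both scripts for retrieval."""
    simp = "".join(TRAD_TO_SIMP.get(ch, ch) for ch in query)
    trad = "".join(SIMP_TO_TRAD.get(ch, ch) for ch in query)
    return {query, simp, trad}

print(variants("龍馬"))   # {'龍馬', '龙马'}
```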
Funding: Supported by the National Science Foundation of China (61422303), the National Key Technology R&D Program (2015BAF22B02), and the Development Fund for Shanghai Talents.
Abstract: Equipment selection for industrial processes usually requires the extensive participation of industrial experts and technologists, which causes a serious waste of resources. This work presents an equipment selection knowledge base system for the industrial styrene process (S-ESKBS) based on ontology technology. The system comprises a low-level knowledge base and a top-level interactive application. As the core of the S-ESKBS, the low-level knowledge base consists of the equipment selection ontology library, the equipment selection rule set and the Pellet inference engine. The top-level interactive application is implemented on top of it and includes a parsing storage layer, an inference query layer and a client application layer. Case studies of equipment selection for an analytical column and an alkylation reactor in the industrial styrene process are demonstrated to show the characteristics and implementability of the S-ESKBS.
Abstract: The idea of a positional inverted index is exploited for indexing a graph database. The main idea is the use of hash tables to prune the considerable portion of the graph database that cannot contain the answer set. These tables are implemented using column-based techniques and are used to store the graphs of the database, frequent subgraphs and the neighborhoods of nodes. For exact checking of the remaining graphs, a vertex invariant is used in the isomorphism test, which can be implemented in parallel. The evaluation results indicate that the proposed method outperforms existing methods.
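The filter-then-verify idea can be shown compactly: an inverted index from cheap subgraph features to graph ids prunes graphs that cannot contain the query, and only the surviving candidates would go on to the exact isomorphism test. The feature choice below (edge label pairs) and the tiny database are assumptions for illustration; the paper's column-based tables and vertex invariants are not reproduced.

```python
# Filter-then-verify sketch in plain Python: graphs whose feature sets miss any
# query feature are pruned before any expensive isomorphism check.
from collections import defaultdict

def edge_features(edges):
    """Use sorted label pairs of edges as cheap subgraph features."""
    return {tuple(sorted(e)) for e in edges}

# Inverted index: feature -> set of graph ids containing it.
database = {
    1: [("C", "O"), ("C", "C")],
    2: [("C", "N"), ("N", "O")],
    3: [("C", "O"), ("C", "N")],
}
index = defaultdict(set)
for gid, edges in database.items():
    for f in edge_features(edges):
        index[f].add(gid)

def candidates(query_edges):
    """Intersect posting lists; only these graphs need exact checking."""
    sets = [index.get(f, set()) for f in edge_features(query_edges)]
    return set.intersection(*sets) if sets else set()

print(candidates([("O", "C"), ("N", "C")]))   # {3}: graphs 1 and 2 are pruned
```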
Funding: The National Natural Science Foundation of China, the Major National Science and Technology Projects of the New Generation Broadband Wireless Mobile Communication Network, and the National High Technology Research and Development Program of China (863 Program).
Abstract: The goal of web service composition is to choose an optimal scheme, according to Quality of Service (QoS), that selects service instances in a distributed network. The network is clustered around web services such as ontologies, algorithms and rule engines with similar functions and interfaces. In this scheme, the web services acting as candidate services form a distributed model in which global service information cannot be obtained. The model is used to choose instances according to local QoS information during service composition. QoS matrices are used to record and compare the instance paths and then choose a better one. Simulation results show that our approach achieves a trade-off between efficiency and quality.
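A minimal sketch of path comparison with local QoS values; the QoS attributes (response time, availability), their aggregation rules and the weights are assumptions rather than the paper's model.

```python
# Sketch: aggregate per-instance QoS along a composition path and compare
# candidate paths by a weighted score, using only locally known values.
def path_qos(instances):
    """Aggregate QoS: response times add up, availabilities multiply."""
    total_rt = sum(i["rt"] for i in instances)
    availability = 1.0
    for i in instances:
        availability *= i["avail"]
    return total_rt, availability

def score(instances, w_rt=0.5, w_avail=0.5, rt_scale=100.0):
    rt, avail = path_qos(instances)
    return w_rt * (1.0 - min(rt / rt_scale, 1.0)) + w_avail * avail

path_a = [{"rt": 20, "avail": 0.99}, {"rt": 35, "avail": 0.95}]
path_b = [{"rt": 15, "avail": 0.90}, {"rt": 30, "avail": 0.90}]
best = max([path_a, path_b], key=score)
print(score(path_a), score(path_b), best is path_a)   # path_a wins here
```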
Funding: Supported by Project 71202155 of the National Science Funds for Distinguished Young Scientists of China.
Abstract: Due to its rapid development, the Internet has become the main field for brand building. Under this circumstance, the image of a brand is always consistent with consumers' perception. Therefore, this study uses text mining of search engine data to explore the categories of brand archetypes based on Brand Personality Theory from the perspective of the Internet. The results show that 12 brand archetypes, including caregiver, sage, hero, innocent, dominator, creator, vitality, explorer, stylish woman, lover, cooperator, and vogue gentleman, have a high degree of explanatory power. A further case study verifies the reasonableness and effectiveness of the classification standard.
Abstract: Transliteration editors are essential for keying Indian language scripts into the computer using a QWERTY keyboard. Applications of transliteration editors in the context of the Universal Digital Library (UDL) include the entry of metadata and dictionaries for Indian languages. In this paper we propose a simple approach for building transliteration editors for Indian languages using Unicode and by taking advantage of its rendering engine. We demonstrate the usefulness of the Unicode-based approach for building transliteration editors for Indian languages, and report its advantages: it needs little maintenance and few entries in the mapping table, and it makes it easy to add new features, such as adding letters to the transliteration scheme. We demonstrate the transliteration editor for 9 Indian languages and also explain how this approach can be adapted for Arabic scripts.
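The mapping-table idea can be illustrated with a toy greedy longest-match transliterator; the few Devanagari entries below are examples only, and a real editor would rely on a fuller table plus the platform's Unicode rendering engine.

```python
# Toy transliteration sketch (not the UDL editor): greedy longest-match
# replacement of Latin keys using a small, invented mapping table.
MAP = {"k": "क्", "a": "अ", "ka": "क", "aa": "आ", "m": "म्", "ma": "म"}

def transliterate(text, table=MAP):
    out, i = [], 0
    keys = sorted(table, key=len, reverse=True)     # try longest keys first
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(table[k])
                i += len(k)
                break
        else:                                        # unknown character: keep as-is
            out.append(text[i])
            i += 1
    return "".join(out)

print(transliterate("kama"))   # कम (ka + ma)
```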
Abstract: This paper starts with a description of the present status of the Digital Library of India Initiative. As part of this initiative, a large corpus of scanned text is available in many Indian languages, which has stimulated a vast amount of research in Indian language technology, briefly described in this paper. Besides the Digital Library of India Initiative, which is part of the Million Books to the Web Project initiated by Prof. Raj Reddy of Carnegie Mellon University, there are a few more initiatives in India aimed at taking the heritage of the country to the Web. This paper presents the future directions for the Digital Library of India Initiative, both in terms of the growing collection and the technical challenges that managing such a large collection poses.
Funding: Supported by the National High Technology Research and Development Program of China (No. 2003AA103510).
Abstract: Recent advances in broadband technology require forwarding engines to handle packets at over 10 gigabits per second. In this paper, we present a high-speed forwarding pipeline which completes all of the routing and forwarding tasks in a pipelined fashion. We also establish an analytical model of the pipeline with which one can evaluate key performance parameters of the forwarding engine, such as forwarding rate and forwarding delay. We find that the pipeline scales well and can forward unicast packets at speeds of up to 40 Gbit/s.
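A back-of-the-envelope sketch of the kind of performance parameters such a model evaluates; the minimum frame size, pipeline depth and clock rate below are assumptions, not figures from the paper.

```python
# Relate line rate to the required forwarding rate and pipeline delay,
# using an assumed minimum Ethernet frame and a hypothetical 8-stage pipeline.
LINE_RATE_BPS = 40e9            # 40 Gbit/s target
MIN_FRAME_BITS = (64 + 20) * 8  # 64-byte frame + 20 bytes preamble/IFG on the wire

packets_per_second = LINE_RATE_BPS / MIN_FRAME_BITS
print(f"worst-case forwarding rate: {packets_per_second / 1e6:.1f} Mpps")

STAGES = 8                      # hypothetical pipeline depth
CLOCK_HZ = 200e6                # hypothetical stage clock
print(f"per-packet budget: {1e9 / packets_per_second:.1f} ns")
print(f"pipeline fill delay: {STAGES / CLOCK_HZ * 1e9:.0f} ns")
```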
Funding: Financially supported by the Research Project of China Petroleum & Chemical Corporation (112066).
Abstract: The effect of base oils, sulfur-containing multi-functional additives and dispersants in formulated diesel lubricants on lead corrosion was evaluated with a self-established high-temperature corrosion bench test. Test lead coupons were analyzed by XPS to determine the resulting surface chemistry. The results showed a close correlation between the oxidation stability of the base oil blend and the lead corrosion of formulated diesel lubricants. Zinc dialkyldithiophosphate (ZDDP) and zinc dialkyldithiocarbamate (ZDDC) formed different protective films on the lead coupon surfaces, and the amount of protective film formed is the main factor affecting the degree of lead corrosion. The glassy zinc phosphate protective film formed by ZDDP is more effective than the zinc sulfide film formed by ZDDC. The interaction between dispersants and ZDDP had a significant impact on lead corrosion.