This study introduces the Orbit Weighting Scheme (OWS), a novel approach aimed at enhancing the precision and efficiency of Vector Space information retrieval (IR) models, which have traditionally relied on weighting schemes like tf-idf and BM25. These conventional methods often struggle with accurately capturing document relevance, leading to inefficiencies in both retrieval performance and index size management. OWS proposes a dynamic weighting mechanism that evaluates the significance of terms based on their orbital position within the vector space, emphasizing term relationships and distribution patterns overlooked by existing models. Our research focuses on evaluating OWS's impact on model accuracy using Information Retrieval metrics such as Recall, Precision, Interpolated Average Precision (IAP), and Mean Average Precision (MAP). Additionally, we assess OWS's effectiveness in reducing the inverted index size, which is crucial for model efficiency. We compare OWS-based retrieval models against models using other schemes, including tf-idf variations and BM25Delta. Results reveal OWS's superiority, achieving 54% Recall, 81% MAP, and a notable 38% reduction in the inverted index size. This highlights OWS's potential for optimizing retrieval processes and underscores the need for further research in this underrepresented area to fully leverage OWS's capabilities in information retrieval methodologies.
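Since the OWS weighting itself is not spelled out in this abstract, the sketch below only illustrates two standard ingredients it is contrasted with and evaluated by: a tf-idf baseline weighting and the Mean Average Precision (MAP) metric. It is a minimal Python illustration with hypothetical data and names, not the paper's implementation.

```python
import math
from collections import Counter

def tfidf_weights(docs):
    """Compute tf-idf weights for each tokenized document (the kind of baseline OWS is compared against)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (1 + math.log(c)) * math.log(n / df[t]) for t, c in tf.items()})
    return weights

def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean of precision@k taken at each relevant hit."""
    hits, precisions = 0, []
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / max(len(relevant_ids), 1)

def mean_average_precision(runs):
    """MAP over all queries; `runs` is a list of (ranked_ids, relevant_ids) pairs."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example (hypothetical data):
docs = [["orbit", "weighting", "scheme"], ["vector", "space", "model"], ["inverted", "index", "size"]]
print(tfidf_weights(docs)[0])
print(mean_average_precision([(["d1", "d2", "d3"], {"d1", "d3"})]))
```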
A hybrid model based on the combination of keywords and concepts is put forward. The hybrid model is built on the vector space model and a probabilistic reasoning network. It not only exploits the advantages of keyword retrieval and concept retrieval but also compensates for their shortcomings. Its parameters can be adjusted for different usages in order to obtain the best retrieval result, as our experiments confirm.
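One common way to realize the kind of keyword/concept combination described above is a linear interpolation of the two retrieval scores with a tunable weight; the sketch below assumes that form (the parameter alpha and the interpolation itself are illustrative assumptions, not taken from the paper).

```python
def hybrid_score(keyword_score: float, concept_score: float, alpha: float = 0.6) -> float:
    """Interpolate a keyword-matching score with a concept-level (probabilistic reasoning) score.
    alpha is a tunable parameter of the kind the abstract alludes to; alpha=1 falls back to pure keyword retrieval."""
    return alpha * keyword_score + (1.0 - alpha) * concept_score

# Example: a document scoring 0.8 on keywords but only 0.3 on concepts.
print(hybrid_score(0.8, 0.3, alpha=0.5))  # 0.55
```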
The major problem with most current information retrieval models is that individual words provide unreliable evidence about the content of texts. When the document is short, e.g. when only the abstract is available, this word-use variability has a substantial impact on Information Retrieval (IR) performance. To address the problem, a new approach to short-document retrieval named the Reference Document Model (RDM) is put forward in this letter. RDM obtains the statistical semantics of the query and the document through pseudo feedback from reference documents for both. The contributions of this model are three-fold: (1) pseudo feedback for both the query and the document; (2) building the query model and the document model from reference documents; (3) flexible indexing units, which can be any linguistic elements such as documents, paragraphs, sentences, n-grams, terms, or characters. For short-document retrieval, RDM achieves significant improvements over classical probabilistic models on the ad hoc retrieval task on Text REtrieval Conference (TREC) test sets. Results also show that the shorter the document, the better RDM performs.
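A minimal sketch of one plausible reading of the RDM idea follows: the term distribution of a short query or document is smoothed with the distribution of its top-ranked reference documents obtained by pseudo feedback. The smoothing form and the parameters top_k and lam are assumptions for illustration only, not the paper's formulation.

```python
from collections import Counter

def reference_model(text_tokens, reference_docs, top_k=5, lam=0.5):
    """Smooth the term distribution of a short text with the distribution of its
    top-k reference documents (pseudo feedback), as one reading of the RDM idea."""
    own = Counter(text_tokens)
    own_total = sum(own.values()) or 1
    ref = Counter()
    for doc in reference_docs[:top_k]:
        ref.update(doc)
    ref_total = sum(ref.values()) or 1
    vocab = set(own) | set(ref)
    return {t: lam * own[t] / own_total + (1 - lam) * ref[t] / ref_total for t in vocab}

# Toy example: a very short "document" enriched by two reference documents.
model = reference_model(["short", "abstract"], [["short", "text", "retrieval"], ["abstract", "retrieval"]])
print(sorted(model.items()))
```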
To eliminate the mismatch between the words of relevant documents and a user's query, and the serious negative effect it has on retrieval performance, a query expansion method based on a new term co-occurrence representation is put forward by analyzing the process of producing a query. Expansion terms are selected according to their correlation with the whole query, and the position information between terms is also taken into account. Experimental results on Text REtrieval Conference (TREC) collections show that the proposed method consistently improves over a language modeling method without expansion by 5%-19%. Compared with pseudo feedback, the most popular query expansion approach, the precision of the proposed method is competitive.
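As an illustration of co-occurrence-based expansion-term selection, the sketch below scores candidate terms by how often, and how closely, they co-occur with query terms inside a fixed window; the window size and the 1/distance weighting stand in for the paper's "position information" and are assumptions, not its actual formula.

```python
from collections import defaultdict

def cooccurrence_scores(query_terms, corpus, window=5):
    """Score candidate expansion terms by windowed co-occurrence with query terms,
    weighting closer positions more (one reading of using position information)."""
    scores = defaultdict(float)
    qset = set(query_terms)
    for doc in corpus:
        for i, term in enumerate(doc):
            if term not in qset:
                continue
            lo, hi = max(0, i - window), min(len(doc), i + window + 1)
            for j in range(lo, hi):
                cand = doc[j]
                if j != i and cand not in qset:
                    scores[cand] += 1.0 / abs(j - i)   # closer terms contribute more
    return sorted(scores.items(), key=lambda kv: -kv[1])

corpus = [["query", "expansion", "improves", "retrieval", "precision"],
          ["pseudo", "feedback", "expansion", "terms"]]
print(cooccurrence_scores(["expansion"], corpus)[:3])
```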
Based on the theory of fuzzy logic, a method that uses fuzzy weight coefficients and reliability to fuse hand geometry and palm print information for identity discrimination is proposed. Experiments show that the method is useful and effective: its identification rate is up to 90%, which is 20%-30% higher than that obtained using hand geometry or palm prints alone. It can therefore be widely used in security-critical fields such as finance and access control.
Identity recognition by fusing fingerprint and palm print information is a developing area, and an effective way to address the low recognition rate and low stability of identification based on a single biometric characteristic. Based on the theory of fuzzy logic, we propose a method that uses fuzzy weight coefficients and reliability to fuse fingerprint and palm print information and achieve a high identification rate. Experiments prove the feasibility and effectiveness of this method, with an identification rate above 90%, which contributes useful experience to research on biometric identification.
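A minimal sketch of reliability-weighted score fusion in the spirit of the two abstracts above; the normalization of weights by reliability, the score range, and the decision threshold are illustrative assumptions rather than the papers' exact fusion rule.

```python
def fused_decision(fp_score, palm_score, fp_reliability=0.9, palm_reliability=0.8, threshold=0.5):
    """Reliability-weighted fusion of fingerprint and palm-print match scores (scores in [0, 1]).
    The weighting-by-reliability form is an illustrative assumption, not the papers' exact rule."""
    w_fp = fp_reliability / (fp_reliability + palm_reliability)
    w_palm = 1.0 - w_fp
    fused = w_fp * fp_score + w_palm * palm_score
    return fused, fused >= threshold

print(fused_decision(0.72, 0.55))   # e.g. a fused score of about 0.64 -> accept
```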
The concept of value of information (VOI) has been widely used in the oil industry when making decisions on the acquisition of new data sets for the development and operation of oil fields. The classical approach to VOI assumes that the data acquisition process produces crisp values, which are uniquely mapped onto one of the deterministic reservoir models representing the subsurface variability. However, subsurface reservoir data are not always crisp; they can also be fuzzy and may correspond to various reservoir models to different degrees. The classical approach to VOI may not, therefore, lead to the best decision regarding the need to acquire new data. Fuzzy logic, introduced in the 1960s as an alternative to classical logic, is able to manage the uncertainty associated with the fuzziness of data. In this paper, both classical and fuzzy theoretical formulations for VOI are developed and contrasted using inherently vague data. A case study, consistent with the future development of an oil reservoir, is used to compare the application of both approaches to the estimation of VOI. The results show that when the fuzzy nature of the data is included in the assessment, the value of the data decreases. In this case study, the assessment with crisp data and with fuzzy data changes the decision from "acquire" the additional data (in the former) to "do not acquire" it (in the latter). In general, different decisions are reached depending on whether the fuzzy nature of the data is considered during the evaluation. The implications of these results are significant in a domain such as the oil and gas industry, where investments are huge. This work strongly suggests the need to define the data as crisp or fuzzy, prior to implementing the VOI assessment, in order to select the right approach.
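For orientation, the classical (crisp) value-of-information calculation that the paper contrasts with its fuzzy variant is commonly written as the difference between the expected value of the optimal decision with and without the new data; the notation below is mine, not the paper's:

\[ \mathrm{VOI} \;=\; \sum_{j} p(d_j)\,\max_{a}\sum_{i} p(s_i \mid d_j)\, v(a, s_i) \;-\; \max_{a}\sum_{i} p(s_i)\, v(a, s_i), \]

where \(s_i\) are the possible subsurface (reservoir model) states, \(d_j\) the possible data outcomes, \(a\) the development decisions, and \(v(a,s_i)\) the value of taking decision \(a\) when state \(s_i\) holds; the data are worth acquiring when \(\mathrm{VOI}\) exceeds the acquisition cost.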
The classification of systems based on faults is studied. Knowledge representation and reasoning techniques for logically divisible systems are discussed comprehensively, and a method named the path information extremum diagnosis method (PIEDM) is then proposed. PIEDM considers the information of all nodes in a single step and therefore has high efficiency.
Sensorial information is very difficult to elicit, represent, and manage because of its complexity. Fuzzy logic provides an interesting means to deal with such information, since it allows us to represent imprecise, vague, or incomplete descriptions, which are very common in the management of subjective information. The aggregation methods provided by fuzzy logic are also useful for combining the characteristics of the various components of sensorial information.
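For concreteness, a few classical fuzzy aggregation operators that can combine membership degrees coming from different sensorial descriptors are sketched below (minimum, maximum, and an ordered weighted average); these are textbook operators chosen for illustration, not necessarily the ones used in the paper.

```python
def fuzzy_and(memberships):      # conjunctive aggregation (t-norm: minimum)
    return min(memberships)

def fuzzy_or(memberships):       # disjunctive aggregation (t-conorm: maximum)
    return max(memberships)

def owa(memberships, weights):   # ordered weighted average (compensatory aggregation)
    ordered = sorted(memberships, reverse=True)
    return sum(w * m for w, m in zip(weights, ordered))

panel = [0.8, 0.6, 0.9]          # e.g. membership degrees from three sensory descriptors
print(fuzzy_and(panel), fuzzy_or(panel), owa(panel, [0.5, 0.3, 0.2]))
```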
Multisensor information fusion technology is adopted to measure, in real time, four parameters closely related to the weld nugget size (welding current, electrode displacement, dynamic resistance, and welding time), so that much more of the original process information is obtained. In this way, the difficulty of indirectly measuring the weld nugget size in spot welding quality control is reduced and the stability of spot welding quality is improved. On this basis, two-dimensional fuzzy controllers are designed with the information fusion result as input and the thyristor control signal as output. Spot welding experiments indicate that this intelligent quality control method based on multisensor information fusion can compensate for the influence of variable factors in the welding process and ensure stable welding quality.
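The generic structure of such a two-dimensional fuzzy controller can be sketched as follows: two normalized inputs (here taken to be a fused error and its change), triangular membership functions, a rule table, and weighted-average defuzzification producing the control increment. All labels, ranges, and rule values below are hypothetical, not the paper's design.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical labels over a normalized error e and error change de in [-1, 1].
LABELS = {"N": (-1.5, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.5)}
# Hypothetical rule table: (e_label, de_label) -> crisp increment for the thyristor control signal.
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
         ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
         ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0}

def fuzzy_control(e, de):
    """Fuzzy inference with product firing strength and weighted-average defuzzification."""
    num = den = 0.0
    for (le, lde), u in RULES.items():
        w = tri(e, *LABELS[le]) * tri(de, *LABELS[lde])
        num += w * u
        den += w
    return num / den if den else 0.0

print(fuzzy_control(0.3, -0.1))   # small positive correction (0.1)
```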
This paper proposes a simple scheme for realizing one-qubit and two-qubit quantum gates, as well as multiqubit entanglement, based on dc-SQUID charge qubits by controlling their coupling to a 1D transmission line resonator (TLR). The TLR behaves effectively as a quantum data-bus mode of a harmonic oscillator, which offers several practical advantages, including strong coupling strength, reproducibility, immunity to 1/f noise, and suppressed spontaneous emission. In this protocol the data bus does not need to stay adiabatically in its ground state, which results not only in fast quantum operations but also in high-fidelity quantum information processing. The paper also elaborates the transfer process along the 1D transmission line.
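For reference, the coupling between a charge qubit and a 1D transmission line resonator is usually modeled, in the rotating-wave approximation, by a Jaynes-Cummings-type Hamiltonian; the standard textbook form is given below for orientation and is not copied from the paper:

\[ H = \hbar\omega_r\!\left(a^{\dagger}a + \tfrac{1}{2}\right) + \frac{\hbar\omega_q}{2}\,\sigma_z + \hbar g\left(a^{\dagger}\sigma^{-} + a\,\sigma^{+}\right), \]

where \(\omega_r\) is the resonator frequency, \(\omega_q\) the qubit transition frequency, \(g\) the coupling strength, and \(a\), \(\sigma^{\pm}\) the resonator and qubit ladder operators; the resonator mode plays the role of the quantum data bus that mediates the qubit-qubit interaction.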
Traditional information hiding methods embed secret information by modifying the carrier, which inevitably leaves traces of modification on the carrier and makes it hard to resist detection by steganalysis algorithms. To address this problem, the concept of coverless information hiding was proposed: it can effectively resist steganalysis because it uses unmodified natural stego-carriers to represent and convey confidential information. However, the state-of-the-art method has a low hiding capacity, which makes it less appealing. Because the pixel values of different regions of molecular structure images of material (MSIM) usually differ, this paper proposes a novel coverless information hiding method based on MSIM, which uses the average pixel value of a sub-image to represent the secret information according to a mapping between pixel-value intervals and secret information. In addition, we employ a pseudo-random label sequence to determine the positions of the sub-images, improving the security of the method, and the histogram of the bag-of-words (BOW) model is used to determine the number of sub-images in an image that convey secret information. Moreover, to improve retrieval efficiency, we built a multi-level inverted index structure. The proposed method can also be applied to other natural images. Compared with the state of the art, experimental results and analysis show that our method performs better in terms of anti-steganalysis, security, and capacity.
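The core of the hiding scheme described above is the mapping from a sub-image's average pixel value to a bit string via pixel-value intervals; a minimal sketch of that mapping follows (the number of intervals, i.e. bits per block, is an assumption for illustration).

```python
def bits_from_subimage(pixels, bits_per_block=3):
    """Map the mean pixel value of a sub-image (values 0-255) onto a bit string,
    using 2**bits_per_block equal-width intervals, in the spirit of the abstract."""
    mean = sum(pixels) / len(pixels)
    n_intervals = 2 ** bits_per_block
    interval = min(int(mean * n_intervals / 256), n_intervals - 1)
    return format(interval, f"0{bits_per_block}b")

# A sub-image whose mean (200) lands in interval 6 of 8 -> bits '110'.
print(bits_from_subimage([198, 205, 197, 200]))
```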
To solve the problem of chaining distributed geographic information Web services (GI Web services), this paper provides an ontology-based method. With this method, a semantic service description is achieved by semantically annotating the elements of a Web Service Description Language (WSDL) document with concepts from a geographic ontology, so that a common understanding of service semantics is built between customers and providers of Web services. Based on the decomposition and formalization of customer requirements, the discovery, composition, and execution of GI Web services are explained in detail, and a chain of GI Web services is then built and used to satisfy the customer's requirement. Finally, an example based on the Web Ontology Language for Services (OWL-S) is provided to test the feasibility of the method.
During a two-day strategic workshop in February 2018, 22 information retrieval researchers met to discuss the future challenges and opportunities within the field. The outcome is a list of potential research directions, project ideas, and challenges. This report describes the major conclusions reached during the workshop. A key result is that we need to open our minds and embrace a broader IR field by rethinking the definitions of information, retrieval, user, system, and evaluation in IR. By providing detailed discussions on these topics, this report is expected to inspire IR researchers in both academia and industry and to help the future growth of the IR research community.
Purpose – The purpose of this paper is to show how description logics (DLs) can be applied to formalizing the information bearing capability (IBC) of paths in entity-relationship (ER) schemata. Design/methodology/approach – The approach follows and extends the idea presented in Xu and Feng (2004), which applies DLs to classifying paths in an ER schema. To verify whether the information content of a data construct (e.g. a path) covers a semantic relation (which formulates a piece of information requirement), the principle of IBC under the source-bearer-receiver framework is presented. It is observed that the IBC principle can be formalized by constructing DL expressions and examining their constructors (e.g. quantifiers). Findings – Description logic can be used as a tool to describe the meanings represented by paths in an ER schema and to formalize their IBC. Criteria for identifying the distinguishability of data constructs are also discovered by examining quantifiers in the DL expressions of paths of an ER schema. Originality/value – This paper focuses on classifying paths in data schemas and verifying their formalized IBC by using DLs and the IBC principle. It offers a new point of view for evaluating data representation, which looks at the information borne by data rather than at data dependencies.
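As an illustration of the kind of DL expression such an approach builds for an ER path (the schema and names below are hypothetical, not the paper's example), a path Student-takes-Course whose information content covers the requirement "every student takes some course" can be written as a concept inclusion with an existential quantifier,

\[ \mathsf{Student} \sqsubseteq \exists\, \mathsf{takes}.\mathsf{Course}, \]

whereas a universal restriction such as \(\mathsf{Student} \sqsubseteq \forall\, \mathsf{takes}.\mathsf{Course}\) expresses a different constraint (everything a student takes is a course); this is why examining the quantifiers of a path's DL expression matters when judging its information bearing capability.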
The use of agent technology in dynamic environments is growing rapidly as one of the powerful technologies, and providing the benefits of the Intelligent Information Agent technique to massive open online courses is important from several perspectives, including the rapid growth of MOOC environments and their focus on static rather than updated information. One of the main problems in such environments is keeping the information up to date with the needs of the student who is interacting at each moment. Using this technology can provide more flexible information, less wasted time, and hence greater gains in learning. This paper presents an Intelligent Topic-Based Information Agent that offers up-to-date knowledge, including various types of resources, to students. Using the dominant meaning method, the agent searches the Internet, controls the metadata coming from the Internet, filters it, and presents it as categorized content lists. Two experiments were conducted on the Intelligent Topic-Based Information Agent: one measures the improvement in retrieval effectiveness, and the other measures the impact of the agent on learning. The results indicate that our query expansion methodology yields a considerable improvement in retrieval effectiveness in all categories of the Google Web Search API, and that there is a positive impact on the performance of the learning sessions.
Taking autonomous and driverless driving as the research object, we discuss and define the intelligent high-precision map. The intelligent high-precision map is considered a key link of future travel, a carrier of real-time perception of traffic resources over the entire space-time range, and the criterion for the operation and control of the whole driving process of the vehicle. As a new form of map, it has distinctive features in terms of cartographic theory and application requirements compared with traditional electronic navigation maps. It is therefore necessary to analyze and discuss its key features and problems to promote research on, and application of, the intelligent high-precision map. Accordingly, we propose an information transmission model based on cartographic theory combined with the wheeled robot's control flow in practical applications. Next, we put forward the data logic structure of the intelligent high-precision map and analyze its application in autonomous driving. Then, we summarize the computing mode of "Crowdsourcing + Edge-Cloud Collaborative Computing" and carry out a key technical analysis of how to improve the quality of crowdsourced data. We also analyze effective application scenarios of the intelligent high-precision map in the future. Finally, we present some thoughts and suggestions for the future development of this field.
At present, the anti-noise property and the information-leakage-resistant property are two major concerns for quantum dialogue (QD). In this paper, two anti-noise QD protocols without information leakage are presented by using entanglement swapping between two logical Bell states. One works well over a collective-dephasing noise channel, while the other is effective over a collective-rotation noise channel. The negative influence of noise is erased by using logical Bell states as the traveling quantum states, and the problem of information leakage is avoided by swapping entanglement between two logical Bell states. In addition, only Bell state measurements are used for decoding, rather than four-qubit joint measurements.
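For reference, the decoherence-free logical qubits typically used against these two collective noises, and from which logical Bell states are built, can be written as follows; this is the standard construction from the literature and may differ in labeling from the paper. For collective-dephasing noise,

\[ |0_{dp}\rangle = |01\rangle, \qquad |1_{dp}\rangle = |10\rangle, \]

and for collective-rotation noise,

\[ |0_{r}\rangle = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \qquad |1_{r}\rangle = \tfrac{1}{\sqrt{2}}\left(|01\rangle - |10\rangle\right). \]

A logical Bell state is then, for example, \(|\phi^{+}_{L}\rangle = \tfrac{1}{\sqrt{2}}\left(|0_{L}0_{L}\rangle + |1_{L}1_{L}\rangle\right)\), a four-physical-qubit state whose entanglement can be swapped while both traveling logical qubits experience the same collective noise.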
Today's automation industry is driven by the need for increased productivity, higher flexibility, and higher individuality, and is characterized by tailor-made, increasingly complex control solutions. In the processing industry, logic controller design is often a manual, experience-based, and thus error-prone procedure. Typically, the specifications are given as a set of informal requirements and a technical flowchart, and both are translated directly into control code. This paper proposes a method in which the control program is constructed as a sequential function chart (SFC) by transforming the requirements via clearly defined intermediate formats. For the purpose of analysis, the resulting SFC can be translated algorithmically into timed automata. Rigorous verification can then be used to determine whether all specifications are satisfied, provided a formal model of the plant is available to compose with the automata model of the logic controller (LC).
Without the geometry of light and the logic of the photon, observer-observability forms a paradox in modern science, truth equilibrium finds no unification, and mind-light-matter unity is unreachable in spacetime. Consequently, quantum mechanics has been shrouded in mysteries that prevent it from reaching the definable causality needed for a general-purpose analytical quantum computing paradigm. Ground-0 Axioms are introduced as an equilibrium-based, dynamic, bipolar set-theoretic unification of the first principles of science and the second law of thermodynamics. Related literature is critically reviewed to justify the self-evident nature of the Ground-0 Axioms. A historical misinterpretation by the founding fathers of quantum mechanics is identified and corrected. This disproves spacetime geometries (including but not limited to Euclidean and Hilbert spaces) as the geometries of light, and truth-based logics (including but not limited to bra-ket quantum logic) as the logics of the photon. Backed by logically definable causality and the Dirac 3-polarizer experiment, bipolar quantum geometry (BQG) and bipolar dynamic logic (BDL) are identified as the geometry of light and the logic of the photon, respectively, and wave-particle complementarity is shown to be less fundamental than bipolar complementarity. As a result, the Ground-0 Axioms lead to a geometrical and logical illumination of the quantum and classical worlds as well as the physical and mental worlds. With logical resolutions to the EPR and Schrödinger's cat paradoxes, an analytical quantum computing paradigm named quantum intelligence (QI) is introduced. It is shown that QI makes mind-light-matter unity and quantum-digital compatibility logically reachable for quantum neuro-fuzzy AI machinery with groundbreaking applications. It is contended that the Ground-0 Axioms open a new era of science and philosophy: the era of mind-light-matter unity, in which human-level white-box AI & QI is logically prompted to join Einstein's grand unification to foster major scientific advances.