Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 71704016), the Key Scientific Research Fund of Hunan Provincial Education Department of China (Grant No. 19A006), and the Enterprise Strategic Management and Investment Decision Research Base of Hunan Province (Grant No. 19qyzd03).
Abstract: Big data knowledge, such as customer demands and consumer preferences, is among the crucial external knowledge that firms need for new product development in the big data environment. Prior research has focused on the profit of big data knowledge providers rather than the profit and pricing schemes of knowledge recipients. This research addresses this theoretical gap and uses theoretical and numerical analysis to compare the profitability of two pricing schemes commonly used by knowledge recipients: subscription pricing and pay-per-use pricing. We find that: (1) the subscription price of big data knowledge has no effect on the optimal time of the knowledge transaction within the same pricing scheme, whereas the usage ratio of the big data knowledge does affect it; the smaller the usage ratio, the earlier the knowledge transaction is conducted; (2) big data knowledge with a higher update rate brings greater profits to the firm under both the subscription and the pay-per-use pricing schemes; (3) a knowledge recipient will choose the knowledge that yields a higher market share growth rate regardless of the pricing scheme it adopts, and under the pay-per-use scheme firms can select more efficient knowledge by adjusting the usage ratio of knowledge according to their economic conditions. The model and findings in this paper can help knowledge recipient firms select the optimal pricing method and enhance future new product development performance.
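To give a concrete feel for the comparison the paper performs, here is a minimal numerical sketch in Python. All functional forms and parameter values (the discount rate r, growth rate, usage_ratio, sub_price, unit_price, and the linear revenue uplift) are assumptions made for illustration only; they are not the paper's model.

```python
# Illustrative comparison of subscription vs. pay-per-use pricing for
# big data knowledge. All functional forms and parameters below are
# assumed for illustration; they are not taken from the paper.

def recipient_profit(scheme, T=24, r=0.01, growth=0.02,
                     usage_ratio=0.6, sub_price=50.0, unit_price=1.2):
    """Discounted profit of a knowledge recipient over T periods."""
    base_revenue = 100.0
    profit = 0.0
    for t in range(1, T + 1):
        # Assumed: knowledge raises market share, and hence revenue.
        revenue = base_revenue * (1 + growth) ** t
        if scheme == "subscription":
            cost = sub_price                       # flat fee, full access
            uplift = revenue * 0.10                # assumed full-usage gain
        else:                                      # pay-per-use
            cost = unit_price * usage_ratio * 60   # pay only for usage
            uplift = revenue * 0.10 * usage_ratio  # gain scales with usage
        profit += (uplift - cost) / (1 + r) ** t   # discount to present
    return profit

for scheme in ("subscription", "pay-per-use"):
    print(f"{scheme:>13}: {recipient_profit(scheme):8.2f}")
```

Sweeping usage_ratio or the price parameters in such a sketch is one way to explore how a recipient's economic conditions shift the preferred scheme.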
Funding: Supported by NSFC (Grant No. 71373032), the Natural Science Foundation of Hunan Province (Grant No. 12JJ4073), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 11C0029), the Educational Economy and Financial Research Base of Hunan Province (Grant No. 13JCJA2), and the Project of China Scholarship Council for Overseas Studies (201208430233, 201508430121).
Abstract: A decision model of knowledge transfer is presented on the basis of the characteristics of knowledge transfer in a big data environment. This model can determine the weight of knowledge transferred from another enterprise or from a big data provider. Numerous simulation experiments are implemented to test the efficiency of the optimization model. The results show that as the weight of knowledge from the big data knowledge provider increases, the total discounted expectation of profits increases and the transfer cost is reduced. The calculated results are in accordance with the actual economic situation. The optimization model can provide useful decision support for enterprises in a big data environment.
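The following compact sketch shows the shape of such a weight decision: sweep the share w of knowledge sourced from a big data provider and evaluate discounted profit and transfer cost. The cost and profit functions are assumptions chosen to mirror the reported trend (cheaper transfer and higher productivity from the provider), so the monotone result is built in by construction; the paper's actual model is not reproduced here.

```python
# Sketch of the weight decision: w is the share of knowledge taken from
# a big data provider, 1-w from another enterprise. The cost/profit
# forms are illustrative assumptions, not the paper's model.

def evaluate(w, T=10, r=0.05):
    """Discounted profit and total transfer cost for provider weight w."""
    profit, cost = 0.0, 0.0
    for t in range(1, T + 1):
        transfer_cost = 10.0 * (1 - w) + 4.0 * w   # provider assumed cheaper
        period_profit = 20.0 * (0.8 * (1 - w) + 1.0 * w) - transfer_cost
        profit += period_profit / (1 + r) ** t
        cost += transfer_cost
    return profit, cost

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    p, c = evaluate(w)
    print(f"w={w:.2f}  discounted profit={p:7.2f}  transfer cost={c:6.1f}")
```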
Abstract: In order to compete in the global manufacturing market, agility is the only possible way to respond to fragmented market segments and frequently changing customer requirements. However, manufacturing agility can only be attained through the deployment of knowledge. Embedding knowledge into a CAD system to form a knowledge intensive CAD (KIC) system is one way to enhance the design capability of a manufacturing company. The most difficult phase in developing a KIC system is capitalizing a huge amount of legacy data to form a knowledge database. In the past, such a capitalization process could only be done manually or semi-automatically. In this paper, a five-step model for automatic design knowledge capitalization through the use of data mining is proposed, and details of how to select, verify, and benchmark the performance of an appropriate data mining algorithm for a specific design task are also discussed. A case study concerning the design of a plastic toaster casing is used as an illustration of the proposed methodology, and it was found that the average absolute error of the predictions for the most appropriate algorithm is within 17%.
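The selection-and-benchmarking step described above typically amounts to comparing candidate algorithms by a prediction-error metric such as the average absolute error. The sketch below illustrates that step with scikit-learn; the synthetic data and the three candidate regressors are stand-ins, not the paper's five-step model or its legacy design data.

```python
# Illustrative benchmarking step: choose the data mining algorithm with
# the lowest cross-validated mean absolute error. Synthetic data stands
# in for legacy design records; the candidate models are stand-ins too.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))                              # design parameters
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 200)    # design outcome

candidates = {
    "decision tree": DecisionTreeRegressor(max_depth=5),
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "linear": LinearRegression(),
}
for name, model in candidates.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:>13}: mean absolute error = {mae:.3f}")
```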
Abstract: Important Dates
Submission due: November 15, 2005
Notification of acceptance: December 30, 2005
Camera-ready copy due: January 10, 2006
Workshop Scope
Intelligence and Security Informatics (ISI) can be broadly defined as the study of the development and use of advanced information technologies and systems for national and international security-related applications. The First and Second Symposiums on ISI were held in Tucson, Arizona, in 2003 and 2004, respectively. In 2005, the IEEE International Conference on ISI was held in Atlanta, Georgia. These ISI conferences have brought together academic researchers, law enforcement and intelligence experts, and information technology consultants and practitioners to discuss their research and practice related to various ISI topics, including ISI data management, data and text mining for ISI applications, terrorism informatics, deception detection, terrorist and criminal social network analysis, crime analysis, monitoring and surveillance, policy studies and evaluation, and information assurance, among others. We continue this stream of ISI conferences by organizing the Workshop on Intelligence and Security Informatics (WISI'06) in conjunction with the Pacific Asia Conference on Knowledge Discovery and Data Mining (PAKDD'06). WISI'06 will provide a stimulating forum for ISI researchers in Pacific Asia and other regions of the world to exchange ideas and report research progress. The workshop also welcomes contributions dealing with ISI challenges specific to the Pacific Asian region.
Funding: Supported by the Key Program of the Natural Science Foundation of China (Grant No. 61631018), the Anhui Provincial Natural Science Foundation (Grant No. 1908085MF177), and Huawei Technology Innovative Research (YBN2018095087).
Abstract: The 5th generation (5G) mobile networks have been put into service across a number of markets, aiming to provide subscribers with high bit rates, low latency, high capacity, and many new services and vertical applications. Research and development on 6G have therefore been put on the agenda. Regarding the demands and characteristics of future 6G, artificial intelligence (A), big data (B), and cloud computing (C) will play indispensable roles in achieving the highest efficiency and the largest benefits. Interestingly, the initials of these three aspects remind us of the significance of vitamin ABC to the human body. In this article we specifically expound on the three elements of ABC and the relationships among them. We analyze the basic characteristics of wireless big data (WBD) and the corresponding technical actions in A and C, namely the high-dimensional feature and spatial separation, the predictive ability, and the characteristics of knowledge. Based on the abilities of WBD, a new learning approach for wireless AI called the knowledge+data-driven deep learning (KD-DL) method and a layered computing architecture of the mobile network integrating cloud/edge/terminal computing are proposed, and their achievable efficiency is discussed. This progress will be conducive to the development of future 6G.
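One common reading of "knowledge+data-driven" learning is that a physical (knowledge) model supplies a baseline prediction and a data-driven model learns only the residual. The sketch below illustrates that idea on a toy path-loss problem; the channel model, features, and use of plain least squares (rather than deep learning) are all assumptions made to keep the example self-contained, and it is not the paper's KD-DL method.

```python
# Minimal knowledge+data hybrid: a textbook path-loss model (knowledge)
# gives a baseline, and a least-squares model learns the residual from
# data. The toy setup and all numbers are purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
d = rng.uniform(10, 200, size=300)                       # distances (m)
true_loss = 40 + 22 * np.log10(d) + 3 * np.sin(d / 15)   # unknown truth

knowledge = 40 + 20 * np.log10(d)                        # biased physics model
residual = true_loss - knowledge                         # what data explains

# Data-driven part: fit the residual with simple least-squares features.
F = np.column_stack([np.ones_like(d), np.log10(d), np.sin(d / 15)])
coef, *_ = np.linalg.lstsq(F, residual, rcond=None)

pred = knowledge + F @ coef                              # knowledge + data
print("RMSE knowledge only :", np.sqrt(np.mean((true_loss - knowledge) ** 2)))
print("RMSE knowledge+data :", np.sqrt(np.mean((true_loss - pred) ** 2)))
```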
Funding: Supported by the Basic Condition Platform of the Chinese Ministry of Science and Technology - Data Sharing Infrastructure of Earth System Science (2005DKA32300), the Youth Innovation Fund of the State Oceanic Administration (2012621), the China Polar Science Strategy Research Fund Project (20120106), the State Oceanic Administration Polar Science Key Lab Open Research Fund (KP201110), and the Key Laboratory of Digital Ocean, SOA (KLD0201408).
Abstract: In order to archive and utilize the information from Chinese polar expeditions to the greatest extent, we design a novel knowledge repository, in which an automatic query model based on neural networks is proposed and a data call trigger is established to keep data consistent between polar data-sharing platforms. In this repository, anybody can contribute by creating or updating entries, subject to version control and an authority control mechanism. In this paper, the data sources, data processes, and network structure of this repository are described, and the keyword extraction and decision support operations are detailed. The analysis of this design's feasibility and applicability indicates that the knowledge repository is open, unique, and authoritative for Chinese polar expeditions.
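The abstract does not specify how keywords are extracted from entries, so the sketch below uses TF-IDF weighting as an assumed stand-in; the sample entry texts are invented for illustration.

```python
# Keyword extraction sketch for repository entries using TF-IDF; the
# paper's actual method is unspecified, so TF-IDF is an assumed stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer

entries = [
    "Sea ice thickness observations during the Chinese Antarctic expedition",
    "Aurora imaging and magnetometer data from the Arctic Yellow River station",
    "Ice core drilling and snow chemistry at Dome A",
]
vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(entries)
terms = vec.get_feature_names_out()

for i in range(len(entries)):
    row = tfidf[i].toarray().ravel()
    top = row.argsort()[::-1][:3]          # three highest-weight terms
    print(f"entry {i}: {[terms[j] for j in top]}")
```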
Abstract: With respect to the mathematical structure presupposed by the Entity-Roles Model, a first-order (three-valued) logic language is constructed. A world to be modelled can be logically specified in this language. The integrity constraints on the database and
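The abstract does not state which three-valued semantics the language adopts; Kleene's strong three-valued connectives, shown below, are one common choice and are given here only as an illustration.

```python
# Kleene's strong three-valued connectives: an illustrative choice of
# semantics, since the abstract does not specify which one is used.
T, U, F = 1.0, 0.5, 0.0        # true, unknown, false

def t_not(a):    return 1.0 - a
def t_and(a, b): return min(a, b)
def t_or(a, b):  return max(a, b)

names = {1.0: "T", 0.5: "U", 0.0: "F"}
for a in (T, U, F):
    print(f"NOT {names[a]} = {names[t_not(a)]}")
for a in (T, U, F):
    for b in (T, U, F):
        print(f"{names[a]} AND {names[b]} = {names[t_and(a, b)]},  "
              f"{names[a]} OR {names[b]} = {names[t_or(a, b)]}")
```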
Funding: Project of the Tianjin Water Resources Bureau (No. KY2007-09).
Abstract: The marking scheme method removes the low scores of the contractor's attributes given by experts when the overall score is calculated, which may result in a contractor with latent risks winning the project. In order to remedy this defect of the marking scheme method, an outlier detection model, one task of knowledge discovery in data, is established on the basis of the sum of similarity coefficients. The model is then applied to historical score data from tender evaluations for civil projects in Tianjin, China, from which the outliers among the scores of the contractors' attributes can be detected and analyzed. Consequently, risk pre-warning can be carried out, and advice can be given to employers to prevent latent risks and help them improve the success rate of bidding projects.
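The principle named in the abstract, flagging the record whose total similarity to all others is lowest, can be sketched as follows. Cosine similarity is an assumed stand-in for the paper's similarity coefficient, and the score matrix is invented for illustration.

```python
# Outlier detection by the sum of similarity coefficients: the score
# row with the lowest total similarity to the others is most outlying.
# Cosine similarity is an assumed stand-in; the scores are made up.
import numpy as np

scores = np.array([            # experts' scores for five contractors
    [8.5, 9.0, 8.8, 9.1],
    [8.7, 8.9, 9.0, 8.8],
    [8.6, 9.1, 8.7, 9.0],
    [5.1, 9.5, 4.8, 9.6],      # uneven profile: a candidate outlier
    [8.8, 8.8, 8.9, 9.0],
])

norm = scores / np.linalg.norm(scores, axis=1, keepdims=True)
sim = norm @ norm.T                    # pairwise cosine similarities
total = sim.sum(axis=1) - 1.0          # exclude self-similarity
print("similarity sums:", np.round(total, 3))
print("most outlying contractor:", int(total.argmin()))
```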
Abstract: Tsinghua Science and Technology was founded in 1996. It is an international academic journal sponsored by Tsinghua University and published bimonthly. The journal aims to present up-to-date scientific achievements in computer science and other information technology fields. It is indexed by Ei and other abstracting and indexing services. Since 2013, the journal has committed to open access at the IEEE Xplore Digital Library.
Funding: This work was supported by the European Commission, Directorate-General for Research and Innovation [ConnectinGEO grant #641538, ECOPOTENTIAL grant #641762, ERA-PLANET/GEOEssential grant #689443].
Abstract: In 2015, the 2030 Agenda for Sustainable Development was adopted to end poverty, protect the planet, and ensure that all people enjoy peace and prosperity. The year after, 17 Sustainable Development Goals (SDGs) officially came into force. In 2015, GEO (Group on Earth Observations) declared its support for the implementation of the SDGs. The GEO Global Earth Observation System of Systems (GEOSS) required a change of paradigm, moving from a data-centric approach to a more knowledge-driven one. To this end, the GEO System-of-Systems (SoS) framework may refer to the well-known Data-Information-Knowledge-Wisdom (DIKW) paradigm. In the context of an Earth Observation (EO) SoS, a set of main elements are recognized as connecting links for generating knowledge from EO and non-EO data, e.g. social and economic datasets. These elements are: Essential Variables (EVs), Indicators and Indexes, and Goals and Targets. Their generation and use require the development of a SoS knowledge base (KB), whose management process has evolved the GEOSS Software Ecosystem into a GEOSS Social Ecosystem. This process includes: collecting, formalizing, publishing, accessing, using, and updating knowledge. The ConnectinGEO project analysed the knowledge necessary to recognize, formalize, access, and use EVs. The analysis identified GEOSS gaps and provided recommendations on supporting global decision-making within and across different domains.
Funding: Supported by the Spanish Ministry of Innovation and Science through REALIDAD: Gestion, Analisis y Explotacion Eficiente de Datos Vinculados, under Grant No. TIN2011-25840.
Abstract: The problem of matching schemas or ontologies consists of providing corresponding entities in two or more knowledge models that belong to the same domain but have been developed separately. Nowadays there are many techniques and tools for addressing this problem; however, the complex nature of the matching problem makes existing solutions for real situations not fully satisfactory. The Google Similarity Distance has appeared recently. Its purpose is to mine knowledge from the Web using the Google search engine in order to semantically compare text expressions. Our work consists of developing a software application for validating results discovered by schema and ontology matching tools using the philosophy behind this distance. Moreover, we are interested in using not only Google but also other popular search engines with this similarity distance. The results reveal three main facts. Firstly, some web search engines can help us to validate semantic correspondences satisfactorily. Secondly, there are significant differences among the web search engines. Thirdly, the best results are obtained when using combinations of the web search engines that we have studied.
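The distance behind this validation approach is the Normalized Google Distance of Cilibrasi and Vitányi: NGD(x, y) = (max{log f(x), log f(y)} - log f(x, y)) / (log N - min{log f(x), log f(y)}), where f(.) is the engine's hit count for a term, f(x, y) the joint hit count, and N the number of indexed pages. The sketch below computes it from given hit counts; fetching the counts from a real search engine API is omitted, and the counts and index size are made-up illustrative values.

```python
# Normalized Google Distance (Cilibrasi & Vitanyi) from hit counts.
# The counts and index size N below are hypothetical illustration values.
from math import log

def ngd(fx, fy, fxy, n):
    """NGD from hit counts fx, fy, joint count fxy, and index size n."""
    lx, ly, lxy = log(fx), log(fy), log(fxy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

N = 25e9   # assumed number of indexed pages
print("car vs automobile:", round(ngd(9.2e8, 1.4e8, 9.0e7, N), 3))
print("car vs banana    :", round(ngd(9.2e8, 3.1e8, 1.2e6, N), 3))
```

A smaller NGD indicates semantically closer terms, which is what lets hit counts from several engines vote on whether a discovered correspondence is plausible.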
Abstract: In data analysis tasks, we are often confronted with very high dimensional data. Based on the purpose of a data analysis study, feature selection finds and selects the relevant subset of features from the original features. Many feature selection algorithms have been proposed in classical data analysis, but very few in symbolic data analysis (SDA), which is an extension of classical data analysis that uses rich objects instead of simple matrices. A symbolic object, compared to the data used in classical data analysis, can describe not only individuals but also, most of the time, a cluster of individuals. In this paper we present an unsupervised feature selection algorithm on probabilistic symbolic objects (PSOs), with the purpose of discrimination. A PSO is a symbolic object that describes a cluster of individuals by modal variables, using a relative frequency distribution associated with each value. This paper presents new dissimilarity measures between PSOs, which are used as feature selection criteria, and explains how to reduce the complexity of the algorithm by using the discrimination matrix.
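Since each modal variable of a PSO carries a relative frequency distribution, comparing two PSOs amounts to aggregating a per-variable distance between distributions. The paper's own measures are not given in the abstract, so the sketch below uses the Hellinger distance as an illustrative choice, with invented toy distributions.

```python
# Dissimilarity between two probabilistic symbolic objects: compare the
# relative frequency distribution of each modal variable and average.
# Hellinger distance is an illustrative stand-in for the paper's measures.
from math import sqrt

def hellinger(p, q):
    """Hellinger distance between two discrete distributions."""
    return sqrt(0.5 * sum((sqrt(a) - sqrt(b)) ** 2 for a, b in zip(p, q)))

# Two PSOs (clusters) described by the same two modal variables.
pso1 = {"color": [0.7, 0.2, 0.1], "size": [0.5, 0.5]}
pso2 = {"color": [0.1, 0.3, 0.6], "size": [0.4, 0.6]}

d = sum(hellinger(pso1[v], pso2[v]) for v in pso1) / len(pso1)
print("mean per-variable dissimilarity:", round(d, 3))
```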