Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, the temporal graph convolution module in the evolutionary representation unit captures the structural dependencies within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed for auto-regressive modeling. Furthermore, static attributes of entities in entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is used to predict events in the next time step. On multiple real-world datasets, including Freebase13 (FB13), Freebase15K (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, results across multiple evaluation metrics show that the proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, validating its effectiveness and robustness.
Cross-document relation extraction (RE), as an extension of information extraction, requires integrating information from multiple documents retrieved from open domains, which contain a large number of irrelevant or confusing noisy texts. Previous studies focus on attention mechanisms that connect different text features through semantic similarity. However, similarity-based methods cannot reliably distinguish valid information from highly similar retrieved documents. How to design an effective algorithm for aggregated reasoning over confusing information with similar features remains an open issue. To address this problem, we design a novel local-to-global causal reasoning (LGCR) network for cross-document RE, which enables efficient distinguishing, filtering, and global reasoning over complex information from a causal perspective. Specifically, we propose a local causal estimation algorithm to estimate the causal effect; to our knowledge, this is the first attempt to use causal reasoning, independent of feature similarity, to distinguish confusing from valid information in cross-document RE. Furthermore, based on the causal effect, we propose a causality-guided global reasoning algorithm to filter the confusing information and achieve global reasoning. Experimental results under both the closed and open settings of the large-scale dataset CodRED demonstrate that our LGCR network significantly outperforms the state-of-the-art methods and validate the effectiveness of causal reasoning in processing confusing information.
Sentiment analysis of Chinese classical poetry has become a prominent topic in historical and cultural tracing, ancient literature research, and related fields. However, existing research on this task is relatively limited and does not effectively address problems such as the weak feature extraction ability on poetry text, which leads to low model performance in sentiment analysis of Chinese classical poetry. In this research, we propose SA-Model, a poetic sentiment analysis model. SA-Model first extracts text vector information and fuses it through Bidirectional Encoder Representations from Transformers with Whole Word Masking, extended (BERT-wwm-ext), and Enhanced Representation through Knowledge Integration (ERNIE) to enrich the text vector information. Second, it incorporates multiple encoders to extract text features at multiple levels, thereby increasing text feature information, improving the accuracy of text semantics, and enhancing the model's learning and generalization capabilities. Finally, a multi-feature fusion poetry sentiment analysis model is constructed. The feasibility and accuracy of the model are validated on an ancient poetry sentiment corpus. Compared with baseline models, the experimental findings indicate that SA-Model improves the accuracy of text semantics and hence the capability of poetry sentiment analysis.
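The fusion step described above can be illustrated with a minimal sketch: two encoder outputs are concatenated into one enriched text vector. The toy vectors and the `fuse` helper are hypothetical stand-ins, not the SA-Model implementation, which fuses dense tensors from the actual BERT-wwm-ext and ERNIE models.

```python
# Minimal sketch of fusing two sentence encodings, as the SA-Model does with
# BERT-wwm-ext and ERNIE outputs. Fixed toy vectors stand in for real encoder
# outputs (the values are illustrative, not from either model).

def fuse(vec_a, vec_b):
    """Concatenate two text vectors into one enriched representation."""
    if len(vec_a) != len(vec_b):
        raise ValueError("encoders are assumed to share a hidden size here")
    return vec_a + vec_b  # list concatenation -> doubled dimensionality

bert_vec = [0.1, 0.4, -0.2]   # stand-in for a BERT-wwm-ext sentence vector
ernie_vec = [0.3, -0.1, 0.5]  # stand-in for an ERNIE sentence vector
fused = fuse(bert_vec, ernie_vec)
```

In a real pipeline the fused vector would then pass through the stacked encoders the abstract mentions; concatenation is only one of several plausible fusion choices.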
The traditional recommendation algorithm, represented by collaborative filtering, is the most classical and most widely used algorithm in industry, and most book recommendation systems also use it. However, collaborative filtering cannot handle data sparsity well: it uses only shallow features of the interactions between readers and books, so it fails to learn high-level abstractions of the relevant attribute features of readers and books, leading to a decline in recommendation performance. Given these problems, this study uses deep learning to model readers' book-borrowing probability. It builds a recommendation model as a multi-layer neural network, feeds the features extracted from readers and books into the network, and deeply integrates them through the multi-layer network, thereby exploring the hidden deep interactions between readers and books and significantly improving the quality of book recommendation. In the experiments, the evaluation indexes HR@10, MRR, and NDCG of the deep neural network recommendation model constructed in this paper are higher than those of the traditional recommendation algorithm, which verifies the effectiveness of the model for book recommendation.
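The three ranking metrics the experiment reports (HR@10, MRR, NDCG) have standard definitions that can be sketched for a single test reader whose held-out book appears at some rank in the model's recommendation list. The function names are illustrative, not from the paper's code.

```python
import math

# Sketch of the three ranking metrics for one test case: the true (held-out)
# book sits at a 1-based position `rank` in the model's ranked list.

def hit_rate_at_k(rank, k=10):
    """1 if the true item is ranked within the top k, else 0."""
    return 1.0 if rank <= k else 0.0

def mrr(rank):
    """Reciprocal of the 1-based rank of the true item."""
    return 1.0 / rank

def ndcg_at_k(rank, k=10):
    """Binary-relevance NDCG: 1/log2(rank + 1) inside the top k, else 0."""
    return 1.0 / math.log2(rank + 1) if rank <= k else 0.0

rank = 3  # e.g., the true book is ranked 3rd by the model
```

Averaging these per-reader values over the test set gives the dataset-level HR@10, MRR, and NDCG figures the abstract compares.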
Coronavirus disease 2019 (COVID-19) is the current global buzzword, putting the world at risk. The pandemic's exponential growth in infected patients has strained the medical field's already scarce resources. Even developed nations are not in a position to manage this epidemic fully, leaving developing and underdeveloped countries to struggle with the problem. These problems can be addressed by applying machine learning models in practical ways, such as to computer-aided images acquired during medical examinations. Such models help predict the effects of the disease outbreak and detect its effects in the coming days. In this paper, Multi-Features Decease Analysis (MFDA) is used with different ensemble classifiers to diagnose the disease's impact with the help of Computed Tomography (CT) scan images. Various features associated with chest CT images help determine the likelihood that an individual is affected and how COVID-19 will affect persons suffering from pneumonia. The current study attempts to increase the precision of the diagnosis model by evaluating various feature sets and choosing the best combination. The model's performance is assessed using the Receiver Operating Characteristic (ROC) curve, the Root Mean Square Error (RMSE), and the confusion matrix. The results show that the proposed model exhibits better efficiency.
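Two of the evaluation measures named above, RMSE and the confusion matrix, can be sketched for a binary COVID-19 / non-COVID-19 classifier. The labels and predictions below are toy values, not results from the paper.

```python
import math

# Sketch of RMSE over prediction scores and the binary confusion matrix.

def rmse(y_true, y_score):
    """Root mean square error between true labels and predicted scores."""
    return math.sqrt(sum((t - s) ** 2 for t, s in zip(y_true, y_score)) / len(y_true))

def confusion_matrix(y_true, y_pred):
    """Return (TP, FP, FN, TN) counts for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

y_true = [1, 0, 1, 1, 0]  # toy ground-truth labels (1 = COVID-positive)
y_pred = [1, 0, 0, 1, 1]  # toy classifier decisions
```

The ROC curve the abstract also mentions is built by sweeping a threshold over continuous scores and plotting the resulting (FP rate, TP rate) pairs derived from matrices like this one.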
To address the dynamics and uncertainties of natural colors affected by the natural environment, a color P-law generation model based on the natural environment is proposed to support algorithm development and to provide a theoretical basis for dynamic plant color simulation and color sensor data transmission. Based on the HSL (Hue, Saturation, Lightness) color solid, the proposed method uses the function P-set to provide a color P-law generation model and an algorithm for the Dynamic Colors System (DCS), establishing the DCS modeling theory of the natural environment and color P-reasoning simulation based on the HSL color solid. The experimental results show that, under the color P-law, the color of a plant in the natural-environment DCS changes accordingly when external factors change, verifying the effectiveness of the color P-law generation model and the DCS algorithm. In the dynamic color intelligent simulation system, when external factors change, the dynamic change of plant color generally conforms to the basic laws of the natural environment. This enables effective extraction of color data from Internet of Things (IoT)-based color sensors and provides an effective way to significantly reduce the data transmission bandwidth of the IoT network.
With the growing discovery of exposed vulnerabilities in Industrial Control Components (ICCs), identifying the exploitable ones is urgent for Industrial Control System (ICS) administrators to proactively forecast potential threats. However, this is not a trivial task, owing to the complexity of the multi-source heterogeneous data and the lack of automatic analysis methods. To address these challenges, we propose an exploitability reasoning method based on an ICC-vulnerability Knowledge Graph (KG), in which relation paths contain abundant potential evidence to support the reasoning. The reasoning task in this work is to determine whether a specific relation holds between an attacker entity and a possibly exploitable vulnerability entity, with the help of a collection of critical paths. The proposed method consists of three primary building blocks: KG construction, relation path representation, and query relation reasoning. A security-oriented ontology, combined with exploit modeling, provides a guideline for integrating the scattered knowledge while constructing the KG. We emphasize the role of attention-based aggregation in representation learning and the ultimate reasoning. To acquire high-quality representations, the entity and relation embeddings take advantage of their local structure and related semantics. Critical paths are assigned attentive weights and then aggregated to determine the validity of the query relation. In particular, similarity calculation is introduced into the critical path selection algorithm, which improves search and reasoning performance while avoiding redundant paths between the given pairs of entities. Experimental results show that the proposed method outperforms the state-of-the-art ones in terms of embedding quality and query relation reasoning accuracy.
Tropical cyclones (TCs) are often associated with severe weather conditions that cause great losses of lives and property. Precise classification of cyclone tracks is significantly important in the field of weather forecasting. In this paper we propose a novel hybrid model that integrates an ontology and a Support Vector Machine (SVM) to classify tropical cyclone tracks into four classes based on track shape: straight, quasi-straight, curving, and sinuous. The Tropical Cyclone TRacks Ontology (TCTRO) described in this paper is a knowledge base comprising classes, objects, and data properties that represent the interactions among TC characteristics. A set of SWRL (Semantic Web Rule Language) rules is inserted directly into the TCTRO ontology for reasoning and inferring new knowledge. Furthermore, we propose a learning algorithm that utilizes the inferred knowledge to optimize the feature subset. In experiments on the IBTrACS dataset, the proposed ontology-based SVM classifier achieves an accuracy of 98.3% with reduced classification error rates.
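One track-shape feature such a classifier could plausibly use is the sinuosity ratio (path length divided by endpoint distance): a ratio near 1 suggests a straight track, larger ratios curving or sinuous ones. This is a hedged sketch; the thresholds below are illustrative, not the rules mined from the TCTRO ontology or the SVM's learned boundaries.

```python
import math

# Toy sinuosity feature and a threshold rule mapping it to the four shape
# classes named in the abstract. Points are treated as planar for simplicity.

def sinuosity(track):
    """track: list of (x, y) points; returns path length / endpoint distance."""
    path = sum(math.dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    chord = math.dist(track[0], track[-1])
    return path / chord

def shape_class(ratio):
    """Illustrative thresholds only -- not the paper's decision rules."""
    if ratio < 1.05:
        return "straight"
    if ratio < 1.2:
        return "quasi-straight"
    if ratio < 1.5:
        return "curving"
    return "sinuous"

straight_track = [(0, 0), (1, 0), (2, 0), (3, 0)]
```

Real TC tracks are in longitude/latitude, so a production feature would use great-circle distances rather than planar ones.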
The Railway Point System (RPS) is an important piece of infrastructure in the railway industry, and its faults may have significant impacts on the safety and efficiency of train operations. For the fault diagnosis of RPSs, most existing methods assume that sufficient samples of each failure mode are available, which may be unrealistic, especially for modes of low occurrence frequency but high risk. To address this issue, this work proposes a novel fault diagnosis method that requires only the power signals generated under normal RPS operation in the training stage. Specifically, the failure modes of the RPS are distinguished by constructing a reasoning diagram whose nodes are either binary logic problems or problems that can be decomposed into binary logic problems. Then, an unsupervised signal segmentation method and a fault detection method are combined to make a decision for each binary logic problem. Based on these decisions, diagnostic rules are established to identify the failure modes. Finally, data collected from multiple real-world RPSs are used for validation, and the results demonstrate that the proposed method outperforms the benchmark in identifying RPS faults.
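The final rule-based step can be sketched as a lookup: each node of the reasoning diagram yields a binary decision (anomaly detected or not in a power-signal segment), and a rule table maps the decision pattern to a failure mode. The segment count and failure-mode names below are hypothetical, not the paper's actual diagram.

```python
# Toy diagnostic rule table: a tuple of binary-logic outcomes (one per
# reasoning-diagram node) maps to a failure mode. Mode names are invented
# placeholders for illustration.

RULES = {
    (False, False, False): "normal",
    (True, False, False): "unlocking fault",
    (False, True, False): "switching fault",
    (False, False, True): "locking fault",
}

def diagnose(decisions):
    """decisions: iterable of binary-logic outcomes, one per diagram node."""
    return RULES.get(tuple(decisions), "unknown failure mode")
```

The appeal of this structure is that each binary decision needs only a normal-operation model (one-class detection), so no labeled fault samples are required in training, matching the abstract's premise.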
Cable fire is one of the most important events for operation and maintenance (O&M) safety in underground utility tunnels (UUTs). Since there are limited studies on cable fire risk assessment, a comprehensive assessment model is proposed to evaluate the cable fire risk in different UUT sections and improve O&M efficiency. Considering the uncertainties in risk assessment, an evidential reasoning (ER) approach is used to combine quantitative sensor data and qualitative expert judgments. Meanwhile, a data transformation technique is introduced to transform continuous data into a five-grade distributed assessment. A case study then demonstrates how the model and the ER approach are established. The results show that in Shenzhen, China, the cable fire risk in District 8, B Road is the lowest, while more resources should be allocated to District 3, C Road and District 25, C Road, which are selected as comparative roads. Based on the model, a data-driven O&M process is proposed that improves O&M effectiveness compared with traditional methods. This study contributes an effective ER-based cable fire evaluation model to improve the O&M efficiency for cable fires in UUTs.
Earthquake-triggered liquefaction deformation can lead to severe infrastructure damage and associated casualties and property losses. At present, there are few studies on the rapid extraction of liquefaction pits from high-resolution satellite images. Therefore, we provide a framework for extracting liquefaction pits based on a case-based reasoning method. Five covariate selection methods were used to filter the 11 covariates generated from high-resolution satellite images and digital elevation models (DEMs). The proposed method was trained with 450 typical samples collected by visual interpretation, and the trained case-based reasoning method was then used to identify the liquefaction pits in the whole study area. The performance of the proposed method was evaluated from three aspects: the prediction accuracy of liquefaction pits on the validation samples measured by the kappa index, the comparison between the pre- and post-earthquake images, and the rationality of the spatial distribution of the liquefaction pits. The final results show that the importance of covariates ranked by different methods can differ, but the most important covariates are consistent. When the five most important covariates are selected, the kappa index reaches about 96%. There are also clear differences between the pre- and post-earthquake areas identified as liquefaction pits, and the predicted spatial distribution of liquefaction is consistent with the formation principle of liquefaction.
To overcome the shortcomings of current fatigue detection methods, such as low accuracy or poor real-time performance, a fatigue detection method based on multi-feature fusion is proposed. First, the HOG face detection algorithm and the KCF target tracking algorithm are integrated, and a deformable convolutional neural network is introduced to identify the states of the extracted eyes and mouth; detected faces are tracked quickly so that continuous, stable target faces can be extracted more efficiently. Then a head pose algorithm is introduced to detect the driver's head in real time and obtain the driver's head state information. Finally, a multi-feature fusion fatigue detection method is proposed based on the states of the eyes, mouth, and head. According to the experimental results, the proposed method can detect the driver's fatigue state in real time with high accuracy and good robustness compared with current fatigue detection algorithms.
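The fusion of the three feature streams described above (eye state, mouth state, head pose) can be sketched as a weighted combination of per-window ratios. The weights and threshold below are conventional toy values, loudly assumed, not those of the paper.

```python
# Toy multi-feature fusion: each input is a per-time-window ratio in [0, 1]
# (e.g., fraction of frames with eyes closed, mouth yawning, head lowered).
# Weights and threshold are illustrative assumptions.

def fatigue_score(eye_closed_ratio, yawn_ratio, head_down_ratio,
                  weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of the three per-window state ratios."""
    w_eye, w_mouth, w_head = weights
    return w_eye * eye_closed_ratio + w_mouth * yawn_ratio + w_head * head_down_ratio

def is_fatigued(score, threshold=0.4):
    """Flag fatigue when the fused score crosses the (assumed) threshold."""
    return score >= threshold
```

A deployed system would tune the weights and threshold on labeled driving data; the eye-closure ratio here plays the role of the widely used PERCLOS measure.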
Similarity has long played an important role in computer science, artificial intelligence (AI), and data science. However, similarity intelligence has been ignored in these disciplines. Similarity intelligence is a process of discovering intelligence through similarity. This article explores similarity intelligence, similarity-based reasoning, and similarity computing and analytics. More specifically, it looks at similarity as a form of intelligence and its impact on several real-world areas. It explores how similarity intelligence accompanies experience-based, knowledge-based, and data-based intelligence in playing an important role in computer science, AI, and data science. This article examines similarity-based reasoning (SBR) and proposes three similarity-based inference rules. It then examines similarity computing and analytics and a multiagent SBR system. The main contributions of this article are: 1) similarity intelligence is discovered from experience-based intelligence, which consists of data-based intelligence and knowledge-based intelligence; 2) similarity-based reasoning, computing, and analytics can be used to create similarity intelligence. The proposed approach will facilitate research and development in similarity intelligence, similarity computing and analytics, machine learning, and case-based reasoning.
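One possible form of a similarity-based inference rule, in the spirit of SBR and case-based reasoning, is: if a new case is sufficiently similar to a solved case, reuse that case's conclusion. The sketch below uses Jaccard similarity over feature sets and a 0.6 threshold; both the metric and the threshold are assumptions, not the article's three rules.

```python
# Toy similarity-based inference: retrieve the most similar solved case and
# reuse its conclusion if similarity clears a threshold.

def jaccard(a, b):
    """Jaccard similarity of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def infer_by_similarity(new_case, case_base, threshold=0.6):
    """Return the conclusion of the most similar past case above the threshold."""
    best = max(case_base, key=lambda c: jaccard(new_case, c["features"]))
    if jaccard(new_case, best["features"]) >= threshold:
        return best["conclusion"]
    return None  # no sufficiently similar experience to draw on

case_base = [
    {"features": {"fever", "cough", "fatigue"}, "conclusion": "flu"},
    {"features": {"sneezing", "itchy_eyes"}, "conclusion": "allergy"},
]
```

This illustrates the article's point that similarity lets experience-based knowledge transfer to new situations without an explicit deductive rule.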
This paper compares traditional Petri nets with reasoning Petri nets (RPNs) and presents a fuzzy reasoning Petri net (FRPN) model to represent the fuzzy production rules of a rule-based system. Based on the FRPN model, a formal reasoning algorithm using max-algebra operators is proposed to perform fuzzy reasoning automatically. The algorithm is consistent with the matrix equation expression method of traditional Petri nets. Its validity and feasibility are demonstrated through an example.
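The flavor of fuzzy reasoning over production rules can be sketched with a common max-min style update (a simplification of the paper's max-algebra matrix formulation, which this does not reproduce): a rule fires with degree min(antecedent degrees) times its certainty factor, and a proposition keeps the maximum over its current degree and all rules concluding it. The rules and certainty factors are toy data.

```python
# Toy fuzzy rule reasoning: iteratively propagate truth degrees through
# production rules until (for this small example) values stabilize.

def fuzzy_reason(degrees, rules, iterations=3):
    """degrees: {proposition: truth degree in [0, 1]};
    rules: list of (antecedent_set, conclusion, certainty_factor)."""
    d = dict(degrees)
    for _ in range(iterations):
        for antecedents, conclusion, cf in rules:
            fired = min(d.get(p, 0.0) for p in antecedents) * cf
            d[conclusion] = max(d.get(conclusion, 0.0), fired)
    return d

# IF a AND b THEN c (cf 0.9); IF c THEN d (cf 0.8)
rules = [({"a", "b"}, "c", 0.9), ({"c"}, "d", 0.8)]
result = fuzzy_reason({"a": 0.8, "b": 0.6}, rules)
```

In the FRPN view, propositions are places, rules are transitions, and this iteration corresponds to repeated transition firings expressible as max-algebra matrix equations.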
Human intellection is characterized by a distinct hierarchy, which can be exploited to construct heuristics for shortest path algorithms. This paper details how to utilize hierarchical reasoning, on the basis of greedy and directional strategies, to establish a spatial heuristic that improves the running efficiency and suitability of shortest path algorithms for traffic networks. The authors divide an urban traffic network into three hierarchies and put forward a new node hierarchy division rule to avoid unreliable shortest path solutions. It is argued that the shortest path, whether shortest in distance or in time, is usually not the one drivers favor in practice: factors that are difficult to anticipate or quantify strongly influence drivers' choices, making them prefer a slightly longer but more reliable or flexible path. The presented optimum path algorithm, in addition to improving the running efficiency of shortest path algorithms by several times, reduces the influence of those factors, conforms to the hierarchical character of human intellection, and is more easily accepted by drivers. Moreover, it does not require completeness of the network at the lowest hierarchy, and the applicability and fault tolerance of the algorithm are improved. The experimental results show the advantages of the presented algorithm, which has great potential for application in navigation systems for large-scale traffic networks.
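The hierarchical idea can be sketched with a simplified variant of Dijkstra's algorithm in which low-hierarchy (local-road) edges carry an extra penalty, biasing the search toward arterial roads the way a driver's hierarchical reasoning would. The toy network, hierarchy levels, and penalty value are assumptions; the paper's actual three-hierarchy algorithm is more elaborate.

```python
import heapq

# Dijkstra with a per-edge penalty on low-hierarchy edges (level 0 = local
# road). This only sketches the preference for higher road classes.

def hierarchical_dijkstra(graph, src, dst, low_penalty=2.0):
    """graph: {node: [(neighbor, cost, level)]}; returns penalized cost src->dst."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost, level in graph.get(u, []):
            penalty = low_penalty if level == 0 else 0.0
            nd = d + cost + penalty
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

graph = {
    "A": [("B", 1.0, 0), ("C", 2.0, 1)],  # A-B local road, A-C arterial
    "B": [("D", 1.0, 0)],
    "C": [("D", 1.5, 1)],
}
```

Here the nominally longer arterial route A-C-D (cost 3.5) beats the shorter local route A-B-D (penalized cost 6.0), mirroring the paper's claim that drivers prefer a slightly longer but more reliable path.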
Vehicle re-identification (ReID) aims to retrieve the target vehicle from an extensive image gallery through its appearance from various views in cross-camera scenarios, and it has gradually become a core technology of intelligent transportation systems. Most existing vehicle re-identification models adopt joint learning of global and local features. However, they use the extracted global features directly, resulting in insufficient feature expression, and local features are primarily obtained through extra annotation and complex attention mechanisms, which incur additional cost. To solve this issue, a multi-feature learning model with enhanced local attention for vehicle re-identification (MFELA) is proposed in this paper. The model consists of global and local branches. The global branch utilizes both middle- and high-level semantic features of ResNet50 to enhance the global representation capability, and multi-scale pooling operations are used to obtain multi-scale information. The local branch utilizes the proposed Region Batch DropBlock (RBD), which encourages the model to learn discriminative features for different local regions by randomly dropping the corresponding areas across a batch during training, enhancing attention to local regions. Features from both branches are then combined to provide a more comprehensive and distinctive feature representation. Extensive experiments on the VeRi-776 and VehicleID datasets show that our method achieves excellent performance.
Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, contradictions in land use have become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land use. However, these data are only information sources: a mixture of "information" and "noise". Processing, analysis, and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image using the multi-feature information of image objects, adopting an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multiple features of the image objects, in order to provide information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently.
Funding (IndRT-GCNets): the National Natural Science Foundation of China (62062062), hosted by Gulila Altenbek.
Funding (LGCR): supported in part by the National Key Research and Development Program of China (2022ZD0116405), the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA27030300), and the Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006).
Funding (book recommendation model): partly supported by the Basic Ability Improvement Project for Young and Middle-aged Teachers in Guangxi Colleges and Universities (2021KY1800, 2021KY1804).
Funding: This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Project no. GRANT 324).
Abstract: Coronavirus 2019 (COVID-19) is the current global buzzword, putting the world at risk. The pandemic's exponential expansion of infected COVID-19 patients has challenged the medical field's already scarce resources. Even established nations would not be in a perfect position to manage this epidemic correctly, leaving emerging countries, and countries that have not yet begun to grow, to address the problem. These problems can be addressed practically with machine learning models, for example by using computer-aided images during medical examinations; such models help predict the effects of the disease outbreak and help detect those effects in the coming days. In this paper, Multi-Feature Disease Analysis (MFDA) is used with different ensemble classifiers to diagnose the disease's impact with the help of Computed Tomography (CT) scan images. Various features associated with chest CT images help assess the possibility of an individual being affected and how COVID-19 will affect persons suffering from pneumonia. The current study attempts to increase the precision of the diagnosis model by evaluating various feature sets and choosing the best combination for better results. The model's performance is assessed using the Receiver Operating Characteristic (ROC) curve, the Root Mean Square Error (RMSE), and the confusion matrix. The resultant outcome shows that the proposed model exhibits better efficiency.
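As a rough illustration of how ensemble classifiers combine per-model decisions, a majority vote over independently trained classifiers can be sketched as follows; the feature names, thresholds, and toy stand-in classifiers are hypothetical, not taken from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one sample by majority vote."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(classifiers, sample):
    """Run every base classifier on the sample and fuse the votes."""
    return majority_vote([clf(sample) for clf in classifiers])

# Toy stand-ins for trained classifiers over CT-image feature vectors
# (hypothetical feature names and cutoffs, for illustration only).
clf_a = lambda x: "covid" if x["opacity"] > 0.5 else "normal"
clf_b = lambda x: "covid" if x["consolidation"] > 0.4 else "normal"
clf_c = lambda x: "covid" if x["opacity"] + x["consolidation"] > 0.8 else "normal"

sample = {"opacity": 0.7, "consolidation": 0.3}
print(ensemble_predict([clf_a, clf_b, clf_c], sample))  # covid (2 of 3 votes)
```

Real ensembles typically also weight votes by each base model's validation performance, which the paper's feature-set search would inform.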
基金funded by the Natural Science Foundation Project of Fujian Provincial Department of science and technology,Grant No.:2020J01385Digital Fujian industrial energy big data research institute,Grant No.KB180045Provincial Key Laboratory of industrial big data analysis and Application,Grant No.KB180029,Sanming City 5G Innovation Laboratory,Grant No.:2020 MK18.
Abstract: Aiming at the dynamics and uncertainties of natural colors affected by the natural environment, a color P-law generation model based on the natural environment is proposed to develop algorithms and to provide a theoretical basis for plant dynamic color simulation and color sensor data transmission. Based on the HSL (Hue, Saturation, Lightness) color solid, the proposed method uses the function P-set to provide a color P-law generation model and an algorithm for the Dynamic Colors System (DCS), establishing the DCS modeling theory of the natural environment and the color P-reasoning simulation based on the HSL color solid. The experimental results show that, based on the color P-law, for the DCS of the natural environment, the color of the plant changes accordingly when external factors change, verifying the effectiveness of the color P-law generation model and the algorithm of the DCS. In the dynamic color intelligent simulation system, when external factors change, the dynamic change of plant color generally conforms to the basic laws of the natural environment. This enables the effective extraction of color data from Internet of Things (IoT)-based color sensors and provides an effective way to significantly reduce the data transmission bandwidth of the IoT network.
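Since the model operates on the HSL color solid, a toy dynamic-color rule helps fix ideas. The `dryness` factor and the linear hue/lightness drift below are illustrative assumptions, not the paper's P-set formulation; Python's stdlib `colorsys` (which orders channels H, L, S) converts the result to RGB for a sensor or display:

```python
import colorsys

def shift_plant_color(h, s, l, dryness):
    """Toy rule: as dryness (0..1) rises, hue drifts from green toward
    yellow and lightness drops, mimicking a wilting leaf. Hue is in
    [0, 1) as a fraction of the 360-degree hue circle."""
    green_hue, yellow_hue = 120 / 360, 60 / 360
    new_h = green_hue + (yellow_hue - green_hue) * dryness
    new_l = max(0.0, l - 0.2 * dryness)
    return new_h, s, new_l

h, s, l = 120 / 360, 0.8, 0.5               # a healthy green in HSL
h2, s2, l2 = shift_plant_color(h, s, l, dryness=0.5)
r, g, b = colorsys.hls_to_rgb(h2, l2, s2)   # note: colorsys uses H, L, S order
print(round(h2 * 360), round(l2, 2))        # 90 0.4 (hue halfway to yellow)
```

A sensor network would transmit only the compact (H, S, L) triple per plant, which is the bandwidth saving the abstract alludes to.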
Funding: Our work is supported by the National Key R&D Program of China (2021YFB2012400).
Abstract: With the growing discovery of exposed vulnerabilities in Industrial Control Components (ICCs), identifying the exploitable ones is urgent for Industrial Control System (ICS) administrators to proactively forecast potential threats. However, this is not a trivial task, due to the complexity of the multi-source heterogeneous data and the lack of automatic analysis methods. To address these challenges, we propose an exploitability reasoning method based on an ICC-Vulnerability Knowledge Graph (KG), in which relation paths contain abundant potential evidence to support the reasoning. The reasoning task in this work refers to determining whether a specific relation is valid between an attacker entity and a possibly exploitable vulnerability entity with the help of a collection of critical paths. The proposed method consists of three primary building blocks: KG construction, relation path representation, and query relation reasoning. A security-oriented ontology that incorporates exploit modeling provides a guideline for integrating the scattered knowledge while constructing the KG. We emphasize the role of the aggregation of the attention mechanism in representation learning and ultimate reasoning. To acquire a high-quality representation, the entity and relation embeddings take advantage of their local structure and related semantics. Some critical paths are assigned corresponding attentive weights, and then they are aggregated to determine the validity of the query relation. In particular, similarity calculation is introduced into a critical path selection algorithm, which improves search and reasoning performance while avoiding redundant paths between the given pairs of entities. Experimental results show that the proposed method outperforms state-of-the-art methods in terms of embedding quality and query relation reasoning accuracy.
Abstract: Tropical cyclones (TC) are often associated with severe weather conditions which cause great losses of lives and property. The precise classification of cyclone tracks is significantly important in the field of weather forecasting. In this paper we propose a novel hybrid model that integrates an ontology and a Support Vector Machine (SVM) to classify tropical cyclone tracks into four classes, namely straight, quasi-straight, curving, and sinuous, based on the track shape. The Tropical Cyclone TRacks Ontology (TCTRO) described in this paper is a knowledge base comprising classes, objects, and data properties that represent the interaction among the TC characteristics. A set of SWRL (Semantic Web Rule Language) rules are inserted directly into the TCTRO ontology for reasoning and inferring new knowledge from the ontology. Furthermore, we propose a learning algorithm which utilizes the inferred knowledge to optimize the feature subset. According to experiments on the IBTrACS dataset, the proposed ontology-based SVM classifier achieves an accuracy of 98.3% with reduced classification error rates.
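The four track classes are defined by track shape, and one simple shape descriptor that separates them is sinuosity (path length over end-to-end distance). The sketch below uses a planar approximation and hand-picked thresholds purely for illustration; the paper's classifier is an SVM over ontology-optimized features, not this rule:

```python
import math

def sinuosity(track):
    """Path length divided by straight end-to-end distance
    (planar approximation of lon/lat points, for illustration)."""
    dist = lambda p, q: math.hypot(q[0] - p[0], q[1] - p[1])
    path = sum(dist(track[i], track[i + 1]) for i in range(len(track) - 1))
    return path / dist(track[0], track[-1])

def classify_track(track, t1=1.05, t2=1.25, t3=1.6):
    """Illustrative thresholds on sinuosity for the four shape classes."""
    s = sinuosity(track)
    if s < t1:
        return "straight"
    if s < t2:
        return "quasi-straight"
    if s < t3:
        return "curving"
    return "sinuous"

# a nearly straight north-westward toy track (lon, lat)
track = [(130.0, 10.0), (131.0, 11.0), (132.0, 11.2), (133.0, 12.0)]
print(classify_track(track))  # straight
```

On real IBTrACS tracks, great-circle distances and learned decision boundaries would replace the planar distance and fixed thresholds.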
基金supported by National Key R&D Program of China(2022YFB2602203)Talent Fund of Beijing Jiaotong University(2021RC274,I22L00131)National Natural Science Foundation of China(U1934219,52202392,52022010,U22A2046,52172322,62271486,62120106011,52172323)。
Abstract: The Railway Point System (RPS) is an important piece of infrastructure in the railway industry, and its faults may have significant impacts on the safety and efficiency of train operations. For the fault diagnosis of RPS, most existing methods assume that sufficient samples of each failure mode are available, which may be unrealistic, especially for modes of low occurrence frequency but high risk. To address this issue, this work proposes a novel fault diagnosis method that, in the training stage, only requires the power signals generated under normal RPS operations. Specifically, the failure modes of RPS are distinguished by constructing a reasoning diagram whose nodes are either binary logic problems or problems that can be decomposed into binary logic problems. Then, an unsupervised method for signal segmentation and a fault detection method are combined to make a decision for each binary logic problem. Based on the results of these decisions, diagnostic rules are established to identify the failure modes. Finally, data collected from multiple real-world RPSs are used for validation, and the results demonstrate that the proposed method outperforms the benchmark in identifying RPS faults.
Funding: Airport New City Utility Tunnel Phase II Project, China.
Abstract: Cable fire is one of the most important events for operation and maintenance (O&M) safety in underground utility tunnels (UUTs). Since there are limited studies on cable fire risk assessment, a comprehensive assessment model is proposed to evaluate the cable fire risk in different UUT sections and improve O&M efficiency. Considering the uncertainties in risk assessment, an evidential reasoning (ER) approach is used to combine quantitative sensor data and qualitative expert judgments. Meanwhile, a data transformation technique is contributed to transform continuous data into a five-grade distributed assessment. A case study then demonstrates how the model and the ER approach are established. The results show that in Shenzhen, China, the cable fire risk in District 8, B Road is the lowest, while more resources should be devoted to District 3, C Road and District 25, C Road, which are selected as comparative roads. Based on the model, a data-driven O&M process is proposed to improve O&M effectiveness compared with traditional methods. This study contributes an effective ER-based cable fire evaluation model to improve the O&M efficiency for cable fire in UUTs.
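The evidence-combination step can be illustrated with the simplest Dempster-style fusion of two sources. The sketch assumes each source assigns belief masses to the same singleton risk grades with no residual ignorance, which is a simplification of the full ER algorithm the paper uses; the grade names and mass values are made up:

```python
def combine(m1, m2):
    """Fuse two belief-mass assignments over the same singleton grades;
    conflicting mass is renormalized away (Dempster's rule, singleton case,
    assuming each input sums to 1 with no ignorance mass)."""
    grades = set(m1) | set(m2)
    agree = {g: m1.get(g, 0.0) * m2.get(g, 0.0) for g in grades}
    conflict = 1.0 - sum(agree.values())
    if conflict >= 1.0:
        raise ValueError("fully conflicting evidence")
    return {g: v / (1.0 - conflict) for g, v in agree.items()}

sensor = {"low": 0.7, "medium": 0.2, "high": 0.1}   # quantitative sensor data
expert = {"low": 0.5, "medium": 0.4, "high": 0.1}   # qualitative expert judgment
fused = combine(sensor, expert)
print({g: round(v, 3) for g, v in fused.items()})
```

Where the two sources agree ("low" here), the fused belief sharpens; the full ER algorithm additionally weights sources by reliability and carries unassigned belief explicitly.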
Funding: Basic Research Program of the Institute of Earthquake Forecasting, China Earthquake Administration (Grant Nos. 2021IEF0505, CEAIEF20220102, and CEAIEF2022050502); the high-resolution seismic monitoring and emergency application demonstration (Phase II) (Grant No. 31-Y30F09-9001-20/22); the National Natural Science Foundation of China (Grant Nos. 42072248 and 42041006); and the National Key Research and Development Program of China (Grant Nos. 2021YFC3000601-3 and 2019YFE0108900).
Abstract: Earthquake-triggered liquefaction deformation can lead to severe infrastructure damage and associated casualties and property damage. At present, there are few studies on the rapid extraction of liquefaction pits from high-resolution satellite images. Therefore, we provide a framework for extracting liquefaction pits based on a case-based reasoning method. Furthermore, five covariate selection methods were used to filter the 11 covariates generated from high-resolution satellite images and digital elevation models (DEM). The proposed method was trained with 450 typical samples collected by visual interpretation, and the trained case-based reasoning method was then used to identify liquefaction pits in the whole study area. The performance of the proposed method was evaluated from three aspects: the prediction accuracy of liquefaction pits on the validation samples, measured by the kappa index; the comparison between the pre- and post-earthquake images; and the rationality of the spatial distribution of liquefaction pits. The final result shows that the importance of covariates ranked by different methods can differ; however, the most important covariates are consistent. When selecting the five most important covariates, the kappa index reaches about 96%. There are also clear differences between the pre- and post-earthquake areas identified as liquefaction pits, and the predicted spatial distribution of liquefaction is consistent with the formation principle of liquefaction.
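The kappa index used for validation is Cohen's kappa, i.e., agreement corrected for chance. A minimal stdlib implementation (illustrative, with toy labels rather than the study's validation samples):

```python
from collections import Counter

def kappa(y_true, y_pred):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    ct, cp = Counter(y_true), Counter(y_pred)
    pe = sum(ct[c] * cp[c] for c in ct) / (n * n)          # chance agreement
    return (po - pe) / (1 - pe)

# toy binary labels: 1 = liquefaction pit, 0 = background
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
print(kappa(y_true, y_pred))  # 0.75
```

A kappa around 0.96, as reported, indicates near-perfect agreement between predictions and the visually interpreted samples.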
Abstract: In order to overcome the shortcomings of current fatigue detection methods, such as low accuracy or poor real-time performance, a fatigue detection method based on multi-feature fusion is proposed. Firstly, the HOG face detection algorithm and the KCF target tracking algorithm are integrated, and a deformable convolutional neural network is introduced to identify the state of the extracted eyes and mouth, quickly track the detected faces, and extract continuous and stable target faces for more efficient feature extraction. Then a head pose algorithm is introduced to detect the driver's head in real time and obtain the driver's head state information. Finally, a multi-feature fusion fatigue detection method is proposed based on the states of the eyes, mouth, and head. According to the experimental results, the proposed method can detect the driver's fatigue state in real time with high accuracy and good robustness compared with current fatigue detection algorithms.
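A minimal sketch of the final fusion step, assuming per-frame binary states for eyes, mouth, and head over a sliding window; the weights and threshold below are hypothetical illustrations, not the paper's calibrated values:

```python
def fatigue_score(eye_closed, mouth_open, head_down, weights=(0.5, 0.3, 0.2)):
    """Weighted fusion of per-frame ratios over a sliding window:
    PERCLOS-style eye-closure ratio, yawn ratio, and head-down ratio."""
    ratios = [sum(seq) / len(seq) for seq in (eye_closed, mouth_open, head_down)]
    return sum(w * r for w, r in zip(weights, ratios))

def is_fatigued(eye_closed, mouth_open, head_down, threshold=0.35):
    """Flag fatigue when the fused score clears a (hypothetical) threshold."""
    return fatigue_score(eye_closed, mouth_open, head_down) >= threshold

# 10 frames of binary states (1 = eye closed / mouth open / head down)
eye   = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]
mouth = [0, 0, 1, 1, 0, 0, 0, 1, 0, 0]
head  = [0, 0, 0, 1, 1, 0, 0, 0, 0, 0]
print(round(fatigue_score(eye, mouth, head), 2), is_fatigued(eye, mouth, head))
```

In the real pipeline these per-frame states come from the deformable CNN (eyes, mouth) and the head pose estimator, and the window slides in real time.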
Abstract: Similarity has long played an important role in computer science, artificial intelligence (AI), and data science. However, similarity intelligence has been ignored in these disciplines. Similarity intelligence is a process of discovering intelligence through similarity. This article explores similarity intelligence, similarity-based reasoning, and similarity computing and analytics. More specifically, it looks at similarity as a form of intelligence and its impact on a few areas of the real world, and explores how similarity intelligence accompanies experience-based intelligence, knowledge-based intelligence, and data-based intelligence in playing an important role in computer science, AI, and data science. The article examines similarity-based reasoning (SBR) and proposes three similarity-based inference rules; it then examines similarity computing and analytics, and a multiagent SBR system. The main contributions of this article are: 1) similarity intelligence is discovered from experience-based intelligence, consisting of data-based intelligence and knowledge-based intelligence; 2) similarity-based reasoning, computing, and analytics can be used to create similarity intelligence. The proposed approach will facilitate research and development in similarity intelligence, similarity computing and analytics, machine learning, and case-based reasoning.
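One flavor of similarity-based inference can be sketched as a "similarity modus ponens": a rule A → B fires for a new case A' whenever sim(A', A) clears a threshold, and the conclusion inherits the similarity as its strength. The Jaccard measure, feature sets, and threshold below are illustrative choices, not the article's formal definitions:

```python
def similarity(a, b):
    """Jaccard similarity between two feature sets."""
    return len(a & b) / len(a | b)

def sbr_infer(case, rules, threshold=0.5):
    """Similarity-based modus ponens: rule (A, B) fires for case A'
    when sim(A', A) >= threshold; the conclusion carries the
    similarity as its strength."""
    conclusions = []
    for antecedent, consequent in rules:
        s = similarity(case, antecedent)
        if s >= threshold:
            conclusions.append((consequent, s))
    return conclusions

# hypothetical diagnosis rules: antecedent feature set -> conclusion
rules = [({"fever", "cough", "fatigue"}, "flu"),
         ({"sneezing", "runny-nose"}, "cold")]
case = {"fever", "cough", "headache"}
print(sbr_infer(case, rules))  # [('flu', 0.5)]
```

This is also the retrieval step of case-based reasoning, which is why the article positions SBR as a bridge between the two fields.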
Abstract: This paper compares the differences between traditional Petri nets and reasoning Petri nets (RPN), and presents a fuzzy reasoning Petri net (FRPN) model to represent the fuzzy production rules of a rule-based system. Based on the FRPN model, a formal reasoning algorithm using the operators of max algebra is proposed to perform fuzzy reasoning automatically. The algorithm is consistent with the matrix equation expression method of the traditional Petri net. Its legitimacy and feasibility are verified through an example.
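The max-algebra reasoning can be sketched in a few lines: each transition (rule) takes the min of its input-place truth degrees, scales it by the rule's certainty factor, and each output place keeps the max over all contributions, iterated to a fixed point. The places, rules, and certainty factors below are a made-up example, not the paper's:

```python
def frpn_step(truth, rules):
    """One synchronous firing round of a fuzzy reasoning Petri net.
    Each rule is (input_places, output_place, certainty_factor)."""
    new = dict(truth)
    for inputs, output, cf in rules:
        degree = min(truth[p] for p in inputs) * cf   # fuzzy AND, then CF
        new[output] = max(new.get(output, 0.0), degree)  # fuzzy OR across rules
    return new

def frpn_reason(truth, rules):
    """Iterate firing rounds until a fixed point, mirroring the
    max-algebra matrix iteration in the paper."""
    while True:
        new = frpn_step(truth, rules)
        if new == truth:
            return new
        truth = new

rules = [(("p1", "p2"), "p3", 0.9),   # p1 AND p2 -> p3 with CF 0.9
         (("p3",), "p4", 0.8)]        # p3 -> p4 with CF 0.8
truth = {"p1": 0.7, "p2": 0.6, "p3": 0.0, "p4": 0.0}
print(frpn_reason(truth, rules))
```

With these inputs, p3 settles at min(0.7, 0.6) × 0.9 = 0.54 and p4 at 0.54 × 0.8 = 0.432, reproducing by iteration what the matrix equation computes in closed form.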
Abstract: Human thinking is characterized by a distinct hierarchy, which can be exploited to construct a heuristic for shortest path algorithms. This paper details how to utilize hierarchical reasoning, on the basis of a greedy and directional strategy, to establish a spatial heuristic, so as to improve the running efficiency and suitability of shortest path algorithms for traffic networks. The authors divide an urban traffic network into three hierarchies and put forward a new node hierarchy division rule to avoid unreliable shortest path solutions. It is argued that the shortest path, whether shortest in distance or in time, is usually not the favorite of drivers in practice: factors that are difficult to predict or quantify greatly influence drivers' choices, making them prefer a slightly longer but more reliable or flexible path. The presented optimum path algorithm, in addition to improving the running efficiency of shortest path algorithms by up to several times, reduces the influence of those factors, conforms to the characteristics of human thinking, and is more easily accepted by drivers. Moreover, it does not require the completeness of networks at the lowest hierarchy, and the applicability and fault tolerance of the algorithm are improved. The experimental results show the advantages of the presented algorithm, and the authors argue that it has great potential application in navigation systems for large-scale traffic networks.
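The hierarchical idea, i.e., climb quickly onto higher road classes and search local streets only near the endpoints, can be sketched with a plain Dijkstra search plus a hierarchy filter. The three-level toy graph below is hypothetical, and the sketch omits the paper's greedy/directional pruning:

```python
import heapq

def dijkstra(graph, src, dst, allowed):
    """Plain Dijkstra restricted to nodes the hierarchy filter allows."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:                      # reconstruct path on arrival
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return d, path[::-1]
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            if not allowed(v):
                continue                  # hierarchy restriction
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return float("inf"), []

# levels: 0 = local street, 1 = arterial, 2 = highway (toy network)
level = {"a": 0, "b": 1, "c": 2, "d": 2, "e": 1, "f": 0}
graph = {"a": [("b", 1)], "b": [("c", 2), ("e", 9)],
         "c": [("d", 3)], "d": [("e", 2)], "e": [("f", 1)]}

# once on the arterial/highway layers, stay off local streets
# except the destination exit "f"
allowed = lambda v: level[v] >= 1 or v == "f"
print(dijkstra(graph, "a", "f", allowed))  # (9.0, ['a', 'b', 'c', 'd', 'e', 'f'])
```

Restricting the search to the higher hierarchies is what shrinks the explored node set, and it is also why the lowest-level network need not be complete.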
Funding: This work was supported, in part, by the National Natural Science Foundation of China under Grant Numbers 61502240, 61502096, 61304205, and 61773219; in part, by the Natural Science Foundation of Jiangsu Province under Grant Numbers BK20201136 and BK20191401; in part, by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant Number SJCX21_0363; and in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: Vehicle re-identification (ReID) aims to retrieve a target vehicle from an extensive image gallery through its appearance from various views in a cross-camera scenario, and has gradually become a core technology of intelligent transportation systems. Most existing vehicle re-identification models adopt joint learning of global and local features. However, they use the extracted global features directly, resulting in insufficient feature expression; moreover, local features are primarily obtained through additional annotation and complex attention mechanisms, which incur extra costs. To solve this issue, a multi-feature learning model with enhanced local attention for vehicle re-identification (MFELA) is proposed in this paper. The model consists of global and local branches. The global branch utilizes both middle- and high-level semantic features of ResNet50 to enhance the global representation capability, and multi-scale pooling operations are used to obtain multi-scale information. The local branch utilizes the proposed Region Batch Dropblock (RBD), which encourages the model to learn discriminative features for different local regions by randomly dropping the same corresponding areas across a batch during training, enhancing the attention to local regions. Features from both branches are then combined to provide a more comprehensive and distinctive feature representation. Extensive experiments on the VeRi-776 and VehicleID datasets prove that our method has excellent performance.
Funding: The paper is supported by the Research Foundation for Outstanding Young Teachers, China University of Geosciences (Wuhan) (No. CUGQNL0616); the Research Foundation for the State Key Laboratory of Geological Processes and Mineral Resources (No. MGMR2002-02); and the Hubei Provincial Department of Education (B).
Abstract: Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, the contradictions in land use become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land use. However, these data are only information sources and are a mixture of "information" and "noise"; processing, analysis, and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image by using the multi-feature information of the image objects, and adopts an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multi-features of the image objects, in order to contribute information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently.