When employing penetration ammunition to strike multi-story buildings, detection methods using acceleration sensors suffer from signal aliasing, while magnetic detection methods are susceptible to interference from ferromagnetic materials, thereby posing challenges in accurately determining the number of layers. To address this issue, this research proposes a layer counting method for penetration fuzes that incorporates multi-source information fusion, utilizing both the temporal convolutional network (TCN) and the long short-term memory (LSTM) recurrent network. By leveraging the strengths of these two network structures, the method extracts temporal and high-dimensional features from the multi-source physical field during the penetration process, establishing a relationship between the multi-source physical field and the distance between the fuze and the target plate. A simulation model is developed to simulate the overload and magnetic field of a projectile penetrating multiple layers of target plates, capturing the multi-source physical field signals and their patterns during the penetration process. The analysis reveals that the proposed multi-source fusion layer counting method reduces errors by 60% and 50% compared to single overload layer counting and single magnetic anomaly signal layer counting, respectively. The model's predictive performance is evaluated under various operating conditions, including different ratios of noise added at random sample positions, different penetration speeds, and different spacings between target plates. The maximum errors in fuze penetration time predicted by the three modes are 0.08 ms, 0.12 ms, and 0.16 ms, respectively, confirming the robustness of the proposed model. Moreover, the model's predictions indicate that the fitting degree for large interlayer spacings is superior to that for small interlayer spacings due to the influence of stress waves.
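As a rough illustration of the TCN + LSTM combination described above, the PyTorch sketch below feeds multi-channel penetration signals through dilated 1-D convolutions and an LSTM to regress a distance value. The channel counts, window length, and the simplified (non-causal) convolution block are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TCNLSTMRegressor(nn.Module):
    def __init__(self, n_channels=4, hidden=64):
        super().__init__()
        # Dilated 1-D convolutions extract local patterns from the
        # multi-source physical-field signals (overload, magnetic anomaly, ...).
        self.tcn = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=3, padding=1, dilation=1),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
        )
        # The LSTM captures longer-range temporal dependence in the TCN features.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)        # predicted fuze-to-plate distance

    def forward(self, x):                       # x: (batch, channels, time)
        h = self.tcn(x).transpose(1, 2)         # -> (batch, time, hidden)
        out, _ = self.lstm(h)
        return self.head(out[:, -1, :])         # regress from the last time step

# Example: a batch of 8 signal windows, 4 sensor channels, 256 samples each.
print(TCNLSTMRegressor()(torch.randn(8, 4, 256)).shape)   # torch.Size([8, 1])
```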
Effectively managing extensive, multi-source, and multi-level real-scene 3D models for responsive retrieval scheduling and rapid visualization in the Web environment is a significant challenge in the current development of real-scene 3D applications in China. In this paper, we address this challenge by reorganizing spatial and temporal information into a 3D geospatial grid and introduce the Global 3D Geocoding System (G3DGS), leveraging neighborhood similarity and uniqueness for efficient storage, retrieval, updating, and scheduling of these models. A combination of G3DGS and non-relational databases is implemented, enhancing data storage scalability and flexibility. Additionally, a model detail management scheduling strategy (TLOD) based on G3DGS and an importance factor T is designed. Compared with mainstream commercial and open-source platforms, this method significantly enhances the loadable capacity of massive multi-source real-scene 3D models in the Web environment by 33%, improves browsing efficiency by 48%, and accelerates invocation speed by 40%.
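To make the grid-coding idea concrete, the sketch below maps a 3D position to a hierarchical cell code so that nearby models often share a code prefix and can be keyed in a non-relational store. The octree-style encoding, level count, and altitude range are illustrative assumptions, not the published G3DGS specification.

```python
def grid_code(lon, lat, alt, levels=10, alt_max=10_000.0):
    """Return a string code; each character encodes one octree subdivision."""
    x0, x1, y0, y1, z0, z1 = -180.0, 180.0, -90.0, 90.0, 0.0, alt_max
    code = []
    for _ in range(levels):
        xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
        octant = (lon >= xm) | ((lat >= ym) << 1) | ((alt >= zm) << 2)
        code.append(str(octant))
        x0, x1 = (xm, x1) if lon >= xm else (x0, xm)
        y0, y1 = (ym, y1) if lat >= ym else (y0, ym)
        z0, z1 = (zm, z1) if alt >= zm else (z0, zm)
    return "".join(code)

# Nearby points usually share a long code prefix, which suits key-value storage
# and level-of-detail scheduling by simply truncating the code.
print(grid_code(116.39, 39.91, 50.0))
print(grid_code(116.40, 39.91, 55.0))
```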
Cyber Threat Intelligence (CTI) is a valuable resource for cybersecurity defense, but it also poses challenges due to its multi-source and heterogeneous nature. Security personnel may be unable to use CTI effectively to understand the condition and trend of a cyberattack and respond promptly. To address these challenges, we propose a novel approach that consists of three steps. First, we construct the attack and defense analysis of the cybersecurity ontology (ADACO) model by integrating multiple cybersecurity databases. Second, we develop the threat evolution prediction algorithm (TEPA), which can automatically detect threats at device nodes, correlate and map multi-source threat information, and dynamically infer the threat evolution process. TEPA leverages knowledge graphs to represent comprehensive threat scenarios and achieves better performance in simulated experiments by combining structural and textual features of entities. Third, we design the intelligent defense decision algorithm (IDDA), which can provide intelligent recommendations for security personnel regarding the most suitable defense techniques. IDDA outperforms the baseline methods in the comparative experiment.
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception layer, a network layer, and a platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, high-concurrency access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
The rapid growth of mobile applications, the popularity of the Android system, and its openness have attracted many hackers and even criminals, who are creating large amounts of Android malware. However, current methods of Android malware detection require a lot of time in the feature engineering phase. Furthermore, these models suffer from defects such as a low detection rate, high complexity, and poor practicability. We analyze Android malware samples and the distribution of malware and benign software across application programming interface (API) calls, permissions, and other attributes. We classify the software's threat levels based on the correlation of features. Then, we propose deep neural networks and convolutional neural networks with ensemble learning (DCEL), a new classifier fusion model for Android malware detection. First, DCEL preprocesses the malware data to remove redundant data and converts the one-dimensional data into a two-dimensional gray image. Then, the ensemble learning approach is used to combine the deep neural network with the convolutional neural network, and the final classification results are obtained by voting on the prediction of each single classifier. Experiments based on the Drebin and Malgenome datasets show that, compared with current state-of-the-art models, the proposed DCEL has a higher detection rate, higher recall rate, and lower computational cost.
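Two of the steps mentioned above, reshaping a 1-D feature vector into a 2-D gray image and fusing classifier outputs by voting, are sketched below. The image size and the toy predictions are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def to_gray_image(features, side=16):
    """Pad or truncate a 1-D feature vector and reshape it to a side x side image."""
    v = np.zeros(side * side, dtype=np.float32)
    n = min(len(features), side * side)
    v[:n] = features[:n]
    v = 255.0 * (v - v.min()) / (v.max() - v.min() + 1e-9)   # scale to gray levels
    return v.reshape(side, side)

def majority_vote(predictions):
    """predictions: (n_classifiers, n_samples) integer labels -> fused labels."""
    predictions = np.asarray(predictions)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, predictions)

print(to_gray_image(np.random.rand(200)).shape)           # (16, 16)
print(majority_vote([[1, 0, 1], [1, 1, 0], [0, 1, 1]]))   # fused label per sample
```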
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. Although accurate detection and segmentation of brain tumours would be highly beneficial, current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. MRI is a vital component of medical diagnosis, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
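A minimal sketch of the feature-level fusion step described above follows: deep features from two backbones are concatenated and passed to a softmax head. The feature sizes (4096 for VGG16, 2048 for ResNet50), the hidden layer, and the class count are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim_vgg=4096, dim_resnet=2048, n_classes=4):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(dim_vgg + dim_resnet, 256),
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(256, n_classes),
        )

    def forward(self, f_vgg, f_resnet):
        fused = torch.cat([f_vgg, f_resnet], dim=1)   # feature-level fusion
        return torch.softmax(self.head(fused), dim=1)

# Two dummy feature batches stand in for extracted VGG16 / ResNet50 features.
probs = FusionClassifier()(torch.randn(2, 4096), torch.randn(2, 2048))
print(probs.sum(dim=1))   # each row of class probabilities sums to 1
```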
Long-runout landslides involve a massive amount of energy and can be extremely hazardous owing to their long movement distance, high mobility, and strong destructive power. Numerical methods have been widely used to predict landslide runout, but a fundamental problem that remains is how to determine reliable numerical parameters. This study proposes a framework to predict the runout of potential landslides through multi-source data collaboration and numerical analysis of historical landslide events. Specifically, for historical landslide cases, the landslide-induced seismic signal, geophysical surveys, and possible in-situ drone/phone videos (multi-source data collaboration) can validate the numerical results in terms of landslide dynamics and deposit features and help calibrate the numerical (rheological) parameters. Subsequently, the calibrated numerical parameters can be used to numerically predict the runout of potential landslides in regions with a geological setting similar to the recorded events. Application of the runout prediction approach to the 2020 Jiashanying landslide in Guizhou, China gives reasonable results in comparison to the field observations. The numerical parameters are determined from the multi-source data collaboration analysis of a historical case in the region (the 2019 Shuicheng landslide). The proposed framework for landslide runout prediction can be of great utility for landslide risk assessment and disaster reduction in mountainous regions worldwide.
Due to the complex nature of multi-source geological data, it is difficult to rebuild every geological structure through a single 3D modeling method. The multi-source data interpretation method put forward in this analysis is based on a database-driven pattern and focuses on the discrete and irregular features of geological data. The geological data from a variety of sources, covering a range of accuracy, resolution, quantity, and quality, are classified and integrated according to their reliability and consistency for 3D modeling. A new interpolation-approximation fitting algorithm for constructing geological surfaces with the non-uniform rational B-spline (NURBS) technique is then presented. The NURBS technique can retain the balance among the requirements for accuracy, surface continuity, and data storage of geological structures. Finally, four alternative 3D modeling approaches are demonstrated with reference to selected examples, chosen according to the data quantity and accuracy specification. The proposed approaches offer flexible modeling patterns for different practical engineering demands.
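As a loose stand-in for the surface-fitting step discussed above, the sketch below fits a smoothing bivariate spline to scattered elevation points of a geological horizon. SciPy's bisplrep uses plain (non-rational) B-splines, so this only approximates the NURBS-based construction described in the paper; the synthetic horizon data and smoothing factor are assumptions.

```python
import numpy as np
from scipy.interpolate import bisplrep, bisplev

rng = np.random.default_rng(3)
x, y = rng.uniform(0, 1000, 200), rng.uniform(0, 1000, 200)   # sample locations (m)
z = -200 + 0.05 * x - 0.03 * y + 5 * np.sin(x / 150) + rng.normal(0, 1, 200)

# s trades interpolation against approximation: s=0 interpolates exactly,
# larger s smooths noisy or lower-reliability data more strongly.
tck = bisplrep(x, y, z, kx=3, ky=3, s=400)
grid_x, grid_y = np.linspace(0, 1000, 50), np.linspace(0, 1000, 50)
surface = bisplev(grid_x, grid_y, tck)                         # 50 x 50 fitted surface
print(surface.shape)
```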
The development of 3D geological models involves the integration of large amounts of geological data, as well as additional accessible proprietary lithological, structural, geochemical, geophysical, and borehole data. Luanchuan, the case study area in southwestern Henan Province, is an important molybdenum-tungsten-lead-zinc polymetallic belt in China.
Inspired by recent significant agricultural yield losses in eastern China and the lack of an operational monitoring system, we developed a comprehensive drought monitoring model to better understand the impact of individual key factors contributing to this issue. The resulting model, the 'Humidity calibrated Drought Condition Index' (HcDCI), was applied to Weihai County, Shandong Province in East China for the years 2001 to 2019 in the form of a case study. Design and development are based on a linear combination of the Vegetation Condition Index (VCI), the Temperature Condition Index (TCI), and the Rainfall Condition Index (RCI) using multi-source satellite data to create a basic Drought Condition Index (DCI). VCI and TCI were derived from MODIS (Moderate Resolution Imaging Spectroradiometer) data, while precipitation was taken from CHIRPS (Climate Hazards Group InfraRed Precipitation with Station data) data. For reasons of accuracy, the decisive coefficients were determined from the relative humidity of soils at depths of 10-20 cm in particular areas, collected by an agrometeorological ground station. The correlation between DCI and soil humidity was optimized with factors of 0.53, 0.33, and 0.14 for VCI, TCI, and RCI, respectively. The model revealed light agricultural droughts from 2003 to 2013 and in 2018, while more severe droughts occurred in 2001 and 2002, 2014-2017, and 2019. The droughts were most severe in January, March, and December, and our findings coincide with historical records. The average temperature during 2012-2019 was 1℃ higher than that during 2001-2011, and the average precipitation during 2014-2019 was 192.77 mm less than that during 2008-2013. The spatio-temporal accuracy of the HcDCI model was positively validated by correlation with agricultural crop yields. The model thus demonstrates its capability to reveal drought periods in detail, its transferability to other regions, and its usefulness for taking future measures.
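A minimal worked sketch of the index combination described above follows. The 0.53/0.33/0.14 weights are taken from the abstract; the per-pixel VCI/TCI/RCI formulas used below are the standard definitions and are assumed rather than quoted from the paper.

```python
import numpy as np

def condition_indices(ndvi, lst, rain):
    """All inputs: time-stacked arrays of shape (t, h, w)."""
    vci = (ndvi - ndvi.min(0)) / (ndvi.max(0) - ndvi.min(0) + 1e-9)
    tci = (lst.max(0) - lst) / (lst.max(0) - lst.min(0) + 1e-9)   # hotter -> drier
    rci = (rain - rain.min(0)) / (rain.max(0) - rain.min(0) + 1e-9)
    return vci, tci, rci

def dci(vci, tci, rci, w=(0.53, 0.33, 0.14)):
    """Humidity-calibrated drought condition index (lower values = drier)."""
    return w[0] * vci + w[1] * tci + w[2] * rci

t, h, w = 24, 4, 4
vci, tci, rci = condition_indices(np.random.rand(t, h, w),
                                  290 + 20 * np.random.rand(t, h, w),
                                  100 * np.random.rand(t, h, w))
print(dci(vci, tci, rci).shape)   # (24, 4, 4) per-month, per-pixel index
```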
In traditional medicine and ethnomedicine, medicinal plants have long been recognized as the basis for materials in therapeutic applications worldwide. In particular, the remarkable curative effect of traditional Chinese medicine during the corona virus disease 2019 (COVID-19) pandemic has attracted extensive attention globally. Medicinal plants have, therefore, become increasingly popular among the public. However, with increasing demand for and profit from medicinal plants, commercial fraudulent events such as adulteration or counterfeits sometimes occur, which poses a serious threat to clinical outcomes and the interests of consumers. With rapid advances in artificial intelligence, machine learning can be used to mine information on various medicinal plants to establish an ideal resource database. We herein present a review that mainly introduces common machine learning algorithms and discusses their application in multi-source data analysis of medicinal plants. The combination of machine learning algorithms and multi-source data analysis facilitates a comprehensive analysis and aids in the effective evaluation of the quality of medicinal plants. The findings of this review provide new possibilities for promoting the development and utilization of medicinal plants.
Cardiac diseases are one of the greatest global health challenges. Due to the high annual mortality rates, cardiac diseases have attracted the attention of numerous researchers in recent years. This article proposes a hybrid fuzzy fusion classification model for cardiac arrhythmia diseases. The fusion model is utilized to optimally select the highest-ranked features generated by a variety of well-known feature-selection algorithms. An ensemble of classifiers is then applied to the fusion's results. The proposed model classifies the arrhythmia dataset from the University of California, Irvine into normal/abnormal classes as well as 16 classes of arrhythmia. Initially, in the preprocessing steps, for attributes with missing values we used the class-wise average value for linear attributes and the most frequent value for nominal attributes. Furthermore, to ensure model optimality, we eliminated all attributes that have zero or constant values, which might bias the results of the utilized classifiers. The preprocessing step led to 161 out of 279 attributes (features). Thereafter, a fuzzy-based feature-selection fusion method is applied to fuse high-ranked features obtained from different heuristic feature-selection algorithms. In short, our study comprises three main blocks: (1) sensing data and preprocessing; (2) feature queuing, selection, and extraction; and (3) the predictive model. Our proposed method improves classification performance in terms of accuracy, F1-measure, recall, and precision when compared to state-of-the-art techniques. It achieves 98.5% accuracy for the binary class mode and 98.9% accuracy for the categorized class mode.
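The sketch below illustrates the general shape of the ranking-fusion step described above, assuming a simple fuzzy membership (min-max normalised score per selector) and mean aggregation; the paper's actual membership functions and fusion rule may differ, and the selector scores here are random placeholders.

```python
import numpy as np

def fuzzy_fuse_rankings(score_lists, top_k=10):
    """score_lists: list of 1-D arrays, one score per feature per selector."""
    memberships = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        memberships.append((s - s.min()) / (s.max() - s.min() + 1e-9))
    fused = np.mean(memberships, axis=0)            # aggregate fuzzy memberships
    return np.argsort(fused)[::-1][:top_k], fused   # indices of top-ranked features

rng = np.random.default_rng(0)
scores = [rng.random(161) for _ in range(3)]        # e.g. three heuristic selectors
top, fused = fuzzy_fuse_rankings(scores, top_k=20)
print(top[:5])                                      # highest-ranked feature indices
```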
In order to meet the demand for testability analysis and evaluation of complex equipment under a small sample test in the equipment life cycle, the hierarchical hybrid testability modeling and evaluation method (HHTME), which combines the testability structure model (TSM) with the testability Bayesian networks model (TBNM), is presented. Firstly, the testability network topology of complex equipment is built by using the hierarchical hybrid testability modeling method. Secondly, the prior conditional probability distribution between network nodes is determined through expert experience. Then the Bayesian method is used to update the conditional probability distribution according to history test information, virtual simulation information, and similar product information. Finally, the learned hierarchical hybrid testability model (HHTM) is used to estimate the testability of the equipment. Compared with the results of other modeling methods, the relative deviation of the HHTM is only 0.52%, and the evaluation result is the most accurate.
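The conjugate-update toy below conveys the updating idea described above: an expert Beta prior on a detection probability is revised with pass/fail counts from several information sources. The Beta-Binomial form and the counts are illustrative assumptions; the paper works with full conditional probability tables in a Bayesian network rather than a single scalar.

```python
from dataclasses import dataclass

@dataclass
class BetaBelief:
    alpha: float   # prior pseudo-counts of successful detections (expert experience)
    beta: float    # prior pseudo-counts of missed detections

    def update(self, successes: int, failures: int) -> "BetaBelief":
        return BetaBelief(self.alpha + successes, self.beta + failures)

    @property
    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

belief = BetaBelief(alpha=8.0, beta=2.0)                 # expert prior: ~0.8
for src_pass, src_fail in [(14, 1), (45, 3), (9, 1)]:    # test / simulation / similar product
    belief = belief.update(src_pass, src_fail)
print(round(belief.mean, 3))                             # posterior testability estimate
```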
Data fusion can effectively process multi-sensor information to obtain more accurate and reliable results than a single sensor. The water quality data in the environment come from different sensors, thus the data must be fused. In our research, a self-adaptive weighted data fusion method is used to integrate the pH value, temperature, dissolved oxygen, and NH3 concentration data of the water quality environment. Based on the fusion, the Grubbs method is used to detect abnormal data so as to provide data support for estimation, prediction, and early warning of the water quality.
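A minimal sketch of the two steps named above follows: classical inverse-variance (self-adaptive) weighting across sensors, and a two-sided Grubbs test to flag an outlying reading. The sensor values and the significance level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def adaptive_weighted_fusion(readings):
    """readings: (n_sensors, n_samples) repeated measurements of one quantity."""
    variances = readings.var(axis=1, ddof=1)
    weights = (1.0 / variances) / np.sum(1.0 / variances)   # low variance -> high weight
    return np.sum(weights * readings.mean(axis=1)), weights

def grubbs_outlier(x, alpha=0.05):
    """Return the index of the most extreme point if the Grubbs statistic
    exceeds the critical value, otherwise None."""
    x = np.asarray(x, float)
    n = len(x)
    g = np.max(np.abs(x - x.mean())) / x.std(ddof=1)
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))
    return int(np.argmax(np.abs(x - x.mean()))) if g > g_crit else None

ph = np.array([[7.01, 7.03, 6.99, 7.02],
               [7.05, 7.00, 7.04, 7.01],
               [6.98, 7.02, 7.00, 9.50]])      # three pH sensors, last has a spike
fused, w = adaptive_weighted_fusion(ph)
print(round(fused, 3), grubbs_outlier(ph[2]))
```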
Based on the dinuclear system model, the calculated evaporation residue cross sections matched well with current experimental results. The synthesis of superheavy element Z=121 was systematically studied through combinations of stable projectiles with Z=21-30 and targets with half-lives exceeding 50 d. The influence of mass asymmetry and isotopic dependence on the projectile and target nuclei was investigated in detail. The reactions ^(254)Es(^(46)Ti,3n)^(297)121 and ^(252)Es(^(46)Ti,3n)^(295)121 were found to be experimentally feasible for synthesizing superheavy element Z=121, with maximal evaporation residue cross sections of 6.619 and 4.123 fb at 219.9 and 223.9 MeV, respectively.
For reservoirs with complex non-Gaussian geological characteristics, such as carbonate reservoirs or reservoirs with sedimentary facies distribution, it is difficult to implement history matching directly, especially for ensemble-based data assimilation methods. In this paper, we propose a multi-source information fused generative adversarial network (MSIGAN) model, which is used for parameterization of complex geologies. In MSIGAN, various information, such as facies distribution, microseismic data, and inter-well connectivity, can be integrated to learn the geological features. Two major generative models in deep learning, the variational autoencoder (VAE) and the generative adversarial network (GAN), are combined in our model. The proposed MSIGAN model is then integrated into the ensemble smoother with multiple data assimilation (ESMDA) method to conduct history matching. We tested the proposed method on two reservoir models with fluvial facies. The experimental results show that the proposed MSIGAN model can effectively learn complex geological features, which promotes the accuracy of history matching.
In the precision machining of complex curved surface parts with high performance, geometric accuracy is not the only constraint; the performance requirements must also be met. The performance of this kind of part is closely related to its geometrical and physical parameters, so the final actual size and shape are affected by multi-source constraints, such as geometry, physics, and performance. These parts are rather difficult to manufacture, and a new manufacturing method driven by performance requirements is urgently needed. Based on the performance and manufacturing requirements for complex curved surface parts, a new classification method is proposed, which divides complex curved surface parts into two categories: surface re-design complex curved surface parts with multi-source constraints (PRCS) and surface unique complex curved surface parts with pure geometric constraints (PUCS). A correlation model is constructed between the performance and the multi-source constraints for PRCS, which reveals the correlation between them. A re-design method is also developed. Through solving the correlation model of the typical part's performance-associated surface, the mapping relation between the performance-associated surface and the related removal amount is obtained. The explicit correlation model and the method for obtaining the corresponding related removal amount of the performance-associated surface are built based on the classification of surface re-design complex curved surface parts with multi-source constraints. The research results have been applied in the actual processing of typical parts such as radomes, common bottom components, and nozzles, showing improved efficiency and accuracy of precision machining for surface re-design parts with complex curved surfaces.
We present a novel sea-ice classification framework based on locality preserving fusion of multi-source image information. The locality preserving fusion is two-fold, i.e., local characterization in both the spatial and feature domains. We commence by simultaneously learning a projection matrix, which preserves spatial localities, and a similarity matrix, which encodes feature similarities. We map the pixels of the multi-source images by the projection matrix to a set of fusion vectors that preserve the spatial localities of the image. On the other hand, by applying the Laplacian eigen-decomposition to the similarity matrix, we obtain another set of fusion vectors that preserve the local feature similarities. We concatenate the fusion vectors for both spatial and feature locality preservation and obtain the fusion image. Finally, we classify the fusion image pixels by a novel sliding ensemble strategy, which enhances locality preservation in classification. Our locality preserving fusion framework is effective in classifying multi-source sea-ice images (e.g., multi-spectral and synthetic aperture radar (SAR) images) because it not only comprehensively captures the spatial neighboring relationships but also intrinsically characterizes the feature associations between different types of sea ice. Experimental evaluations validate the effectiveness of our framework.
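A rough sketch of the concatenation idea above follows, with stand-ins for the learned components: PCA plays the role of the spatial projection, and a Gaussian-kernel similarity matrix with Laplacian eigenmaps plays the role of the feature-side embedding. The paper learns both jointly, so this is illustrative only; pixel data and dimensions are assumptions.

```python
import numpy as np

def laplacian_embedding(similarity, dim=3):
    d = similarity.sum(axis=1)
    lap = np.diag(d) - similarity                      # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(lap)
    return vecs[:, 1:dim + 1]                          # skip the trivial eigenvector

rng = np.random.default_rng(1)
pixels = rng.random((200, 8))                          # 200 pixels x 8 stacked bands (MS + SAR)

# "Spatial" branch stand-in: project pixels onto leading principal directions.
centered = pixels - pixels.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
spatial_vecs = centered @ vt[:3].T

# "Feature" branch: Gaussian-kernel similarity between pixels, then eigenmaps.
sq_dists = ((pixels[:, None, :] - pixels[None, :, :]) ** 2).sum(-1)
similarity = np.exp(-sq_dists / sq_dists.mean())
feature_vecs = laplacian_embedding(similarity, dim=3)

fusion = np.concatenate([spatial_vecs, feature_vecs], axis=1)   # fusion vectors
print(fusion.shape)                                    # (200, 6)
```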
Earth resource and environmental monitoring are essential areas that can be used to investigate the environmental conditions and natural resources supporting sustainable policy development, regulatory measures, and their implementation for elevating the environment. Large-scale forest fire is considered a major harmful hazard that affects climate change and life across the globe. Therefore, the early identification of forest fires using automated tools is essential to avoid the spread of fire to a large extent. This paper therefore focuses on the design of automated forest fire detection using a fusion-based deep learning (AFFD-FDL) model for environmental monitoring. The AFFD-FDL technique involves the design of an entropy-based fusion model for feature extraction, which combines handcrafted features using the histogram of oriented gradients (HOG) with deep features from the SqueezeNet and Inception v3 models. Besides, an optimal extreme learning machine (ELM)-based classifier is used to identify the presence or absence of fire. In order to properly tune the parameters of the ELM model, the oppositional glowworm swarm optimization (OGSO) algorithm is employed, thereby improving the forest fire detection performance. A wide range of simulation analyses takes place on a benchmark dataset, and the results are inspected under several aspects. The experimental results highlighted the betterment of the AFFD-FDL technique over recent state-of-the-art techniques.
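A minimal extreme learning machine (ELM) sketch for the classification stage described above follows; random fused feature vectors stand in for the HOG + deep features, and the OGSO parameter tuning is omitted. Hidden-layer size and data are illustrative assumptions.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=128, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y, n_classes=2):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random hidden layer
        T = np.eye(n_classes)[y]                         # one-hot targets
        self.beta = np.linalg.pinv(H) @ T                # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))                           # fused fire / no-fire features
y = (X[:, 0] + 0.2 * rng.normal(size=300) > 0).astype(int)
model = ELM().fit(X, y)
print((model.predict(X) == y).mean())                    # training accuracy
```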
Accurate and rapid detection of fish behaviors is critical to perceiving health and welfare by allowing farmers to make informed management decisions about the recirculating aquaculture system while decreasing labor. The classic detection approach involves placing sensors on the skin or body of the fish, which may interfere with typical behavior and welfare. The progress of deep learning and computer vision technologies opens up new opportunities to understand the biological basis of this behavior and precisely quantify behaviors that contribute to achieving accurate management in precision farming and higher production efficacy. This study develops an intelligent fish behavior classification using modified invasive weed optimization with an ensemble fusion (IFBC-MIWOEF) model. The presented IFBC-MIWOEF model focuses on identifying the distinct kinds of fish behavior. To accomplish this, the IFBC-MIWOEF model designs an ensemble of Deep Learning (DL)-based fusion models, namely the VGG-19, DenseNet, and EfficientNet models, for fish behavior classification. In addition, the hyperparameter tuning of the DL models is carried out using the MIWO algorithm, which is derived from the concepts of oppositional-based learning (OBL) and the IWO algorithm. Finally, the softmax (SM) layer at the end of the DL model categorizes the input into distinct fish behavior classes. The experimental validation of the IFBC-MIWOEF model is tested using fish videos, and the results are examined under distinct aspects. An extensive comparative study pointed out the improved outcomes of the IFBC-MIWOEF model over recent approaches.
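A minimal sketch of the score-level fusion implied above follows: the softmax score vectors of the three backbones are averaged and the argmax taken. Equal weights and the random score matrices are assumptions; the paper tunes the ensemble with the MIWO algorithm rather than fixing weights.

```python
import numpy as np

def fuse_softmax_scores(score_list, weights=None):
    """score_list: list of (n_samples, n_classes) softmax outputs, one per model."""
    scores = np.stack(score_list)                      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(score_list), 1.0 / len(score_list))
    fused = np.tensordot(weights, scores, axes=1)      # weighted average of scores
    return fused.argmax(axis=1), fused

rng = np.random.default_rng(2)
vgg, dense, effnet = (rng.dirichlet(np.ones(5), size=4) for _ in range(3))
labels, fused = fuse_softmax_scores([vgg, dense, effnet])
print(labels)                                          # fused behavior class per sample
```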