This work presents the “n<sup>th</sup>-Order Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (abbreviated as “n<sup>th</sup>-FASAM-N”), which will be shown to be the most efficient methodology for computing exact expressions of sensitivities, of any order, of model responses with respect to features of model parameters and, subsequently, with respect to the model’s uncertain parameters, boundaries, and internal interfaces. The unparalleled efficiency and accuracy of the n<sup>th</sup>-FASAM-N methodology stem from the maximal reduction of the number of adjoint computations (which are considered to be “large-scale” computations) needed to compute high-order sensitivities. When applying the n<sup>th</sup>-FASAM-N methodology to compute the second- and higher-order sensitivities, the number of large-scale computations is proportional to the number of “model features” rather than to the number of model parameters (which considerably exceeds the number of features). When a model has no “feature” functions of parameters but comprises only primary parameters, the n<sup>th</sup>-FASAM-N methodology becomes identical to the extant n<sup>th</sup>-CASAM-N (“n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems”) methodology. Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are formulated in linearly increasing higher-dimensional Hilbert spaces, as opposed to exponentially increasing parameter-dimensional spaces, thus overcoming the curse of dimensionality in sensitivity analysis of nonlinear systems.
Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N are incomparably more efficient and more accurate than any other method (statistical, finite differences, etc.) for computing exact expressions of response sensitivities of any order with respect to the model’s features and/or primary uncertain parameters, boundaries, and internal interfaces.
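The economy of adjoint methods that this abstract relies on can be illustrated on a small linear model: one adjoint solve yields the response sensitivity with respect to every parameter at once, and the result can be checked against finite differences. The sketch below is a generic first-order adjoint example, not the n<sup>th</sup>-FASAM-N formalism itself; the matrices and parameter dependence are hypothetical.

```python
import numpy as np

# Model: A(p) u = b, response R = c^T u.
# Adjoint trick: solve A^T phi = c ONCE; then for every parameter p_k,
# dR/dp_k = phi^T (db/dp_k - (dA/dp_k) u)  -- no extra forward solves.

def solve_response_and_sensitivities(p):
    A = np.array([[2.0 + p[0], 1.0], [1.0, 3.0 + p[1]]])
    b = np.array([1.0, 2.0])
    c = np.array([1.0, 1.0])
    u = np.linalg.solve(A, b)        # one forward (large-scale) solve
    phi = np.linalg.solve(A.T, c)    # one adjoint solve for ALL sensitivities
    dA = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # dA/dp0
          np.array([[0.0, 0.0], [0.0, 1.0]])]   # dA/dp1
    sens = [-phi @ (dA_k @ u) for dA_k in dA]   # db/dp_k = 0 here
    return c @ u, np.array(sens)

p = np.array([0.5, 0.5])
R, g = solve_response_and_sensitivities(p)

# Verify the adjoint-based sensitivities against finite differences.
eps = 1e-6
for k in range(2):
    dp = np.zeros(2); dp[k] = eps
    R2, _ = solve_response_and_sensitivities(p + dp)
    assert abs((R2 - R) / eps - g[k]) < 1e-4
```

The same structure carries over to nonlinear systems and higher orders; what changes (and what the FASAM/CASAM frameworks quantify) is how many such adjoint solves are needed.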
This work highlights the unparalleled efficiency of the “n<sup>th</sup>-Order Function/Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-FASAM-N) by considering the well-known Nordheim-Fuchs reactor dynamics/safety model. This model describes a short-time self-limiting power excursion in a nuclear reactor system having a negative temperature coefficient, in which a large amount of reactivity is suddenly inserted, either intentionally or by accident. This nonlinear paradigm model is sufficiently complex to model self-limiting power excursions realistically for short times, yet admits closed-form exact expressions for the time-dependent neutron flux, temperature distribution, and energy released during the transient power burst. The n<sup>th</sup>-FASAM-N methodology is compared to the extant “n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-CASAM-N), showing that: (i) the 1<sup>st</sup>-FASAM-N and the 1<sup>st</sup>-CASAM-N methodologies are equally efficient for computing the first-order sensitivities; each methodology requires a single large-scale computation for solving the “First-Level Adjoint Sensitivity System” (1<sup>st</sup>-LASS); (ii) the 2<sup>nd</sup>-FASAM-N methodology is considerably more efficient than the 2<sup>nd</sup>-CASAM-N methodology for computing the second-order sensitivities, since the number of feature-functions is much smaller than the number of primary parameters; specifically, for the Nordheim-Fuchs model, the 2<sup>nd</sup>-FASAM-N methodology requires 2 large-scale computations to obtain all of the exact expressions of the 28 distinct second-order response sensitivities with respect to the model parameters, while the 2<sup>nd</sup>-CASAM-N
methodology requires 7 large-scale computations for obtaining these 28 second-order sensitivities; (iii) the 3<sup>rd</sup>-FASAM-N methodology is even more efficient than the 3<sup>rd</sup>-CASAM-N methodology: only 2 large-scale computations are needed to obtain the exact expressions of the 84 distinct third-order response sensitivities with respect to the Nordheim-Fuchs model’s parameters when applying the 3<sup>rd</sup>-FASAM-N methodology, while the application of the 3<sup>rd</sup>-CASAM-N methodology requires at least 22 large-scale computations for computing the same 84 distinct third-order sensitivities. Together, the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are the most practical methodologies for computing response sensitivities of any order comprehensively and accurately, overcoming the curse of dimensionality in sensitivity analysis.
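The self-limiting excursion that the Nordheim-Fuchs model captures can be reproduced numerically with a two-equation prompt-kinetics sketch: power grows exponentially until the negative temperature feedback cancels the inserted reactivity, after which the burst dies away. The parameter values and the explicit-Euler integration below are illustrative assumptions, not the closed-form treatment used in the paper.

```python
# Nordheim-Fuchs-type self-limiting excursion (illustrative parameters):
#   dP/dt = (rho0 - alpha*T) * P / Lambda   (prompt kinetics, no delayed neutrons)
#   dT/dt = P / C                           (adiabatic heat-up, T above initial)
rho0, alpha, Lambda, C = 0.005, 1e-4, 1e-5, 1e4
P, T, dt = 1.0, 0.0, 1e-5

history = []
for _ in range(200_000):                 # integrate 2 s of transient
    dP = (rho0 - alpha * T) * P / Lambda
    dT = P / C
    P += dP * dt
    T += dT * dt
    history.append(P)

peak = max(history)
# Self-limiting: the burst has died away long before the end of the run.
assert history[-1] < 0.01 * peak
# Classic adiabatic result: the asymptotic temperature rise is about 2*rho0/alpha.
assert abs(T - 2 * rho0 / alpha) < 0.1 * (2 * rho0 / alpha)
```

Sensitivities of responses such as the peak power or released energy with respect to rho0, alpha, Lambda, and C are exactly the quantities the FASAM/CASAM comparisons above count adjoint solves for.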
Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage SPL’s common and variable features with their different relations and the problem of Cross-tree Constraints (CTC). CTC problems exist in groups of common and variable features among the sub-trees of feature models and are more diverse in Internet of Things (IoT) devices because different Internet devices and protocols communicate with each other. Therefore, managing the CTC problem to achieve valid product configuration in IoT-based SPL is more complex, time-consuming, and difficult. However, the CTC problem was not considered in previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool; therefore, invalid products are generated. This research proposes a novel approach, Binary Oriented Feature Selection Cross-tree Constraints (BOFS-CTC), to find all possible valid products by selecting features according to cardinality constraints and cross-tree constraint problems in the feature model of SPL. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to IoT-based feature models. The findings are that no relationship-constraint or CTC violations occur, and valid feature product configurations are derived for application development by removing the invalid product configurations. The accuracy of BOFS-CTC was measured by an integration sampling technique, in which different valid product configurations were compared with the product configurations derived by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
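The core of pruning invalid configurations under cross-tree constraints can be sketched with a toy binary feature model; the feature names and the two constraints below are hypothetical examples, not the models used to evaluate BOFS-CTC.

```python
from itertools import product

# Toy IoT feature model: four optional features and two cross-tree
# constraints (CTC): 'Camera' requires 'Storage'; 'LowPower' excludes 'Camera'.
features = ["Camera", "Storage", "LowPower", "WiFi"]
requires = [("Camera", "Storage")]   # (a, b): selecting a forces b
excludes = [("LowPower", "Camera")]  # (a, b): a and b cannot coexist

def valid(config):
    on = {f for f, sel in zip(features, config) if sel}
    if any(a in on and b not in on for a, b in requires):
        return False
    if any(a in on and b in on for a, b in excludes):
        return False
    return True

# Enumerate all 2^4 candidate products; invalid ones are pruned early.
valid_products = [c for c in product([0, 1], repeat=len(features)) if valid(c)]
print(len(valid_products))
```

Of the 16 candidate configurations, only those satisfying both constraints survive, which is the set a product-configuration tool would hand to development and testing.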
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, existing methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
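The fusion step described above amounts to concatenating the feature vectors produced by two backbones before a softmax classifier. A minimal numpy sketch, with random stand-in features in place of VGG16/ResNet50 activations and an untrained random classifier head (all dimensions and weights are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for deep features from two backbones (e.g., 512-d and 2048-d).
feat_a = rng.normal(size=(8, 512))    # batch of 8 images, backbone A
feat_b = rng.normal(size=(8, 2048))   # same batch, backbone B

fused = np.concatenate([feat_a, feat_b], axis=1)  # (8, 2560) fusion vector

# Softmax classifier head over 4 tumour classes (untrained, random weights).
W = rng.normal(size=(2560, 4)) * 0.01
logits = fused @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

assert probs.shape == (8, 4)
assert np.allclose(probs.sum(axis=1), 1.0)
```

In the real pipeline the concatenated vector is produced by trained networks and the head is trained jointly; the shape bookkeeping, however, is exactly this.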
Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Considering that pre-trained language models have a strong capacity for monolingual representation, the key challenge for machine translation is to construct an in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to inject part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of the pre-trained language model, thus ensuring that the updated language model contains potential lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the pre-trained language model-enhanced machine translation model. Experiments on the benchmark dataset show that our proposed LKMT approach improves Urdu-English translation accuracy by 1.97 points and English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of our LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech and dependency parsing on machine translation.
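The embedding-layer injection described above can be sketched as summing token, part-of-speech, and dependency-label embeddings position by position before the encoder. The vocabulary sizes, dimensions, and id values below are hypothetical, not those of the LKMT system.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 64                                   # embedding dimension
tok_emb = rng.normal(size=(1000, d))     # token vocabulary table
pos_emb = rng.normal(size=(17, d))       # part-of-speech tag table
dep_emb = rng.normal(size=(40, d))       # dependency-label table

# One sentence: token ids with their POS and dependency-label ids.
tokens = np.array([5, 42, 7])
pos    = np.array([0, 11, 3])
deps   = np.array([2, 25, 8])

# Injected input = token + POS + dependency embeddings, summed per position,
# so the encoder sees lexical and syntactic signals in the same vector space.
x = tok_emb[tokens] + pos_emb[pos] + dep_emb[deps]
assert x.shape == (3, d)
```

Because all three tables are updated during monolingual pre-training, the summed representation lets the model absorb syntactic information without any parallel data.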
The adaptability of feature definitions to applications is an essential condition for implementing feature-based design. This paper attempts to present a hierarchical definition structure of features. The proposed scheme divides feature definition into an application level, a form level, and a geometric level, and provides links between the different levels with feature semantics interpretation and an enhanced geometric face adjacency graph, respectively. The results not only free feature definitions from application-specific dependence and make them more broadly applicable, but also provide a theoretical foundation for establishing a concurrent feature-based design process model.
Feature modeling is the key to the realization of CAD/CAPP/CAM and to the information integration of concurrent engineering. This paper describes a method for developing a parametric, feature-based modeling system on top of the I-DEAS 5 system. It elaborates the feature-based modeling technique and generates feature-based product information models that provide abundant information for ensuing application processes. Developing a feature modeling system on a commercial CAD software platform can take great advantage of the solid modeling resources of the existing software, save funds, and shorten the development cycles of new systems.
Heart disease (HD) is a serious, widespread, life-threatening disease. The heart of patients with HD fails to pump sufficient amounts of blood to the entire body. Diagnosing the occurrence of HD early and efficiently may prevent the manifestation of the debilitating effects of this disease and aid in its effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient in analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, the existing models of automated ML-based HD diagnostic methods cannot satisfy clinical evaluation criteria because of their inability to recognize anomalies in extracted symptoms represented as classification features from patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or with HD according to the analysis of optimized HD features. We evaluate the classification performance of the proposed AHDD system against 12 common ML techniques and conventional CNN models by using a cross-validation technique and by assessing six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%–83.8% accuracy on average. Therefore, the AHDD system proposed herein has the highest capability to identify patients with HD. This system can be used by medical practitioners to diagnose HD efficiently.
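The wrapper idea at the heart of MAFW, a genetic algorithm searching over binary feature masks scored by a downstream model, can be sketched in a few lines. The fitness function below is a deliberately simple stand-in (it pretends three features are informative), not the AHDD system's SVM/NB scoring; all constants are hypothetical.

```python
import random

random.seed(0)
N_FEATURES = 13            # e.g., the classic heart-disease attribute count

def fitness(mask):
    # Stand-in wrapper score: reward selecting the "informative" features
    # and penalise mask size (parsimony pressure).
    informative = {1, 4, 8}
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    return hits - 0.05 * sum(mask)

def ga(pop_size=30, gens=40):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)
            child = a[:cut] + b[cut:]              # one-point crossover
            i = random.randrange(N_FEATURES)
            child[i] ^= 1                          # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
assert len(best) == N_FEATURES and fitness(best) >= 1.0
```

In the real system the fitness evaluation is the expensive part (training SVM/NB on the candidate subset), which is why the agents coordinate the search rather than exhaustively enumerating the 2<sup>13</sup> masks.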
Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEMs) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. However, few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, covering gully-affected areas and bank gullies, was developed and tested on a 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology comprises a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information, whereas spectral features are more dominant for bank gully mapping. The highest overall accuracy of gully-affected area mapping was 93.06% with four topographic features, and the highest overall accuracy of bank gully mapping was 78.5% when all features were adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information and is suitable for application in the hilly Loess Plateau region.
With increasing intelligence and integration, a great number of two-valued variables (generally stored in the form of 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA) and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has some shortcomings that need to be improved. In the MHNBM, variables with greater correlation to other variables have greater weights, which cannot guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x<sub>j</sub>|x<sub>j′</sub>, y = k) must be computed based on historical data. When the training data are scarce, the conditional probability between continuous variables tends to be uniformly distributed, which affects the performance of the MHNBM. Here a novel feature-weighted mixed naive Bayes model (FWMNBM) is developed to overcome the above shortcomings. In the FWMNBM, variables that are more correlated to the class have greater weights, which makes the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate the conditional probability between variables, so it is less restricted by the number of training data samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case of the Zhoushan thermal power plant (ZTPP), China.
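The weighting idea can be sketched in miniature: each feature's contribution to the naive Bayes log-posterior is scaled by its correlation with the class label, so discriminating variables dominate. This toy Gaussian version is illustrative only, not the FWMNBM as published; the data and weighting rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: feature 0 is discriminative, feature 1 is pure noise.
n = 200
y = rng.integers(0, 2, size=n)
X = np.column_stack([y + 0.3 * rng.normal(size=n),   # correlated with class
                     rng.normal(size=n)])            # noise

# Feature weights: absolute correlation with the class label, normalised.
w = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
w /= w.sum()

def predict(x):
    scores = []
    for k in (0, 1):
        Xk = X[y == k]
        mu, sd = Xk.mean(axis=0), Xk.std(axis=0) + 1e-9
        # Per-feature Gaussian log-likelihood, combined with the weights.
        ll = -0.5 * ((x - mu) / sd) ** 2 - np.log(sd)
        scores.append(np.log((y == k).mean()) + np.dot(w, ll))
    return int(np.argmax(scores))

acc = np.mean([predict(X[i]) == y[i] for i in range(n)])
assert w[0] > w[1]   # the discriminative feature receives the larger weight
assert acc > 0.8     # easy separable problem: high training accuracy
```

Note that, as in the FWMNBM, no conditional probability *between* features is estimated; only per-feature class-conditional statistics and class-correlation weights are needed.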
A new feature extraction method based on a 2D hidden Markov model (HMM) is proposed, and a time index and frequency index are introduced to represent the new features. The new feature extraction strategy is tested on experimental data collected from a Bently rotor experiment system. The results show that this methodology is very effective at extracting features of vibration signals during rotor speed-up and can be extended to other non-stationary signal analysis fields in the future.
Strong mechanical vibration and acoustical signals of the grinding process contain useful information related to load parameters in ball mills. It is a challenge to extract latent features and construct a soft sensor model from the high-dimensional frequency spectra of these signals. This paper aims to develop a selective ensemble modeling approach based on nonlinear latent frequency spectral feature extraction for accurate measurement of the material-to-ball volume ratio. Latent features are first extracted from different vibration and acoustic spectral segments by kernel partial least squares. Bootstrap and least squares support vector machine algorithms are employed to produce candidate sub-models using these latent features as inputs. Ensemble sub-models are selected using a genetic algorithm optimization toolbox. Partial least squares regression is used to combine these sub-models and eliminate collinearity among their prediction outputs. Results indicate that the proposed modeling approach has better prediction performance than previous ones.
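The selective-ensemble recipe (bootstrap candidate sub-models, select a subset on validation data, combine the survivors) can be sketched without the kernel-PLS/LSSVM machinery. Greedy forward selection below stands in for the paper's GA-based selection, and ridge regressors stand in for LSSVM sub-models; everything is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy regression data standing in for spectral features -> load parameter.
X = rng.normal(size=(120, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=120)
Xtr, ytr, Xval, yval = X[:80], y[:80], X[80:], y[80:]

# 1) Bootstrap candidate sub-models (ridge regression stand-ins for LSSVM).
models = []
for _ in range(10):
    idx = rng.integers(0, 80, size=80)
    A, b = Xtr[idx], ytr[idx]
    models.append(np.linalg.solve(A.T @ A + 0.1 * np.eye(5), A.T @ b))

def val_err(subset):
    pred = np.mean([Xval @ models[i] for i in subset], axis=0)
    return np.mean((pred - yval) ** 2)

# 2) Greedy forward selection of sub-models on validation error.
selected = []
while True:
    best = min((i for i in range(10) if i not in selected),
               key=lambda i: val_err(selected + [i]), default=None)
    if best is None or (selected and val_err(selected + [best]) >= val_err(selected)):
        break
    selected.append(best)

# 3) The selected (averaged) ensemble is at least as good as any single model.
single_errs = [val_err([i]) for i in range(10)]
assert val_err(selected) <= min(single_errs) + 1e-12
```

The paper additionally combines the selected sub-models with PLS regression instead of a plain average, precisely to handle collinearity among their outputs.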
This paper presents a feature modeling approach to address 3D structural topology design optimization with feature constraints. In the proposed algorithm, various features are formed into searchable shape features by the feature modeling technology, and models of feature elements are established. The feature elements that meet the design requirements are found by employing a feature matching technology, and the constraint factors combined with the pseudo-density of elements are initialized according to the optimized feature elements. Then, by controlling the constraint factors and utilizing the optimization criterion method along with the filtering technology of independent mesh, the structural design optimization is implemented. The present feature modeling approach is applied to feature-based structural topology optimization using empirical data. Meanwhile, the improved mathematical model based on the density method with the constraint factors and the corresponding solution processes are also presented. Compared with the traditional method, which requires complicated constraint processing, the present approach is flexibly applied to 3D structural design optimization with added holes by changing the constraint factors, and thus can design a structure with predetermined features more directly and easily. Numerical examples show the effectiveness of the proposed feature modeling approach, which is suitable for practical engineering design.
Most large-scale systems, including self-adaptive systems, utilize feature models (FMs) to represent their complex architectures and to benefit from the reuse of commonality and variability information. Self-adaptive systems (SASs) are capable of reconfiguring themselves at run time to satisfy the scenarios of the requisite contexts. However, reconfiguring an SAS for each adaptation of the system requires significant computational time and resources. Reusing configurations can be a better alternative in some contexts to reduce computational time, effort, and errors. Moreover, system complexity can be reduced during the development process by reusing elements or components. FMs are considered a new avenue for reuse that can introduce opportunities beyond the conventional reuse of system components. While current FM-based modelling techniques represent, manage, and reuse elementary features to model SAS concepts, modeling and reusing configurations have not yet been considered. In this context, this study presents an extension to FMs by introducing and managing configuration features and their reuse process. Evaluation results demonstrate that reusing configuration features reduces the effort and time required by a reconfiguration process at run time to meet the required scenario according to the current context.
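Configuration reuse can be sketched as memoising the mapping from a context to a previously derived configuration, so a recurring context skips the expensive reconfiguration step. All names and the context representation below are hypothetical illustrations of the idea, not the study's mechanism.

```python
# Sketch: reuse previously derived configurations keyed by context,
# so a recurring context skips the (expensive) reconfiguration step.
derivations = {"count": 0}

def derive_configuration(context):
    # Stand-in for the costly run-time reconfiguration of an SAS.
    derivations["count"] += 1
    return frozenset(f for f, needed in context.items() if needed)

cache = {}

def configure(context):
    key = frozenset(context.items())
    if key not in cache:                 # first occurrence: derive and store
        cache[key] = derive_configuration(context)
    return cache[key]                    # recurring context: reuse

ctx = {"low_battery": True, "camera": False, "wifi": True}
c1 = configure(ctx)
c2 = configure(dict(ctx))                # same context encountered again
assert c1 == c2
assert derivations["count"] == 1         # reconfiguration ran only once
```

Treating such cached configurations as first-class "configuration features" in the FM is what lets the reuse survive beyond a single run of the system.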
To address the difficulty of integrating data based on different models in spatial information integration, the characteristics of the raster structure, vector structure, and mixed model were analyzed, and a hierarchical vector-raster integrative full feature model was put forward by combining the advantages of the vector and raster models and using the object-oriented method. The data structures of the four basic features, i.e. point, line, surface and solid, were described; an application was analyzed and described, and the characteristics of this model were summarized. In this model, all objects in the real world are divided into and described as features with hierarchy, and all data are organized in vector form. This model can describe data based on feature, field, network and other models, and avoids the inability to integrate data based on different models and to perform spatial analysis on them in spatial information integration.
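The four basic feature types can be sketched as a small object-oriented hierarchy in which every object, whatever its model of origin, is stored as a feature in vector form; the class and attribute names below are illustrative, not the paper's data structures.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:                      # common base: every object is a feature
    fid: int
    attributes: dict = field(default_factory=dict)

@dataclass
class Point(Feature):
    xyz: tuple = (0.0, 0.0, 0.0)

@dataclass
class Line(Feature):
    vertices: list = field(default_factory=list)    # ordered Point features

@dataclass
class Surface(Feature):
    boundary: list = field(default_factory=list)    # closed loop of Lines

@dataclass
class Solid(Feature):
    shells: list = field(default_factory=list)      # bounding Surfaces

p1, p2 = Point(1, xyz=(0, 0, 0)), Point(2, xyz=(1, 0, 0))
edge = Line(3, vertices=[p1, p2])
assert isinstance(edge, Feature) and len(edge.vertices) == 2
```

Because higher-level features are composed of lower-level ones, field- or network-based data can be expressed by attaching the appropriate attributes at each level of the same hierarchy.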
BACKGROUND: The electrical stimulation kindling model, which offers advantages such as reliable epilepsy induction and spontaneous seizures, is an ideal experimental animal model, but the kindling effect may differ at different sites. OBJECTIVE: To compare the features of animal models of complex partial epilepsy established through unilateral, bilateral and alternate-side kindling of the hippocampus, and the success rate of modeling among these 3 different ways. DESIGN: A randomized and controlled animal experiment. SETTING: Department of Neurology, Qilu Hospital, Shandong University. MATERIALS: Sixty healthy adult Wistar rats, weighing 200 to 300 g, of either sex, were used in this experiment. A BL-410 biological functional experimental system (Taimeng Science and Technology Co. Ltd, Chengdu) and an SE-7102 electronic stimulator (Guangdian Company, Japan) were used. METHODS: This experiment was carried out in the Experimental Animal Center of Shandong University from April to June 2004. After the rats were anesthetized, electrodes were implanted into the hippocampus. From the first day of measurement of the afterdischarge threshold, rats were given two-square-wave suprathreshold stimulation once per day at 400 μA intensity, 1 ms wave length, and 60 Hz frequency for a 1 s duration. The left hippocampus was stimulated in the unilateral kindling group, bilateral hippocampi were stimulated in the bilateral kindling group, and the left and right hippocampi were stimulated alternately every day in the alternate-side kindling group.
Seizure intensity was scored as follows. Grade 0: normal; 1: wet-dog shakes and facial spasm, such as winking, touching the beard, and rhythmic chewing; 2: rhythmic nodding; 3: forelimb spasm; 4: standing accompanied by bilateral forelimb spasm; 5: tumbling, loss of balance, and four-limb spasm. Modeling was successful when seizure intensity reached grade 5. The t test was used for the comparison of mean values between two samples. MAIN OUTCOME MEASURES: Comparison of the success rate of modeling, the number of stimulations needed to reach grade 5 intensity, and the duration of grade 3 seizures in each group. RESULTS: Four rats of the alternate-side kindling group dropped out due to infection-induced electrode loss, and 56 rats were involved in the result analysis. The success rates of the unilateral, bilateral and alternate-side kindling groups were 55% (11/20), 100% (16/16) and 100% (20/20), respectively. Significantly more stimuli were needed to reach grade 5 spasm in the bilateral kindling group than in the unilateral kindling group [(30.63±3.48) vs. (19.36±3.47) times, t=8.268, P < 0.01], and significantly fewer in the alternate-side kindling group than in the unilateral kindling group [(10.85±1.98) times, t=-8.744, P < 0.01]. The duration of grade 3 spasm was significantly longer in the bilateral kindling group than in the unilateral kindling group [(9.75±2.59) vs. (3.21±1.58) days, t=-8.183, P < 0.01]. Among the 20 successful rats of the alternate-side kindling group, grade 5 spasm was found in the left hippocampi of 11 rats but grade 3 spasm in their right hippocampi; grade 5 spasm was found in the right hippocampi of the other 9 rats, with grade 4 spasm in the left hippocampus of 1 rat and grade 3 in those of 8 rats. CONCLUSION: Establishing an epilepsy seizure model by alternate-side kindling is faster than by unilateral kindling, while bilateral kindling is slower than unilateral kindling.
The success rate of establishing complex partial epilepsy with alternate-side or bilateral kindling is very high. Epilepsy seizures established by alternate-side kindling show an antagonistic kindling effect, and the duration of grade 3 spasm is prolonged.
A novel method to extract conic blending features in reverse engineering is presented. Unlike methods that recover constant- and variable-radius blends from unorganized points, it contains not only novel segmentation and feature recognition techniques, but also a bias-correction technique to capture a more reliable distribution of feature parameters along the spine curve. The segmentation, which depends on point classification, separates the points in the conic blend region from the input point cloud. The available feature parameters of the cross-sectional curves are extracted through slicing the point cloud with planes, conic curve fitting, and parameter estimation and compensation. The extracted parameters and their distribution laws are refined according to statistical theory, including regression analysis and hypothesis testing. The proposed method can accurately capture the original design intent and conveniently guide the reverse modeling process. Application examples are presented to verify the high precision and stability of the proposed method.
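The conic-fitting step on each planar cross-section can be sketched as a least-squares fit of the general conic a·x² + b·xy + c·y² + d·x + e·y + f = 0 via SVD. The circular test section below is an illustrative stand-in for a sliced blend cross-section, not the paper's estimation-and-compensation procedure.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    The coefficient vector is the right singular vector associated with
    the smallest singular value of the design matrix (|coef| = 1).
    """
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

# Cross-section stand-in: points sampled on a circle of radius 2.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
pts = np.column_stack([2 * np.cos(t), 2 * np.sin(t)])
coef = fit_conic(pts)

# Residuals of the fitted conic on the sample points should be ~0.
x, y = pts[:, 0], pts[:, 1]
res = coef[0]*x*x + coef[1]*x*y + coef[2]*y*y + coef[3]*x + coef[4]*y + coef[5]
assert np.max(np.abs(res)) < 1e-8
```

Repeating this fit on every slicing plane yields the per-section conic parameters whose distribution along the spine curve the paper then refines statistically.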
On the platform of the UG general CAD system, a customized module dedicated to turbo-jet engine blade design is implemented to support the integration of CAD/CAE/CAM processes and multidisciplinary optimization of structure design. An example is presented to illustrate the related techniques.
Current 3D CAD/CAM systems, both research prototypes and commercial systems, based on traditional feature modeling are hampered by complicated modeling and difficult maintenance. This paper introduces a new method for modeling parts by using adaptability features (AF), by which consistent relationships among parts and assemblies can be maintained throughout the design process. In addition, the design process can be sped up, time-to-market shortened, and product quality improved. Some essential issues of the strategy are discussed. A system, KMCAD3D, that takes advantage of AF has been developed. It is shown that the method discussed is a feasible and effective way to improve current feature modeling technology.
文摘Abstract: This work presents the “n<sup>th</sup>-Order Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (abbreviated as “n<sup>th</sup>-FASAM-N”), which will be shown to be the most efficient methodology for computing exact expressions of sensitivities, of any order, of model responses with respect to features of model parameters and, subsequently, with respect to the model’s uncertain parameters, boundaries, and internal interfaces. The unparalleled efficiency and accuracy of the n<sup>th</sup>-FASAM-N methodology stem from the maximal reduction of the number of adjoint computations (which are considered to be “large-scale” computations) required for computing high-order sensitivities. When applying the n<sup>th</sup>-FASAM-N methodology to compute the second- and higher-order sensitivities, the number of large-scale computations is proportional to the number of “model features” as opposed to being proportional to the number of model parameters (which are considerably more numerous than the features). When a model has no “feature” functions of parameters but only comprises primary parameters, the n<sup>th</sup>-FASAM-N methodology becomes identical to the extant n<sup>th</sup>-CASAM-N (“n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems”) methodology. Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are formulated in linearly increasing higher-dimensional Hilbert spaces, as opposed to exponentially increasing parameter-dimensional spaces, thus overcoming the curse of dimensionality in the sensitivity analysis of nonlinear systems. Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N are incomparably more efficient and more accurate than other methods (statistical, finite differences, etc.) for computing exact expressions of response sensitivities of any order with respect to the model’s features and/or primary uncertain parameters, boundaries, and internal interfaces.
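The core efficiency argument above can be illustrated on a deliberately tiny, hypothetical linear model (not the n<sup>th</sup>-FASAM-N itself, which treats nonlinear operator equations): for a response R = c·u with A(p)u = b, a single adjoint solve of Aᵀψ = c yields the first-order sensitivity dR/dpᵢ = −ψ·(dA/dpᵢ)u for every parameter at once, whereas finite differences need extra forward solves per parameter.

```python
# Toy adjoint-sensitivity sketch (assumed 2x2 model, not the paper's method):
# one adjoint solve gives dR/dp for ALL parameters; finite differences are
# used here only as an independent numerical check.

def solve2(A, b):
    # Cramer's rule for a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def model_matrix(p1, p2):
    return [[2.0 + p1, 1.0],
            [1.0, 3.0 + p2]]

b = [1.0, 2.0]
c = [1.0, 1.0]
p1, p2 = 0.5, 0.25

A = model_matrix(p1, p2)
u = solve2(A, b)                                  # one forward solve
At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]     # transpose of A
psi = solve2(At, c)                               # ONE adjoint solve

# dA/dp1 = [[1,0],[0,0]] and dA/dp2 = [[0,0],[0,1]] here, so the
# sensitivities collapse to single products:
dR_dp1 = -psi[0] * u[0]
dR_dp2 = -psi[1] * u[1]

# central finite differences: one extra pair of solves PER parameter
h = 1e-6
def R(q1, q2):
    uu = solve2(model_matrix(q1, q2), b)
    return c[0] * uu[0] + c[1] * uu[1]

fd1 = (R(p1 + h, p2) - R(p1 - h, p2)) / (2 * h)
fd2 = (R(p1, p2 + h) - R(p1, p2 - h)) / (2 * h)
```

The adjoint results agree with the finite-difference check while requiring a fixed number of solves, independent of the parameter count.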
文摘Abstract: This work highlights the unparalleled efficiency of the “n<sup>th</sup>-Order Function/Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-FASAM-N) by considering the well-known Nordheim-Fuchs reactor dynamics/safety model. This model describes a short-time self-limiting power excursion in a nuclear reactor system having a negative temperature coefficient in which a large amount of reactivity is suddenly inserted, either intentionally or by accident. This nonlinear paradigm model is sufficiently complex to model realistically self-limiting power excursions for short times, yet admits closed-form exact expressions for the time-dependent neutron flux, temperature distribution and energy released during the transient power burst. The n<sup>th</sup>-FASAM-N methodology is compared to the extant “n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-CASAM-N), showing that: (i) the 1<sup>st</sup>-FASAM-N and the 1<sup>st</sup>-CASAM-N methodologies are equally efficient for computing the first-order sensitivities; each methodology requires a single large-scale computation for solving the “First-Level Adjoint Sensitivity System” (1<sup>st</sup>-LASS); (ii) the 2<sup>nd</sup>-FASAM-N methodology is considerably more efficient than the 2<sup>nd</sup>-CASAM-N methodology for computing the second-order sensitivities, since the number of feature-functions is much smaller than the number of primary parameters; specifically, for the Nordheim-Fuchs model, the 2<sup>nd</sup>-FASAM-N methodology requires 2 large-scale computations to obtain all of the exact expressions of the 28 distinct second-order response sensitivities with respect to the model parameters, while the 2<sup>nd</sup>-CASAM-N methodology requires 7 large-scale computations for obtaining these 28 second-order sensitivities; (iii) the 3<sup>rd</sup>-FASAM-N methodology is even more efficient than the 3<sup>rd</sup>-CASAM-N methodology: only 2 large-scale computations are needed to obtain the exact expressions of the 84 distinct third-order response sensitivities with respect to the Nordheim-Fuchs model’s parameters when applying the 3<sup>rd</sup>-FASAM-N methodology, while the application of the 3<sup>rd</sup>-CASAM-N methodology requires at least 22 large-scale computations for computing the same 84 distinct third-order sensitivities. Together, the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are the most practical methodologies for computing response sensitivities of any order comprehensively and accurately, overcoming the curse of dimensionality in sensitivity analysis.
文摘Abstract: A Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage an SPL’s common and variable features, with their different relations, and the problem of Cross-tree Constraints (CTC). CTC problems exist among groups of common and variable features across the sub-trees of feature models and are more diverse in Internet of Things (IoT) devices because different Internet devices and protocols must communicate. Therefore, managing the CTC problem to achieve valid product configurations in IoT-based SPL is complex, time-consuming, and hard. However, the CTC problem is not addressed by previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool; therefore, invalid products are generated. This research proposes a novel approach, Binary Oriented Feature Selection Cross-tree Constraints (BOFS-CTC), to find all possible valid products by selecting features according to cardinality constraints and cross-tree constraint problems in the feature model of an SPL. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to IoT-based feature models. The findings of this research are that no relationship-constraint or CTC violations occur, and valid feature product configurations are derived for application development by removing the invalid product configurations. The accuracy of BOFS-CTC is measured by an integration sampling technique, where different valid product configurations are compared with the product configurations derived by BOFS-CTC and found to be 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
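As a rough sketch of the kind of filtering BOFS-CTC performs (the toy feature model, constraint sets, and feature names below are invented for illustration, not taken from the paper), a binary feature selection can be screened against mandatory, requires, and excludes constraints before any product is derived:

```python
from itertools import product

# Hypothetical IoT-flavoured toy feature model: one mandatory feature and
# two cross-tree constraints, used to reject invalid configurations early.
features = ["iot_core", "wifi", "zigbee", "cloud_sync"]
mandatory = {"iot_core"}
requires = [("cloud_sync", "wifi")]   # cloud_sync requires wifi
excludes = [("wifi", "zigbee")]       # wifi and zigbee are mutually exclusive

def is_valid(cfg):
    chosen = {f for f, bit in zip(features, cfg) if bit}
    if not mandatory <= chosen:
        return False
    if any(a in chosen and b not in chosen for a, b in requires):
        return False
    if any(a in chosen and b in chosen for a, b in excludes):
        return False
    return True

# enumerate all binary selections and keep only the valid products
valid = [cfg for cfg in product([0, 1], repeat=len(features)) if is_valid(cfg)]
```

Screening at selection time, as above, is what removes invalid products before the (more expensive) configuration step.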
基金Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH project, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
文摘Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. Magnetic Resonance Imaging is a vital component of medical diagnosis, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
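The fusion step described above can be sketched in miniature (the feature vectors and classifier weights below are placeholders, not the paper's trained backbones): deep features from two models are concatenated and passed to a softmax classifier.

```python
import math

# Minimal feature-level fusion sketch: concatenate two backbones' feature
# vectors, apply a toy linear layer, and normalise scores with softmax.
def softmax(z):
    m = max(z)                         # subtract max for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

vgg_feats = [0.2, 0.8, 0.1]            # placeholder "VGG16" features
resnet_feats = [0.5, 0.3]              # placeholder "ResNet50" features
fused = vgg_feats + resnet_feats       # concatenation fusion

# toy 2-class linear classifier (tumour / no tumour), weights invented
W = [[0.4, -0.2, 0.1, 0.3, -0.5],
     [-0.1, 0.6, 0.0, -0.2, 0.4]]
logits = [sum(w * x for w, x in zip(row, fused)) for row in W]
probs = softmax(logits)                # valid probability distribution
```

In a real pipeline the concatenated vector would come from the penultimate layers of the trained networks, but the fusion-then-softmax shape is the same.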
基金Funding: Supported by the National Natural Science Foundation of China under Grants 61732005 and 61972186, and the Yunnan Provincial Major Science and Technology Special Plan Projects (Nos. 202103AA080015, 202203AA080004).
文摘Abstract: Thanks to the strong representation capability of pre-trained language models, supervised machine translation models have achieved outstanding performance. However, the performance of these models drops sharply when the scale of the parallel training corpus is limited. Considering that the pre-trained language model has a strong ability for monolingual representation, the key challenge for machine translation is to construct the in-depth relationship between the source and target languages by injecting lexical and syntactic information into pre-trained language models. To alleviate the dependence on the parallel corpus, we propose a Linguistics Knowledge-Driven Multi-Task (LKMT) approach to inject part-of-speech and syntactic knowledge into pre-trained models, thus enhancing machine translation performance. On the one hand, we integrate part-of-speech and dependency labels into the embedding layer and exploit a large-scale monolingual corpus to update all parameters of pre-trained language models, thus ensuring the updated language model contains potential lexical and syntactic information. On the other hand, we leverage an extra self-attention layer to explicitly inject linguistic knowledge into the pre-trained language model-enhanced machine translation model. Experiments on the benchmark dataset show that our proposed LKMT approach improves Urdu-English translation accuracy by 1.97 points and English-Urdu translation accuracy by 2.42 points, highlighting the effectiveness of our LKMT framework. Detailed ablation experiments confirm the positive impact of part-of-speech and dependency parsing on machine translation.
文摘Abstract: The adaptability of feature definitions to applications is an essential condition for implementing feature-based design. This paper attempts to present a hierarchical definition structure of features. The proposed scheme divides feature definition into the application level, form level, and geometric level, and provides links between the different levels with feature semantics interpretation and an enhanced geometric face adjacency graph, respectively. The results not only free feature definitions from application-specific dependence and make them more extensible, but also provide a theoretical foundation for establishing a concurrent feature-based design process model.
文摘Abstract: Feature modeling is the key to the realization of CAD/CAPP/CAM and the information integration of concurrent engineering. This paper describes a method for developing a parametric, feature-based modeling system on top of the I-DEAS 5 system. It elaborates the feature-based modeling technique and generates feature-based product information models, providing abundant information for the ensuing application processes. Developing the feature modeling system on a commercial CAD software platform can take great advantage of the solid modeling resources of the existing software, save funds, and shorten the development cycles of new systems.
文摘Abstract: Heart disease (HD) is a serious, widespread, life-threatening disease. The heart of patients with HD fails to pump sufficient amounts of blood to the entire body. Diagnosing the occurrence of HD early and efficiently may prevent the manifestation of the debilitating effects of this disease and aid in its effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient in analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, the existing models of automated ML-based HD diagnostic methods cannot satisfy clinical evaluation criteria because of their inability to recognize anomalies in extracted symptoms represented as classification features from patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or with HD according to the analysis of optimized HD features. We evaluate the classification performance of the proposed AHDD system via 12 common ML techniques and conventional CNN models by using a cross-validation technique and by assessing six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%–83.8% accuracy on average. Therefore, the AHDD system proposed herein has the highest capability to identify patients with HD. This system can be used by medical practitioners to diagnose HD efficiently.
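The wrapper idea at the heart of the MAFW model can be sketched as follows. This is a stand-in illustration only: it scores feature subsets exhaustively with a toy leave-one-out 1-nearest-neighbour fitness on invented data, where the paper instead uses a GA steering an SVM and NB over real HD features.

```python
from itertools import combinations

# Toy dataset: 3 features, 4 samples; feature 0 is noise, features 1 and 2
# separate the two classes. A wrapper judges subsets by classifier accuracy.
data = [([1, 0, 3], 1), ([2, 0, 3], 1), ([1, 5, 0], 0), ([2, 6, 0], 0)]

def fitness(subset):
    # stand-in fitness: leave-one-out 1-nearest-neighbour accuracy,
    # computing distances only over the selected feature indices
    correct = 0
    for i, (xi, yi) in enumerate(data):
        nearest = min((j for j in range(len(data)) if j != i),
                      key=lambda j: sum((xi[k] - data[j][0][k]) ** 2
                                        for k in subset))
        correct += data[nearest][1] == yi
    return correct / len(data)

# exhaustive search stands in for the GA's global search here
n_features = 3
subsets = [s for r in range(1, n_features + 1)
           for s in combinations(range(n_features), r)]
best_subset = max(subsets, key=fitness)
```

The wrapper loop shape (propose subset, score with a classifier, keep the best) is what the GA accelerates when the feature space is too large to enumerate.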
基金Funding: Under the auspices of the Priority Academic Program Development of Jiangsu Higher Education Institutions and the National Natural Science Foundation of China (Nos. 41271438, 41471316, 41401440, 41671389).
文摘Abstract: Gully feature mapping is an indispensable prerequisite for the monitoring and control of gully erosion, which is a widespread natural hazard. The increasing availability of high-resolution Digital Elevation Models (DEM) and remote sensing imagery, combined with developed object-based methods, enables automatic gully feature mapping. But still few studies have specifically focused on gully feature mapping at different scales. In this study, an object-based approach to two-level gully feature mapping, including gully-affected areas and bank gullies, was developed and tested on 1-m DEM and Worldview-3 imagery of a catchment in the Chinese Loess Plateau. The methodology includes a sequence of data preparation, image segmentation, metric calculation, and random forest based classification. The results of the two-level mapping were based on a random forest model after investigating the effects of feature selection and the class-imbalance problem. Results show that the segmentation strategy adopted in this paper, which considers topographic information and an optimal parameter combination, can improve the segmentation results. The distribution of the gully-affected area is closely related to topographic information; however, the spectral features are more dominant for bank gully mapping. The highest overall accuracy of the gully-affected area mapping was 93.06% with four topographic features. The highest overall accuracy of bank gully mapping is 78.5% when all features are adopted. The proposed approach is a creditable option for hierarchical mapping of gully feature information, which is suitable for application in the hilly Loess Plateau region.
基金Funding: Supported by the National Natural Science Foundation of China (62033008, 61873143).
文摘Abstract: With increasing intelligence and integration, a great number of two-valued variables (generally stored in the form of 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA) and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has some shortcomings that need to be improved. For the MHNBM, the variables with greater correlation to other variables have greater weights, which cannot guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x_j|x_j′, y = k) must be computed based on historical data. When the training data is scarce, the conditional probability between continuous variables tends to be uniformly distributed, which affects the performance of the MHNBM. Here a novel feature weighted mixed naive Bayes model (FWMNBM) is developed to overcome the above shortcomings. For the FWMNBM, the variables that are more correlated to the class have greater weights, which makes the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate the conditional probability between variables, thus it is less restricted by the number of training data samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case of the Zhoushan thermal power plant (ZTPP), China.
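A minimal sketch of the feature-weighting idea, under two simplifying assumptions not taken from the paper: Gaussian class-conditional densities for the continuous variables, and hand-set weights (the FWMNBM derives its weights from feature-class correlation). The weights act as exponents on the per-feature likelihoods, i.e. multipliers on the log-likelihoods:

```python
import math

# Toy data: feature 0 separates the classes, feature 1 barely does,
# so it is given a small weight.
X = [[1.0, 10.0], [1.2, 20.0], [3.0, 11.0], [3.2, 19.0]]
y = [0, 0, 1, 1]
weights = [1.0, 0.1]   # hand-set stand-ins for correlation-derived weights

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit(X, y, k):
    # per-class Gaussian parameters for each feature (variance regularised)
    cols = list(zip(*[x for x, yi in zip(X, y) if yi == k]))
    mus = [sum(c) / len(c) for c in cols]
    vars_ = [sum((v - m) ** 2 for v in c) / len(c) + 1e-6
             for c, m in zip(cols, mus)]
    return mus, vars_

params = {k: fit(X, y, k) for k in (0, 1)}

def predict(x):
    # weighted naive Bayes score: sum of w_j * log p(x_j | class)
    scores = {}
    for k, (mus, vars_) in params.items():
        scores[k] = sum(w * math.log(gauss(v, m, s))
                        for v, m, s, w in zip(x, mus, vars_, weights))
    return max(scores, key=scores.get)
```

Down-weighting the uninformative feature keeps its noisy likelihood from swamping the discriminating one.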
基金Funding: This project is supported by the National Natural Science Foundation of China (No. 50075079).
文摘Abstract: A new feature extraction method based on a 2D hidden Markov model (HMM) is proposed. Meanwhile, time and frequency indices are introduced to represent the new features. The new feature extraction strategy is tested on experimental data collected from a Bently rotor experiment system. The results show that this methodology is very effective for extracting features of vibration signals during the rotor speed-up course and can be extended to other non-stationary signal analysis fields in the future.
基金Funding: Supported partially by the Postdoctoral Natural Science Foundation of China (2013M532118, 2015T81082), the National Natural Science Foundation of China (61573364, 61273177, 61503066), the State Key Laboratory of Synthetical Automation for Process Industries, the National High Technology Research and Development Program of China (2015AA043802), and the Scientific Research Fund of Liaoning Provincial Education Department (L2013272).
文摘Abstract: Strong mechanical vibration and acoustical signals of the grinding process contain useful information related to load parameters in ball mills. It is a challenge to extract latent features and construct a soft sensor model with the high-dimensional frequency spectra of these signals. This paper aims to develop a selective ensemble modeling approach based on nonlinear latent frequency spectral feature extraction for accurate measurement of the material-to-ball volume ratio. Latent features are first extracted from different vibration and acoustic spectral segments by kernel partial least squares. Bootstrap and least squares support vector machine algorithms are employed to produce candidate sub-models using these latent features as inputs. Ensemble sub-models are selected using a genetic algorithm optimization toolbox. Partial least squares regression is used to combine these sub-models to eliminate collinearity among their prediction outputs. Results indicate that the proposed modeling approach has better prediction performance than previous ones.
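The selective-ensemble step can be sketched with toy numbers (the sub-model predictions below are invented, simple averaging stands in for the PLS combination, and exhaustive search stands in for the GA selection): a subset of candidate sub-models is chosen to minimise validation error, and can beat the best single sub-model.

```python
from itertools import combinations

# Invented validation targets and three candidate sub-models' predictions.
y_true = [1.0, 2.0, 3.0, 4.0]
preds = {
    "m1": [1.1, 2.1, 2.9, 4.2],
    "m2": [0.5, 2.5, 3.5, 3.5],
    "m3": [1.0, 1.9, 3.1, 4.0],
}

def mse(sub):
    # combine the selected sub-models by simple averaging, then score
    avg = [sum(preds[m][i] for m in sub) / len(sub)
           for i in range(len(y_true))]
    return sum((a - t) ** 2 for a, t in zip(avg, y_true)) / len(y_true)

names = list(preds)
subsets = [s for r in range(1, len(names) + 1)
           for s in combinations(names, r)]
best = min(subsets, key=mse)   # selective ensemble: best-scoring subset
```

Here the selected pair averages out opposite-signed errors, which is exactly the effect the GA-based selection exploits at scale.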
基金Funding: This work is supported by the National Natural Science Foundation of China (12002218) and the Youth Foundation of the Education Department of Liaoning Province (JYT19034). These supports are gratefully acknowledged.
文摘Abstract: This paper presents a feature modeling approach to address 3D structural topology design optimization with feature constraints. In the proposed algorithm, various features are formed into searchable shape features by the feature modeling technology, and the models of feature elements are established. The feature elements that meet the design requirements are found by employing a feature matching technology, and the constraint factors combined with the pseudo density of elements are initialized according to the optimized feature elements. Then, through controlling the constraint factors and utilizing the optimization criterion method along with the filtering technology of independent mesh, the structural design optimization is implemented. The present feature modeling approach is applied to feature-based structural topology optimization using empirical data. Meanwhile, the improved mathematical model based on the density method with the constraint factors and the corresponding solution processes are also presented. Compared with the traditional method, which requires complicated constraint processing, the present approach is flexibly applied to 3D structural design optimization with added holes by changing the constraint factors; thus it can design a structure with predetermined features more directly and easily. Numerical examples show the effectiveness of the proposed feature modeling approach, which is suitable for practical engineering design.
文摘Abstract: Most large-scale systems, including self-adaptive systems, utilize feature models (FMs) to represent their complex architectures and benefit from the reuse of commonalities and variability information. Self-adaptive systems (SASs) are capable of reconfiguring themselves during run time to satisfy the scenarios of the requisite contexts. However, reconfiguration of SASs corresponding to each adaptation of the system requires significant computational time and resources. The process of configuration reuse can be a better alternative in some contexts to reduce computational time, effort, and errors. Moreover, system complexity can be reduced during the development process by reusing elements or components. FMs are considered one of the newer vehicles for the reuse process, able to introduce opportunities for reuse beyond conventional system components. While current FM-based modelling techniques represent, manage, and reuse elementary features to model SAS concepts, modeling and reusing configurations have not yet been considered. In this context, this study presents an extension to FMs by introducing and managing configuration features and their reuse process. Evaluation results demonstrate that reusing configuration features reduces the effort and time required by a reconfiguration process during run time to meet the required scenario according to the current context.
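A minimal sketch of configuration reuse (the class, context, and feature names are assumptions for illustration, not the paper's mechanism): completed configurations are cached by context, so a repeated context skips the expensive derivation entirely.

```python
# Toy configuration store: the expensive product-configuration routine runs
# once per distinct context; later requests for the same context reuse it.
class ConfigStore:
    def __init__(self):
        self._cache = {}
        self.recomputations = 0   # counts actual derivations performed

    def configure(self, context, derive):
        if context not in self._cache:
            self._cache[context] = derive(context)
            self.recomputations += 1
        return self._cache[context]

store = ConfigStore()
derive = lambda ctx: sorted({"core"} | set(ctx))   # stand-in derivation

a = store.configure(("low_battery",), derive)      # computed
b = store.configure(("low_battery",), derive)      # reused from the store
```

The saving is exactly the point made above: each repeated adaptation context costs a lookup instead of a full reconfiguration.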
基金Funding: Project 40473029 supported by the National Natural Science Foundation of China and Project 04JJ3046 supported by the Natural Science Foundation of Hunan Province, China.
文摘Abstract: To address the difficulty of integrating data with different models in spatial information integration, the characteristics of the raster structure, vector structure and mixed model were analyzed, and a hierarchical vector-raster integrative full feature model was put forward by combining the advantages of the vector and raster models and using the object-oriented method. The data structures of the four basic features, i.e. point, line, surface and solid, were described. An application was analyzed and described, and the characteristics of this model were described. In this model, all objects in the real world are divided into and described as features with hierarchy, and all the data are organized in vector form. This model can describe data based on feature, field, network and other models, and avoids the disadvantage of being unable to integrate data based on different models and perform spatial analysis on them in spatial information integration.
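The four basic features might be organised roughly as below (class and attribute names are assumptions, not the paper's actual data structures): each type specialises a common base and stores its geometry in vector form, which is the hierarchical, object-oriented shape the model describes.

```python
# Illustrative object-oriented hierarchy for the four basic feature types.
class Feature:
    def __init__(self, fid, attributes=None):
        self.fid = fid
        self.attributes = attributes or {}

class PointFeature(Feature):
    def __init__(self, fid, xyz, **attrs):
        super().__init__(fid, attrs)
        self.xyz = xyz                 # a single coordinate triple

class LineFeature(Feature):
    def __init__(self, fid, vertices, **attrs):
        super().__init__(fid, attrs)
        self.vertices = vertices       # ordered vertex list (vector form)

class SurfaceFeature(Feature):
    def __init__(self, fid, rings, **attrs):
        super().__init__(fid, attrs)
        self.rings = rings             # boundary rings of vertices

class SolidFeature(Feature):
    def __init__(self, fid, shells, **attrs):
        super().__init__(fid, attrs)
        self.shells = shells           # bounding surfaces of the solid

# hypothetical instances
well = PointFeature("w1", (112.5, 28.2, 40.0), kind="well")
road = LineFeature("r1", [(0, 0, 0), (1, 0, 0), (1, 1, 0)], kind="road")
```

Because every feature type shares the base class, generic operations (attribute queries, hierarchy traversal) work uniformly across point, line, surface and solid data.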
文摘Abstract: BACKGROUND: The electrical stimulation kindling model, having epilepsy-inducing, spontaneous-seizure and other advantages, is a very ideal experimental animal model. But the kindling effect might differ at different sites. OBJECTIVE: To compare the features of animal models of complex partial epilepsy established through unilateral, bilateral and alternate-side kindling at the hippocampus, and the success rates of modeling among these 3 different ways. DESIGN: A randomized and controlled animal experiment. SETTING: Department of Neurology, Qilu Hospital, Shandong University. MATERIALS: Totally 60 healthy adult Wistar rats, weighing 200 to 300 g, of either sex, were used in this experiment. A BL-410 biological functional experimental system (Taimeng Science and Technology Co. Ltd, Chengdu) and an SE-7102 electronic stimulator (Guangdian Company, Japan) were used in the experiment. METHODS: This experiment was carried out in the Experimental Animal Center of Shandong University from April to June 2004. After the rats were anesthetized, an electrode was implanted into the hippocampus. From the first day of measurement of the afterdischarge threshold value, rats were given two-square-wave suprathreshold stimulation once per day with 400 μA intensity, 1 ms wave length, 60 Hz frequency and 1 s duration. The left hippocampus was stimulated in the unilateral kindling group, bilateral hippocampi were stimulated in the bilateral kindling group, and the left and right hippocampi were stimulated alternately every day in the alternate-side kindling group. Seizure intensity was scored as follows. Grade 0: normal; 1: wet-dog-like shivering, facial spasm such as winking, touching the beard, rhythmic chewing and so on; 2: rhythmic nodding; 3: forelimb spasm; 4: standing accompanied by bilateral forelimb spasm; 5: tumbling, losing balance, four-limb spasm. Modeling was successful when seizure intensity reached grade 5. The t test was used for the comparison of mean values between two samples.
MAIN OUTCOME MEASURES: Comparison of the success rate of modeling, the number of stimuli needed to reach grade 5 intensity, and the duration of grade 3 seizures of rats in each group. RESULTS: Four rats of the alternate-side kindling group dropped out due to infection-induced electrode loss, and 56 rats were involved in the result analysis. The success rates of the unilateral kindling group, bilateral kindling group and alternate-side kindling group were 55% (11/20), 100% (16/16) and 100% (20/20), respectively. The stimuli to reach grade 5 spasm were significantly more in the bilateral kindling group than in the unilateral kindling group [(30.63 ± 3.48) vs. (19.36 ± 3.47) times, t = 8.268, P < 0.01], and significantly fewer in the alternate-side kindling group than in the unilateral kindling group [(10.85 ± 1.98) times, t = -8.744, P < 0.01]. The duration of grade 3 spasm was significantly longer in the bilateral kindling group than in the unilateral kindling group [(9.75 ± 2.59) vs. (3.21 ± 1.58) days, t = -8.183, P < 0.01]. Among the 20 successful rats of the alternate-side kindling group, grade 5 spasm was found in the left hippocampi of 11 rats but grade 3 spasm in their right hippocampi; grade 5 spasm was found in the right hippocampi of the other 9 rats, with grade 4 spasm in the left hippocampus of 1 rat and grade 3 in 8 rats. CONCLUSION: Establishing an epilepsy seizure model by alternate-side kindling is faster than by unilateral kindling, while bilateral kindling is slower than unilateral kindling. The success rate of establishing complex partial epilepsy with alternate-side or bilateral kindling is very high. Epileptic seizures established by alternate-side kindling show an antagonistic kindling effect, and the duration of grade 3 spasm is prolonged.
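As a worked check of the reported comparison, the pooled two-sample t statistic can be re-derived from the summary data in the abstract (means ± SD; n = 16 bilateral vs. n = 11 unilateral successfully kindled rats); assuming a standard pooled-variance formula, the result lands close to the reported t = 8.268, with the small gap attributable to rounding of the published summary values.

```python
import math

# Pooled two-sample t statistic from summary statistics only.
def pooled_t(m1, s1, n1, m2, s2, n2):
    # pooled variance with n1 + n2 - 2 degrees of freedom
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# stimuli to reach grade 5: bilateral (30.63 ± 3.48, n=16)
# vs. unilateral (19.36 ± 3.47, n=11)
t = pooled_t(30.63, 3.48, 16, 19.36, 3.47, 11)
```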
基金Funding: This project is supported by the General Electric Company and the National Advanced Technology Project of China (No. 863-511-942-018).
文摘Abstract: A novel method to extract conic blending features in reverse engineering is presented. Different from methods that recover constant and variable radius blends from unorganized points, it contains not only novel segmentation and feature recognition techniques, but also a bias-correction technique to capture a more reliable distribution of feature parameters along the spine curve. The segmentation, depending on point classification, separates the points in the conic blend region from the input point cloud. The available feature parameters of the cross-sectional curves are extracted through the processes of slicing point clouds with planes, conic curve fitting, and parameter estimation and compensation. The extracted parameters and their distribution laws are refined according to statistical theory, such as regression analysis and hypothesis tests. The proposed method can accurately capture the original design intentions and conveniently guide the reverse modeling process. Application examples are presented to verify the high precision and stability of the proposed method.
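The conic-fitting step can be sketched for its constant-radius special case (a hedged simplification: the paper fits general conics to each slice): an algebraic Kåsa-style least-squares circle fit to the 2D points of one cross-sectional slice, solved via its 3x3 normal equations.

```python
# Kåsa circle fit: minimise sum((x^2 + y^2 + D x + E y + F)^2) over D, E, F;
# then centre = (-D/2, -E/2) and r^2 = cx^2 + cy^2 - F.

def gauss_solve(A, b):
    # small dense Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def fit_circle(points):
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    z = [x * x + y * y for x, y in points]
    sz = sum(z)
    szx = sum(zi * x for zi, (x, _) in zip(z, points))
    szy = sum(zi * y for zi, (_, y) in zip(z, points))
    # normal equations for the unknowns [D, E, F]
    A = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [-szx, -szy, -sz]
    D, E, F = gauss_solve(A, b)
    cx, cy = -D / 2, -E / 2
    r = (cx * cx + cy * cy - F) ** 0.5
    return cx, cy, r

# points sampled exactly on a circle of centre (2, -1), radius 3
pts = [(5.0, -1.0), (2.0, 2.0), (-1.0, -1.0),
       (2 + 3 / 2 ** 0.5, -1 + 3 / 2 ** 0.5)]
cx, cy, r = fit_circle(pts)
```

With noisy slice points the same normal equations give the least-squares estimate, which is where the paper's parameter estimation and bias compensation would take over.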
基金Funding: Supported by the Aeronautical Science Foundation of China (04C51053).
文摘Abstract: On the platform of the UG general CAD system, a customized module dedicated to turbo-jet engine blade design is implemented to support the integration of CAD/CAE/CAM processes and multidisciplinary optimization of structure design. An example is presented to illustrate the related techniques.
文摘Abstract: Current 3D CAD/CAM systems, both research prototypes and commercial systems, based on traditional feature modeling are often hampered by complicated modeling and difficult maintenance. This paper introduces a new method for modeling parts by using adaptability features (AF), by which consistent relationships among parts and assemblies can be maintained throughout the whole design process. In addition, the design process can be accelerated, time-to-market shortened, and product quality improved. Some essential issues of the strategy are discussed. A system, KMCAD3D, that takes advantage of AF has been developed. It is shown that the method discussed is a feasible and effective way to improve current feature modeling technology.