This paper presents a feature modeling approach to address 3D structural topology design optimization with feature constraints. In the proposed algorithm, various features are formed into searchable shape features by feature modeling technology, and models of the feature elements are established. The feature elements that meet the design requirements are found by employing a feature matching technology, and the constraint factors combined with the pseudo-density of elements are initialized according to the optimized feature elements. Then, by controlling the constraint factors and utilizing the optimization criterion method along with the mesh-independent filtering technology, the structural design optimization is implemented. The present feature modeling approach is applied to feature-based structural topology optimization using empirical data. Meanwhile, the improved mathematical model based on the density method with constraint factors and the corresponding solution processes are also presented. Compared with the traditional method, which requires complicated constraint processing, the present approach can be flexibly applied to 3D structural design optimization with added holes by changing the constraint factors; thus it can design a structure with predetermined features more directly and easily. Numerical examples show the effectiveness of the proposed feature modeling approach, which is suitable for practical engineering design.
Most large-scale systems, including self-adaptive systems, utilize feature models (FMs) to represent their complex architectures and benefit from the reuse of commonalities and variability information. Self-adaptive systems (SASs) are capable of reconfiguring themselves at run time to satisfy the scenarios of the requisite contexts. However, reconfiguration of SASs corresponding to each adaptation of the system requires significant computational time and resources. The process of configuration reuse can be a better alternative in some contexts, reducing computational time, effort, and errors. Moreover, systems' complexity can be reduced during the development process by reusing elements or components. FMs are considered one of the new vehicles for the reuse process, able to introduce new reuse opportunities beyond conventional system components. While current FM-based modelling techniques represent, manage, and reuse elementary features to model SAS concepts, modeling and reusing configurations have not yet been considered. In this context, this study presents an extension to FMs by introducing and managing configuration features and their reuse process. Evaluation results demonstrate that reusing configuration features reduces the effort and time required by a reconfiguration process at run time to meet the required scenario according to the current context.
This work presents the “n<sup>th</sup>-Order Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (abbreviated as “n<sup>th</sup>-FASAM-N”), which is shown to be the most efficient methodology for computing exact expressions of sensitivities, of any order, of model responses with respect to features of model parameters and, subsequently, with respect to the model’s uncertain parameters, boundaries, and internal interfaces. The unparalleled efficiency and accuracy of the n<sup>th</sup>-FASAM-N methodology stem from the maximal reduction of the number of adjoint computations (which are considered “large-scale” computations) needed for computing high-order sensitivities. When applying the n<sup>th</sup>-FASAM-N methodology to compute second- and higher-order sensitivities, the number of large-scale computations is proportional to the number of “model features” as opposed to the number of model parameters (which considerably exceeds the number of features). When a model has no “feature” functions of parameters, but only comprises primary parameters, the n<sup>th</sup>-FASAM-N methodology becomes identical to the extant n<sup>th</sup>-CASAM-N (“n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems”) methodology. Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are formulated in linearly increasing higher-dimensional Hilbert spaces, as opposed to exponentially increasing parameter-dimensional spaces, thus overcoming the curse of dimensionality in sensitivity analysis of nonlinear systems.
Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N are incomparably more efficient and more accurate than any other method (statistical, finite differences, etc.) for computing exact expressions of response sensitivities of any order with respect to the model’s features and/or primary uncertain parameters, boundaries, and internal interfaces.
This work highlights the unparalleled efficiency of the “n<sup>th</sup>-Order Function/Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-FASAM-N) by considering the well-known Nordheim-Fuchs reactor dynamics/safety model. This model describes a short-time self-limiting power excursion in a nuclear reactor system having a negative temperature coefficient, in which a large amount of reactivity is suddenly inserted, either intentionally or by accident. This nonlinear paradigm model is sufficiently complex to model self-limiting power excursions realistically for short times, yet admits closed-form exact expressions for the time-dependent neutron flux, temperature distribution, and energy released during the transient power burst. The n<sup>th</sup>-FASAM-N methodology is compared to the extant “n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-CASAM-N), showing that: (i) the 1<sup>st</sup>-FASAM-N and the 1<sup>st</sup>-CASAM-N methodologies are equally efficient for computing the first-order sensitivities; each methodology requires a single large-scale computation for solving the “First-Level Adjoint Sensitivity System” (1<sup>st</sup>-LASS); (ii) the 2<sup>nd</sup>-FASAM-N methodology is considerably more efficient than the 2<sup>nd</sup>-CASAM-N methodology for computing the second-order sensitivities, since the number of feature-functions is much smaller than the number of primary parameters; specifically, for the Nordheim-Fuchs model, the 2<sup>nd</sup>-FASAM-N methodology requires 2 large-scale computations to obtain all of the exact expressions of the 28 distinct second-order response sensitivities with respect to the model parameters, while the 2<sup>nd</sup>-CASAM-N
methodology requires 7 large-scale computations for obtaining these 28 second-order sensitivities; (iii) the 3<sup>rd</sup>-FASAM-N methodology is even more efficient than the 3<sup>rd</sup>-CASAM-N methodology: only 2 large-scale computations are needed to obtain the exact expressions of the 84 distinct third-order response sensitivities with respect to the Nordheim-Fuchs model’s parameters when applying the 3<sup>rd</sup>-FASAM-N methodology, while the application of the 3<sup>rd</sup>-CASAM-N methodology requires at least 22 large-scale computations for computing the same 84 distinct third-order sensitivities. Together, the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are the most practical methodologies for computing response sensitivities of any order comprehensively and accurately, overcoming the curse of dimensionality in sensitivity analysis.
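The distinct-sensitivity counts quoted above follow from the symmetry of mixed partial derivatives: with n parameters, the number of distinct k-th order sensitivities is the number of size-k multisets of parameters, C(n+k-1, k). A quick combinatorial check, assuming the Nordheim-Fuchs model has 7 primary parameters (the value consistent with the 28 and 84 counts quoted):

```python
from math import comb

def distinct_sensitivities(n_params: int, order: int) -> int:
    # A k-th order mixed partial is unchanged under reordering of the
    # differentiation variables, so each distinct sensitivity corresponds
    # to a size-k multiset of parameters: C(n + k - 1, k).
    return comb(n_params + order - 1, order)

n = 7  # assumed number of primary parameters in the Nordheim-Fuchs model
print(distinct_sensitivities(n, 2))  # 28 distinct second-order sensitivities
print(distinct_sensitivities(n, 3))  # 84 distinct third-order sensitivities
```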
A fairly comprehensive analysis is presented for the gradient descent dynamics of training two-layer neural network models in the situation when the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the overparametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space.
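The overparametrized claim can be illustrated numerically; the toy setup below (a width-1000 two-layer ReLU network fitting random labels, with both layers updated) is a sketch under stated assumptions, not the paper's analysis, and the width, learning rate, and step count are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 8, 5, 1000                     # samples, input dim, (large) width
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.standard_normal(n)               # arbitrary labels: quality irrelevant

W = rng.standard_normal((m, d))          # first layer
a = rng.standard_normal(m)               # second layer; both are updated

lr = 0.2
for _ in range(5000):
    H = np.maximum(X @ W.T, 0.0)                     # hidden activations
    r = H @ a / np.sqrt(m) - y                       # residuals
    grad_a = H.T @ r / (np.sqrt(m) * n)
    mask = (H > 0).astype(float)
    grad_W = ((r[:, None] * mask) * a).T @ X / (np.sqrt(m) * n)
    a -= lr * grad_a
    W -= lr * grad_W

loss = 0.5 * np.mean((np.maximum(X @ W.T, 0.0) @ a / np.sqrt(m) - y) ** 2)
# loss ends up many orders of magnitude below its O(1) starting value,
# even though the labels are pure noise
```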
Overlapping community detection has become a very hot research topic in recent decades, and a plethora of methods have been proposed. But a common challenge in many existing overlapping community detection approaches is that the number of communities K must be predefined manually. We propose a flexible nonparametric Bayesian generative model for count-valued networks, which allows K to increase as more and more data are encountered instead of being fixed in advance. The Indian buffet process was used to model the community assignment matrix Z, and an uncollapsed Gibbs sampler has been derived. However, as the community assignment matrix Z is a structured multi-variable parameter, how to summarize the posterior inference results and estimate the inference quality about Z is still a considerable challenge in the literature. In this paper, a graph convolutional neural network based graph classifier was utilized to help summarize the results and estimate the inference quality about Z. We conduct extensive experiments on synthetic data and real data, and find that, empirically, the traditional posterior summarization strategy is reliable.
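The way K grows with the data under an Indian buffet process prior can be seen in a few lines. The sketch below samples Z from the generic IBP prior only; it is not the paper's uncollapsed Gibbs sampler, which additionally conditions on the observed network:

```python
import numpy as np

def sample_ibp(n_nodes: int, alpha: float, rng) -> np.ndarray:
    """Draw a binary community-assignment matrix Z from the IBP prior:
    rows are nodes, and the number of columns (communities) K grows
    with the data instead of being fixed in advance."""
    counts = []                                  # members per existing community
    rows = []
    for i in range(1, n_nodes + 1):
        row = [rng.random() < m / i for m in counts]   # join popular communities
        row += [True] * rng.poisson(alpha / i)         # open brand-new ones
        counts = ([m + z for m, z in zip(counts, row)]
                  + [1] * (len(row) - len(counts)))    # new communities start at 1
        rows.append(row)
    Z = np.zeros((n_nodes, len(counts)), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

Z = sample_ibp(50, alpha=3.0, rng=np.random.default_rng(1))
# Z.shape[1] (the inferred K) grows like alpha * log(n_nodes) in expectation
```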
Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage SPL's common and variable features, with their different relations, and the problem of Cross-tree Constraints (CTC). CTC problems exist in groups of common and variable features among the sub-trees of feature models and are more diverse in Internet of Things (IoT) devices because different Internet devices and protocols communicate. Therefore, managing the CTC problem to achieve valid product configuration in IoT-based SPL is more complex, time-consuming, and hard. However, the CTC problem was not considered in previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool; therefore, invalid products are generated. This research proposes a novel approach, Binary Oriented Feature Selection Cross-tree Constraints (BOFS-CTC), to find all possible valid products by selecting features according to cardinality constraints and cross-tree constraint problems in the feature model of SPL. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to IoT-based feature models. The findings of this research are that no relationship constraint or CTC violations occur, and BOFS-CTC drives the valid feature product configurations for application development by removing the invalid product configurations. The accuracy of BOFS-CTC was measured by an integration sampling technique, where different valid product configurations were compared with the product configurations derived by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
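The core validity check can be sketched on a hypothetical IoT feature model (the feature names and constraints below are illustrative, not from the paper): a configuration is valid only if it violates no requires/excludes cross-tree constraint.

```python
from itertools import product

# Hypothetical IoT feature model: names and CTCs are illustrative only.
features = ["wifi", "zigbee", "cloud_sync", "local_store"]
requires = [("cloud_sync", "wifi")]     # selecting cloud_sync requires wifi
excludes = [("wifi", "zigbee")]         # wifi and zigbee are mutually exclusive

def is_valid(cfg: dict) -> bool:
    req_ok = all(not cfg[a] or cfg[b] for a, b in requires)
    exc_ok = all(not (cfg[a] and cfg[b]) for a, b in excludes)
    return req_ok and exc_ok

valid = [dict(zip(features, bits))
         for bits in product([False, True], repeat=len(features))
         if is_valid(dict(zip(features, bits)))]
# 8 of the 16 raw configurations survive the two cross-tree constraints
```

Filtering at selection time, as BOFS-CTC does, avoids generating the invalid half of this configuration space and discarding it later.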
With increasing intelligence and integration, a great number of two-valued variables (generally stored in the form of 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA), and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has some shortcomings that need to be improved. For the MHNBM, the variables with greater correlation to other variables have greater weights, which cannot guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x_j | x_j′, y = k) must be computed based on historical data. When the training data are scarce, the conditional probability between continuous variables tends to be uniformly distributed, which affects the performance of the MHNBM. Here a novel feature-weighted mixed naive Bayes model (FWMNBM) is developed to overcome the above shortcomings. For the FWMNBM, the variables that are more correlated to the class have greater weights, which makes the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate the conditional probability between variables, so it is less restricted by the number of training data samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case of the Zhoushan thermal power plant (ZTPP), China.
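A hedged sketch of the FWMNBM idea (one plausible reading, not the paper's exact estimator): weight each variable by the absolute correlation between that variable and the class label, use Gaussian likelihoods for continuous variables and Bernoulli likelihoods for two-valued ones, and sum weighted log-likelihoods without any inter-variable conditional probabilities.

```python
import numpy as np

def fit(Xc, Xb, y):
    """Xc: continuous variables, Xb: two-valued variables, y: 0/1 class labels.
    Weights are |corr(variable, class)|, so discriminating variables count more."""
    wc = np.abs([np.corrcoef(Xc[:, j], y)[0, 1] for j in range(Xc.shape[1])])
    wb = np.abs([np.corrcoef(Xb[:, j], y)[0, 1] for j in range(Xb.shape[1])])
    params = {}
    for k in np.unique(y):
        Xck, Xbk = Xc[y == k], Xb[y == k]
        params[k] = (np.log(np.mean(y == k)),               # log prior
                     Xck.mean(0), Xck.std(0) + 1e-9,        # Gaussian per class
                     np.clip(Xbk.mean(0), 1e-6, 1 - 1e-6))  # Bernoulli per class
    return wc, wb, params

def predict(Xc, Xb, model):
    wc, wb, params = model
    keys, scores = list(params), []
    for k in keys:
        prior, mu, sd, p = params[k]
        lg = -0.5 * ((Xc - mu) / sd) ** 2 - np.log(sd)   # Gaussian log-lik
        lb = Xb * np.log(p) + (1 - Xb) * np.log(1 - p)   # Bernoulli log-lik
        scores.append(prior + lg @ wc + lb @ wb)          # feature-weighted sum
    return np.array(keys)[np.argmax(scores, axis=0)]

# Synthetic mixed data: class shifts the continuous means and the 0/1 rates
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
Xc = y[:, None] + 0.3 * rng.standard_normal((300, 2))
Xb = (rng.random((300, 2)) < np.where(y[:, None] == 1, 0.8, 0.2)).astype(float)
acc = (predict(Xc, Xb, fit(Xc, Xb, y)) == y).mean()
```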
BACKGROUND: The electrical stimulation kindling model, with its epilepsy-inducing, spontaneous-seizure, and other advantages, is a highly suitable experimental animal model. But the kindling effect might differ at different sites. OBJECTIVE: To compare the features of animal models of complex partial epilepsy established through unilateral, bilateral, and alternate-side kindling at the hippocampus, and the success rate of modeling among these 3 different approaches. DESIGN: A randomized and controlled animal experiment. SETTING: Department of Neurology, Qilu Hospital, Shandong University. MATERIALS: Totally 60 healthy adult Wistar rats, weighing 200 to 300 g, of either gender, were used in this experiment. A BL-410 biological functional experimental system (Taimeng Science and Technology Co. Ltd, Chengdu) and an SE-7102 electronic stimulator (Guangdian Company, Japan) were used in the experiment. METHODS: This experiment was carried out in the Experimental Animal Center of Shandong University from April to June 2004. After the rats were anesthetized, an electrode was implanted into the hippocampus. From the first day of measurement of the afterdischarge threshold value, rats were given two-square-wave suprathreshold stimulation once per day with 400 μA intensity, 1 ms wave length, 60 Hz frequency, for 1 s duration. The left hippocampus was stimulated in the unilateral kindling group, bilateral hippocampi were stimulated in the bilateral kindling group, and left and right hippocampi were stimulated alternately every day in the alternate-side kindling group. Seizure intensity was scored as follows. Grade 0: normal; 1: wet-dog-like shivering, facial spasm such as winking, touching the beard, rhythmic chewing and so on; 2: rhythmic nodding; 3: forelimb spasm; 4: standing accompanied by bilateral forelimb spasm; 5: tumbling, losing balance, four-limb spasm. Modeling was successful when seizure intensity reached grade 5. The t test was used for the comparison of mean values between two samples. MAIN OUTCOME MEASURES: Comparison of the success rate of modeling, the number of stimulations to reach intensity of grade 5, and the lasting time of grade 3 seizures of rats in each group. RESULTS: Four rats of the alternate-side kindling group dropped out due to infection-induced electrode loss, and 56 rats were involved in the result analysis. The success rate of the unilateral kindling group, bilateral kindling group and alternate-side kindling group was 55% (11/20), 100% (16/16) and 100% (20/20), respectively. The stimuli to reach grade 5 spasm were significantly more in the bilateral kindling group than in the unilateral kindling group [(30.63±3.48) vs. (19.36±3.47) times, t=8.268, P < 0.01], and significantly fewer in the alternate-side kindling group than in the unilateral kindling group [(10.85±1.98) times, t=8.744, P < 0.01]. The duration of grade 3 spasm was significantly longer in the bilateral kindling group than in the unilateral kindling group [(9.75±2.59) vs. (3.21±1.58) days, t=8.183, P < 0.01]. Among the 20 successful rats of the alternate-side kindling group, grade 5 spasm was found in the left hippocampi of 11 rats, but grade 3 spasm in their right hippocampi; grade 5 spasm was found in the right hippocampi of the other 9 rats, with grade 4 spasm in the left hippocampus of 1 rat and grade 3 in those of 8 rats. CONCLUSION: Establishing an epilepsy seizure model by alternate-side kindling is faster than by unilateral kindling, while bilateral kindling is slower than unilateral kindling. The success rate of establishing complex partial epilepsy with alternate-side or bilateral kindling is very high. Epilepsy seizures established by alternate-side kindling show an antagonistic kindling effect, and the seizure duration of grade 3 spasm is prolonged.
The use of features to achieve the integration of design and manufacture has been considered a key factor in recent years. Features such as manufacturing properties form the workpiece. Features are structured systematically through object-oriented modeling. This article explains an object coding method developed for prismatic workpieces and the use of that method in process planning. Features have been determined and modeled as objects. In this method, features are coded according to their types and locations on the workpiece. Feature coding has proven very advantageous in process planning.
Heart disease (HD) is a serious, widespread, life-threatening disease. The heart of patients with HD fails to pump sufficient amounts of blood to the entire body. Diagnosing the occurrence of HD early and efficiently may prevent the manifestation of the debilitating effects of this disease and aid in its effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient in analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, the existing models of automated ML-based HD diagnostic methods cannot satisfy clinical evaluation criteria because of their inability to recognize anomalies in extracted symptoms represented as classification features from patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or with HD according to the analysis of optimized HD features. We evaluate the classification performance of the proposed AHDD system against 12 common ML techniques and conventional CNN models by using a cross-validation technique and by assessing six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%–83.8% accuracy on average. Therefore, the AHDD system proposed herein has the highest capability to identify patients with HD. This system can be used by medical practitioners to diagnose HD efficiently.
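A GA-driven feature search of the kind the MAFW agents coordinate can be sketched generically. Everything below is a hypothetical illustration: the fitness function stands in for the classification score an SVM/NB agent would report, and the "informative" feature indices and GA settings are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feats = 13                      # e.g. a 13-attribute HD feature vector
informative = {2, 7, 11}          # hypothetical: only these features matter

def fitness(mask: np.ndarray) -> float:
    # Stand-in for the classifier score: reward keeping informative
    # features, lightly penalize dragging in the rest.
    hits = sum(int(mask[i]) for i in informative)
    return hits - 0.1 * (int(mask.sum()) - hits)

pop = rng.integers(0, 2, size=(30, n_feats))             # random bit-masks
for _ in range(60):                                      # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]              # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = int(rng.integers(1, n_feats))
        child = np.concatenate([a[:cut], b[cut:]])       # one-point crossover
        flip = rng.random(n_feats) < 0.05                # bit-flip mutation
        kids.append(np.where(flip, 1 - child, child))
    pop = np.array(kids)

best = pop[np.argmax([fitness(m) for m in pop])]         # best feature subset
```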
With the increasing popularity of high-resolution remote sensing images, remote sensing image retrieval (RSIR) has become a topic of major interest. A combination of global non-subsampled shearlet transform (NSST)-domain statistical features (NSSTds) and local three-dimensional local ternary pattern (3D-LTP) features is proposed for high-resolution remote sensing images. We model the NSST image coefficients of detail subbands using a 2-state Laplacian mixture (LM) distribution, and its three parameters are estimated using the Expectation-Maximization (EM) algorithm. We also calculate statistical parameters such as subband kurtosis and skewness from the detail subbands, along with the mean and standard deviation calculated from the approximation subband, and concatenate all of them with the 2-state LM parameters to describe the global features of the image. The various properties of NSST, such as multiscale analysis, localization, and flexible directional sensitivity, make it a suitable choice to provide an effective approximation of an image. In order to extract dense local features, a new 3D-LTP is proposed where dimension reduction is performed via selection of 'uniform' patterns. The 3D-LTP is calculated from the spatial RGB planes of the input image. The proposed inter-channel 3D-LTP not only exploits the local texture information but also captures the color information. Finally, a fused feature representation (NSSTds-3DLTP) is proposed using the new global (NSSTds) and local (3D-LTP) features to enhance the discriminativeness of features. The retrieval performance of the proposed NSSTds-3DLTP features is tested on three challenging remote sensing image datasets, WHU-RS19, the Aerial Image Dataset (AID), and PatternNet, in terms of mean average precision (MAP), average normalized modified retrieval rank (ANMRR), and the precision-recall (P-R) graph. The experimental results are encouraging, and the NSSTds-3DLTP features lead to superior retrieval performance compared to many well-known existing descriptors such as Gabor RGB, Granulometry, local binary pattern (LBP), Fisher vector (FV), vector of locally aggregated descriptors (VLAD), and median robust extended local binary pattern (MRELBP). For the WHU-RS19 dataset, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {41.93%, 20.87%}, {92.30%, 32.68%}, {86.14%, 31.97%}, {18.18%, 15.22%}, {8.96%, 19.60%}, and {15.60%, 13.26%}, respectively. For AID, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {152.60%, 22.06%}, {226.65%, 25.08%}, {185.03%, 23.33%}, {80.06%, 12.16%}, {50.58%, 10.49%}, and {62.34%, 3.24%}, respectively. For PatternNet, the NSSTds-3DLTP respectively improves upon the Gabor RGB, Granulometry, LBP, FV, VLAD, and MRELBP descriptors by {32.79%, 10.34%}, {141.30%, 24.72%}, {17.47%, 10.34%}, {83.20%, 19.07%}, {21.56%, 3.60%}, and {19.30%, 0.48%} in terms of {MAP, ANMRR}. The moderate dimensionality of the simple NSSTds-3DLTP allows the system to run in real time.
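The ternary split at the heart of any LTP variant can be shown on a single gray channel; the sketch below covers only that split (the paper's 3D-LTP additionally works across the RGB planes and selects 'uniform' patterns), and the tolerance value is illustrative:

```python
import numpy as np

def ltp_codes(img: np.ndarray, t: float = 5.0):
    """Minimal local ternary pattern on one gray channel. Each interior
    pixel's 8 neighbours are coded +1 / 0 / -1 against a tolerance t,
    then split into 'upper' and 'lower' binary patterns."""
    c = img[1:-1, 1:-1]                                  # interior pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros(c.shape, dtype=np.uint8)
    lower = np.zeros(c.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]   # shifted neighbours
        upper |= (nb >= c + t).astype(np.uint8) << bit   # ternary code +1
        lower |= (nb <= c - t).astype(np.uint8) << bit   # ternary code -1
    return upper, lower

u0, l0 = ltp_codes(np.zeros((3, 3)))   # flat patch: both patterns are 0
```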
To improve the maintenance and quality of software product lines, efficient configuration techniques have been proposed. Nevertheless, due to the complexity of derived and configured products in a product line, the configuration process of the software product line (SPL) becomes time-consuming and costly. Each product line consists of a varying number of feature models that need to be tested. Search-based software engineering (SBSE) has presented different approaches to resolve software engineering issues as computational solutions using metaheuristics. Hence, multi-objective evolutionary algorithms help to optimize the configuration process of SPL. In this paper, different multi-objective evolutionary algorithms, such as Non-Dominated Sorting Genetic Algorithm II (NSGA-II), NSGA-III, and the Indicator-Based Evolutionary Algorithm (IBEA), are applied to different feature models to generate optimal results for large configurable systems. The proposed approach is also used to generate optimized test suites with the help of different multi-objective evolutionary algorithms (MOEAs).
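At the core of NSGA-II, NSGA-III, and IBEA is Pareto dominance over the per-configuration objectives. A minimal sketch with toy objective vectors (the objectives below are illustrative, not the paper's):

```python
def dominates(a, b):
    """Pareto dominance (minimization): a dominates b if it is no worse in
    every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Toy objectives per SPL configuration: (cost, #constraint violations, -#features)
configs = [(3, 0, -5), (2, 0, -5), (4, 1, -7), (2, 0, -4)]
front = [c for i, c in enumerate(configs)
         if not any(dominates(o, c) for j, o in enumerate(configs) if j != i)]
# front == [(2, 0, -5), (4, 1, -7)]: the non-dominated configurations
```

MOEAs maintain and evolve exactly such non-dominated fronts instead of collapsing the objectives into a single score.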
We propose a 3D model feature line extraction method using templates for guidance. The 3D model is first projected into a depth map, and a set of candidate feature points is extracted. Then, a conditional random field (CRF) model is established to match the sketch points and the candidate feature points. Using sketch strokes, the candidate feature points can then be connected to obtain the feature lines, and using a CRF-matching model, the 2D image shape similarity features and 3D model geometric features can be effectively integrated. Finally, a relational metric based on shape and topological similarity is proposed to evaluate the matching results, and an iterative matching process is applied to obtain the globally optimized model feature lines. Experimental results showed that the proposed method can extract sound 3D model feature lines which correspond to the initial sketch template.
Traditional hand-crafted features for representing local image patches are evolving into current data-driven and learning-based image features, but learning a robust and discriminative descriptor capable of serving various patch-level computer vision tasks is still an open problem. In this work, we propose a novel deep convolutional neural network (CNN) to learn local feature descriptors. We utilize quadruplets with positive and negative training samples, together with a constraint to restrict the intra-class variance, to learn good discriminative CNN representations. Compared with previous works, our model reduces the overlap in feature space between corresponding and non-corresponding patch pairs, and mitigates the margin-varying problem caused by the commonly used triplet loss. We demonstrate that our method achieves better embedding results than some recent works, such as PN-Net and TN-TG, on a benchmark dataset.
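The quadruplet-with-variance-constraint idea can be sketched as a loss over descriptor batches. This is one plausible formulation with illustrative margins; the exact loss in the paper may differ:

```python
import numpy as np

def quadruplet_loss(a, p, n1, n2, margin=1.0, var_margin=0.5):
    """a/p: matching (corresponding) descriptor pairs; n1/n2: a non-matching
    pair. The first term pushes non-matching distances past matching ones by
    a margin; the second caps matching distances in absolute terms, which is
    one way to restrict intra-class variance."""
    d_pos = np.sum((a - p) ** 2, axis=1)         # matching-pair distances
    d_neg = np.sum((n1 - n2) ** 2, axis=1)       # non-matching distances
    relative = np.maximum(0.0, margin + d_pos - d_neg)
    absolute = np.maximum(0.0, d_pos - var_margin)
    return float(np.mean(relative + absolute))

# A well-separated embedding incurs zero loss:
a = p = np.zeros((1, 4))
n1, n2 = np.zeros((1, 4)), np.full((1, 4), 2.0)   # d_neg = 16 >> margin
```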
Research on emotion recognition based on electroencephalogram (EEG) signals often ignores the relational information between the brain electrode channels and the contextual emotional information existing in EEG signals, which may contain important characteristics related to emotional states. Aiming at the above defects, a spatiotemporal emotion recognition method based on a 3-dimensional (3D) time-frequency domain feature matrix was proposed. Specifically, the extracted time-frequency domain EEG features are first expressed in a 3D matrix format according to the actual positions on the cerebral cortex. Then, the input 3D matrix is processed successively by a multivariate convolutional neural network (MVCNN) and long short-term memory (LSTM) to classify the emotional state. The spatiotemporal emotion recognition method is evaluated on the DEAP data set, and achieved accuracies of 87.58% and 88.50% on the arousal and valence dimensions respectively in binary classification tasks, as well as an accuracy of 84.58% in four-class classification tasks. The experimental results show that the 3D matrix representation can represent emotional information more reasonably than the two-dimensional (2D) one. In addition, MVCNN and LSTM can utilize the spatial information of the electrode channels and the temporal context information of the EEG signal, respectively.
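The 3D arrangement can be sketched as mapping per-channel band features onto a scalp grid. The 2x2 layout below is a hypothetical miniature for illustration (DEAP recordings use a 32-channel montage on a larger, sparser grid):

```python
import numpy as np

# Hypothetical miniature electrode layout: channel -> (row, col) on the scalp
layout = {"Fp1": (0, 0), "Fp2": (0, 1), "O1": (1, 0), "O2": (1, 1)}

def to_3d_matrix(band_feats: dict) -> np.ndarray:
    """Map per-channel frequency-band features onto the electrode grid,
    giving a (rows, cols, n_bands) matrix; unplaced cells stay zero.
    Neighbouring matrix cells are neighbouring electrodes, which is what
    lets a CNN exploit spatial channel relations."""
    n_bands = len(next(iter(band_feats.values())))
    rows = 1 + max(r for r, _ in layout.values())
    cols = 1 + max(c for _, c in layout.values())
    m = np.zeros((rows, cols, n_bands))
    for ch, vec in band_feats.items():
        r, c = layout[ch]
        m[r, c, :] = vec
    return m

feats = {ch: np.arange(4) + i for i, ch in enumerate(layout)}  # 4 toy bands
M = to_3d_matrix(feats)   # shape (2, 2, 4): spatial grid x frequency bands
```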
Topic recognition with a dynamic number of topics can realize the dynamic update of hyperparameters and obtain the probability distribution of dynamic topics along the time dimension, which helps to clarify the understanding and tracking of streaming text data. However, current topic recognition models tend to be based on a fixed number of topics K and lack multi-granularity analysis of subject knowledge. Therefore, it is impossible to deeply perceive the dynamic change of the topics in the time series. By introducing a novel approach on the basis of the Infinite Latent Dirichlet Allocation model, a topic feature lattice under a dynamic topic number is constructed. In the model, documents, topics, and vocabularies are jointly modeled to generate two probability distribution matrices: documents-topics and topics-feature words. Afterwards, the association intensity is computed between each topic and its feature vocabulary to establish the topic formal context matrix. Finally, the topic feature is induced according to formal concept analysis (FCA) theory. The topic feature lattice under dynamic topic number (TFL-DTN) model is validated on a real dataset by comparison with mainstream methods. Experiments show that this model is more in line with actual needs and achieves better results in semi-automatic modeling for topic visualization analysis.
The problems of biological sequence analysis have great theoretical and practical value in modern bioinformatics. Numerous solving algorithms are used for these problems, and complex similarities and differences exist among these algorithms for the same problem, making it difficult for researchers to select the appropriate one. To address this situation, combining the formal partition-and-recur method, component technology, domain engineering, and generic programming, this paper presents a method for the development of a family of biological sequence analysis algorithms. It designs highly trustworthy reusable domain algorithm components and further assembles them to generate specific biological sequence analysis algorithms. An experiment on the development of a dynamic programming based LCS algorithm family shows the proposed method enables the improvement of the reliability, understandability, and development efficiency of particular algorithms.
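As a concrete member of that family, the dynamic-programming longest common subsequence (LCS) algorithm named in the experiment can be sketched as follows. This is the textbook formulation, not the paper's component assembly:

```python
def lcs(a: str, b: str) -> str:
    """Textbook dynamic-programming LCS: L[i][j] is the LCS length of the
    prefixes a[:i] and b[:j] (the partition-and-recur step recurs on
    prefixes); one optimal subsequence is then recovered by backtracking."""
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            L[i][j] = (L[i - 1][j - 1] + 1 if a[i - 1] == b[j - 1]
                       else max(L[i - 1][j], L[i][j - 1]))
    out, i, j = [], m, n
    while i and j:                       # backtrack from the bottom-right cell
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ACCGGTCGAG", "GTCGTTCGGA"))   # a small DNA-style example
```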
The development of hand gesture recognition systems has gained more attention in recent days due to its support of modern human-computer interfaces. Moreover, sign language recognition is mainly developed for enabling communication between deaf and mute people. In conventional works, various image processing techniques like segmentation, optimization, and classification are deployed for hand gesture recognition. Still, they suffer from major problems of inefficient handling of large-dimensional datasets, high time consumption, increased false positives, error rate, and misclassification outputs. Hence, this research work intends to develop an efficient hand gesture image recognition system by using advanced image processing techniques. During image segmentation, skin color detection and morphological operations are performed to accurately segment the hand gesture portion. Then, the Heuristic Manta-ray Foraging Optimization (HMFO) technique is employed for optimally selecting the features by computing the best fitness value. Moreover, the reduced dimensionality of features helps to increase the accuracy of classification with a reduced error rate. Finally, an Adaptive Extreme Learning Machine (AELM) based classification technique is employed for predicting the recognition output. During results validation, various evaluation measures are used to compare the proposed model's performance with other classification approaches.
Purpose – Conventional image super-resolution reconstruction by conventional deep learning architectures suffers from the problems of hard training and gradient disappearance. In order to solve such problems, the purpose of this paper is to propose a novel image super-resolution algorithm based on improved generative adversarial networks (GANs) with Wasserstein distance and gradient penalty. Design/methodology/approach – The proposed algorithm first introduces the conventional GANs architecture, the Wasserstein distance and the gradient penalty for the task of image super-resolution reconstruction (SRWGANs-GP). In addition, a novel perceptual loss function is designed for the SRWGANs-GP to meet the task of image super-resolution reconstruction. The content loss is extracted from the deep model's feature maps, and such features are introduced to calculate the mean square error (MSE) for the loss calculation of generators. Findings – To validate the effectiveness and feasibility of the proposed algorithm, extensive comparison experiments are conducted on three common data sets, i.e. Set5, Set14 and BSD100. Experimental results show that the proposed SRWGANs-GP architecture has a stable error gradient and converges iteratively. Compared with the baseline deep models, the proposed GANs models achieve a significant improvement in performance and efficiency for image super-resolution reconstruction. The MSE calculated from the deep model's feature maps gives more advantages for constructing contour and texture. Originality/value – Compared with the state-of-the-art algorithms, the proposed algorithm obtains a better performance on image super-resolution and better reconstruction results on contour and texture.
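The content-loss idea above (an MSE computed over deep feature maps rather than raw pixels) can be sketched as follows. This is a minimal illustration, not the paper's SRWGANs-GP loss: the feature extractor is omitted, and the toy arrays stand in for feature maps of the super-resolved and ground-truth images.

```python
import numpy as np

def content_loss(feat_sr: np.ndarray, feat_hr: np.ndarray) -> float:
    """Perceptual content loss: MSE between deep feature maps of the
    super-resolved image and the ground-truth high-resolution image."""
    return float(np.mean((feat_sr - feat_hr) ** 2))

# Toy feature maps of shape (channels, height, width); a real pipeline
# would obtain these from a pretrained network's intermediate layer.
f_sr = np.zeros((2, 4, 4))
f_hr = np.ones((2, 4, 4))
print(content_loss(f_sr, f_hr))  # 1.0
```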
Funding: This work is supported by the National Natural Science Foundation of China (12002218) and the Youth Foundation of the Education Department of Liaoning Province (JYT19034). These supports are gratefully acknowledged.
Abstract: This paper presents a feature modeling approach to address 3D structural topology design optimization with feature constraints. In the proposed algorithm, various features are formed into searchable shape features by the feature modeling technology, and the models of feature elements are established. The feature elements that meet the design requirements are found by employing a feature matching technology, and the constraint factors combined with the pseudo density of elements are initialized according to the optimized feature elements. Then, by controlling the constraint factors and utilizing the optimization criterion method along with the mesh-independent filtering technology, the structural design optimization is implemented. The present feature modeling approach is applied to feature-based structural topology optimization using empirical data. Meanwhile, the improved mathematical model based on the density method with the constraint factors and the corresponding solution processes are also presented. Compared with the traditional method, which requires complicated constraint processing, the present approach is flexibly applied to 3D structural design optimization with added holes by changing the constraint factors, so it can design a structure with predetermined features more directly and easily. Numerical examples show the effectiveness of the proposed feature modeling approach, which is suitable for practical engineering design.
Abstract: Most large-scale systems, including self-adaptive systems, utilize feature models (FMs) to represent their complex architectures and benefit from the reuse of commonalities and variability information. Self-adaptive systems (SASs) are capable of reconfiguring themselves at run time to satisfy the scenarios of the requisite contexts. However, reconfiguration of SASs corresponding to each adaptation of the system requires significant computational time and resources. Configuration reuse can be a better alternative in some contexts to reduce computational time, effort and errors. Moreover, systems' complexity can be reduced during the development process by reusing elements or components. FMs are considered a new vehicle for reuse that can introduce opportunities beyond the reuse of conventional system components. While current FM-based modelling techniques represent, manage, and reuse elementary features to model SAS concepts, modeling and reusing configurations have not yet been considered. In this context, this study presents an extension to FMs by introducing and managing configuration features and their reuse process. Evaluation results demonstrate that reusing configuration features reduces the effort and time required by a reconfiguration process at run time to meet the required scenario according to the current context.
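The configuration-reuse idea can be illustrated with a hypothetical sketch (the class, context names and derivation function below are invented for illustration, not the paper's notation): previously derived configurations are cached per context, so a recurring context is served from the cache instead of re-running the expensive reconfiguration process.

```python
class ConfigurationStore:
    """Cache of derived configurations, keyed by the adaptation context."""

    def __init__(self):
        self._cache = {}            # context -> configuration (set of features)
        self.reconfigurations = 0   # how many full reconfiguration runs happened

    def configure(self, context, derive):
        if context in self._cache:          # reuse path: no recomputation
            return self._cache[context]
        self.reconfigurations += 1          # fall back to full reconfiguration
        config = derive(context)
        self._cache[context] = config
        return config

# Toy derivation: a base feature plus one context-specific feature.
derive = lambda ctx: {"base"} | {f"feat_{ctx}"}

store = ConfigurationStore()
store.configure("low_battery", derive)
store.configure("low_battery", derive)      # second call is served from cache
print(store.reconfigurations)  # 1
```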
Abstract: This work presents the “n<sup>th</sup>-Order Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (abbreviated as “n<sup>th</sup>-FASAM-N”), which is shown to be the most efficient methodology for computing exact expressions of sensitivities, of any order, of model responses with respect to features of model parameters and, subsequently, with respect to the model’s uncertain parameters, boundaries, and internal interfaces. The unparalleled efficiency and accuracy of the n<sup>th</sup>-FASAM-N methodology stem from the maximal reduction of the number of adjoint computations (which are considered to be “large-scale” computations) for computing high-order sensitivities. When applying the n<sup>th</sup>-FASAM-N methodology to compute the second- and higher-order sensitivities, the number of large-scale computations is proportional to the number of “model features” as opposed to being proportional to the number of model parameters (which are considerably more numerous than the features). When a model has no “feature” functions of parameters, but only comprises primary parameters, the n<sup>th</sup>-FASAM-N methodology becomes identical to the extant n<sup>th</sup>-CASAM-N (“n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems”) methodology. Both the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are formulated in linearly increasing higher-dimensional Hilbert spaces, as opposed to exponentially increasing parameter-dimensional spaces, thus overcoming the curse of dimensionality in sensitivity analysis of nonlinear systems. Both methodologies are incomparably more efficient and more accurate than any other methods (statistical, finite differences, etc.) for computing exact expressions of response sensitivities of any order with respect to the model’s features and/or primary uncertain parameters, boundaries, and internal interfaces.
Abstract: This work highlights the unparalleled efficiency of the “n<sup>th</sup>-Order Function/Feature Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-FASAM-N) by considering the well-known Nordheim-Fuchs reactor dynamics/safety model. This model describes a short-time self-limiting power excursion in a nuclear reactor system having a negative temperature coefficient in which a large amount of reactivity is suddenly inserted, either intentionally or by accident. This nonlinear paradigm model is sufficiently complex to model realistically self-limiting power excursions for short times, yet admits closed-form exact expressions for the time-dependent neutron flux, temperature distribution and energy released during the transient power burst. The n<sup>th</sup>-FASAM-N methodology is compared to the extant “n<sup>th</sup>-Order Comprehensive Adjoint Sensitivity Analysis Methodology for Nonlinear Systems” (n<sup>th</sup>-CASAM-N), showing that: (i) the 1<sup>st</sup>-FASAM-N and the 1<sup>st</sup>-CASAM-N methodologies are equally efficient for computing the first-order sensitivities; each methodology requires a single large-scale computation for solving the “First-Level Adjoint Sensitivity System” (1<sup>st</sup>-LASS); (ii) the 2<sup>nd</sup>-FASAM-N methodology is considerably more efficient than the 2<sup>nd</sup>-CASAM-N methodology for computing the second-order sensitivities, since the number of feature functions is much smaller than the number of primary parameters; specifically, for the Nordheim-Fuchs model, the 2<sup>nd</sup>-FASAM-N methodology requires 2 large-scale computations to obtain all of the exact expressions of the 28 distinct second-order response sensitivities with respect to the model parameters, while the 2<sup>nd</sup>-CASAM-N methodology requires 7 large-scale computations for these 28 second-order sensitivities; (iii) the 3<sup>rd</sup>-FASAM-N methodology is even more efficient than the 3<sup>rd</sup>-CASAM-N methodology: only 2 large-scale computations are needed to obtain the exact expressions of the 84 distinct third-order response sensitivities with respect to the Nordheim-Fuchs model’s parameters when applying the 3<sup>rd</sup>-FASAM-N methodology, while the 3<sup>rd</sup>-CASAM-N methodology requires at least 22 large-scale computations for the same 84 distinct third-order sensitivities. Together, the n<sup>th</sup>-FASAM-N and the n<sup>th</sup>-CASAM-N methodologies are the most practical methodologies for computing response sensitivities of any order comprehensively and accurately, overcoming the curse of dimensionality in sensitivity analysis.
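The efficiency argument in the two abstracts above can be summarized schematically (the symbols here are generic, not the notation of the cited works): if a response $R$ depends on the $P$ primary parameters $\alpha_j$ only through $F \ll P$ feature functions $f_i(\boldsymbol{\alpha})$, the chain rule gives

```latex
\frac{\partial R}{\partial \alpha_j}
  \;=\; \sum_{i=1}^{F} \frac{\partial R}{\partial f_i}\,
        \frac{\partial f_i}{\partial \alpha_j},
  \qquad j = 1, \dots, P,
```

so only the $F$ quantities $\partial R / \partial f_i$ require large-scale adjoint computations, while the $\partial f_i / \partial \alpha_j$ are inexpensive analytical derivatives of the known feature functions. This is why the number of large-scale computations scales with the number of features rather than the number of parameters.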
Funding: Supported by a gift to Princeton University from iFlytek and the Office of Naval Research (ONR) (Grant No. N00014-13-1-0338).
Abstract: A fairly comprehensive analysis is presented of the gradient descent dynamics for training two-layer neural network models in the situation where the parameters in both layers are updated. General initialization schemes as well as general regimes for the network width and training data size are considered. In the overparametrized regime, it is shown that gradient descent dynamics can achieve zero training loss exponentially fast regardless of the quality of the labels. In addition, it is proved that throughout the training process the functions represented by the neural network model are uniformly close to those of a kernel method. For general values of the network width and training data size, sharp estimates of the generalization error are established for target functions in the appropriate reproducing kernel Hilbert space.
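A minimal sketch of the setting described above, assuming nothing beyond the abstract: gradient descent updates the parameters in both layers of a two-layer ReLU network on toy data, and the training loss decreases. The data, width and learning rate are illustrative choices, not the paper's regimes.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                     # toy inputs
y = np.sin(X @ np.array([1.0, -0.5, 0.3]))       # toy targets

m = 64                                           # network width
W = rng.normal(size=(m, 3)) / np.sqrt(3)         # inner-layer weights
a = rng.normal(size=m) / np.sqrt(m)              # outer-layer weights
lr, n = 0.01, len(X)

def loss():
    """Mean squared error of the two-layer ReLU network on (X, y)."""
    return np.mean((np.maximum(X @ W.T, 0.0) @ a - y) ** 2)

loss_start = loss()
for _ in range(300):                             # gradient descent on BOTH layers
    h = np.maximum(X @ W.T, 0.0)                 # hidden activations, (n, m)
    r = h @ a - y                                # residuals, (n,)
    grad_a = 2 * h.T @ r / n                     # dL/da
    grad_W = 2 * ((r[:, None] * (h > 0)) * a[None, :]).T @ X / n  # dL/dW
    a -= lr * grad_a
    W -= lr * grad_W

print(loss_start, loss())                        # the loss decreases
```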
Funding: Supported by the National Basic Research Program of China (973) (2012CB316402); the National Natural Science Foundation of China (Grant Nos. 61332005, 61725205); the Research Project of North Minzu University (2019XYZJK02, 2019xYZJK05, 2017KJ24, 2017KJ25, 2019MS002); Ningxia first-class discipline and scientific research projects (electronic science and technology, NXYLXK2017A07); the Ningxia Provincial Key Discipline Project-Computer Application; and the Provincial Natural Science Foundation of Ningxia (NZ17111, 2020AAC03219).
Abstract: Overlapping community detection has become a very hot research topic in recent decades, and a plethora of methods have been proposed. However, a common challenge in many existing overlapping community detection approaches is that the number of communities K must be predefined manually. We propose a flexible nonparametric Bayesian generative model for count-valued networks, which allows K to increase as more and more data are encountered instead of being fixed in advance. The Indian buffet process was used to model the community assignment matrix Z, and an uncollapsed Gibbs sampler has been derived. However, as the community assignment matrix Z is a structured multi-variable parameter, how to summarize the posterior inference results and estimate the inference quality about Z is still a considerable challenge in the literature. In this paper, a graph convolutional neural network based graph classifier was utilized to help summarize the results and estimate the inference quality about Z. We conduct extensive experiments on synthetic data and real data, and find that, empirically, the traditional posterior summarization strategy is reliable.
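The Indian buffet process mentioned above can be drawn from with its standard generative recipe (this is the textbook construction, not the paper's sampler): customer i takes each existing dish k with probability m_k/i, where m_k is the dish's current popularity, and then tries Poisson(α/i) brand-new dishes, so the number of columns of Z grows with the data rather than being fixed in advance.

```python
import numpy as np

def sample_ibp(n_rows: int, alpha: float, rng) -> np.ndarray:
    """Draw a binary assignment matrix Z from the Indian buffet process."""
    Z = np.zeros((0, 0), dtype=int)
    for i in range(1, n_rows + 1):
        m = Z.sum(axis=0)                         # popularity of each column
        old = (rng.random(Z.shape[1]) < m / i).astype(int)
        new = rng.poisson(alpha / i)              # number of brand-new columns
        Z = np.pad(Z, ((0, 0), (0, new)))         # widen for the new columns
        row = np.concatenate([old, np.ones(new, dtype=int)])
        Z = np.vstack([Z, row])
    return Z

Z = sample_ibp(10, alpha=2.0, rng=np.random.default_rng(1))
print(Z.shape)   # the column count (number of communities) is data-driven
```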
Abstract: A Software Product Line (SPL) is a group of software-intensive systems that share common and variable resources for developing a particular system. The feature model is a tree-type structure used to manage an SPL's common and variable features together with their different relations and the problem of Cross-Tree Constraints (CTC). CTC problems exist in groups of common and variable features among the sub-trees of feature models and are more diverse in Internet of Things (IoT) devices because different Internet devices and protocols communicate with one another. Therefore, managing the CTC problem to achieve valid product configurations in IoT-based SPL is complex, time-consuming, and hard. However, the CTC problem was not considered in previously proposed approaches such as Commonality Variability Modeling of Features (COVAMOF) and the Genarch+ tool; therefore, invalid products are generated. This research proposes a novel approach, Binary Oriented Feature Selection Cross-Tree Constraints (BOFS-CTC), to find all possible valid products by selecting features according to cardinality constraints and cross-tree constraint problems in the feature model of an SPL. BOFS-CTC removes invalid products at the early stage of feature selection for product configuration. Furthermore, this research developed the BOFS-CTC algorithm and applied it to IoT-based feature models. The findings are that no relationship-constraint or CTC violations occur, and valid feature product configurations are derived for application development by removing the invalid product configurations. The accuracy of BOFS-CTC is measured by an integration sampling technique, where different valid product configurations are compared with the product configurations derived by BOFS-CTC and found 100% correct. Using BOFS-CTC eliminates the testing cost and development effort of invalid SPL products.
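Cross-tree constraint checking, which any such approach must perform, can be sketched minimally (the feature names and constraint lists below are illustrative, not BOFS-CTC itself): a candidate product is valid only if every "requires" pair is satisfied and no "excludes" pair is violated.

```python
def valid_configuration(selected, requires, excludes):
    """Check a candidate product against cross-tree constraints.

    requires: (a, b) pairs meaning 'selecting a requires selecting b'.
    excludes: (a, b) pairs meaning 'a and b are mutually exclusive'.
    """
    for a, b in requires:
        if a in selected and b not in selected:
            return False
    for a, b in excludes:
        if a in selected and b in selected:
            return False
    return True

# Toy IoT-flavoured feature model constraints (illustrative names).
requires = [("zigbee", "radio")]
excludes = [("zigbee", "wifi")]

print(valid_configuration({"zigbee", "radio"}, requires, excludes))          # True
print(valid_configuration({"zigbee", "wifi", "radio"}, requires, excludes))  # False
```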
Funding: Supported by the National Natural Science Foundation of China (62033008, 61873143).
Abstract: With increasing intelligence and integration, a great number of two-valued variables (generally stored in the form of 0 or 1) often exist in large-scale industrial processes. However, these variables cannot be effectively handled by traditional monitoring methods such as linear discriminant analysis (LDA), principal component analysis (PCA) and partial least squares (PLS) analysis. Recently, a mixed hidden naive Bayesian model (MHNBM) was developed for the first time to utilize both two-valued and continuous variables for abnormality monitoring. Although the MHNBM is effective, it still has some shortcomings that need to be improved. In the MHNBM, variables with greater correlation to other variables have greater weights, which cannot guarantee that greater weights are assigned to the more discriminating variables. In addition, the conditional probability P(x_j | x_j', y = k) must be computed based on historical data. When the training data are scarce, the conditional probability between continuous variables tends to be uniformly distributed, which affects the performance of the MHNBM. Here a novel feature-weighted mixed naive Bayes model (FWMNBM) is developed to overcome the above shortcomings. In the FWMNBM, variables that are more correlated to the class have greater weights, which makes the more discriminating variables contribute more to the model. At the same time, the FWMNBM does not have to calculate the conditional probability between variables, so it is less restricted by the number of training data samples. Compared with the MHNBM, the FWMNBM has better performance, and its effectiveness is validated through numerical cases of a simulation example and a practical case of the Zhoushan thermal power plant (ZTPP), China.
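The feature-weighting idea can be illustrated with a toy weighted Gaussian naive Bayes classifier. This is a sketch only: the weights below are hand-picked so that the discriminating feature dominates, whereas the FWMNBM derives its weights from the correlation between each variable and the class (that formula is not reproduced here).

```python
import numpy as np

def fwnb_predict(X, means, stds, priors, w):
    """Feature-weighted naive Bayes: each feature's Gaussian log-likelihood
    is scaled by a weight w_j before summing, so discriminating features
    contribute more to the class score."""
    scores = []
    for k in range(len(priors)):
        ll = -0.5 * ((X - means[k]) / stds[k]) ** 2 - np.log(stds[k])
        scores.append(np.log(priors[k]) + (w * ll).sum(axis=1))
    return np.argmax(np.stack(scores), axis=0)

# Two classes separated along feature 0; feature 1 is pure noise and is
# therefore given a small weight.
means = np.array([[0.0, 0.0], [3.0, 0.0]])
stds = np.ones((2, 2))
priors = np.array([0.5, 0.5])
w = np.array([1.0, 0.1])

X = np.array([[0.2, 5.0], [2.9, -4.0]])
print(fwnb_predict(X, means, stds, priors, w))  # [0 1]
```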
Abstract: BACKGROUND: The electrical stimulation kindling model, having epilepsy-inducing, spontaneous-seizure and other advantages, is a very ideal experimental animal model, but the kindling effect might differ at different sites. OBJECTIVE: To compare the features of animal models of complex partial epilepsy established through unilateral, bilateral and alternate-side kindling at the hippocampus, and the success rates of modeling among these 3 different ways. DESIGN: A randomized and controlled animal experiment. SETTING: Department of Neurology, Qilu Hospital, Shandong University. MATERIALS: Totally 60 healthy adult Wistar rats, weighing 200 to 300 g, of either gender, were used in this experiment. A BL-410 biological functional experimental system (Taimeng Science and Technology Co. Ltd, Chengdu) and an SE-7102 electronic stimulator (Guangdian Company, Japan) were used. METHODS: This experiment was carried out in the Experimental Animal Center of Shandong University from April to June 2004. After the rats were anesthetized, an electrode was implanted into the hippocampus. From the first day of measurement of the afterdischarge threshold, rats were given two-square-wave suprathreshold stimulation once per day with 400 μA intensity, 1 ms wave length, 60 Hz frequency and 1 s duration. The left hippocampus was stimulated in the unilateral kindling group, bilateral hippocampi were stimulated in the bilateral kindling group, and the left and right hippocampi were stimulated alternately every day in the alternate-side kindling group. Seizure intensity was scored as follows. Grade 0: normal; 1: wet-dog-like shivering and facial spasm, such as winking, touching the beard and rhythmic chewing; 2: rhythmic nodding; 3: forelimb spasm; 4: standing accompanied by bilateral forelimb spasm; 5: tumbling, losing balance and four-limb spasm. Modeling was successful when seizure intensity reached grade 5. The t test was used for the comparison of mean values between two samples.
MAIN OUTCOME MEASURES: Comparison among groups of the success rate of modeling, the number of stimulations needed to reach grade 5 intensity, and the duration of grade 3 seizures. RESULTS: Four rats of the alternate-side kindling group dropped out due to infection-induced electrode loss, and 56 rats were involved in the result analysis. The success rates of the unilateral, bilateral and alternate-side kindling groups were 55% (11/20), 100% (16/16) and 100% (20/20), respectively. Significantly more stimuli were needed to reach grade 5 spasm in the bilateral kindling group than in the unilateral kindling group [(30.63±3.48) vs. (19.36±3.47) times, t=8.268, P < 0.01], and significantly fewer in the alternate-side kindling group than in the unilateral kindling group [(10.85±1.98) times, t=8.744, P < 0.01]. The duration of grade 3 spasm was significantly longer in the bilateral kindling group than in the unilateral kindling group [(9.75±2.59) vs. (3.21±1.58) days, t=8.183, P < 0.01]. Among the 20 successful rats of the alternate-side kindling group, grade 5 spasm was found in the left hippocampi of 11 rats but grade 3 spasm in their right hippocampi; grade 5 spasm was found in the right hippocampi of the other 9 rats, with grade 4 spasm in the left hippocampus of 1 rat and grade 3 in 8 rats. CONCLUSION: Establishing an epilepsy seizure model by alternate-side kindling is faster than by unilateral kindling, while bilateral kindling is slower than unilateral kindling. The success rate of establishing complex partial epilepsy with alternate-side or bilateral kindling is very high. Epileptic seizures established by alternate-side kindling show an antagonistic kindling effect, and the duration of grade 3 spasm is prolonged.
Abstract: The use of features to achieve the integration of design and manufacture has been considered a key factor in recent years. Features, carrying manufacturing properties, form the workpiece, and are structured systematically through object-oriented modeling. This article explains an object coding method developed for prismatic workpieces and the use of that method in process planning. Features have been determined and modeled as objects, and coded according to their types and locations on the workpiece. Feature coding has proved very advantageous in process planning.
Abstract: Heart disease (HD) is a serious, widespread, life-threatening disease. The heart of patients with HD fails to pump sufficient amounts of blood to the entire body. Diagnosing the occurrence of HD early and efficiently may prevent the manifestation of the debilitating effects of this disease and aid in its effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient in analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, the existing models of automated ML-based HD diagnostic methods cannot satisfy clinical evaluation criteria because of their inability to recognize anomalies in extracted symptoms represented as classification features from patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or with HD according to the analysis of optimized HD features. We evaluate the classification performance of the proposed AHDD system against 12 common ML techniques and conventional CNN models by using a cross-validation technique and six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%–83.8% accuracy on average. Therefore, the proposed AHDD system has the highest capability to identify patients with HD, and can be used by medical practitioners to diagnose HD efficiently.
Abstract: With the increasing popularity of high-resolution remote sensing images, remote sensing image retrieval (RSIR) has always been a topic of major interest. A combination of global non-subsampled shearlet transform (NSST)-domain statistical features (NSSTds) and local three-dimensional local ternary pattern (3D-LTP) features is proposed for high-resolution remote sensing images. We model the NSST image coefficients of the detail subbands using a 2-state Laplacian mixture (LM) distribution, whose three parameters are estimated using the Expectation-Maximization (EM) algorithm. We also calculate statistical parameters such as subband kurtosis and skewness from the detail subbands, along with the mean and standard deviation calculated from the approximation subband, and concatenate all of them with the 2-state LM parameters to describe the global features of the image. The various properties of the NSST, such as multiscale analysis, localization and flexible directional sensitivity, make it a suitable choice to provide an effective approximation of an image. In order to extract dense local features, a new 3D-LTP is proposed, where dimension reduction is performed via the selection of 'uniform' patterns. The 3D-LTP is calculated from the spatial RGB planes of the input image. The proposed inter-channel 3D-LTP not only exploits the local texture information but also captures the color information. Finally, a fused feature representation (NSSTds-3DLTP) is proposed using the new global (NSSTds) and local (3D-LTP) features to enhance the discriminativeness of the features. The retrieval performance of the proposed NSSTds-3DLTP features is tested on three challenging remote sensing image datasets, WHU-RS19, the Aerial Image Dataset (AID) and PatternNet, in terms of mean average precision (MAP), average normalized modified retrieval rank (ANMRR) and precision-recall (P-R) graphs. The experimental results are encouraging, and the NSSTds-3DLTP features lead to superior retrieval performance compared to many well-known existing descriptors such as Gabor RGB, Granulometry, local binary pattern (LBP), Fisher vector (FV), vector of locally aggregated descriptors (VLAD) and median robust extended local binary pattern (MRELBP). For WHU-RS19, in terms of {MAP, ANMRR}, the NSSTds-3DLTP improves upon Gabor RGB, Granulometry, LBP, FV, VLAD and MRELBP by {41.93%, 20.87%}, {92.30%, 32.68%}, {86.14%, 31.97%}, {18.18%, 15.22%}, {8.96%, 19.60%} and {15.60%, 13.26%}, respectively. For AID, the respective improvements are {152.60%, 22.06%}, {226.65%, 25.08%}, {185.03%, 23.33%}, {80.06%, 12.16%}, {50.58%, 10.49%} and {62.34%, 3.24%}. For PatternNet, the NSSTds-3DLTP respectively improves upon the same descriptors by {32.79%, 10.34%}, {141.30%, 24.72%}, {17.47%, 10.34%}, {83.20%, 19.07%}, {21.56%, 3.60%} and {19.30%, 0.48%} in terms of {MAP, ANMRR}. The moderate dimensionality of the simple NSSTds-3DLTP allows the system to run in real time.
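For reference, the basic LBP operator that the proposed 3D-LTP generalizes can be computed as below. This is the standard single-channel 3x3 LBP, not the paper's inter-channel 3D-LTP: each of the 8 neighbours is thresholded against the centre pixel and the resulting bits are packed into an 8-bit code.

```python
import numpy as np

def lbp_3x3(img: np.ndarray) -> np.ndarray:
    """Basic local binary pattern: threshold the 8 neighbours of each
    interior pixel against the centre pixel and pack the bits into a code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    out = np.zeros((h - 2, w - 2), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour view
        out += (nb >= centre).astype(int) << bit
    return out

img = np.array([[9, 9, 9],
                [0, 5, 9],
                [0, 0, 0]])
print(lbp_3x3(img))  # [[15]]: the top row and right neighbour set bits 0-3
```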
Funding: The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQUyouracademicnumberDSRxx).
Abstract: To improve the maintenance and quality of software product lines, efficient configuration techniques have been proposed. Nevertheless, due to the complexity of derived and configured products in a product line, the configuration process of the software product line (SPL) becomes time-consuming and costly. Each product line consists of a varying number of feature models that need to be tested. Different approaches have been presented by search-based software engineering (SBSE) to resolve software engineering issues into computational solutions using metaheuristic approaches. Hence, multi-objective evolutionary algorithms help to optimize the configuration process of an SPL. In this paper, different multi-objective evolutionary algorithms (MOEAs), such as the Non-Dominated Sorting Genetic Algorithms II and III (NSGA-II, NSGA-III) and the Indicator-Based Evolutionary Algorithm (IBEA), are applied to different feature models to generate optimal results for large configurable systems. The proposed approach is also used to generate optimized test suites with the help of the different MOEAs.
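The Pareto-dominance relation underlying all of the listed MOEAs can be sketched as follows; NSGA-II's non-dominated sorting repeatedly extracts such fronts from the population. The objective values below are illustrative stand-ins for, e.g., cost and number of deselected features of candidate configurations.

```python
def dominates(a, b):
    """Pareto dominance for minimization: a is no worse than b in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """The non-dominated set: the first front that NSGA-II's sorting extracts."""
    return [p for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

# Toy objective vectors (minimize both coordinates).
configs = [(3, 5), (2, 6), (4, 4), (5, 5)]
print(pareto_front(configs))  # [(3, 5), (2, 6), (4, 4)]; (5, 5) is dominated
```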
Funding: Supported by the National Natural Science Foundation of China (Nos. 61272219, 61100110, and 61021062); the National High-Tech R&D Program (863) of China (No. 2007AA01Z334); the Program for New Century Excellent Talents in University (No. NCET-0404605); and the Science and Technology Program of Jiangsu Province, China (Nos. BE2010072, BE2011058, and BY2012190).
Abstract: We propose a 3D model feature line extraction method using templates for guidance. The 3D model is first projected into a depth map, and a set of candidate feature points is extracted. Then, a conditional random field (CRF) model is established to match the sketch points and the candidate feature points. Using sketch strokes, the candidate feature points can then be connected to obtain the feature lines, and using the CRF matching model, the 2D image shape similarity features and 3D model geometric features can be effectively integrated. Finally, a relational metric based on shape and topological similarity is proposed to evaluate the matching results, and an iterative matching process is applied to obtain the globally optimized model feature lines. Experimental results show that the proposed method can extract sound 3D model feature lines which correspond to the initial sketch template.
Funding: Supported by the Natural Science Foundation of Zhejiang Province (No. Y16F020023).
Abstract: Traditional hand-crafted features for representing local image patches are evolving into current data-driven and learning-based image features, but learning a robust and discriminative descriptor capable of supporting various patch-level computer vision tasks is still an open problem. In this work, we propose a novel deep convolutional neural network (CNN) to learn local feature descriptors. We utilize quadruplets with positive and negative training samples, together with a constraint to restrict the intra-class variance, to learn good discriminative CNN representations. Compared with previous works, our model reduces the overlap in feature space between corresponding and non-corresponding patch pairs, and mitigates the margin-varying problem caused by the commonly used triplet loss. We demonstrate that our method achieves better embedding results than some recent works, like PN-Net and TN-TG, on a benchmark dataset.
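The quadruplet idea can be sketched with a toy loss. This is a generic quadruplet-style margin loss, not the paper's exact formulation: the first term pulls the anchor toward the positive relative to a negative (the usual triplet term), while the second term adds a margin against an independent negative pair, which helps restrict intra-class variance.

```python
import numpy as np

def quadruplet_loss(anchor, pos, neg1, neg2, margin1=1.0, margin2=0.5):
    """Quadruplet-style embedding loss on four descriptor vectors."""
    d = lambda x, y: np.sum((x - y) ** 2)          # squared Euclidean distance
    t1 = max(0.0, d(anchor, pos) - d(anchor, neg1) + margin1)  # triplet term
    t2 = max(0.0, d(anchor, pos) - d(neg1, neg2) + margin2)    # extra margin term
    return t1 + t2

# Toy 2-D "descriptors": the positive is close, both negatives are far.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n1 = np.array([3.0, 0.0])
n2 = np.array([0.0, 3.0])
print(quadruplet_loss(a, p, n1, n2))  # 0.0: both margins are already satisfied
```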
Funding: Supported by the National Natural Science Foundation of China (61872126) and the Key Scientific Research Project Plan of Colleges and Universities in Henan Province (19A520004).
Abstract: Research on emotion recognition based on electroencephalogram (EEG) signals often ignores the relational information between the brain electrode channels and the contextual emotional information existing in EEG signals, which may contain important characteristics related to emotional states. Aiming at the above defects, a spatiotemporal emotion recognition method based on a 3-dimensional (3D) time-frequency domain feature matrix is proposed. Specifically, the extracted time-frequency domain EEG features are first expressed in a 3D matrix format according to the actual positions on the cerebral cortex. Then, the input 3D matrix is processed successively by a multivariate convolutional neural network (MVCNN) and long short-term memory (LSTM) to classify the emotional state. The spatiotemporal emotion recognition method is evaluated on the DEAP data set, and achieves accuracies of 87.58% and 88.50% on the arousal and valence dimensions, respectively, in binary classification tasks, as well as an accuracy of 84.58% in a four-class classification task. The experimental results show that the 3D matrix representation can represent emotional information more reasonably than a two-dimensional (2D) one. In addition, the MVCNN and LSTM can utilize the spatial information of the electrode channels and the temporal context information of the EEG signal, respectively.
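The 3D matrix construction can be sketched as follows. Everything here is illustrative: the electrode names, their grid positions and the feature dimension are invented stand-ins, not the DEAP montage or the paper's layout; the point is that per-channel feature vectors are placed into a 2D grid mirroring electrode positions on the scalp, yielding a (height, width, feature) matrix.

```python
import numpy as np

# Hypothetical electrode -> (row, col) grid positions on a 5x5 scalp map.
GRID = {"Fp1": (0, 1), "Fp2": (0, 3), "C3": (2, 1),
        "C4": (2, 3), "O1": (4, 1), "O2": (4, 3)}

def to_3d_matrix(features, shape=(5, 5), n_feats=4):
    """Place each channel's feature vector at its grid cell; cells with no
    electrode stay zero, preserving the spatial arrangement of channels."""
    mat = np.zeros(shape + (n_feats,))
    for ch, vec in features.items():
        r, c = GRID[ch]
        mat[r, c] = vec
    return mat

# Toy per-channel feature vectors (e.g. band powers in 4 frequency bands).
feats = {ch: np.arange(4) + i for i, ch in enumerate(GRID)}
m = to_3d_matrix(feats)
print(m.shape)  # (5, 5, 4): a 3D matrix ready for a convolutional front end
```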
Funding: Supported by the Key Projects of Social Sciences of the Anhui Provincial Department of Education (SK2018A1064, SK2018A1072), the Natural Science Project of the Anhui Provincial Department of Education (KJ2019A0371), and the Innovation Team of Health Information Management and Application Research (BYKC201913), BBMC.
Abstract: Topic recognition with a dynamic number of topics can update hyperparameters dynamically and obtain the probability distribution of topics over the time dimension, which helps in understanding and tracking streaming text data. However, current topic recognition models tend to be based on a fixed number of topics K and lack multi-granularity analysis of topic knowledge, so they cannot deeply perceive how topics change over the time series. By extending the Infinite Latent Dirichlet Allocation model, a topic feature lattice under a dynamic topic number is constructed. In the model, documents, topics, and vocabularies are jointly modeled to generate two probability distribution matrices: documents-topics and topics-feature words. The association intensity between each topic and its feature vocabulary is then computed to establish the topic formal context matrix, and finally topic features are induced according to formal concept analysis (FCA) theory. The topic feature lattice under dynamic topic number (TFL-DTN) model is validated on a real dataset against mainstream methods. Experiments show that the model better fits actual needs and achieves better results in semi-automatic modeling for topic visualization analysis.
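The step from the topics-feature-words probability matrix to a formal context can be sketched as follows. The simple thresholding rule and the intent computation are simplified assumptions standing in for the paper's association-intensity measure:

```python
import numpy as np

def topic_formal_context(topic_word, threshold=0.1):
    """Binarize a topics x feature-words probability matrix into a formal
    context by thresholding; the rule is a stand-in for the paper's
    association-intensity measure."""
    return (topic_word >= threshold).astype(int)

def common_features(ctx, topic_ids):
    """Intent of a topic set in FCA terms: indices of feature words
    present in every listed topic."""
    return [j for j in range(ctx.shape[1])
            if all(ctx[i, j] for i in topic_ids)]
```

In FCA terms, topics act as objects and feature words as attributes; concepts of this context (maximal topic sets sharing a maximal word set) form the nodes of the topic feature lattice.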
Funding: Supported by the National Natural Science Foundation of China (No. 62062039) and the Natural Science Foundation of Jiangxi Province (Nos. 20202BAB202024 and 20212BAB202017).
Abstract: Problems in biological sequence analysis have great theoretical and practical value in modern bioinformatics. Numerous algorithms exist for these problems, and the complex similarities and differences among algorithms for the same problem make it difficult for researchers to select the appropriate one. To address this situation, the paper combines the formal partition-and-recur method, component technology, domain engineering, and generic programming to present a method for developing a family of biological sequence analysis algorithms. It designs highly trustworthy, reusable domain algorithm components and assembles them to generate specific biological sequence analysis algorithms. An experiment developing a family of dynamic-programming-based LCS algorithms shows that the proposed method improves the reliability, understandability, and development efficiency of particular algorithms.
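The LCS family mentioned in the experiment is built around the standard dynamic-programming recurrence, which can be sketched as:

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of a and b via the
    standard O(len(a)*len(b)) dynamic program - the kind of recurrence
    a partition-and-recur derivation arrives at."""
    m, n = len(a), len(b)
    # dp[i][j] = LCS length of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])  # drop one symbol
    return dp[m][n]
```

Variants of the family (banded, space-reduced, traceback-producing) share this recurrence as their common core component, which is what makes component-based assembly attractive here.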
Abstract: The development of hand gesture recognition systems has gained attention in recent years owing to its support for modern human-computer interfaces; sign language recognition in particular enables communication for deaf and mute people. Conventional works deploy various image processing techniques such as segmentation, optimization, and classification for hand gesture recognition, but they handle large-dimensional datasets inefficiently and suffer from high time consumption, increased false positives, error rates, and misclassification. Hence, this work develops an efficient hand gesture image recognition system using advanced image processing techniques. During image segmentation, skin color detection and morphological operations accurately segment the hand gesture region. Then, the Heuristic Manta-ray Foraging Optimization (HMFO) technique optimally selects features by computing the best fitness value; the reduced feature dimensionality increases classification accuracy with a reduced error rate. Finally, an Adaptive Extreme Learning Machine (AELM) classifier predicts the recognition output. For validation, various evaluation measures are used to compare the proposed model's performance with other classification approaches.
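A hedged sketch of the segmentation stage: a common rule-based RGB skin heuristic followed by a morphological opening (erosion then dilation), implemented in plain NumPy. The thresholds and the 3×3 structuring element are illustrative assumptions, not the paper's exact detector:

```python
import numpy as np

def skin_mask(img):
    """Rule-based RGB skin detection (a common heuristic, not the paper's
    exact detector): R>95, G>40, B>20, R>G, R>B, R-G>15."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & (r - g > 15)

def _shifted_windows(mask):
    """Yield the nine 3x3-neighborhood views of a padded boolean mask."""
    p = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            yield p[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]

def erode(mask):
    """3x3 binary erosion: keep a pixel only if its whole neighborhood is set."""
    out = np.ones_like(mask, dtype=bool)
    for win in _shifted_windows(mask):
        out &= win
    return out

def dilate(mask):
    """3x3 binary dilation: set a pixel if any neighbor is set."""
    out = np.zeros_like(mask, dtype=bool)
    for win in _shifted_windows(mask):
        out |= win
    return out

def segment_hand(img):
    """Skin detection followed by a morphological opening to drop speckle."""
    return dilate(erode(skin_mask(img)))
```

The opening removes isolated skin-colored speckles while restoring the bulk of the hand region, which is the usual purpose of pairing skin detection with morphological operations.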
Abstract: Purpose – Conventional image super-resolution reconstruction with conventional deep learning architectures suffers from hard training and vanishing gradients. To solve these problems, the purpose of this paper is to propose a novel image super-resolution algorithm based on improved generative adversarial networks (GANs) with Wasserstein distance and gradient penalty. Design/methodology/approach – The proposed algorithm combines the conventional GAN architecture with the Wasserstein distance and the gradient penalty for the task of image super-resolution reconstruction (SRWGANs-GP). In addition, a novel perceptual loss function is designed for SRWGANs-GP: the content loss is extracted from the deep model's feature maps, and these features are used to calculate the mean square error (MSE) for the generator loss. Findings – To validate the effectiveness and feasibility of the proposed algorithm, extensive comparison experiments were conducted on three common datasets, i.e., Set5, Set14, and BSD100. Experimental results show that the proposed SRWGANs-GP architecture has a stable error gradient and converges iteratively. Compared with baseline deep models, the proposed GAN models significantly improve performance and efficiency for image super-resolution reconstruction, and the MSE calculated on the deep model's feature maps is particularly advantageous for reconstructing contour and texture. Originality/value – Compared with state-of-the-art algorithms, the proposed algorithm achieves better performance on image super-resolution and better reconstruction of contour and texture.
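The gradient-penalty term at the heart of SRWGANs-GP can be sketched as follows. For testability the critic's gradient at the interpolated points is supplied analytically rather than obtained via autodiff, and λ = 10 is the usual WGAN-GP default, not necessarily the paper's setting:

```python
import numpy as np

def gradient_penalty(critic_grad_fn, real, fake, lam=10.0, rng=None):
    """WGAN-GP penalty: sample points on straight lines between real and
    fake batches and push the critic's gradient norm there toward 1.

    `critic_grad_fn` returns dD/dx at a batch of points; in an autodiff
    framework this would come from backprop, here it is supplied directly.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    eps = rng.uniform(size=(real.shape[0], 1))   # per-sample mixing weight
    x_hat = eps * real + (1.0 - eps) * fake      # interpolated samples
    grads = critic_grad_fn(x_hat)                # shape (batch, dim)
    norms = np.linalg.norm(grads, axis=1)
    return lam * np.mean((norms - 1.0) ** 2)
```

Penalizing deviation of the gradient norm from 1 enforces the 1-Lipschitz constraint that the Wasserstein formulation requires, which is what stabilizes the error gradient during training.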