In the production of sucker rod wells, the dynamic liquid level is important for production efficiency and safety in the lifting process. It is influenced by multi-source data, which need to be combined for real-time calculation of the dynamic liquid level. In this paper, the multi-source data are regarded as different views, including the load of the sucker rod and liquid in the wellbore, the image of the dynamometer card, and production dynamics parameters. These views can be fused by a multi-branch neural network with a special fusion layer. With this method, the features of different views can be extracted while considering the differences in modality and physical meaning between them. Then, the extraction results, selected by multinomial sampling, serve as the input of the fusion layer. During the fusion process, the availability of each view determines whether it is fused in the fusion layer or not. In this way, not only can the correlation between the views be considered, but missing data can also be processed automatically. The results show that the load and production feature fusion (the method proposed in this paper) performs best with the lowest mean absolute error (MAE) of 39.63 m, followed by feature concatenation with an MAE of 42.47 m. Both performed better than any single view, and the lower MAE of the feature fusion indicates that its generalization ability is stronger. In contrast, the image feature as a single view contributes little to accuracy improvement after being fused with other views, yielding the highest MAE. When data are missing in some views, the multi-view feature fusion, unlike feature concatenation, does not render a large number of samples unavailable.
When the missing rate is 10%, 30%, 50% and 80%, the method proposed in this paper can reduce the MAE by 5.8, 7.0, 9.3 and 20.3 m, respectively. In general, the multi-view feature fusion method proposed in this paper can significantly improve accuracy and process missing data effectively, which helps provide technical support for real-time monitoring of the dynamic liquid level in oil fields.
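The availability-driven fusion described above can be illustrated with a minimal, purely hypothetical sketch (function name and feature values are illustrative, not the authors' implementation): only the views observed for a sample are averaged in the fusion step, so a missing view does not discard the sample, whereas concatenation would.

```python
# Hypothetical sketch of availability-masked multi-view fusion: each view
# yields a feature vector, and only the views present for a sample are
# averaged, so a missing view does not invalidate the whole sample.

def fuse_views(view_features, available):
    """Average the feature vectors of the available views.

    view_features: list of equal-length feature vectors (one per view).
    available: list of booleans marking which views were observed.
    """
    present = [f for f, ok in zip(view_features, available) if ok]
    if not present:
        raise ValueError("sample has no observed view")
    dim = len(present[0])
    return [sum(f[i] for f in present) / len(present) for i in range(dim)]

load_feat  = [0.25, 0.5]
image_feat = [0.9, 0.1]    # suppose the dynamometer-card image is missing
prod_feat  = [0.75, 1.5]

# With concatenation, the missing image view would invalidate this sample;
# with fusion, the remaining views still produce a usable representation.
fused = fuse_views([load_feat, image_feat, prod_feat],
                   available=[True, False, True])
print(fused)  # [0.5, 1.0]
```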
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image.
Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
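The parameter savings behind depthwise separable convolution can be checked with back-of-the-envelope arithmetic: a standard k x k convolution uses k*k*c_in*c_out weights, while the separable form uses k*k*c_in (depthwise) plus c_in*c_out (pointwise), and dilation enlarges the receptive field without adding any parameters. The layer sizes below are illustrative, not taken from the paper.

```python
# Parameter counts for one layer: standard vs. depthwise separable convolution.

def conv_params(k, c_in, c_out):
    # standard convolution: every output channel sees every input channel
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    # depthwise (one k x k filter per input channel) + 1x1 pointwise mixing
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128            # illustrative layer shape
std = conv_params(k, c_in, c_out)       # 147456
sep = dw_separable_params(k, c_in, c_out)  # 1152 + 16384 = 17536
print(f"reduction: {100 * (1 - sep / std):.1f}%")  # reduction: 88.1%
```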
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. Therefore, this paper proposes a model-free variable screening method for multi-class response variables, introducing the Jensen-Shannon divergence to measure the importance of covariates. The idea of the method is to calculate the Jensen-Shannon divergence between the conditional probability distribution of a covariate given the response variable and its unconditional probability distribution, and then use the probabilities of the response classes as weights to calculate a weighted Jensen-Shannon divergence, where a larger weighted Jensen-Shannon divergence means the covariate is more important. Additionally, we investigated an adapted version of the method, which measures the relationship between the covariates and the response variable using the weighted Jensen-Shannon divergence adjusted by a logarithmic factor of the number of categories, for the case where the number of categories varies across covariates. Through both theoretical analysis and simulation experiments, the proposed methods were shown to have the sure screening and ranking consistency properties. Finally, results from simulation and real-dataset experiments show that, in feature screening, the proposed methods are robust in performance and faster in computational speed compared with an existing method.
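The screening statistic described above can be sketched for one discrete covariate (function names and toy distributions are illustrative): compare the covariate's conditional distribution under each response class with its marginal distribution via Jensen-Shannon divergence, then weight by the class probabilities.

```python
import math

# Weighted Jensen-Shannon divergence as a screening score (toy sketch).

def kl(p, q):
    # Kullback-Leibler divergence, natural log; terms with p_i = 0 vanish
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js(p, q):
    # symmetric JS divergence via the mixture distribution m
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js(cond_dists, class_probs, marginal):
    # class-probability-weighted JS between conditional and marginal laws
    return sum(w * js(p, marginal) for w, p in zip(class_probs, cond_dists))

marginal    = [0.5, 0.5]                 # unconditional law of the covariate
informative = [[0.9, 0.1], [0.1, 0.9]]   # shifts with the response class
noise       = [[0.5, 0.5], [0.5, 0.5]]   # identical across classes
w = [0.5, 0.5]                           # response class probabilities

# An informative covariate scores strictly higher than pure noise.
print(weighted_js(informative, w, marginal) > weighted_js(noise, w, marginal))
```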
With the popularisation of intelligent power, power devices have different shapes, numbers and specifications. This means that the power data exhibit distributional variability, and the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. Aiming at the distributional variability of power data, a sliding window-based data adjustment method is developed for this model, which addresses the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the method proposed in this paper not only has an advantage in model accuracy, but also reduces the amount of parameter computation during feature matching and improves detection speed.
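The abstract does not detail the sliding-window adjustment, so the following is only an illustrative stand-in for that kind of step (function name and fill rule are assumptions): within each window, missing readings are filled with the mean of the observed neighbours before feature extraction.

```python
# Illustrative sliding-window gap filling for a 1-D power reading series.
# Missing readings (None) are replaced by the mean of observed values
# inside a window of +/- `width` positions around the gap.

def sliding_window_fill(series, width):
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            lo, hi = max(0, i - width), min(len(series), i + width + 1)
            window = [x for x in series[lo:hi] if x is not None]
            out[i] = sum(window) / len(window) if window else 0.0
    return out

filled = sliding_window_fill([1.0, None, 3.0, 4.0, None], width=1)
print(filled)  # [1.0, 2.0, 3.0, 4.0, 4.0]
```

The first gap becomes (1.0 + 3.0) / 2 = 2.0; the trailing gap only sees its single left neighbour, 4.0.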
This article proposes a VGG network with histogram of oriented gradient (HOG) feature fusion (HOG-VGG) for polarimetric synthetic aperture radar (PolSAR) image terrain classification. VGG-Net has a strong ability for deep feature extraction and can fully extract the global deep features of different terrains in PolSAR images, so it is widely used in PolSAR terrain classification. However, VGG-Net ignores local edge and shape features, resulting in an incomplete feature representation of the PolSAR terrains and, as a consequence, unpromising classification accuracy. In fact, edge and shape features play an important role in PolSAR terrain classification. To solve this problem, a new VGG network with HOG feature fusion is proposed for high-precision PolSAR terrain classification. HOG-VGG extracts both the global deep semantic features and the local edge and shape features of the PolSAR terrains, so the completeness of the terrain feature representation is greatly improved. Moreover, HOG-VGG optimally fuses the global deep features and the local edge and shape features to achieve the best classification results. The superiority of HOG-VGG is verified on the Flevoland, San Francisco and Oberpfaffenhofen datasets. Experiments show that the proposed HOG-VGG achieves much better PolSAR terrain classification performance, with overall accuracies of 97.54%, 94.63% and 96.07%, respectively.
A hierarchical particle filter (HPF) framework based on multi-feature fusion is proposed. The proposed HPF effectively uses different feature information to avoid the tracking failure caused by relying on a single feature in a complicated environment. In this approach, the Harris algorithm is introduced to detect the corner points of the object, and a corner matching algorithm based on singular value decomposition is used to compute the first-order weights and concentrate particles in the high-likelihood area. Then the local binary pattern (LBP) operator is used to build the observation model of the target based on color and texture features, by which the second-order weights of the particles and the accurate location of the target can be obtained. Moreover, a backstepping controller is proposed to complete the whole tracking system. Simulations and experiments are carried out, and the results show that the HPF algorithm with the backstepping controller achieves stable and accurate tracking with good robustness in complex environments.
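The LBP operator used for the texture part of the observation model can be written out in its basic 3x3 form: each neighbour is thresholded against the centre pixel and the resulting bits form an 8-bit texture code. The patch values below are illustrative.

```python
# Basic 3x3 local binary pattern (LBP) code of the centre pixel.

def lbp_code(patch):
    """patch: 3x3 list of lists of intensities; returns an 8-bit LBP code."""
    c = patch[1][1]
    # neighbours taken clockwise from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:          # threshold against the centre intensity
            code |= 1 << bit
    return code

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # 241
```

A histogram of these codes over a region gives the texture descriptor that, together with a color histogram, forms the multi-feature observation model.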
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In order to retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed, which makes use of multiple image features and combines the advantages of PCNN based on local entropy and on the variance of local entropy. Experimental results indicate that, compared with other fusion methods, the proposed method better preserves image details, is more robust, significantly improves the visual effect, and introduces less information distortion.
On-orbit servicing, such as spacecraft maintenance, on-orbit assembly, refueling, and de-orbiting, can reduce the cost of space missions, improve the performance of spacecraft, and extend their life span. The relative state between the servicing and target spacecraft is vital for on-orbit servicing missions, especially in the final approaching stage. The major challenge of this stage is that the observed features of the target are incomplete or constantly changing due to the short distance and the limited field of view (FOV) of the camera. Unlike a cooperative spacecraft, a non-cooperative target does not carry artificial feature markers. Therefore, contour features, including the triangular supports of the solar array, the docking ring, and the corner points of the spacecraft body, are used as the measuring features. To overcome the FOV limitation and imaging ambiguity of the camera, a "selfie stick" structure and a self-calibration strategy were implemented, ensuring that part of the contour features could be observed precisely as the two spacecraft approached each other. The observed features changed constantly as the relative distance shortened, making it difficult to build a unified measurement model for the different types of features, including points, line segments, and circles. Therefore, dual quaternions were used to model the relative dynamics and the measuring features. Considering the state uncertainty of the target, a fuzzy adaptive strong tracking filter (FASTF), combining a fuzzy logic adaptive controller (FLAC) with a strong tracking filter (STF), was designed to robustly estimate the relative states between the servicing spacecraft and the target. Finally, the effectiveness of the strategy was verified by mathematical simulation.
The achievement of this research provides a theoretical and technical foundation for future on-orbit servicing missions.
This paper presents an effective image classification algorithm based on superpixels and feature fusion. Differing from classical image classification algorithms that extract feature descriptors directly from the original image, the proposed method first segments the input image into superpixels, and then several different types of features are calculated from these superpixels. To increase classification accuracy, the dimensions of these features are reduced using the principal component analysis (PCA) algorithm, followed by a weighted serial feature fusion strategy. After constructing a coding dictionary using the nonnegative matrix factorization (NMF) algorithm, the input image is recognized by a support vector machine (SVM) model. The effectiveness of the proposed method was tested on the public Scene-15, Caltech-101, and Caltech-256 datasets, and the experimental results demonstrate that the proposed method can effectively improve image classification accuracy.
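"Weighted serial feature fusion" can be sketched in a few lines (the weights and feature values below are illustrative, not the paper's): after dimension reduction, each feature type's vector is scaled by a per-type weight and the scaled vectors are concatenated serially into one descriptor.

```python
# Weighted serial feature fusion: scale each reduced feature vector by a
# per-feature-type weight, then concatenate them into a single descriptor.

def weighted_serial_fusion(feature_sets, weights):
    fused = []
    for feats, w in zip(feature_sets, weights):
        fused.extend(w * f for f in feats)
    return fused

color_feat   = [0.25, 0.5]        # e.g. PCA-reduced color features
texture_feat = [1.0, 3.0, 5.0]    # e.g. PCA-reduced texture features

fused = weighted_serial_fusion([color_feat, texture_feat], [1.0, 0.5])
print(fused)  # [0.25, 0.5, 0.5, 1.5, 2.5]
```

The fused descriptor keeps the dimensionality of the concatenation while letting the weights trade off the influence of each feature type.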
Acquired immunodeficiency syndrome (AIDS) is a fatal disease which severely threatens human health. Human immunodeficiency virus (HIV) is the pathogen of this disease. Investigating HIV-1 protease cleavage sites can help researchers find or develop protease inhibitors that restrain the replication of HIV-1, thus resisting AIDS. Feature selection is a new approach to the HIV-1 protease cleavage site prediction task and is a key point in our research. Compared with previous work, our work has several advantages. First, a filter method is used to eliminate redundant features. Second, besides the traditional orthogonal encoding (OE), two kinds of newly proposed features, extracted by applying principal component analysis (PCA) and non-linear Fisher transformation (NLF) to the AAindex database, are used. The two new features are shown to perform better than OE. Third, the dataset used here is largely expanded to 1922 samples. To further improve prediction performance, we conduct parameter optimization for the SVM, so the classifier can obtain better prediction capability. We also fuse the three kinds of features to ensure a comprehensive feature representation and improve prediction performance. To effectively evaluate the prediction performance of our method, five metrics, many more than in previous work, are used for a complete comparison. The experimental results show that our method gains better performance than the state-of-the-art method. This means that feature selection combined with feature fusion and classifier parameter optimization can effectively improve HIV-1 cleavage site prediction.
Moreover, our work can provide useful help for HIV-1 protease inhibitor development in the future.
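Orthogonal encoding (OE), the baseline feature mentioned above, is simply one-hot encoding over the 20 standard amino acids, applied position by position across the peptide window and concatenated; the example peptide below is illustrative.

```python
# Orthogonal encoding (OE): each residue of the peptide window becomes a
# one-hot vector over the 20 standard amino acids; vectors are concatenated.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def orthogonal_encode(peptide):
    vec = []
    for aa in peptide:
        one_hot = [0] * len(AMINO_ACIDS)
        one_hot[AMINO_ACIDS.index(aa)] = 1
        vec.extend(one_hot)
    return vec

encoded = orthogonal_encode("SQNY")   # 4 residues -> 80-dimensional vector
print(len(encoded), sum(encoded))     # 80 4
```

The PCA- and NLF-derived features replace these sparse 20-dimensional blocks with dense, lower-dimensional physicochemical descriptors from AAindex.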
We propose and compare two multi-channel fusion schemes to utilize the information extracted from simultaneously recorded multiple newborn electroencephalogram (EEG) channels for seizure detection. The first approach is known as multi-channel feature fusion. It involves concatenating the EEG feature vectors independently obtained from the different EEG channels to form a single feature vector. The second approach, called multi-channel decision/classifier fusion, is achieved by combining the independent decisions of the different EEG channels to form an overall decision as to the existence of a newborn EEG seizure. The first approach suffers from the problem of large dimensionality. To overcome this problem, three different dimensionality reduction techniques, based on the sum, Fisher's linear discriminant, and symmetrical uncertainty (SU), were considered. It was found that feature fusion based on the SU technique outperformed the other two techniques. It was also shown that feature fusion, which was developed on the basis that there is inter-dependence between the recorded EEG channels, was superior to independent decision fusion.
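Symmetrical uncertainty, the best-performing reduction criterion above, is a normalized mutual information: SU(X, Y) = 2 I(X; Y) / (H(X) + H(Y)), ranging from 0 (independent) to 1 (deterministically related). A sketch from a toy joint distribution (values illustrative):

```python
import math

# Symmetrical uncertainty SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)),
# computed from a discrete joint distribution given as a nested list
# (rows: values of X, columns: values of Y); base-2 logarithms.

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

def symmetrical_uncertainty(joint):
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pxy in enumerate(row):
            if pxy > 0:
                mi += pxy * math.log2(pxy / (px[i] * py[j]))
    return 2 * mi / (entropy(px) + entropy(py))

independent = [[0.25, 0.25], [0.25, 0.25]]   # X and Y independent -> SU = 0
identical   = [[0.5, 0.0], [0.0, 0.5]]       # Y determined by X  -> SU = 1
print(symmetrical_uncertainty(independent), symmetrical_uncertainty(identical))
```

Features whose SU with the seizure label is low can be dropped before concatenation, attacking the dimensionality problem directly.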
Image fusion has developed into an important area of research. In remote sensing, the use of the same image sensor in different working modes, or of different image sensors, can provide reinforcing or complementary information. Therefore, it is highly valuable to fuse the outputs of multiple sensors (or of the same sensor in different working modes) to improve the overall performance of the remote images, which is very useful for human visual perception and image processing tasks. Accordingly, in this paper, we first provide a comprehensive survey of the state of the art of multi-sensor image fusion methods in terms of three aspects: pixel-level fusion, feature-level fusion and decision-level fusion. An overview of existing fusion strategies is then introduced, after which the existing fusion quality measures are summarized. Finally, this review analyzes the development trends in fusion algorithms that may attract researchers to further explore research in this field.
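The simplest instance of the pixel-level category surveyed above is weighted-average fusion of co-registered images (the images and weight below are toy values): each output pixel combines the corresponding source pixels, while feature- and decision-level fusion operate later in the pipeline, on descriptors and on per-sensor classifications respectively.

```python
# Pixel-level fusion by weighted averaging of two co-registered images,
# represented here as plain nested lists of intensities.

def weighted_average_fusion(img_a, img_b, alpha=0.5):
    return [[alpha * a + (1 - alpha) * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

a = [[0, 100], [200, 50]]
b = [[100, 100], [0, 150]]
print(weighted_average_fusion(a, b))  # [[50.0, 100.0], [100.0, 100.0]]
```

Real pixel-level methods (multiresolution transforms, PCA-based schemes) replace the fixed weight with spatially adaptive rules, but share this per-pixel structure.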
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still struggle with this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Deep Learning models require large amounts of training data to achieve good results, so the researchers utilised data augmentation techniques to increase the dataset size for training the models. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
To extract richer feature information of ship targets from sea clutter and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets from their high-resolution range profiles (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, helping to extract edge information from sea clutter and further improving target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes the class separability in the reduced dimensionality through reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
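The maximum margin criterion underlying MSFKSPP scores a representation by between-class scatter minus within-class scatter, J = tr(S_b - S_w); larger values mean well-separated, tight classes. A toy one-dimensional version (function name and data are illustrative) shows the behaviour:

```python
# Maximum margin criterion in 1-D: between-class scatter minus
# within-class scatter, J = S_b - S_w, for a list of per-class samples.

def mmc_score_1d(classes):
    all_x = [x for c in classes for x in c]
    mean_all = sum(all_x) / len(all_x)
    n = len(all_x)
    s_b = s_w = 0.0
    for c in classes:
        mean_c = sum(c) / len(c)
        prior = len(c) / n
        s_b += prior * (mean_c - mean_all) ** 2                    # between
        s_w += prior * sum((x - mean_c) ** 2 for x in c) / len(c)  # within
    return s_b - s_w

separated  = [[0.0, 1.0], [10.0, 11.0]]   # far apart, tight classes
overlapped = [[0.0, 10.0], [1.0, 11.0]]   # interleaved, spread classes
print(mmc_score_1d(separated), mmc_score_1d(overlapped))
```

MSFKSPP maximizes this kind of criterion in a reproducing kernel Hilbert space while preserving the multi-scale sparse structure, rather than in the raw 1-D form shown here.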
In response to the construction needs of "Real 3D China", the system structure, functional framework, application directions and product form of a block-level augmented reality three-dimensional map are designed. These provide references and ideas for the later large-scale production of augmented reality three-dimensional maps. The augmented reality three-dimensional map is produced based on Skyline software. The basic functions of the three-dimensional map, including map browsing, measurement and analysis, are realized. Special functional modules, including housing management and pipeline management, are developed to meet the needs of residential-quarter development, which expands the application fields of the augmented reality three-dimensional map. This lays the groundwork for the application of augmented reality three-dimensional maps.
Funding: supported by the National Natural Science Foundation of China under Grants 52325402, 52274057, 52074340 and 51874335; the National Key R&D Program of China under Grant 2023YFB4104200; the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSN; and the 111 Project under Grant B08028.
Funding: sponsored by the Fundamental Research Funds for the Central Universities of China (Grant No. PA2023IISL0098); the Hefei Municipal Natural Science Foundation (Grant No. 202201); the National Natural Science Foundation of China (Grant No. 62071164); and the Open Fund of the Information Materials and Intelligent Sensing Laboratory of Anhui Province (Anhui University) (Grant Nos. IMIS202214 and IMIS202102).
Funding: Supported by the National Natural Science Foundation of China (61304097), the Projects of the Major International (Regional) Joint Research Program of the NSFC (61120106010), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61321002).
Abstract: A hierarchical particle filter (HPF) framework based on multi-feature fusion is proposed. The proposed HPF effectively uses different feature information to avoid the tracking failures caused by relying on a single feature in complicated environments. In this approach, the Harris algorithm is introduced to detect the corner points of the object, and a corner matching algorithm based on singular value decomposition is used to compute the first-order weights and concentrate the particles in the high-likelihood area. The local binary pattern (LBP) operator is then used to build the observation model of the target from colour and texture features, from which the second-order weights of the particles and the accurate location of the target are obtained. Moreover, a backstepping controller is proposed to complete the whole tracking system. Simulations and experiments show that the HPF algorithm with the backstepping controller achieves stable and accurate tracking with good robustness in complex environments.
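The LBP operator mentioned in the abstract is standard and easy to illustrate: each pixel is encoded as an 8-bit pattern of threshold comparisons against its 3x3 neighbours. The following is a generic sketch of that operator (my own code, not the paper's observation model, which additionally combines LBP texture with colour features):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern on a 3x3 window.
    Each interior pixel gets a byte whose bits mark which
    neighbours are >= the centre value."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels (borders dropped)
    # neighbours in clockwise order starting at top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        neigh = img[1 + dr:img.shape[0] - 1 + dr,
                    1 + dc:img.shape[1] - 1 + dc]
        code |= (neigh >= c).astype(np.uint8) << bit
    return code
```

A histogram of these codes over a candidate region is a common way to build the texture part of a particle-filter likelihood.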
Abstract: Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis and treatment planning. To retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed. It makes use of multiple image features and combines the advantages of PCNN driven by local entropy and by the variance of local entropy. Experimental results indicate that, compared with other fusion methods, the proposed method better preserves image detail, is more robust, and significantly improves the visual quality of the fused image with less information distortion.
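The local entropy feature driving the PCNN can be sketched simply: for each pixel, take the Shannon entropy of the grey-level distribution in a small window. This is only the entropy part of the paper's feature set, written as a naive illustration (a real implementation would quantise grey levels and vectorise the loops):

```python
import numpy as np

def local_entropy(img, win=3):
    """Shannon entropy of grey values in a sliding window,
    computed per pixel (borders use a truncated window)."""
    img = np.asarray(img)
    h, w = img.shape
    half = win // 2
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = img[max(0, i - half):i + half + 1,
                        max(0, j - half):j + half + 1]
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -np.sum(p * np.log2(p))
    return out
```

A uniform region has zero local entropy, while textured regions score high, which is why the measure is a useful activity indicator for fusion.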
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 61973153).
Abstract: On-orbit servicing, such as spacecraft maintenance, on-orbit assembly, refueling and de-orbiting, can reduce the cost of space missions, improve the performance of spacecraft and extend their lifespan. The relative state between the servicing spacecraft and the target is vital for on-orbit servicing missions, especially in the final approach stage. The major challenge of this stage is that the observed features of the target are incomplete or constantly changing because of the short distance and the limited field of view (FOV) of the camera. Unlike a cooperative spacecraft, a non-cooperative target carries no artificial feature markers. Therefore, contour features, including the triangular supports of the solar array, the docking ring and the corner points of the spacecraft body, are used as measuring features. To overcome the FOV limitation and the imaging ambiguity of the camera, a "selfie stick" structure and a self-calibration strategy were implemented, ensuring that part of the contour features could be observed precisely as the two spacecraft approached each other. Because the observed features changed constantly as the relative distance shortened, it was difficult to build a unified measurement model for the different feature types (points, line segments and circles); dual quaternions were therefore used to model both the relative dynamics and the measuring features. Taking the state uncertainty of the target into account, a fuzzy adaptive strong tracking filter (FASTF), combining a fuzzy logic adaptive controller (FLAC) with a strong tracking filter (STF), was designed to robustly estimate the relative states between the servicing spacecraft and the target. Finally, the effectiveness of the strategy was verified by mathematical simulation. This research provides a theoretical and technical foundation for future on-orbit servicing missions.
Funding: Supported by the National Natural Science Foundation of China (No. 61074078) and the Fundamental Research Funds for the Central Universities (No. 12MS121).
Funding: Supported by the National Key Research and Development Program of China under Grant No. 2018AAA0103203.
Abstract: This paper presents an effective image classification algorithm based on superpixels and feature fusion. Unlike classical image classification algorithms that extract feature descriptors directly from the original image, the proposed method first segments the input image into superpixels and then calculates several different types of features from these superpixels. To increase classification accuracy, the dimensions of these features are reduced using principal component analysis (PCA), followed by a weighted serial feature fusion strategy. After constructing a coding dictionary with the nonnegative matrix factorization (NMF) algorithm, the input image is recognized by a support vector machine (SVM) model. The method was tested on the public Scene-15, Caltech-101 and Caltech-256 datasets, and the experimental results demonstrate that it effectively improves image classification accuracy.
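The "PCA reduction followed by weighted serial fusion" step can be sketched compactly. The code below is one plausible reading of that phrase (per-view PCA, then weighted concatenation); the weights and function names are illustrative, not taken from the paper:

```python
import numpy as np

def pca_reduce(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def weighted_serial_fusion(views, weights):
    """Serial (concatenation) fusion with a scalar weight per view."""
    return np.hstack([w * v for v, w in zip(views, weights)])
```

For example, reducing a 6-dimensional view to 3 dimensions and a 4-dimensional view to 2, then concatenating, yields a 5-dimensional fused descriptor per sample.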
Abstract: Acquired immunodeficiency syndrome (AIDS) is a fatal disease that seriously threatens human health; human immunodeficiency virus (HIV) is its pathogen. Investigating HIV-1 protease cleavage sites can help researchers find or develop protease inhibitors that restrain the replication of HIV-1 and thus resist AIDS. Feature selection is a new approach to the HIV-1 protease cleavage site prediction task and is the key point of our research. Compared with previous work, our work has several advantages. First, a filter method is used to eliminate redundant features. Second, besides traditional orthogonal encoding (OE), two newly proposed kinds of features, extracted by applying principal component analysis (PCA) and the non-linear Fisher transformation (NLF) to the AAindex database, are used; both are shown to outperform OE. Third, the data set is greatly expanded, to 1922 samples. To further improve prediction performance, we optimize the SVM parameters so that the classifier gains better prediction capability, and we fuse the three kinds of features to ensure a comprehensive feature representation. To evaluate our method thoroughly, five performance measures, far more than in previous work, are used for a complete comparison. The experimental results show that our method outperforms the state-of-the-art method. This indicates that feature selection combined with feature fusion and classifier parameter optimization can effectively improve HIV-1 cleavage site prediction. Moreover, our work can provide useful help for developing HIV-1 protease inhibitors in the future.
Abstract: We propose and compare two multi-channel fusion schemes that utilize the information extracted from simultaneously recorded multiple newborn electroencephalogram (EEG) channels for seizure detection. The first approach, multi-channel feature fusion, concatenates the EEG feature vectors obtained independently from the different EEG channels into a single feature vector. The second approach, multi-channel decision/classifier fusion, combines the independent decisions of the different EEG channels into an overall decision on the existence of a newborn EEG seizure. The first approach suffers from high dimensionality. To overcome this problem, three dimensionality reduction techniques, based on the sum, Fisher's linear discriminant and symmetrical uncertainty (SU), were considered. Feature fusion based on the SU technique was found to outperform the other two techniques. It was also shown that feature fusion, which exploits the inter-dependence between the recorded EEG channels, was superior to independent decision fusion.
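The contrast between the two schemes is easy to state in code. The sketch below is my own illustration of the two fusion points, not the paper's detector: feature fusion merges the per-channel vectors *before* classification, while decision fusion classifies each channel separately and merges the binary outputs (majority voting is just one simple combiner):

```python
import numpy as np

def feature_fusion(channel_feats):
    """Early fusion: concatenate per-channel feature vectors
    into one long vector for a single classifier."""
    return np.concatenate(channel_feats)

def decision_fusion(channel_decisions):
    """Late fusion: majority vote over binary per-channel
    decisions (ties resolved to the negative class)."""
    votes = np.asarray(channel_decisions)
    return int(votes.sum() * 2 > len(votes))
```

The dimensionality problem the abstract mentions is visible immediately: with C channels of d features each, the early-fused vector has C*d dimensions, whereas decision fusion keeps each classifier at d.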
Abstract: Image fusion has developed into an important area of research. In remote sensing, the use of the same image sensor in different working modes, or of different image sensors, can provide reinforcing or complementary information. It is therefore highly valuable to fuse the outputs of multiple sensors (or of the same sensor in different working modes) to improve the overall quality of the remote images, which is very useful for human visual perception and image processing tasks. Accordingly, this paper first provides a comprehensive survey of the state of the art of multi-sensor image fusion methods in terms of three aspects: pixel-level fusion, feature-level fusion and decision-level fusion. An overview of existing fusion strategies is then introduced, after which the existing fusion quality measures are summarized. Finally, the review analyzes development trends in fusion algorithms that may attract researchers to further explore this field.
Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under REFRESH, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape and intensity in images. Although accurate detection and segmentation of brain tumours would be highly beneficial, current methods, despite the numerous available approaches, have yet to solve this problem. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting and classifying brain tumours in medical diagnostics: MRI is a vital component of medical diagnosis, and it requires precise, efficient, careful and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Because DL models require large amounts of training data to achieve good results, the researchers used data augmentation techniques to increase the dataset size. VGG16, ResNet50 and convolutional deep belief networks extracted deep features from the MRI images. Softmax was used as the classifier, and the training set was supplemented with synthetically created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate the proposed fusion model, which significantly increased classification accuracy. An openly accessible online dataset was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, which the proposed model significantly outperformed.
Funding: Supported by the National Natural Science Foundation of China (62271255, 61871218), the Fundamental Research Funds for the Central Universities (3082019NC2019002), the Aeronautical Science Foundation (ASFC-201920007002), and the Program of Remote Sensing Intelligent Monitoring and Emergency Services for Regional Security Elements.
Abstract: To extract richer feature information of ship targets from sea clutter, and to address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets from their high-resolution range profiles (HRRP). Multi-scale fusion is introduced to capture the local, detailed information in small-scale features and the global, contour information in large-scale features, which helps extract the edge information from sea clutter and further improves target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparse structure of the data and maximizes class separability in the reduced dimensionality through a reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
Abstract: In response to the construction needs of "Real 3D China", the system structure, functional framework, application directions and product form of a block-level augmented reality three-dimensional map are designed, providing references and ideas for the later large-scale production of such maps. The augmented reality three-dimensional map is produced with the Skyline software. The basic functions of the three-dimensional map, including map browsing, measurement and analysis, are realized. Special functional modules, including housing management and pipeline management, are developed to meet the needs of residential quarter development, expanding the application fields of the augmented reality three-dimensional map. This lays the groundwork for its practical application.