A new medical image fusion technique based on three-dimensional (3D) reconstruction is presented. After reconstruction, the 3D volume data are normalized by a common 3D coordinate conversion and intercepted by setting up a cutting plane through the anatomical structure of interest; as a result, two images in full spatial and geometric registration are obtained, and these images are then fused. Compared with the traditional two-dimensional fusion technique, the 3D technique not only resolves the mismatch between the two kinds of images, but also avoids registration error when the two images have different scan and imaging parameters. The research shows that this fusion technique is more exact and requires no separate registration step, so it is better suited to fusing arbitrary medical images from different equipment.
In order to obtain a sandstone three-dimensional (3D) reconstruction result that is more similar to the original sample, an algorithm based on the stationarity of a two-dimensional (2D) training image is proposed. Second-order statistics based on texture features are analyzed to evaluate the scale stationarity of the training image, and the multiple-point statistics of the training image are used to estimate multiple-point statistics stationarity via the multi-point density function. The results show that the reconstructed 3D structures are closer to reality when the training image has better scale stationarity and multiple-point statistics stationarity, as indicated by local percolation probability and two-point probability. Moreover, training images with higher multiple-point statistics stationarity but lower scale stationarity still tend to yield results closer to the real 3D structure, and vice versa. Thus, stationarity analysis of the training image has far-reaching significance in choosing a better 2D thin-section image for the 3D reconstruction of porous media. In particular, high-order statistics perform better than low-order statistics.
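The two-point probability used above as a stationarity indicator can be sketched as a minimal estimator; the binary convention (1 marks the pore phase) and the non-periodic boundary handling below are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def two_point_probability(img, r, axis=0):
    """Estimate S2(r): the probability that two pixels separated by
    lag r along `axis` both belong to the pore phase (value 1)."""
    a = np.asarray(img, dtype=bool)
    shifted = np.roll(a, -r, axis=axis)
    # Discard the wrapped-around slice so the estimate is not
    # biased by artificial periodicity.
    n = a.shape[axis] - r
    sl = [slice(None)] * a.ndim
    sl[axis] = slice(0, n)
    sl = tuple(sl)
    return np.mean(a[sl] & shifted[sl])
```

At lag 0 the estimator reduces to the porosity (pore-phase fraction), which is a quick sanity check on any implementation.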
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, fusion layer, decoder module, and edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. The region attention module is meant to extract the background feature map based on the distinct properties of the background feature map and the detail feature map. A multi-scale convolution attention module is suggested to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets from TNO, FLIR, and NIR to perform thorough quantitative and qualitative trials with five additional algorithms. The methods are assessed based on four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were done on the M3FD dataset to further verify the algorithm's performance in comparison with five other algorithms. The algorithm's accuracy was evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5) index. Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change. Therefore, improving the accuracy of mangrove information identification is crucial for their ecological protection. Aiming at the limited morphological information of synthetic aperture radar (SAR) images, which are heavily affected by noise, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangrove forests and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation area in the image. In order to accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed on the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly improved. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captured mangroves' complex structures and textural features in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
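The abstract does not give the exact form of the improved cross-entropy loss; a common class-weighted variant that up-weights the rare mangrove (positive) class can be sketched as below, where the weights w_pos and w_neg are illustrative assumptions.

```python
import numpy as np

def weighted_bce(y_true, y_pred, w_pos=10.0, w_neg=1.0, eps=1e-7):
    """Class-weighted binary cross-entropy: missed positives (the
    minority mangrove class) are penalized w_pos/w_neg times harder
    than missed negatives. Weights here are assumed, not the paper's."""
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    loss = -(w_pos * y_true * np.log(y_pred)
             + w_neg * (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()
```

With w_pos greater than w_neg, a confident false negative on a mangrove pixel costs far more than an equally confident false positive, which is the usual remedy for minority-class segmentation.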
Due to the selective absorption of light and the large amount of floating media in sea water, underwater images often suffer from color casts and detail blurs, so color correction and detail restoration are necessary. However, existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream and structure stream by contrast-limited histogram equalization, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this enhances feature representation in underwater images. In the meantime, a composite loss function with three terms ensures the quality of the enhanced image in the three aspects of color balance, structure preservation and image smoothness, so that the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground-truth images.
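Two of the preprocessing streams, gamma correction and white balance, are standard operations and can be sketched as below; the gray-world formulation of the white balance and the gamma value are illustrative assumptions, not necessarily the paper's exact settings.

```python
import numpy as np

def gamma_correct(img, gamma=0.7):
    """Gamma correction: gamma < 1 brightens dark underwater regions.
    Expects float intensities in [0, 1]."""
    return np.clip(img, 0.0, 1.0) ** gamma

def gray_world_white_balance(img):
    """Gray-world white balance: rescale each channel so its mean
    matches the global mean, countering the blue-green color cast.
    Expects an H x W x 3 float image in [0, 1]."""
    means = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.clip(img * (means.mean() / means), 0.0, 1.0)
```

After the gray-world step all three channel means coincide, which is exactly the property that removes a uniform color cast before feature extraction.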
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing has therefore emerged as a pivotal image preprocessing step that improves the quality of remote sensing imagery and, in turn, the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze in remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed: an end-to-end convolutional neural network distinguished by its multi-scale dense feature fusion clusters and gated skip connections. The essence of the methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The gated structures propagate these features to the decoder, resulting in superior haze removal. Extensive experiments substantiate the efficacy of URA-Net, demonstrating its superior performance compared to existing methods on established remote sensing defogging datasets. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, Unsupervised Single Image Dehazing (USID) by 8.0 dB, and Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Notably, on the SateHaze1k dataset, URA-Net attains the best overall performance, yielding defogged images of consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
A toroidal soft x-ray imaging (T-SXRI) system has been developed to investigate three-dimensional (3D) plasma physics on J-TEXT. This T-SXRI system consists of three sets of SXR arrays. Two sets are newly developed and located on the vacuum chamber wall at toroidal positions φ of 126.4° and 272.6°, respectively, while one set was established previously at φ = 65.5°. Each set of SXR arrays consists of three arrays viewing the plasma poloidally, and hence can be used separately to obtain SXR images via the tomographic method. The sawtooth precursor oscillations are measured by T-SXRI, and the corresponding images of perturbative SXR signals are successfully reconstructed at these three toroidal positions, hence providing measurement of the 3D structure of the precursor oscillations. The observed 3D structure is consistent with the helical structure of the m/n = 1/1 mode. The experimental observation confirms that the T-SXRI system is able to observe 3D structures in the J-TEXT plasma.
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On the brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net has the best performance in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that HMAC-Net generalizes well.
Multimodal medical image fusion has attained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality, retaining significant information and aiding practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that utilizes one image to determine the kernel and the other for filtering, and vice versa, by considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method has lower computational complexity and execution time while improving diagnostic computing accuracy; this lower complexity makes the fusion method efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of detailed information, edge contour, and overall contrast.
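The cross-bilateral filter at the core of the method can be sketched as a direct, unoptimized implementation: spatial and range weights are computed from a guide image but averaged over the other image. The Gaussian kernels, window radius, and grayscale float input below are assumptions for illustration.

```python
import numpy as np

def cross_bilateral_filter(guide, target, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Cross-bilateral filter: weights come from `guide`, the weighted
    average is taken over `target`. Both are H x W float arrays in [0, 1].
    Direct O(N * k^2) sketch, not an optimized version."""
    H, W = guide.shape
    pad_g = np.pad(guide, radius, mode='edge')
    pad_t = np.pad(target, radius, mode='edge')
    out = np.zeros_like(target)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))  # fixed kernel
    for i in range(H):
        for j in range(W):
            g_win = pad_g[i:i + 2*radius + 1, j:j + 2*radius + 1]
            t_win = pad_t[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range weights from the guide: nearby AND similar pixels count.
            rng = np.exp(-(g_win - guide[i, j])**2 / (2 * sigma_r**2))
            w = spatial * rng
            out[i, j] = (w * t_win).sum() / w.sum()
    return out
```

Subtracting this output from the original image, as the method describes, leaves the detail layer: edges preserved by the range term survive in the filtered image, so the residual holds only fine detail.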
The employment of deep convolutional neural networks has recently contributed to significant progress in single image super-resolution (SISR) research. However, the high computational demands of most SR techniques hinder their applicability to edge devices, despite their satisfactory reconstruction performance. These methods commonly use standard convolutions, which increase the convolutional operation cost of the model. In this paper, a lightweight Partial Separation and Multiscale Fusion Network (PSMFNet) is proposed to alleviate this problem. Specifically, this paper introduces partial convolution (PConv), which reduces redundant convolution operations throughout the model by convolving only some of the features of an image while retaining the features useful for image reconstruction. Additionally, existing methods have not fully utilized the rich feature information, leading to information loss and a reduced ability to learn feature representations. Inspired by self-attention, this paper develops a multiscale feature fusion block (MFFB) that better utilizes the non-local features of an image. The MFFB can learn long-range dependencies from the spatial dimension and extract features from the channel dimension, thereby obtaining more comprehensive and rich feature information. As the role of the MFFB is to capture rich global features, this paper further introduces an efficient inverted residual block (EIRB) to supplement the local feature extraction ability of PSMFNet. A comprehensive analysis of the experimental results shows that PSMFNet maintains better performance with fewer parameters than the state-of-the-art models.
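The parameter saving behind PConv is easy to quantify: only a fraction of the channels is convolved while the rest pass through untouched. The split ratio below is an illustrative assumption, not PSMFNet's actual setting.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def pconv_params(c, k, ratio=4):
    """Partial convolution: only c // ratio channels go through the
    k x k convolution; the remaining channels are passed through as-is
    and cost no parameters. `ratio` here is an assumed split."""
    cp = c // ratio
    return conv_params(cp, cp, k)
```

For 64 channels and a 3x3 kernel with a 1/4 split, the convolved slice costs 1/16 of the standard convolution's parameters, which is where the "lightweight" claim comes from.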
Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. Accurate detection and segmentation of brain tumours would be highly beneficial, but current methods have yet to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, efficient, careful, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers utilised data augmentation to increase the dataset size. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, which the proposed model outperformed significantly.
Recently, there have been several uses for digital image processing, and image fusion has become a prominent application in the imaging domain. To create one final image that is more informative and helpful than the original inputs, image fusion merges two or more initial images of the same item. Image fusion aims to produce, enhance, and transform significant elements of the source images into combined images for the sake of human visual perception. It is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly line robots, with image quality varying depending on the application. This paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average and Max-Min rules, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree complex wavelet transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, enhancing information very efficiently. More precisely, in imaging techniques the depth-of-field constraint prevents images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed; it can make use of existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Images are segmented with the K-Means algorithm to identify specific regions of interest, Particle Swarm Optimization (PSO) is used for trait selection, and XGBoost for classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and good objective indicators.
The growth optimizer (GO) is an innovative and robust metaheuristic optimization algorithm designed to simulate the learning and reflective processes experienced by individuals as they mature within the social environment. However, the original GO algorithm is constrained by two significant limitations, slow convergence and high memory requirements, which restrict its application to large-scale and complex problems. To address these problems, this paper proposes an enhanced growth optimizer (eGO). In contrast to conventional population-based optimization algorithms, the eGO algorithm utilizes a probabilistic model, designated as the virtual population, which accurately replicates the behavior of an actual population while reducing memory consumption. Furthermore, this paper introduces the Lévy flight mechanism, which enhances the diversity and flexibility of the search process, further improving the algorithm's global search capability and convergence speed. To verify the effectiveness of the eGO algorithm, a series of experiments were conducted on the CEC2014 and CEC2017 test sets. The results demonstrate that the eGO algorithm outperforms the original GO algorithm and other compact algorithms in memory usage and convergence speed, exhibiting powerful optimization capabilities. Finally, the eGO algorithm was applied to image fusion; a comparative analysis with the existing PSO and GO algorithms and other compact algorithms shows that eGO delivers superior fusion performance.
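Lévy flight steps are commonly generated with Mantegna's algorithm, which produces heavy-tailed step lengths: mostly small local moves with occasional long jumps that help escape local optima. The sketch below assumes the common default β = 1.5, which may differ from the paper's setting.

```python
import math
import random

def levy_step(beta=1.5):
    """One Lévy-distributed step length via Mantegna's algorithm.
    beta in (1, 2]; smaller beta gives heavier tails (longer jumps)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)  # heavy-tailed when v is near zero
```

In a metaheuristic update, a candidate position is typically perturbed by `step_scale * levy_step()` per dimension, mixing many short moves with rare long exploratory jumps.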
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the value of integrating multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of Swin Transformer and CNN to generate complementary global and local features. We then designed the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates the dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significant channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and a superior level of competitiveness compared to other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, resulting in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the filters' field of view, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image.
Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
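The saving from replacing a standard convolution with a depthwise separable one can be quantified directly; note that dilation changes the receptive field but not the parameter count. The channel sizes below are illustrative, not taken from the paper.

```python
def standard_conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel;
    dilation does not add parameters) followed by a 1x1 pointwise
    projection to c_out channels."""
    return c_in * k * k + c_in * c_out
```

For example, with 32 input channels, 64 output channels and a 3x3 kernel, the separable form uses 2336 parameters against 18432 for the standard convolution, roughly an 8x reduction, which is the mechanism behind MobileNet-style efficiency.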
Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodality medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. Firstly, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients, and an improved PAPCNN algorithm is proposed for the fusion of the high-frequency sub-band coefficients. The improved PAPCNN model sets its parameters automatically, with the optimal configuration found for the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect, enhances the ability to characterize important information in images, and further improves the protection of detailed information; it achieves at least four first places in six objective indexes.
A method and procedure are presented to reconstruct three-dimensional (3D) positions of scattering centers from multiple synthetic aperture radar (SAR) images. Firstly, two-dimensional (2D) attributed scattering centers of targets are extracted from 2D SAR images. Secondly, a similarity measure is developed based on the 2D attributed scattering centers' location and type and on the radargrammetry principle between multiple SAR images. With this similarity, 2D scattering centers can be associated to obtain candidate 3D scattering centers. Thirdly, these candidate scattering centers are clustered in 3D space to reconstruct the final 3D positions. Compared with existing methods, the proposed method can describe distributed scattering centers, reduces false and missing 3D scattering centers, and places fewer restrictions on the modeling data. Finally, experimental results demonstrate the effectiveness of the proposed method.
The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods cannot reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning was proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network was trained to extract clear fibers from sequence images, and image processing algorithms were used to obtain the radius, central axis, and depth information of the fibers from the extraction results. Based on this information, 3D models were built in 3D space. Furthermore, self-developed algorithms optimized the central axis and depth of the fibers, making them more realistic and continuous. The method can reconstruct 3D models of nonwovens conveniently and at lower cost.
At present, many chaos-based image encryption algorithms have been proven unsafe, and few encryption schemes permute the plain image as a three-dimensional (3D) bit matrix, so bits cannot move to arbitrary positions and their movement range is limited. Motivated by this, we present a novel image encryption algorithm based on 3D Brownian motion and chaotic systems. The architecture of confusion and diffusion is adopted. First, the plain image is converted into a 3D bit matrix and split into sub-blocks. Second, block confusion based on 3D Brownian motion (BCB3DBM) is proposed to permute the positions of the bits within the sub-blocks, with the direction of particle movement generated by the logistic-tent system (LTS). Furthermore, block confusion based on position sequence group (BCBPSG) is introduced: a fourth-order memristive chaotic system is used to generate random chaotic sequences, the sequences are sorted, a position sequence group is chosen based on the plain image, and the sub-blocks are then confused. The proposed confusion strategy can change the positions of the bits and modify their weights, effectively improving the statistical performance of the algorithm. Finally, a pixel-level confusion is employed to enhance the encryption effect. The initial values and parameters of the chaotic systems are produced by the SHA-256 hash of the plain image. Simulation results and security analyses illustrate that our algorithm has excellent encryption performance in terms of security and speed.
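The logistic-tent system mentioned above is often written as a piecewise combination of the logistic and tent maps taken modulo 1. The sketch below uses one common form of that map to derive movement directions for the six axis directions of 3D Brownian motion; the paper's exact map form and parameter values are assumptions here.

```python
def lts(x, r):
    """One iteration of the logistic-tent system (one common form; the
    paper's exact parameters are not specified in the abstract)."""
    if x < 0.5:
        return (r * x * (1.0 - x) + (4.0 - r) * x / 2.0) % 1.0
    return (r * x * (1.0 - x) + (4.0 - r) * (1.0 - x) / 2.0) % 1.0

def chaotic_directions(x0, r, n):
    """Map the chaotic orbit to n movement directions in {0,...,5},
    i.e. the six axis directions of a 3D Brownian-style walk."""
    seq, x = [], x0
    for _ in range(n):
        x = lts(x, r)
        seq.append(int(x * 6) % 6)
    return seq
```

Because the orbit is fully determined by the initial value and parameter (here derived from a hash of the plain image in the actual scheme), the same key always reproduces the same permutation.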
Abstract: A new medical image fusion technique is presented. The method is based on three-dimensional reconstruction. After reconstruction, the three-dimensional volume data are normalized by the same three-dimensional coordinate conversion and intercepted by setting up a cutting plane containing the anatomical structure; as a result, two images in full spatial and geometric registration are obtained, and the images are finally fused. Compared with the traditional two-dimensional fusion technique, the three-dimensional fusion technique can not only resolve the problems existing in the two kinds of images but also avoid the registration error that arises when the two kinds of images have different scan and imaging parameters. The research proves that this fusion technique is more exact and requires no registration, so it is better suited to fusing medical images from arbitrary, different equipment.
Funding: The National Natural Science Foundation of China (No. 60972130)
Abstract: In order to obtain a sandstone three-dimensional (3D) reconstruction that is more similar to the original sample, an algorithm based on the stationarity of a two-dimensional (2D) training image is proposed. Second-order statistics based on texture features are analyzed to evaluate the scale stationarity of the training image, and the multiple-point statistics of the training image are used to estimate its multiple-point statistics stationarity via the multi-point density function. The results show that the reconstructed 3D structures are closer to reality, as indicated by the local percolation probability and the two-point probability, when the training image has better scale stationarity and multiple-point statistics stationarity. Moreover, training images with higher multiple-point statistics stationarity but lower scale stationarity still tend to yield results closer to the real 3D structure, and vice versa. Thus, stationarity analysis of the training image is of far-reaching significance in choosing a better 2D thin-section image for the 3D reconstruction of porous media. In particular, high-order statistics perform better than low-order statistics.
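The two-point probability used as an indicator above can be computed directly on a binary image: it is the probability that two pixels separated by a given lag both belong to the pore phase. The sketch below assumes pore = 1 and, for brevity, measures only horizontal lags.

```python
def two_point_probability(img, r):
    """S2(r): probability that two pixels a horizontal lag r apart are
    both pore (value 1) in a binary 2D image (list of row lists)."""
    hits = total = 0
    for row in img:
        for j in range(len(row) - r):
            total += 1
            if row[j] == 1 and row[j + r] == 1:
                hits += 1
    return hits / total if total else 0.0
```

At lag 0 this reduces to the porosity (volume fraction of pores), which is a handy sanity check when comparing a training image against a reconstruction.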
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features, and a modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Abstract: To address the issues of incomplete information, blurred details, loss of details, and insufficient contrast in infrared and visible image fusion, an image fusion algorithm based on a convolutional autoencoder is proposed. A region attention module extracts the background feature map based on the distinct properties of the background and detail feature maps. A multi-scale convolution attention module is suggested to enhance the communication of feature information. At the same time, a feature transformation module is introduced to learn more robust feature representations, aiming to preserve the integrity of image information. This study uses three available datasets from TNO, FLIR, and NIR to perform thorough quantitative and qualitative trials against five additional algorithms. The methods are assessed by four indicators: information entropy (EN), standard deviation (SD), spatial frequency (SF), and average gradient (AG). Object detection experiments were conducted on the M3FD dataset to further verify the algorithm's performance against five other algorithms, with accuracy evaluated using the mean average precision at a threshold of 0.5 (mAP@0.5). Comprehensive experimental findings show that CAEFusion performs well in subjective visual and objective evaluation criteria and has promising potential in downstream object detection tasks.
Funding: The Key R&D Project of Hainan Province under contract No. ZDYF2023SHFZ097; the National Natural Science Foundation of China under contract No. 42376180.
Abstract: Mangroves are indispensable to coastlines, maintaining biodiversity, and mitigating climate change, so improving the accuracy of mangrove information identification is crucial for their ecological protection. To address the limited morphological information in synthetic aperture radar (SAR) images, which suffer heavy noise interference, and the susceptibility of optical images to weather and lighting conditions, this paper proposes a pixel-level weighted fusion method for SAR and optical images. Image fusion enhances the target features and makes mangrove monitoring more comprehensive and accurate. To address the high similarity between mangroves and other forests, this paper builds on the U-Net convolutional neural network and adds an attention mechanism in the feature extraction stage so that the model pays more attention to the mangrove vegetation areas in the image. To accelerate convergence and normalize the input, a batch normalization (BN) layer and a Dropout layer are added after each convolutional layer. Since mangroves are a minority class in the image, an improved cross-entropy loss function is introduced to improve the model's ability to recognize mangroves. The AttU-Net model for mangrove recognition in high-similarity environments is thus constructed from the fused images. Comparison experiments show that the overall accuracy of the improved U-Net model trained on the fused images is significantly higher in the predicted regions. Based on the fused images, the recognition results of the proposed AttU-Net model are compared with its benchmark model, U-Net, and with the Dense-Net, Res-Net, and Seg-Net methods. The AttU-Net model captures mangroves' complex structures and textural features in images more effectively. The average OA, F1-score, and Kappa coefficient in the four tested regions were 94.406%, 90.006%, and 84.045%, significantly higher than those of the other methods. This method can provide technical support for the monitoring and protection of mangrove ecosystems.
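One standard way to handle a minority class such as mangrove pixels is to up-weight it in the cross-entropy loss. The sketch below shows a class-weighted binary cross-entropy in that spirit; the weight values are illustrative assumptions, not the paper's improved loss.

```python
import math

def weighted_cross_entropy(p_pred, y_true, w_pos=5.0, w_neg=1.0):
    """Class-weighted binary cross-entropy over predicted probabilities
    p_pred and labels y_true (1 = minority/mangrove class).
    w_pos > w_neg makes missed positives cost more."""
    eps = 1e-12
    total = 0.0
    for p, y in zip(p_pred, y_true):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(w_pos * y * math.log(p) + w_neg * (1 - y) * math.log(1.0 - p))
    return total / len(y_true)
```

With these weights, a confidently missed mangrove pixel is penalized several times more heavily than an equally confident false alarm, nudging the model toward recalling the minority class.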
Funding: Supported by the National Key Research and Development Program (No. 2020YFB1806608) and the Jiangsu Natural Science Foundation for Distinguished Young Scholars (No. BK20220054).
Abstract: Due to the selective absorption of light and the large amount of suspended matter in sea water, underwater images often suffer from color casts and blurred details, making color correction and detail restoration necessary. However, existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream, and structure stream by contrast-limited histogram equalization, gamma correction, and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this enhances feature representation in underwater images. Meanwhile, a composite loss function with three terms ensures the quality of the enhanced image in terms of color balance, structure preservation, and image smoothness, so the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method outperforms them in terms of MSE, PSNR, SSIM, UIQM, and UCIQE, and the enhanced images are more similar to their ground-truth images.
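Two of the preprocessing streams above can be sketched directly: gamma correction for the color stream and a white-balance step. The gamma value and the gray-world balancing rule below are illustrative assumptions; the abstract does not specify which white-balance variant is used.

```python
def gamma_correct(pixels, gamma=0.7):
    """Gamma correction on normalized [0, 1] intensities (gamma < 1 brightens)."""
    return [p ** gamma for p in pixels]

def gray_world_white_balance(r, g, b):
    """Gray-world white balance: scale each channel so its mean matches the
    global mean across channels (toy 1D channels, values in [0, 1])."""
    means = [sum(c) / len(c) for c in (r, g, b)]
    gray = sum(means) / 3.0
    return [[min(1.0, p * gray / m) for p in c] for c, m in zip((r, g, b), means)]
```

After balancing, the three channel means coincide, which removes the global blue-green cast typical of underwater scenes before the network sees the image.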
Funding: This project is supported by the National Natural Science Foundation of China (NSFC) (No. 61902158).
Abstract: The degradation of optical remote sensing images by atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, improving the quality of remote sensing imagery and thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze in remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed: an end-to-end convolutional neural network distinguished by its multi-scale dense feature fusion clusters and gated skip connections. The essence of the methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The gated structures propagate these features to the decoder, resulting in superior haze removal. Extensive experiments substantiate the efficacy of URA-Net, demonstrating its superior performance compared to existing methods on established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, the Unsupervised Single Image Dehazing (USID) by 8.0 dB, and the Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Notably, on the SateHaze1k dataset, URA-Net attains the best overall performance, yielding defogged images of consistent visual quality. This underscores the research's contribution to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Funding: Supported by the National Magnetic Confinement Fusion Energy R&D Program of China (Nos. 2018YFE0309100 and 2019YFE03010004) and the National Natural Science Foundation of China (No. 51821005).
Abstract: A toroidal soft x-ray imaging (T-SXRI) system has been developed to investigate three-dimensional (3D) plasma physics on J-TEXT. The T-SXRI system consists of three sets of SXR arrays. Two sets are newly developed and located on the vacuum chamber wall at toroidal positions φ of 126.4° and 272.6°, respectively, while one set was established previously at φ = 65.5°. Each set consists of three arrays viewing the plasma poloidally and can thus be used separately to obtain SXR images via the tomographic method. Sawtooth precursor oscillations are measured by T-SXRI, and the corresponding images of perturbative SXR signals are successfully reconstructed at these three toroidal positions, providing a measurement of the 3D structure of the precursor oscillations. The observed 3D structure is consistent with the helical structure of the m/n = 1/1 mode. This experimental observation confirms that the T-SXRI system is able to observe 3D structures in the J-TEXT plasma.
Funding: Major Program of the National Natural Science Foundation of China (NSFC12292980, NSFC12292984); National Key R&D Program of China (2023YFA1009000, 2023YFA1009004, 2020YFA0712203, 2020YFA0712201); Major Program of the National Natural Science Foundation of China (NSFC12031016); Beijing Natural Science Foundation (BNSFZ210003); Department of Science, Technology and Information of the Ministry of Education (8091B042240).
Abstract: Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: a global feature extraction layer, a local feature extraction layer, and a multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that the proposed HMAC-Net generalizes well.
Abstract: Multimodal medical image fusion has attained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality, retaining significant information and aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that uses one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique enabling the selection of relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of the low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images and fused with the original input images to produce the final fusion result by estimating the strength of horizontal and vertical details. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared to competing techniques in both qualitative and quantitative evaluation. The proposed method also has lower computational complexity and execution time while improving diagnostic computing accuracy; because of this lower complexity, the fusion method remains efficient in practical applications. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of detailed information, edge contours, and overall contrast.
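The cross-bilateral filter at the heart of the method weights neighbors by spatial distance but takes the range (intensity-similarity) weights from the *other* image. The sketch below is an illustrative 1D reduction of that idea; the sigma values and window radius are assumptions, and real use is 2D.

```python
import math

def cross_bilateral_1d(signal, guide, sigma_s=1.0, sigma_r=0.1, radius=2):
    """1D cross-bilateral filter: spatial weights from pixel distance,
    range weights from the guide signal (not the filtered signal)."""
    out = []
    n = len(signal)
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - radius), min(n, i + radius + 1)):
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) *
                 math.exp(-((guide[i] - guide[j]) ** 2) / (2 * sigma_r ** 2)))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

Subtracting the filtered output from the original signal yields the detail layer the paper operates on; edges present in the guide suppress averaging across them, which is why edges survive the smoothing.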
Funding: Guangdong Science and Technology Program under Grant No. 202206010052; Foshan Province R&D Key Project under Grant No. 2020001006827; Guangdong Academy of Sciences Integrated Industry Technology Innovation Center Action Special Project under Grant No. 2022GDASZH-2022010108.
Abstract: The employment of deep convolutional neural networks has recently contributed to significant progress in single image super-resolution (SISR) research. However, the high computational demands of most SR techniques hinder their applicability to edge devices, despite their satisfactory reconstruction performance. These methods commonly use standard convolutions, which increase the model's convolutional operation cost. In this paper, a lightweight Partial Separation and Multiscale Fusion Network (PSMFNet) is proposed to alleviate this problem. Specifically, this paper introduces partial convolution (PConv), which reduces redundant convolution operations throughout the model by separating some of the image features while retaining those useful for image reconstruction. Additionally, existing methods have not fully utilized the rich feature information, leading to information loss and a reduced ability to learn feature representations. Inspired by self-attention, this paper develops a multiscale feature fusion block (MFFB) that better utilizes the non-local features of an image. The MFFB can learn long-range dependencies along the spatial dimension and extract features along the channel dimension, obtaining more comprehensive and richer feature information. As the role of the MFFB is to capture rich global features, this paper further introduces an efficient inverted residual block (EIRB) to supplement the local feature extraction ability of PSMFNet. A comprehensive analysis of the experimental results shows that PSMFNet maintains better performance with fewer parameters than state-of-the-art models.
Funding: Ministry of Education, Youth and Sports of the Czech Republic, Grant/Award Numbers: SP2023/039, SP2023/042; the European Union under the REFRESH project, Grant/Award Number: CZ.10.03.01/00/22_003/0000048.
Abstract: Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still need to solve this problem despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics. MRI is a vital component of medical diagnosis and requires precise, careful, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. DL models require large amounts of training data to achieve good results, so the researchers used data augmentation techniques to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from the MRI images. Softmax was used as the classifier, and the training set was supplemented with intentionally created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined in the proposed model to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset from the internet was used to test the model's performance, and the experimental results showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, which the proposed model significantly outperformed.
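Feature-level fusion of two backbones followed by a softmax classifier, as described above, can be reduced to two small operations: concatenating the deep feature vectors and normalizing classifier logits into probabilities. The sketch below is a minimal stand-in, not the trained VGG16/ResNet50 pipeline.

```python
import math

def fuse_features(feat_a, feat_b):
    """Feature-level fusion by concatenation of two backbones' deep
    feature vectors (the simplest common fusion rule; assumed here)."""
    return list(feat_a) + list(feat_b)

def softmax(logits):
    """Numerically stable softmax over classifier logits."""
    m = max(logits)                          # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]
```

In the full model, the fused vector would feed a dense layer producing one logit per tumour class; softmax then yields the class probabilities used for the reported accuracy.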
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R346), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Recently, there have been several uses for digital image processing, and image fusion has become a prominent application in the imaging domain. To create one final image that is more informative and helpful than the original inputs, image fusion merges two or more initial images of the same item. It aims to produce, enhance, and transform significant elements of the source images into combined images for human visual perception. Image fusion is commonly employed for feature extraction in smart robots, clinical imaging, audiovisual camera integration, manufacturing process monitoring, electronic circuit design, advanced device diagnostics, and intelligent assembly line robots, with image quality varying by application. This paper presents various methods for merging images in the spatial and frequency domains, including a blend of stable and curvelet transformations, average Max-Min, weighted principal component analysis (PCA), HIS (Hue, Intensity, Saturation), wavelet transform, discrete cosine transform (DCT), dual-tree Complex Wavelet Transform (CWT), and multiple wavelet transform. Image fusion methods integrate data from several source images of an identical target, thereby enhancing information in an extremely efficient manner. More precisely, in imaging techniques, the depth-of-field constraint prevents images from focusing on every object, leading to the exclusion of certain characteristics. To tackle these challenges, a very efficient multi-focus wavelet decomposition and recomposition method is proposed. These wavelet decomposition and recomposition techniques enable the method to reuse existing optimized wavelet code and filter choices. The simulated outcomes provide evidence that the suggested approach first extracts particular characteristics from images in order to accurately reflect the level of clarity portrayed in the original images. This study also enhances the performance of the eXtreme Gradient Boosting (XGBoost) algorithm in detecting brain malignancies with greater precision through the integration of computational image analysis and feature selection. Image performance is improved by segmenting the images with the K-Means algorithm; the segmentation aids in identifying specific regions of interest, with Particle Swarm Optimization (PSO) used for trait selection and XGBoost for data classification. Extensive trials confirm the model's exceptional visual performance, achieving an accuracy of up to 97.067% and good objective indicators.
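The multi-focus wavelet decomposition-and-recomposition idea can be sketched with a single level of the Haar transform: average the approximation coefficients, keep the larger-magnitude detail coefficient (the in-focus image contributes stronger details), then invert. The paper's actual filters and decomposition depth are not specified here; this is a 1D, one-level illustration.

```python
def haar_fuse(a, b):
    """One-level Haar multi-focus fusion for 1D signals of even length:
    average approximations, max-abs rule on details, inverse transform."""
    def haar(x):
        approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        return approx, detail

    def inv_haar(approx, detail):
        out = []
        for s, d in zip(approx, detail):
            out += [s + d, s - d]   # exact inverse of the forward step
        return out

    aa, da = haar(a)
    ab, db = haar(b)
    approx = [(x + y) / 2 for x, y in zip(aa, ab)]                    # average bases
    detail = [x if abs(x) >= abs(y) else y for x, y in zip(da, db)]   # max-abs details
    return inv_haar(approx, detail)
```

When one input carries a sharp local feature and the other is flat, the fused output preserves the sharp feature, which is exactly the behaviour multi-focus fusion needs.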
Abstract: The growth optimizer (GO) is an innovative and robust metaheuristic optimization algorithm designed to simulate the learning and reflective processes experienced by individuals as they mature within the social environment. However, the original GO algorithm is constrained by two significant limitations, slow convergence and high memory requirements, which restrict its application to large-scale and complex problems. To address these problems, this paper proposes an enhanced growth optimizer (eGO). In contrast to conventional population-based optimization algorithms, the eGO algorithm utilizes a probabilistic model, designated the virtual population, which can accurately replicate the behavior of actual populations while reducing memory consumption. Furthermore, this paper introduces the Lévy flight mechanism, which enhances the diversity and flexibility of the search process, further improving the algorithm's global search capability and convergence speed. To verify the effectiveness of the eGO algorithm, a series of experiments was conducted on the CEC2014 and CEC2017 test sets. The results demonstrate that the eGO algorithm outperforms the original GO algorithm and other compact algorithms in memory usage and convergence speed, exhibiting powerful optimization capabilities. Finally, the eGO algorithm was applied to image fusion; a comparative analysis with the existing PSO and GO algorithms and other compact algorithms shows that the eGO algorithm delivers superior performance in image fusion.
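The Lévy flight mechanism cited above is commonly realized with Mantegna's algorithm, which draws heavy-tailed step lengths from two Gaussians. The sketch below uses the standard β = 1.5 as an assumed value; the eGO paper's exact parameterization may differ.

```python
import math
import random

def levy_step(rng, beta=1.5):
    """One Lévy flight step via Mantegna's algorithm: step = u / |v|^(1/beta),
    with u ~ N(0, sigma^2) and v ~ N(0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

Most steps are small (local exploitation) while occasional very large steps let the search escape local optima, which is the diversity benefit the abstract attributes to the mechanism.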
Funding: Supported by the National Key R&D Program of China (2018AAA0102100); the National Natural Science Foundation of China (No. 62376287); the International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province (2021CB1013); the Key Research and Development Program of Hunan Province (2022SK2054); the Natural Science Foundation of Hunan Province (Nos. 2022JJ30762 and 2023JJ70016); and the 111 Project under Grant No. B18059.
Abstract: Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, methods that combine CNN and Transformer often disregard the value of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) effectively aggregates the dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, the proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance decoding detail. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and a superior level of competitiveness compared to other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, lowering computational costs; however, this stride setting can lose spatial information, particularly affecting the detection and representation of smaller objects or finer details. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on MobileNetV1 is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, resulting in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the filters' field of view, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
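The parameter savings from the depthwise separable factorization that MobileNetV1 (and the DDSC layer above) builds on can be checked arithmetically: a standard k x k convolution costs k*k*c_in*c_out parameters, while depthwise-plus-pointwise costs k*k*c_in + c_in*c_out.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel) followed by a
    pointwise 1 x 1 conv, as in the MobileNetV1 factorization."""
    return k * k * c_in + c_in * c_out
```

For a typical 3 x 3 layer with 64 input and 128 output channels, the ratio is roughly 1/k^2 + 1/c_out, i.e. close to a 9x reduction, which is the source of the parameter savings the abstract reports.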
Funding: Funded by the National Natural Science Foundation of China, grant number 61302188.
Abstract: Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide only limited valid information. To address the limited ability of traditional medical image fusion methods to preserve image details and salient information, a new multimodal medical image fusion method (NSST-PAPCNN-LatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image with the non-subsampled shearlet transform (NSST). Then, the latent low-rank representation (LatLRR) algorithm is used to process the low-frequency sub-band coefficients, and an improved parameter-adaptive pulse-coupled neural network (PAPCNN) algorithm is proposed for fusing the high-frequency sub-band coefficients. The improved PAPCNN model sets its parameters automatically, with an optimal configuration of the time decay factor αe. Experimental results show that, compared with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect, better characterizes important information in images, and better preserves detailed information; it ranks first in at least four of six objective indexes.
Abstract: A method and procedure are presented to reconstruct the three-dimensional (3D) positions of scattering centers from multiple synthetic aperture radar (SAR) images. First, two-dimensional (2D) attributed scattering centers of targets are extracted from the 2D SAR images. Second, a similarity measure between multiple SAR images is developed based on the 2D attributed scattering centers' locations and types and on the radargrammetry principle. Using this similarity, 2D scattering centers are associated to obtain candidate 3D scattering centers. Third, these candidate scattering centers are clustered in 3D space to reconstruct the final 3D positions. Compared with existing methods, the proposed method can describe distributed scattering centers, reduces false and missing 3D scattering centers, and imposes fewer restrictions on the modeling data. Finally, experimental results demonstrate the effectiveness of the proposed method.
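The final clustering step, merging nearby candidate 3D points into reconstructed centers, can be sketched with a simple greedy grouping. This is an assumption-laden illustration: the function name, the distance threshold, and the single-pass greedy strategy are all hypothetical stand-ins for whatever clustering algorithm the paper actually uses.

```python
import numpy as np

# Minimal sketch of the clustering stage: candidate 3D scattering
# centers within a distance threshold of a seed point are merged into
# one reconstructed center (greedy single-pass grouping; the paper's
# actual clustering algorithm may differ).

def cluster_centers(points, radius=1.0):
    """points: (N, 3) candidate positions -> list of cluster mean positions."""
    points = np.asarray(points, dtype=float)
    unassigned = list(range(len(points)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        members = [seed] + [i for i in unassigned
                            if np.linalg.norm(points[i] - points[seed]) <= radius]
        unassigned = [i for i in unassigned if i not in members]
        clusters.append(points[members].mean(axis=0))
    return clusters
```

Averaging the members of each cluster is one way false candidates with few supporting associations get suppressed while repeated detections of a true center reinforce each other.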
Funding: National Natural Science Foundation of China (Grant No. 61771123).
Abstract: The three-dimensional (3D) model is of great significance for analyzing the performance of nonwovens. However, existing modelling methods cannot reconstruct the 3D structure of nonwovens at low cost. A new method based on deep learning is proposed to reconstruct 3D models of nonwovens from multi-focus images. A convolutional neural network is trained to extract clear fibers from the image sequence. Image processing algorithms then obtain the radius, central axis, and depth information of the fibers from the extraction results, and 3D models are built in 3D space from this information. Furthermore, self-developed algorithms optimize the central axis and depth of the fibers, making them more realistic and continuous. The method can conveniently reconstruct 3D models of nonwovens at lower cost.
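One way to see how depth falls out of a multi-focus stack is a classic depth-from-focus sketch: for each pixel, pick the slice in which that pixel is sharpest. The paper trains a CNN to extract clear fibers, so the hand-crafted Laplacian focus measure below is only an illustrative stand-in, and both function names are hypothetical.

```python
import numpy as np

# Depth-from-focus sketch for a multi-focus image stack. A local
# Laplacian magnitude serves as the focus measure; the paper instead
# uses a trained CNN for the fiber-extraction step.

def laplacian(img):
    """4-neighbour discrete Laplacian with edge padding."""
    p = np.pad(img.astype(float), 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4.0 * p[1:-1, 1:-1])

def depth_from_focus(stack):
    """stack: (Z, H, W) image sequence -> (H, W) index of the sharpest
    slice per pixel, i.e. a crude per-pixel depth map."""
    focus = np.stack([np.abs(laplacian(s)) for s in stack])
    return focus.argmax(axis=0)
```

The slice index where a fiber appears sharpest gives the depth coordinate that, together with the extracted radius and central axis, places the fiber in 3D space.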
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 41571417 and 61305042), the National Science Foundation of the United States (Grant Nos. CNS-1253424 and ECCS-1202225), the Science and Technology Foundation of Henan Province, China (Grant No. 152102210048), the Foundation and Frontier Project of Henan Province, China (Grant No. 162300410196), the China Postdoctoral Science Foundation (Grant No. 2016M602235), the Natural Science Foundation of the Educational Committee of Henan Province, China (Grant No. 14A413015), and the Research Foundation of Henan University, China (Grant No. xxjc20140006).
Abstract: At present, many chaos-based image encryption algorithms have proved to be unsafe. Moreover, few encryption schemes permute the plain image as a three-dimensional (3D) bit matrix, so bits cannot move to arbitrary positions and their movement range is limited. Motivated by this, a novel image encryption algorithm based on 3D Brownian motion and chaotic systems is presented in this paper, adopting the confusion-and-diffusion architecture. First, the plain image is converted into a 3D bit matrix and split into sub-blocks. Second, block confusion based on 3D Brownian motion (BCB3DBM) is proposed to permute the positions of the bits within the sub-blocks, with the direction of particle movement generated by the logistic-tent system (LTS). Furthermore, block confusion based on a position sequence group (BCBPSG) is introduced: a fourth-order memristive chaotic system is used to generate chaotic sequences, the sequences are sorted, a position sequence group is chosen based on the plain image, and the sub-blocks are then confused. The proposed confusion strategy can change both the positions of the bits and their weights, effectively improving the statistical performance of the algorithm. Finally, a pixel-level confusion is employed to enhance the encryption effect. The initial values and parameters of the chaotic systems are produced by the SHA-256 hash of the plain image. Simulation results and security analyses illustrate that the algorithm has excellent encryption performance in terms of both security and speed.
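The key-scheduling idea, deriving chaotic-map seeds from a hash of the plain image and turning a chaotic sequence into a permutation by sorting, can be sketched as follows. The LTS step below uses one common published form of the logistic-tent map; the paper's exact parameterization, and both function names, are assumptions for illustration only.

```python
import hashlib

# Sketch of the key-scheduling idea: seed a logistic-tent system (LTS)
# from the SHA-256 hash of the plain image, then sort the chaotic
# sequence to obtain a permutation order for confusion.

def lts(x, r):
    """One step of a common logistic-tent system form, r in (0, 4)."""
    if x < 0.5:
        return (r * x * (1 - x) + (4 - r) * x / 2) % 1
    return (r * x * (1 - x) + (4 - r) * (1 - x) / 2) % 1

def chaotic_permutation(plain_bytes, n, r=3.99):
    """Derive a plaintext-dependent permutation of range(n)."""
    digest = hashlib.sha256(plain_bytes).digest()
    x = int.from_bytes(digest[:8], "big") / 2**64  # seed in [0, 1)
    seq = []
    for _ in range(n):
        x = lts(x, r)
        seq.append(x)
    # sorting the chaotic sequence yields the permutation indices
    return sorted(range(n), key=seq.__getitem__)
```

Because the seed comes from the plain image's hash, a one-bit change in the plaintext yields an entirely different permutation, which is what gives such schemes their plaintext sensitivity.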