Abstract: Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either individual image. Methods Image fusion technology was applied to biological cell image processing; it can match the images and improve their confidence and spatial resolution. Using two algorithms, a double-threshold algorithm and a wavelet-transform-based denoising algorithm, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion Both the position of the fluorescence and the structure of the cell can be displayed in the composite image, and the signal-to-noise ratio of the resultant image is greatly improved. The algorithms are useful not only for investigating fluorescence and transmission images but also for observing two or more fluorescent label probes in a single cell.
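The abstract names two processing steps, a double-threshold algorithm and wavelet-based denoising, without giving details. As a rough, hedged illustration of the wavelet-domain part only, the Python sketch below denoises and fuses two co-registered grayscale images with PyWavelets; the wavelet, decomposition level, threshold value, and the simple averaging rule are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: wavelet-domain denoising and fusion of a fluorescence image and a
# transmission image. This is NOT the authors' double-threshold algorithm; the wavelet,
# decomposition level and soft-threshold value are illustrative assumptions.
import numpy as np
import pywt

def denoise_and_fuse(fluo, trans, wavelet="db2", level=3, thresh=10.0):
    """Fuse two equally sized grayscale images (float arrays) in the wavelet domain."""
    coeffs = []
    for ca, cb in zip(pywt.wavedec2(fluo, wavelet, level=level),
                      pywt.wavedec2(trans, wavelet, level=level)):
        if isinstance(ca, np.ndarray):                      # approximation subband
            coeffs.append((ca + cb) / 2.0)
        else:                                               # (horizontal, vertical, diagonal) details
            coeffs.append(tuple(
                (pywt.threshold(a, thresh, mode="soft") +
                 pywt.threshold(b, thresh, mode="soft")) / 2.0
                for a, b in zip(ca, cb)))
    return pywt.waverec2(coeffs, wavelet)
```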
Funding: Supported by the National Natural Science Foundation of China (41171336) and the Project of Jiangsu Province Agricultural Science and Technology Innovation Fund (CX12-3054).
Abstract: Because of the cloudy and rainy weather in south China, optical remote sensing images often cannot be obtained easily. Using regional trial results from Baoying, Jiangsu Province, this paper explored a model for fusing ENVISAT/SAR and HJ-1A multispectral remote sensing images and evaluated its effect. Based on the ARSIS strategy, the ENVISAT SAR and HJ-1A CCD images were decomposed with the wavelet transform, the low- and high-frequency coefficients were reconstructed using the Interaction between the Band Structure Model (IBSM), and the fused images were obtained through the inverse wavelet transform. Because the low- and high-frequency subbands have different characteristics in different areas, different self-adaptive fusion rules were adopted to enhance the integration process, and the results were compared with the PCA transformation, IHS transformation, and other traditional methods by subjective and corresponding quantitative evaluation. Furthermore, band values and NDVI values were extracted from the fused image at GPS sample locations to analyze and explain the fusion effect. The results showed that the spectral distortion of the wavelet-fused, IHS-transformed, and PCA-transformed images was 0.101 6, 0.326 1, and 1.277 2, respectively, and the entropy was 14.701 5, 11.899 3, and 13.229 3, respectively; the wavelet fusion thus had the lowest spectral distortion and the highest entropy. The wavelet method maintained good spectral fidelity and visual quality while improving spatial resolution, and its information interpretation effect was much better than that of the other two methods.
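The quantitative comparison rests on spectral distortion and entropy, but the abstract does not spell out how spectral distortion is computed. The hedged sketch below assumes a simple definition (mean absolute difference between corresponding fused and original bands) alongside the standard Shannon entropy of an image histogram; both the metric definition and the bin count are assumptions.

```python
# Hedged sketch of two fusion-quality measures mentioned in the abstract.
# The spectral-distortion definition here (mean absolute band difference) is an assumption;
# the study does not spell out its exact formula.
import numpy as np

def shannon_entropy(img, bins=256):
    """Shannon entropy (bits) of an image histogram; higher usually means more information."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spectral_distortion(fused_bands, original_bands):
    """Mean absolute difference between fused and original multispectral bands (assumed metric)."""
    diffs = [np.mean(np.abs(f.astype(float) - o.astype(float)))
             for f, o in zip(fused_bands, original_bands)]
    return float(np.mean(diffs))
```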
Funding: Supported by the National Natural Science Foundation of China (60872065).
Abstract: A novel fusion method for multispectral and panchromatic images based on the nonsubsampled contourlet transform (NSCT) and non-negative matrix factorization (NMF) is presented, the aim of which is to preserve both spectral and spatial information simultaneously in the fused image. NMF is a matrix factorization method that can extract local features by choosing a suitable dimension for the feature subspace. First, the multispectral image is represented in the intensity-hue-saturation (IHS) system. Then the I component and the panchromatic image are decomposed by NSCT. Next, NMF is used to learn the features of the low-frequency subbands of both the multispectral and panchromatic images, while the remaining coefficients are selected by the absolute-maximum criterion. Finally, the new coefficients are reconstructed to obtain the fused image. Experiments were carried out and the results were compared with those of several other methods, showing that the new method performs better in improving spatial resolution and preserving feature information than the other existing methods.
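The NSCT is not available in the common Python scientific stack, so the hedged sketch below illustrates only the NMF step on a pair of low-frequency subbands: both subbands are stacked as rows of a non-negative matrix, factorized with a single component, and the shared basis is taken as the fused low-frequency subband. The function name, the rank-1 choice, and the rescaling by the mean mixing weight are assumptions made for illustration, not the paper's exact procedure.

```python
# Hedged sketch of the NMF step on two low-frequency subbands (assumed non-negative).
# The actual method applies this inside an NSCT decomposition, which is not sketched here.
import numpy as np
from sklearn.decomposition import NMF

def fuse_lowpass_nmf(low_intensity, low_pan):
    """Fuse two low-frequency subbands of equal shape via rank-1 non-negative factorization."""
    shape = low_intensity.shape
    V = np.vstack([low_intensity.ravel(), low_pan.ravel()])   # 2 x N data matrix
    model = NMF(n_components=1, init="nndsvda", max_iter=500)
    W = model.fit_transform(V)                                 # 2 x 1 mixing weights
    H = model.components_                                      # 1 x N shared feature image
    return (W.mean() * H).reshape(shape)                       # rescale basis back to image range
```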
Funding: Supported by the Scientific Research Deanship at the University of Ha’il, Saudi Arabia, through project number RG-23137.
Abstract: The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values and indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing the other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared with the related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
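The abstract does not describe how the 2.5D inputs are assembled. A common construction, sketched below as an assumption rather than the paper's exact configuration, stacks a few neighbouring axial slices of the co-registered PET and CT volumes as input channels around each slice of interest.

```python
# Hedged sketch: building a 2.5D input from co-registered PET and CT volumes
# by stacking neighbouring axial slices as channels. The context size (k) and
# channel layout are illustrative assumptions, not the paper's exact configuration.
import numpy as np

def make_25d_input(pet, ct, z, k=1):
    """Return a (2*(2k+1), H, W) stack of PET and CT slices centred on axial index z."""
    assert pet.shape == ct.shape, "PET and CT volumes must be co-registered and same shape"
    zmax = pet.shape[0] - 1
    channels = []
    for dz in range(-k, k + 1):
        zi = min(max(z + dz, 0), zmax)          # clamp at the volume borders
        channels.append(pet[zi])
        channels.append(ct[zi])
    return np.stack(channels, axis=0)
```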
Funding: This work was supported by the Navigation Science Foundation (No. 05F07001) and the National Natural Science Foundation of China (No. 60472081).
Abstract: A novel image fusion algorithm based on the bandelet transform is proposed. The bandelet transform can exploit the geometric regularity of image structure and efficiently represent sharp image transitions such as edges, which benefits image fusion. To reconstruct the fused image, the maximum rule is used to select the source images' geometric flow and bandelet coefficients. Experimental results indicate that the bandelet-based fusion algorithm preserves edge and detail information well and outperforms wavelet-based and Laplacian-pyramid-based fusion algorithms, especially when the source images contain abundant texture and edges.
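Bandelet transforms are not available in mainstream Python libraries, so the hedged sketch below shows only the maximum selection rule itself, written as a transform-agnostic operation on corresponding coefficient arrays; in the actual method it is applied to bandelet coefficients and geometric-flow parameters.

```python
# Hedged sketch of the absolute-maximum selection rule on corresponding transform
# coefficients. Shown as transform-agnostic; the paper applies it to bandelet
# coefficients and geometric flow, which are not reproduced here.
import numpy as np

def max_abs_select(coef_a, coef_b):
    """Keep, at each position, the coefficient with the larger magnitude."""
    return np.where(np.abs(coef_a) >= np.abs(coef_b), coef_a, coef_b)

def fuse_subbands(subbands_a, subbands_b):
    """Apply the rule to every pair of corresponding subbands from the two source images."""
    return [max_abs_select(a, b) for a, b in zip(subbands_a, subbands_b)]
```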
Abstract: Increased penetration of renewables for power generation has negatively impacted the dynamics of conventional fossil fuel-based power plants. Power plants operating on base load are forced to cycle in order to adjust to fluctuating power demands. This results in inefficient operation of coal power plants and leads to higher operating losses. To overcome the operational challenges associated with cycling and to develop optimal process control, this work analyzes a set of models for predicting power generation. Moreover, since power generation is intrinsically affected by the state of the power plant components, our model development also incorporates additional power plant process variables when forecasting generation. We present and compare multiple state-of-the-art data-driven forecasting methods for power generation to determine the most adequate and accurate model. We also develop an interpretable attention-based transformer model to explain the importance of process variables during training and forecasting. The trained deep neural network (DNN) LSTM model has good accuracy in predicting gross power generation under various prediction horizons with and without cycling events, and it outperforms the other models for long-term forecasting. The DNN memory-based models show significant superiority over other state-of-the-art machine learning models for short-, medium-, and long-range predictions. The transformer-based model with attention enhances the selection of historical data for multi-horizon forecasting and also allows the significance of internal power plant components for power generation to be interpreted. These newly gained insights can be used by operation engineers to anticipate and monitor the health of power plant equipment during high-cycling periods.
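As a hedged illustration of the DNN LSTM forecaster described above, the sketch below defines a small multivariate LSTM that maps a window of past process variables to a multi-step forecast of gross generation; the layer sizes, window length, horizon, and feature count are illustrative assumptions, not the configuration reported in the study.

```python
# Hedged sketch of an LSTM forecaster for gross power generation from multivariate
# process data. Layer sizes, horizon and feature count are illustrative assumptions,
# not the configuration reported in the study.
import torch
import torch.nn as nn

class GenerationLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, horizon=12):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, horizon)    # predict the next `horizon` steps at once

    def forward(self, x):                          # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])            # use the last hidden state for the forecast

# Minimal usage with dummy data: 32 windows of 48 past time steps and 8 process variables.
model = GenerationLSTM()
x = torch.randn(32, 48, 8)
y_hat = model(x)                                   # (32, 12) multi-step forecast
```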