Journal Articles
12 articles found
1. Unified deep learning model for predicting fundus fluorescein angiography image from fundus structure image
Authors: Yiwei Chen, Yi He, Hong Ye, Lina Xing, Xin Zhang, Guohua Shi. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, No. 3, pp. 105-113 (9 pages)
The prediction of fundus fluorescein angiography (FFA) images from fundus structural images is a cutting-edge research topic in ophthalmological image processing. Prediction comprises estimating FFA from fundus camera imaging, single-phase FFA from scanning laser ophthalmoscopy (SLO), and three-phase FFA also from SLO. Although many deep learning models are available, a single model can only perform one or two of these prediction tasks. To accomplish the three prediction tasks using a unified method, we propose a unified deep learning model for predicting FFA images from fundus structure images using a supervised generative adversarial network. The three prediction tasks are processed as follows: data preparation, network training under FFA supervision, and FFA image prediction from fundus structure images on a test set. By comparing the FFA images predicted by our model, pix2pix, and CycleGAN, we demonstrate the remarkable progress achieved by our proposal. The high performance of our model is validated in terms of the peak signal-to-noise ratio, structural similarity index, and mean squared error.
Keywords: fundus fluorescein angiography image; fundus structure image; image translation; unified deep learning model; generative adversarial networks
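The entry above validates its model with peak signal-to-noise ratio, structural similarity, and mean squared error. As a reference point, here is a minimal NumPy sketch of those three metrics; the SSIM shown is a simplified single-window form rather than the sliding-window version typically reported, and `data_range` assumes 8-bit images.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images of equal shape."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    return np.mean((a - b) ** 2)

def psnr(a, b, data_range=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(data_range ** 2 / m)

def ssim_global(a, b, data_range=255.0):
    """Single-window SSIM over the whole image (a simplification of the
    sliding-window SSIM usually reported in papers)."""
    a, b = a.astype(np.float64), b.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    num = (2 * mu_a * mu_b + c1) * (2 * cov + c2)
    den = (mu_a ** 2 + mu_b ** 2 + c1) * (a.var() + b.var() + c2)
    return num / den
```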
2. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module
Authors: 胡振涛 (HU Zhentao), HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters (EI, CAS), 2024, No. 1, pp. 23-30 (8 pages)
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, advanced approaches typically employ a multi-generator mechanism to model the different domain mappings, which results in inefficient training and mode collapse, limiting the diversity of the generated images. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, a domain code is first introduced to explicitly control the different generation tasks. Second, the paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. Qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets demonstrate the benefits of the proposed method over existing technologies. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation; generative adversarial network (GAN); squeeze-and-excitation (SE) mechanism; feature attention (FA) module
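The abstract names the squeeze-and-excitation (SE) mechanism as one of its building blocks but does not give its exact configuration. Below is a standard SE block in PyTorch with the common reduction ratio of 16, offered as a sketch of the mechanism rather than the paper's implementation.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: reweight feature channels by globally pooled statistics."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: global average pool -> (b, c)
        return x * w.view(b, c, 1, 1)     # excite: per-channel rescaling
```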
3. A Novel Unsupervised MRI Synthetic CT Image Generation Framework with Registration Network
Authors: Liwei Deng, Henan Sun, Jing Wang, Sijuan Huang, Xin Yang. Computers, Materials & Continua (SCIE, EI), 2023, No. 11, pp. 2271-2287 (17 pages)
In recent years, radiotherapy based only on Magnetic Resonance (MR) images has become a hot spot for radiotherapy planning research in the current medical field. However, computed tomography (CT) is still needed for dose calculation in the clinic. Recent deep-learning approaches to synthesizing CT images from MR images have raised much research interest, making radiotherapy based only on MR images possible. In this paper, we propose a novel unsupervised image synthesis framework with registration networks. The framework enforces the constraints between the reconstructed image and the input image by registering the reconstructed image with the input image and registering the cycle-consistent image with the input image. Furthermore, ConvNeXt blocks are added to the network, and large-kernel convolutional layers are used to improve the network's ability to extract features. The collected head-and-neck data of 180 patients with nasopharyngeal carcinoma were used to train and evaluate the model with four evaluation metrics: Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), and Structural Similarity (SSIM), which reach 18.55±1.44, 86.91±4.31, 33.45±0.74, and 0.960±0.005, respectively. Compared with other commonly used model frameworks, MAE decreased by 2.17, RMSE decreased by 7.82, PSNR increased by 0.76, and SSIM increased by 0.011. The results show that the proposed model outperforms other methods in the quality of image synthesis. This work is of guiding significance to the study of MR-only radiotherapy planning.
Keywords: MRI-CT image synthesis; variational auto-encoder; medical image translation; MRI-only-based radiotherapy
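This entry reports adding ConvNeXt blocks with large-kernel convolutions to improve feature extraction. For reference, a standard ConvNeXt block (7×7 depthwise convolution, LayerNorm, inverted-bottleneck MLP) looks like the following PyTorch sketch; the paper's exact variant may differ.

```python
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Standard ConvNeXt block: large-kernel depthwise conv + inverted-bottleneck MLP."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pw1 = nn.Linear(dim, 4 * dim)   # expand channels 4x
        self.act = nn.GELU()
        self.pw2 = nn.Linear(4 * dim, dim)   # project back

    def forward(self, x):
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)            # (B, C, H, W) -> (B, H, W, C) for LayerNorm
        x = self.pw2(self.act(self.pw1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        return residual + x
```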
4. What can poetry translation focus on: sense, image, or spirit?
Authors: 程水英 (CHENG Shuiying). Sino-US English Teaching, 2009, No. 7, pp. 55-59 (5 pages)
"Qi" (spirit) is originally an ancient Chinese philosophical term regarded as the fundamental material to constitute the universe. Later, it's used as a term in literary criticism referring to the authors' talen... "Qi" (spirit) is originally an ancient Chinese philosophical term regarded as the fundamental material to constitute the universe. Later, it's used as a term in literary criticism referring to the authors' talent, qualities, and their work styles. Poetry, as a language form concentrating thoughts and feelings, is the best form reflecting the various changes of"Qi" (spirit). "Qi" (spirit) combines the style, imagery and charm of a poem in a union, translation of the poem should be close to such a combination. Translation of"Qi" (spirit) is a pursuit of translating poetry which is based on translation of sense and image, but it may focus more on the wording technique with a wide insight on the original poem including the meaning and the tone of the poem and the author's thoughts and feelings, focusing on an overall effect of appreciation. 展开更多
Keywords: translation of sense; translation of image; translation of spirit
5. Exploring Image Generation for UAV Change Detection (cited by 3)
Authors: Xuan Li, Haibin Duan, Yonglin Tian, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2022, No. 6, pp. 1061-1072 (12 pages)
Change detection (CD) is becoming indispensable for unmanned aerial vehicles (UAVs), especially in the domains of water landing, rescue, and search. However, even the most advanced models require large amounts of data for model training and testing. Therefore, sufficient labeled images with different imaging conditions are needed. Inspired by computer graphics, we present a cloning method to simulate inland-water scenes and collect an auto-labeled simulated dataset. The simulated dataset consists of six challenges to test the effects of dynamic background, weather, and noise on change detection models. Then, we propose an image translation framework that translates simulated images to synthetic images. This framework uses shared parameters (encoder and generator) and 22×22 receptive fields (discriminator) to generate realistic synthetic images as model training sets. The experimental results indicate that: 1) different imaging challenges affect the performance of change detection models; 2) compared with simulated images, synthetic images can effectively improve the accuracy of supervised models.
Keywords: change detection; computer graphics; image translation; simulated images; synthetic images; unmanned aerial vehicles (UAVs)
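The framework above uses a discriminator with a 22×22 receptive field. The receptive field of a convolutional stack follows from its kernel sizes and strides; the sketch below computes it and shows one hypothetical PatchGAN-style stack (kernel 4, strides 2-2-1) that yields exactly 22. The actual architecture is not given in the abstract.

```python
def receptive_field(layers):
    """Receptive field of stacked conv layers, given (kernel, stride) pairs."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1) input strides
        jump *= s              # accumulated stride between neighboring outputs
    return rf

# An illustrative three-layer stack that reaches a 22x22 receptive field.
print(receptive_field([(4, 2), (4, 2), (4, 1)]))  # -> 22
```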
6. Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion
Authors: Hamed Amini Amirkolaee, Hamid Amini Amirkolaee. The Journal of Biomedical Research (CAS, CSCD), 2022, No. 6, pp. 409-422 (14 pages)
In this paper, we propose a deep-learning-based framework for medical image translation using paired and unpaired training data. Initially, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation using paired training data. A multi-scale context aggregation approach is then used to extract various features from different levels of encoding, which are used during the corresponding network decoding stage. We further propose an edge-guided generative adversarial network for image-to-image translation based on unpaired training data. An edge constraint loss function is used to improve network performance at tissue boundaries. To analyze framework performance, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed deep learning framework brings significant improvement over the state of the art.
Keywords: edge-guided generative adversarial network; global-to-local; medical image translation; magnetic resonance imaging; computed tomography
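The abstract mentions an edge constraint loss that improves performance at tissue boundaries but does not define it. One plausible realization, shown here as an assumption rather than the authors' formulation, compares Sobel edge maps of the translated and reference images under an L1 penalty.

```python
import torch
import torch.nn.functional as F

def sobel_edges(img):
    """Edge magnitude of a single-channel image batch (B, 1, H, W) via Sobel filters."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                  # vertical-gradient kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def edge_constraint_loss(fake, real):
    """L1 distance between edge maps, emphasizing agreement at tissue boundaries."""
    return F.l1_loss(sobel_edges(fake), sobel_edges(real))
```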
7. Deep learning method for cell count from transmitted-light microscope (cited by 1)
Authors: Mengyang Lu, Wei Shi, Zhengfen Jiang, Boyi Li, Dean Ta, Xin Liu. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2023, No. 5, pp. 115-127 (13 pages)
Automatic cell counting provides an effective tool for medical research and diagnosis. Currently, cell counting can be completed with a transmitted-light microscope; however, it requires expert knowledge, and the counting accuracy is unsatisfactory for overlapped cells. An image-translation-based detection method has been proposed and has shown the potential to accomplish cell counting from transmitted-light microscopy automatically and effectively. In this work, a new deep-learning (DL)-based two-stage detection method (cGAN-YOLO) is designed to further enhance cell counting performance by combining a DL-based fluorescent image translation model with a DL-based cell detection model. The results show that cGAN-YOLO can effectively detect and count different types of cells from acquired transmitted-light microscope images. Compared with the previously reported YOLO-based one-stage detection method, cGAN-YOLO achieves a 29.80% improvement in recognition accuracy (RA), and a 12.11% improvement in RA compared with the previously reported image-translation-based detection method. In summary, cGAN-YOLO makes it possible to implement cell counting directly from experimentally acquired transmitted-light microscopy images with high flexibility and performance, which extends its applicability in clinical research.
Keywords: automatic cell counting; transmitted-light microscope; deep learning; fluorescent image translation
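cGAN-YOLO is described as a two-stage pipeline: a cGAN translates the transmitted-light image into a virtual fluorescence image, and a YOLO-style detector then counts cells on it. The sketch below shows that composition with hypothetical `translator` and `detector` interfaces; the actual models and thresholds are not specified in the abstract.

```python
def count_cells(brightfield, translator, detector, conf_thresh=0.5):
    """Two-stage counting: translate to virtual fluorescence, then detect and count.

    `translator` and `detector` are hypothetical callables standing in for the
    trained cGAN and YOLO models described in the paper.
    """
    virtual_fluo = translator(brightfield)   # stage 1: cGAN image translation
    detections = detector(virtual_fluo)      # stage 2: YOLO-style cell detection
    return sum(1 for d in detections if d["confidence"] >= conf_thresh)
```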
8. FSA-Net: A Cost-efficient Face Swapping Attention Network with Occlusion-Aware Normalization
Authors: Zhipeng Bin, Huihuang Zhao, Xiaoman Liang, Wenli Chen. Intelligent Automation & Soft Computing (SCIE), 2023, No. 7, pp. 971-983 (13 pages)
The main challenges in face swapping are the preservation and adaptive superimposition of the attributes of two images. In this study, the Face Swapping Attention Network (FSA-Net) is proposed to generate photorealistic face swaps. Existing face-swapping methods ignore blending attributes or mismatch facial keypoints (cheek, mouth, eye, nose, etc.), which causes artifacts and makes the generated face silhouette unrealistic. To address this problem, a novel reinforced multi-aware attention module, referred to as RMAA, is proposed for handling facial fusion and expression occlusion flaws. The framework includes two stages. In the first stage, a novel attribute encoder is proposed to extract multiple levels of target face attributes and integrate identities and attributes when synthesizing swapped faces. In the second stage, a novel Stochastic Error Refinement (SRE) module is designed to solve the problem of facial occlusion; it repairs occluded regions in a semi-supervised way without any post-processing. The proposed method is compared with the current state-of-the-art methods, and the obtained results demonstrate its qualitative and quantitative outperformance. More details are provided at the footnote link and at https://sites.google.com/view/fsa-net-official.
Keywords: attention; face swapping; neural network; face manipulation; identity swap; image translation
9. A Survey of Image Synthesis and Editing with Generative Adversarial Networks (cited by 19)
Authors: Xian Wu, Kun Xu, Peter Hall. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2017, No. 6, pp. 660-674 (15 pages)
This paper presents a survey of image synthesis and editing with Generative Adversarial Networks (GANs). GANs consist of two deep networks, a generator and a discriminator, which are trained in a competitive way. Due to the power of deep networks and the competitive training manner, GANs are capable of producing reasonable and realistic images, and have shown great capability in many image synthesis and editing applications. This paper surveys recent GAN papers regarding topics including, but not limited to, texture synthesis, image inpainting, image-to-image translation, and image editing.
Keywords: image synthesis; image editing; constrained image synthesis; generative adversarial networks; image-to-image translation
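The survey describes GANs as two networks trained in a competitive way. For readers new to the setup, a minimal PyTorch training step is sketched below, assuming a generator `G` mapping noise to images and a discriminator `D` returning one logit per sample.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def gan_step(G, D, opt_g, opt_d, real, z):
    """One round of the two-player game: update D, then update G."""
    # Discriminator: push real samples toward 1 and generated samples toward 0.
    fake = G(z).detach()
    d_real, d_fake = D(real), D(fake)
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make D output 1 on freshly generated samples.
    d_gen = D(G(z))
    g_loss = bce(d_gen, torch.ones_like(d_gen))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```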
10. Mask guided diverse face image synthesis
Authors: Song SUN, Bo ZHAO, Muhammad MATEEN, Xin CHEN, Junhao WEN. Frontiers of Computer Science (SCIE, EI, CSCD), 2022, No. 3, pp. 67-75 (9 pages)
Recent studies have shown remarkable success in the face image generation task. However, existing approaches have limited diversity, quality, and controllability in their generated results. To address these issues, we propose a novel end-to-end learning framework to generate diverse, realistic, and controllable face images guided by face masks. The face mask provides a good geometric constraint for a face by specifying the size and location of its different components, such as the eyes, nose, and mouth. The framework consists of four components: a style encoder, a style decoder, a generator, and a discriminator. The style encoder generates a style code representing the style of the resulting face; the generator translates the input face mask into a real face based on the style code; the style decoder learns to reconstruct the style code from the generated face image; and the discriminator classifies an input face image as real or fake. With the style code, the proposed model can generate different face images matching the input face mask, and by manipulating the face mask, the generated face image can be finely controlled. We empirically demonstrate the effectiveness of our approach on the mask-guided face image synthesis task.
Keywords: face image generation; image translation; generative adversarial networks
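The four components above interact in a fixed pattern: the style encoder produces a style code, the generator renders the mask under that style, and the style decoder reconstructs the code for a consistency check. A schematic forward pass, with hypothetical module names and the discriminator omitted, might look like this:

```python
def forward_pass(face_mask, reference_face, style_encoder, generator, style_decoder):
    """Schematic composition of the components described in the abstract."""
    style_code = style_encoder(reference_face)   # appearance to copy
    face = generator(face_mask, style_code)      # geometry from mask, look from code
    style_rec = style_decoder(face)              # should match style_code (consistency)
    return face, style_code, style_rec
```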
11. Virtual Fluorescence Translation for Biological Tissue by Conditional Generative Adversarial Network
Authors: Xin Liu, Boyi Li, Chengcheng Liu, Dean Ta. Phenomics, 2023, No. 4, pp. 408-420 (13 pages)
Fluorescence labeling and imaging provide an opportunity to observe the structure of biological tissues and play a crucial role in the field of histopathology. However, labeling and imaging biological tissues still pose several challenges, e.g., time-consuming tissue preparation steps, expensive reagents, and signal bias due to photobleaching. To overcome these limitations, we present a deep-learning-based method for fluorescence translation of tissue sections, achieved by a conditional generative adversarial network (cGAN). Experimental results on mouse kidney tissues demonstrate that the proposed method can predict other types of fluorescence images from one raw fluorescence image and implement virtual multi-label fluorescent staining by merging the generated fluorescence images. Moreover, the proposed method effectively reduces the time-consuming and laborious preparation in imaging processes and further saves cost and time.
Keywords: virtual fluorescence labeling; image translation; tissue section; generative adversarial network
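Both this entry and the next rely on conditional GANs for fluorescence translation. A common generator objective for such paired translation, in the spirit of pix2pix and offered here only as an assumption about the training setup, combines an adversarial term with an L1 fidelity term:

```python
import torch
import torch.nn.functional as F

def cgan_generator_loss(d_fake_logits, fake, target, lambda_l1=100.0):
    """Adversarial term (fool the discriminator) plus L1 fidelity to the target stain."""
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    return adv + lambda_l1 * F.l1_loss(fake, target)
```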
12. Fluo-Fluo translation based on deep learning
Authors: Zhengfen Jiang, Boyi Li, Tho N.H.T. Tran, Jiehui Jiang, Xin Liu, Dean Ta. Chinese Optics Letters (SCIE, EI, CAS, CSCD), 2022, No. 3, pp. 82-88 (7 pages)
Fluorescence microscopy technology uses fluorescent dyes to provide highly specific visualization of cell components, which plays an important role in understanding subcellular structure. However, fluorescence microscopy has some limitations, such as the risk of non-specific cross labeling in multi-labeled fluorescent staining and the limited number of fluorescence labels due to spectral overlap. This paper proposes a deep-learning-based fluorescence-to-fluorescence (Fluo-Fluo) translation method, which uses a conditional generative adversarial network to predict a fluorescence image from another fluorescence image and further realizes multi-label fluorescent staining. The cell types used include human motor neurons, human breast cancer cells, rat cortical neurons, and rat cardiomyocytes. The effectiveness of the method is verified by successfully generating virtual fluorescence images highly similar to the true fluorescence images. This study shows that a deep neural network can implement Fluo-Fluo translation and describe the localization relationship between subcellular structures labeled with different fluorescent markers. The proposed Fluo-Fluo method can avoid non-specific cross labeling in multi-label fluorescence staining and is free from spectral overlap. In theory, an unlimited number of fluorescence images can be predicted from a single fluorescence image to characterize cells.
Keywords: deep learning; conditional generative adversarial network; fluorescence image; image translation
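Both fluorescence-translation papers above build multi-label virtual staining by merging separately predicted channels. A simple NumPy sketch of that merging step, assuming float images in [0, 1] and arbitrary pseudo-colors, is:

```python
import numpy as np

def merge_virtual_stains(channels, colors=((0, 1, 0), (1, 0, 0), (0, 0, 1))):
    """Blend single-channel virtual stains into one pseudo-colored RGB composite."""
    rgb = np.zeros(channels[0].shape + (3,), dtype=np.float32)
    for img, color in zip(channels, colors):
        for i in range(3):
            rgb[..., i] += color[i] * img   # additive blending per pseudo-color
    return np.clip(rgb, 0.0, 1.0)
```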