Abstract: Sentiment analysis, an unabating research area in text mining, requires computational methods for extracting useful information from text. In recent years, social media has become a rich source of information about people's behavioral states (opinions) through reviews and comments. Numerous techniques have been proposed to analyze the sentiment of text; however, they have been unable to cope with the complexity of sentiments. This complexity calls for a novel approach that analyzes sentiments deeply for more accurate prediction. This research presents a three-step Sentiment Analysis and Prediction (SAP) solution for text trends using the K-Nearest Neighbor (KNN) algorithm. First, sentences are transformed into tokens and stop words are removed. Second, the polarity of each sentence, each paragraph, and the text as a whole is calculated from contributing weighted words, intensity clauses, and sentiment shifters; the features extracted in this step play a significant role in improving the results. Finally, the trend of the input text is predicted using a KNN classifier based on the extracted features. The model was trained and tested on publicly available Twitter and movie-review datasets. Experimental results show a satisfactory improvement over existing solutions. In addition, a GUI (Hello World) based text analysis framework has been designed to perform the text analytics.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61602066), the Scientific Research Foundation of CUIT (KYTZ201608), the Major Project of the Education Department of Sichuan (17ZA0063 and 2017JQ0030), and partially by the Sichuan International Science and Technology Cooperation and Exchange Research Program (2016HH0018).
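The three-step pipeline in the abstract above (tokenization with stop-word removal, polarity scoring from weighted words, intensifiers, and sentiment shifters, then KNN classification) can be sketched as follows. This is a minimal illustration, not the paper's system: the stop-word list, lexicon weights, intensifier scales, and one-dimensional distance are placeholder assumptions.

```python
from collections import Counter

# Placeholder resources; a real system would use a sentiment lexicon
# such as SentiWordNet and a full stop-word list.
STOP_WORDS = {"the", "is", "a", "an", "this", "of", "and", "to"}
LEXICON = {"good": 1.0, "great": 2.0, "bad": -1.0, "terrible": -2.0}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0}
SHIFTERS = {"not", "never", "no"}

def tokenize(text):
    """Step 1: split into tokens and drop stop words."""
    return [t for t in text.lower().split() if t not in STOP_WORDS]

def polarity(tokens):
    """Step 2: accumulate weighted word scores, scaling by intensity
    clauses and flipping sign at sentiment shifters."""
    score, scale, flip = 0.0, 1.0, 1
    for tok in tokens:
        if tok in SHIFTERS:
            flip = -flip
        elif tok in INTENSIFIERS:
            scale *= INTENSIFIERS[tok]
        elif tok in LEXICON:
            score += flip * scale * LEXICON[tok]
            scale, flip = 1.0, 1  # reset modifiers after a sentiment word
    return score

def knn_predict(feature, train, k=3):
    """Step 3: majority vote among the k nearest labeled examples,
    here over a single polarity feature for simplicity."""
    nearest = sorted(train, key=lambda ex: abs(ex[0] - feature))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```

In the paper's setting the feature vector would carry sentence-, paragraph-, and text-level polarities rather than one score, but the control flow is the same.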
Abstract: Tissue segmentation is a fundamental and important task in nasopharyngeal image analysis. However, accurately and quickly segmenting the various tissues in the nasopharynx region is challenging because of the small differences in gray value between tissues in nasopharyngeal images and the complexity of the tissue structure. In this paper, we propose a novel tissue segmentation approach based on a two-stage learning framework and U-Net. In the proposed methodology, the network consists of two segmentation modules: the first performs rough segmentation and the second performs accurate segmentation. Considering the training time and the limits of computing resources, the second module has a simpler structure and fewer network layers. In addition, our segmentation module is based on U-Net and incorporates a skip structure, which makes full use of the original features of the data and avoids feature loss. We evaluated the proposed method on the nasopharyngeal dataset provided by West China Hospital of Sichuan University. The experimental results show that the proposed method outperforms many standard segmentation structures as well as a recently proposed nasopharyngeal tissue segmentation method, and that it generalizes easily across different tissue types in various organs.
Funding: This work was supported by the Sichuan Science and Technology Program (2019JDJQ0002, 2019YFG0496, 2021016, 2020JDTD0020) and partially by the National Science Foundation of China (42075142).
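The coarse-to-fine idea in the abstract above (a first module producing a rough mask, a lighter second module correcting it) can be illustrated with a network-free sketch. The thresholding and neighbour-voting rules below are stand-ins for the two U-Net-style modules, and the `band` parameter is an invented knob for "gray values too close to call":

```python
def coarse_segment(image, threshold=0.5):
    """Stage 1 stand-in: rough binary mask over the whole image."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def refine(image, mask, threshold=0.5, band=0.05):
    """Stage 2 stand-in: re-decide only ambiguous pixels (gray value
    within `band` of the threshold) by majority vote over the coarse
    labels of their neighbours, mimicking a lighter second module that
    sharpens the rough mask instead of re-segmenting everything."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if abs(image[i][j] - threshold) < band:
                nb = [mask[y][x]
                      for y in range(max(0, i - 1), min(h, i + 2))
                      for x in range(max(0, j - 1), min(w, j + 2))
                      if (y, x) != (i, j)]
                out[i][j] = 1 if 2 * sum(nb) >= len(nb) else 0
    return out

# A near-threshold pixel surrounded by foreground is corrected in stage 2.
image = [[0.9, 0.9], [0.9, 0.49]]
rough = coarse_segment(image)
fine = refine(image, rough)
```

In the paper the second stage is a smaller U-Net rather than a voting rule, but the division of labor (cheap rough pass, focused correction pass) is the same.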
Abstract: Recently, deep learning-based image outpainting has made notable improvements in the computer vision field. However, because they do not fully extract image information, existing methods often generate unnatural and blurry outpainting results. To address this issue, we propose a perceptual image outpainting method that takes advantage of low-level feature fusion and a multi-patch discriminator. Specifically, we first fuse the texture information in the low-level feature maps of the encoder and simultaneously combine these aggregated features with the semantic (or structural) information of the deep feature maps, so that more sophisticated texture information can be used to generate more authentic outpainted images. We then introduce a multi-patch discriminator to enhance the generated texture; it judges the generated image from features at different levels and thereby pushes the network to produce more natural and clearer outpainting results. Moreover, we introduce perceptual loss and style loss to further improve the texture and style of the outpainted images. Compared with existing methods, our method produces finer outpainting results. Experimental results on the Places2 and Paris StreetView datasets demonstrate the effectiveness of our method for image outpainting.
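The loss combination described above (pixel reconstruction plus perceptual and style terms, the latter conventionally comparing Gram matrices of feature maps) can be sketched as follows. The loss weights and toy flat feature maps are illustrative assumptions, not the paper's values:

```python
def mse(a, b):
    """Mean squared error between two equal-length flat vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def gram(feats):
    """Flattened Gram matrix of a feature map given as a list of
    flattened channels: entry (i, j) is the inner product of channels
    i and j, capturing texture/style correlations."""
    return [sum(a * b for a, b in zip(fi, fj)) for fi in feats for fj in feats]

def total_loss(pred, target, feat_pred, feat_target,
               w_perc=0.05, w_style=120.0):
    """Illustrative weighted sum: reconstruction + perceptual (feature
    distance) + style (Gram-matrix distance). In practice the feature
    maps come from a pretrained network such as VGG."""
    rec = mse(pred, target)
    perc = sum(mse(fp, ft) for fp, ft in zip(feat_pred, feat_target))
    style = mse(gram(feat_pred), gram(feat_target))
    return rec + w_perc * perc + w_style * style
```

The adversarial term from the multi-patch discriminator would be added on top of this sum during training; it is omitted here because it requires the discriminator network itself.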