Funding: Supported by the National Natural Science Foundation of China (Nos. 62002206 and 62202373) and the open topic of the Green Development Big Data Decision-Making Key Laboratory (DM202003).
Abstract: Extracting valuable information from biomedical texts is a current research hotspot attracting a wide range of scholars. The biomedical corpus contains numerous complex long sentences and overlapping relational triples, making most general-domain joint modeling methods difficult to apply effectively in this field. To handle the complex semantic environment of biomedical texts, this paper proposes a novel perspective on joint entity and relation extraction. Existing studies divide relation triple extraction into several steps or modules; however, the three elements of a relation triple are interdependent and inseparable, so we regard joint extraction as a tripartite classification problem. From this triple-classification perspective, we design a multi-granularity 2D convolution to refine the word-pair table and better exploit the dependencies between biomedical word pairs. Finally, we use a biaffine predictor to assist in predicting the labels of word pairs for relation extraction. Our model, MCTPL (Multi-granularity Convolutional Tokens Pairs of Labeling), better utilizes the elements of triples and improves the ability to extract overlapping triples compared to previous approaches. We evaluated our model on two publicly accessible datasets. The experimental results show that, on the CPI dataset, our model improves the F1 score for relation triple extraction by 2.34% over the current optimal model; on the DDI dataset, it improves the F1 score by 1.68%. Our model achieved state-of-the-art performance in biomedical entity and relation extraction compared to other baseline models.
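As a rough illustration of the biaffine word-pair scoring the abstract mentions, the sketch below scores every word pair (i, j) against every label with a per-label bilinear form; the hidden size, label count, and class name are illustrative assumptions, not details taken from the MCTPL paper.

```python
import torch
import torch.nn as nn

class BiaffinePredictor(nn.Module):
    """Minimal sketch of a biaffine scorer over word pairs.
    Hyper-parameters here are assumptions, not the paper's settings."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        # One (H+1) x (H+1) bilinear matrix per label; the extra
        # row/column absorbs the linear and bias terms.
        self.weight = nn.Parameter(
            torch.randn(num_labels, hidden_dim + 1, hidden_dim + 1) * 0.01
        )

    def forward(self, head: torch.Tensor, tail: torch.Tensor) -> torch.Tensor:
        # head, tail: (batch, seq_len, hidden_dim) token representations
        ones = head.new_ones(*head.shape[:-1], 1)
        h = torch.cat([head, ones], dim=-1)   # (B, L, H+1)
        t = torch.cat([tail, ones], dim=-1)   # (B, L, H+1)
        # scores[b, l, i, j] = h_i^T W_l t_j for every word pair (i, j)
        return torch.einsum("bih,lhk,bjk->blij", h, self.weight, t)

scores = BiaffinePredictor(128, 5)(torch.randn(2, 10, 128), torch.randn(2, 10, 128))
print(scores.shape)  # torch.Size([2, 5, 10, 10])
```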
Funding: The National Natural Science Foundation of China (No. 81830052), the Shanghai Natural Science Foundation of China (No. 20ZR1438300), and the Shanghai Science and Technology Support Project (No. 18441900500), China.
Abstract: To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). To combine low-level and high-level features, we added densely connected blocks to the network design so that low-level features are not lost as the network deepens during learning. Further, to resolve the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modality image and the T2-weighted (T2) modality image to enhance the edema region. For the training loss, we improved the cross-entropy loss function to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves Dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set for whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieved promising accuracy with fast processing, demonstrating good potential for clinical medicine.
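For readers unfamiliar with dense connectivity, here is a minimal sketch of a densely connected 2D convolution block, showing how each layer receives the concatenation of all earlier feature maps so low-level features survive as depth grows; the growth rate, depth, and four-channel input (e.g., stacked MRI modalities) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenseBlock2D(nn.Module):
    """Toy densely connected 2D block: every layer sees the concatenation
    of the input and all earlier outputs. Growth rate and depth are
    illustrative, not the paper's settings."""
    def __init__(self, in_channels: int, growth: int = 16, layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
            ))
            ch += growth  # the next layer also receives this layer's output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

out = DenseBlock2D(4)(torch.randn(1, 4, 64, 64))  # e.g., fused FLAIR+T2 input
print(out.shape)  # torch.Size([1, 52, 64, 64]) -> 4 + 3*16 channels
```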
Funding: The National Natural Science Foundation of China under Grant No. 61991412 and the Program for HUST Academic Frontier Youth Team under Grant No. 2018QYTD07.
Abstract: Knowledge graphs are involved in more and more applications to further improve intelligence. Owing to the inherent incompleteness of knowledge graphs resulting from data updating and missing data, a number of knowledge graph completion models have been proposed in succession. To obtain better performance, many methods are highly complex, making training and inference time-consuming. This paper proposes a simple but effective model using only shallow neural networks, which combines enhanced feature interaction and multi-subspace information integration. In the enhanced feature interaction module, entity and relation embeddings interact almost peer-to-peer via multi-channel 2D convolution. In the multi-subspace information integration module, entity and relation embeddings are projected into multiple subspaces to extract multi-view information and further boost performance. Extensive experiments on widely used datasets show that the proposed model outperforms a series of strong baselines, and ablation studies demonstrate the effectiveness of each submodule.
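Below is a minimal sketch of how entity and relation embeddings can interact through 2D convolution, in the spirit of ConvE-style models: the two vectors are reshaped and stacked into one 2D "image" so the convolution kernels see entity and relation cells side by side. The embedding size, reshape geometry, channel count, and class name are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ConvInteraction(nn.Module):
    """Sketch of multi-channel 2D-convolutional interaction between an
    entity and a relation embedding. All sizes are illustrative."""
    def __init__(self, dim: int = 200, channels: int = 32):
        super().__init__()
        self.h, self.w = 10, 20          # dim must equal h * w
        assert dim == self.h * self.w
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.fc = nn.Linear(channels * 2 * self.h * self.w, dim)

    def forward(self, ent: torch.Tensor, rel: torch.Tensor) -> torch.Tensor:
        # Stack the two reshaped embeddings row-wise: (B, 1, 2h, w)
        x = torch.cat([ent.view(-1, 1, self.h, self.w),
                       rel.view(-1, 1, self.h, self.w)], dim=2)
        x = torch.relu(self.conv(x))
        return self.fc(x.flatten(1))     # fused feature used to score tails

fused = ConvInteraction()(torch.randn(8, 200), torch.randn(8, 200))
print(fused.shape)  # torch.Size([8, 200])
```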
Funding: The authors would like to thank the Deanship of Scientific Research and the Research Center for Engineering and Applied Sciences, Majmaah University, Saudi Arabia, for their support and encouragement; the authors would also like to express deep thanks to our college (College of Science at Zulfi City, Majmaah University, AL-Majmaah 11952, Saudi Arabia). Project No. 31-1439.
Abstract: The prevalence of melanoma skin cancer has increased in recent decades. The greatest risk from melanoma is its ability to spread broadly throughout the body via lymphatic vessels and veins; thus, early diagnosis of melanoma is a key factor in improving the prognosis of the disease. Deep learning makes it possible to design and develop intelligent systems for detecting and classifying skin lesions from visible-light images, which can provide early and accurate diagnoses of melanoma and other skin diseases. This paper proposes a new method that can be used for both skin lesion segmentation and classification. The solution uses a two-dimensional convolutional neural network (Conv2D) architecture with three phases: feature extraction, classification, and detection. The proposed method is mainly designed for skin cancer detection and diagnosis. Using the public International Skin Imaging Collaboration (ISIC) dataset, the impact of the proposed segmentation method on classification accuracy was investigated. The results showed that the proposed skin cancer detection and classification method performed well, with an accuracy of 94%, sensitivity of 92%, and specificity of 96%. A comparison with related work on the same ISIC dataset also showed better performance of the proposed method.
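As a hedged sketch of the kind of Conv2D pipeline the abstract describes, the toy classifier below separates a feature-extraction phase from a classification phase; layer sizes, depth, and the two-class output are illustrative assumptions, not the paper's architecture (the detection phase is omitted).

```python
import torch
import torch.nn as nn

class LesionClassifier(nn.Module):
    """Minimal sketch of a 2D-convolutional skin-lesion classifier.
    Sizes and depth are illustrative, not the paper's design."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(       # feature-extraction phase
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(     # classification phase
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = LesionClassifier()(torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```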
Abstract: People who have trouble communicating verbally often depend on sign language, which can be difficult for most people to understand, making interaction with them a difficult endeavor. A Sign Language Recognition (SLR) system takes an input expression from a hearing- or speech-impaired person and outputs it as text or voice to a hearing person. Existing work on Sign Language Recognition has some drawbacks, such as a lack of large datasets and of datasets covering a range of backgrounds, skin tones, and ages. This research focuses on Sign Language Recognition to overcome these limitations. Most importantly, we train on our data with our proposed Convolutional Neural Network (CNN) model, “ConvNeural”. Additionally, we develop our own datasets, “BdSL_OPSA22_STATIC1” and “BdSL_OPSA22_STATIC2”, both of which have ambiguous backgrounds. They contain images of Bangla characters and numerals: 24,615 and 8,437 images, respectively. The “ConvNeural” model outperforms pre-trained models, with accuracy of 98.38% for “BdSL_OPSA22_STATIC1” and 92.78% for “BdSL_OPSA22_STATIC2”. For the “BdSL_OPSA22_STATIC1” dataset, we obtain precision, recall, F1-score, sensitivity, and specificity of 96%, 95%, 95%, 99.31%, and 95.78%, respectively. For the “BdSL_OPSA22_STATIC2” dataset, we achieve precision, recall, F1-score, sensitivity, and specificity of 90%, 88%, 88%, 100%, and 100%, respectively.
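For reference, the reported scores relate through the standard confusion-matrix definitions; the short sketch below computes precision, recall (also called sensitivity), F1, and specificity from made-up counts, not the paper's data.

```python
def binary_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return precision, recall, f1, specificity

# Illustrative counts only, not the paper's data.
print(binary_metrics(tp=95, fp=4, fn=5, tn=96))
```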