Various land use and land cover (LULC) products have been produced over the past decade with the development of remote sensing technology. Despite the differences in LULC classification schemes, there is a lack of research on assessing the accuracy of their application to croplands in a unified framework. Thus, this study evaluated the spatial and area accuracies of cropland classification for four commonly used global LULC products (i.e., MCD12Q1 V6, GlobCover2009, FROM-GLC and GlobeLand30) based on the harmonised FAO criterion, and quantified the relationships between four factors (i.e., slope, elevation, field size and crop system) and cropland classification agreement. The validation results indicated that MCD12Q1 and GlobeLand30 performed well in cropland classification regarding spatial consistency, with overall accuracies of 94.90% and 93.52%, respectively. The FROM-GLC showed the worst performance, with an overall accuracy of 83.17%. Overlaying the cropland generated by the four global LULC products, we found the proportions of complete agreement and disagreement were 15.51% and 44.72% for the cropland classification, respectively. High consistency was mainly observed in the Northeast China Plain, the Huang-Huai-Hai Plain and the northern part of the Middle-Lower Yangtze Plain, China. In contrast, low consistency was detected primarily on the eastern edge of the northern arid and semiarid region, the Yunnan-Guizhou Plateau and southern China. Field size was the most important factor for mapping cropland. For area accuracy, compared with China Statistical Yearbook data at the provincial scale, the accuracies of the different products in descending order were: GlobeLand30, FROM-GLC, MCD12Q1, and GlobCover2009. The cropland classification schemes mainly caused the large area deviations among the four products, and they also resulted in the different ranks of spatial accuracy and area accuracy among the four products. Our results can provide valuable suggestions for selecting cropland products at the national or provincial scale and help cropland mapping and reconstruction, which is essential for food security and crop management, so they can also contribute to achieving the Sustainable Development Goals issued by the United Nations.
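The overlay analysis described above can be sketched in a few lines: stack the four binary cropland masks and count, per pixel, how many products agree. This is a minimal illustration on synthetic data; the definitions used here (complete agreement = all four products, complete disagreement = exactly one product) are our reading of the abstract, and the function name is ours, not the study's.

```python
# Per-pixel overlay of four binary cropland masks: count how many products
# flag each pixel as cropland, then report the share of flagged pixels on
# which all products agree versus only a single product. Synthetic data;
# the study's exact definitions of agreement/disagreement may differ.

def agreement_proportions(masks):
    """masks: one 0/1 list per LULC product, all of equal length."""
    n_products = len(masks)
    counts = [sum(pixel) for pixel in zip(*masks)]   # votes per pixel
    cropland = [c for c in counts if c > 0]          # flagged by >= 1 product
    full = sum(c == n_products for c in cropland) / len(cropland)
    single = sum(c == 1 for c in cropland) / len(cropland)
    return full, single
```

In practice the masks would come from rasterising each product onto a common grid; the counting step itself is exactly this simple.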
Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and establish the relationship between the manifestations of the intrahepatic portal vein and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and 5 females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. The visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility revealed with DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein measured by DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA for revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, which originated from the imaging features of the portal vein revealed by DPV, may provide a new perspective for the diagnosis and treatment of CTPV.
Artificial intelligence technologies are being studied to provide scientific evidence in the medical field and developed for use as diagnostic tools. This study focused on deep learning models to classify degenerative arthritis into Kellgren–Lawrence grades. Specifically, degenerative arthritis was assessed by X-ray radiographic images and classified into five classes. Subsequently, the use of various deep learning models was investigated for automating the degenerative arthritis classification process. Although research on the classification of osteoarthritis using deep learning has been conducted in previous studies, only local models have been used, and an ensemble of deep learning models has never been applied to obtain more accurate results. To address this issue, this study compared the classification performance of deep learning models, including VGGNet, DenseNet, ResNet, TinyNet, EfficientNet, MobileNet, Xception, and ViT, on a dataset commonly used for osteoarthritis classification tasks. Our experimental results verified that even without applying a separate methodology, the performance of the ensemble was comparable to that of existing studies that only used the latest deep learning model and changed the learning method. From the trained models, two ensembles were created and evaluated: weight and specialist. The weight ensemble showed an improvement in accuracy of 1%, and the proposed specialist ensemble improved accuracy, precision, recall, and F1 score by 5%, 6%, 6%, and 6%, respectively, compared with the results of prior studies.
Purpose: Many science, technology and innovation (STI) resources are attached with several different labels. To automatically assign the resulting labels to an instance of interest, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task in the literature. Furthermore, several open-source tools implementing these approaches have also been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in the following aspects: statement, data quality, and purposes. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn impact the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of conclusions by employing more rigorous control over variables through introducing expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, it is expected that the efficacy of multi-label classification tasks will be significantly improved, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical structure of labels and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
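As a reminder of why the two scores reported above can diverge on unbalanced real-world data, here is a minimal sketch of Macro F1 (unweighted mean of per-label F1) versus Micro F1 (F1 over globally pooled counts); the counts and names are illustrative, not from the paper.

```python
# Macro F1 averages per-label F1 scores equally, so rare labels count as
# much as frequent ones; Micro F1 pools true positives, false positives,
# and false negatives across labels first. Counts below are synthetic.

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_micro_f1(per_label_counts):
    """per_label_counts: one (tp, fp, fn) tuple per label."""
    macro = sum(f1(*c) for c in per_label_counts) / len(per_label_counts)
    micro = f1(sum(c[0] for c in per_label_counts),
               sum(c[1] for c in per_label_counts),
               sum(c[2] for c in per_label_counts))
    return macro, micro
```

With a frequent label at (90, 5, 5) and a rare, poorly predicted label at (1, 4, 5), Macro F1 is about 0.56 while Micro F1 stays near 0.91, which is the typical benchmark-versus-real-world gap the abstract alludes to.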
The tell tail is usually placed on a triangular sail to display the state of the air flow on the sail surface. Accurately judging the drift of the sailboat's tell tail during sailing is of great significance for the best sailing effect. Normally it is difficult for sailors, affected by strong sunlight and visual fatigue, to keep an eye on the tell tail for a long time and accurately judge its changes. In this case, we adopt computer vision technology in the hope of helping sailors judge the changes of the tell tail with ease. This paper proposes, for the first time, a method to classify sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification data set built on expert guidance for interpreting tell tail states in different sea wind conditions, including the feature extraction performance. Considering that the expression capabilities of computational features vary across visual tasks, the paper focuses on five tell tail computing features, which are recoded by an automatic encoder and classified by an SVM classifier. All experimental samples were randomly divided into five groups; four groups were selected as the training set to train the classifier, and the remaining group was used as the test set. The highest accuracy, achieved on the basis of deep computing features obtained through the ResNet network, was 80.26%. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
Lung cancer is a leading cause of global mortality. Early detection of pulmonary tumors can significantly enhance the survival rate of patients. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy. Nevertheless, the existing methodologies cannot obtain a high level of specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage involves the utilization of the improved U-Net architecture to segment candidate nodules extracted from the lung lobes. Subsequently, the improved AlexNet architecture is employed to classify lung cancer. During the first stage, the proposed model demonstrates a dice accuracy of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The suggested improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated employing the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The proposed technique exhibits remarkable performance compared to the existing methods across various evaluation parameters.
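For reference, the segmentation metrics quoted for the first stage (dice, precision, recall) reduce to simple overlap counts between the predicted and ground-truth masks. A minimal sketch on synthetic binary masks, not the LCSC implementation:

```python
# Dice, precision, and recall for a binary segmentation mask, expressed as
# overlap counts between prediction and ground truth. Synthetic sketch.

def seg_metrics(pred, truth):
    """pred, truth: flattened 0/1 masks of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)       # overlap
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)   # over-segmented
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)   # missed
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return dice, precision, recall
```

Note that dice is the harmonic mean of precision and recall, which is why the reported 0.855 dice sits between the 0.933 precision and 0.789 recall.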
Among central nervous system-associated malignancies, glioblastoma (GBM) is the most common and has the highest mortality rate. The high heterogeneity of GBM cell types and the complex tumor microenvironment frequently lead to tumor recurrence and sudden relapse in patients treated with temozolomide. In precision medicine, research on GBM treatment is increasingly focusing on molecular subtyping to precisely characterize the cellular and molecular heterogeneity, as well as the refractory nature of GBM toward therapy. Deep understanding of the different molecular expression patterns of GBM subtypes is critical. Researchers have recently proposed tetra-fractional or tripartite methods for detecting GBM molecular subtypes. The various molecular subtypes of GBM show significant differences in gene expression patterns and biological behaviors. These subtypes also exhibit high plasticity in their regulatory pathways, oncogene expression, tumor microenvironment alterations, and differential responses to standard therapy. Herein, we summarize the current molecular typing scheme of GBM and the major molecular/genetic characteristics of each subtype. Furthermore, we review the mesenchymal transition mechanisms of GBM under various regulators.
BACKGROUND Helicobacter pylori (H. pylori) infection has been well established as a significant risk factor for several gastrointestinal disorders. The urea breath test (UBT) has emerged as a leading non-invasive method for detecting H. pylori. Despite numerous studies confirming its substantial accuracy, the reliability of UBT results is often compromised by inherent limitations. These findings underscore the need for a rigorous statistical synthesis to clarify and reconcile the diagnostic accuracy of the UBT for the diagnosis of H. pylori infection. AIM To determine and compare the diagnostic accuracy of 13C-UBT and 14C-UBT for H. pylori infection in adult patients with dyspepsia. METHODS We conducted an independent search of the PubMed/MEDLINE, EMBASE, and Cochrane Central databases until April 2022. Our search included diagnostic accuracy studies that evaluated at least one of the index tests (13C-UBT or 14C-UBT) against a reference standard. We used the QUADAS-2 tool to assess the methodological quality of the studies. We utilized the bivariate random-effects model to calculate sensitivity, specificity, positive and negative test likelihood ratios (LR+ and LR-), as well as the diagnostic odds ratio (DOR), and their 95% confidence intervals. We conducted subgroup analyses based on urea dosing, time after urea administration, and assessment technique. To investigate a possible threshold effect, we conducted Spearman correlation analysis, and we generated summary receiver operating characteristic (SROC) curves to assess heterogeneity. Finally, we visually inspected a funnel plot and used Egger's test to evaluate publication bias. RESULTS ... endorsing both as reliable diagnostic tools in clinical practice. CONCLUSION In summary, our study has demonstrated that 13C-UBT outperforms 14C-UBT, making it the preferred diagnostic approach. Additionally, our results emphasize the significance of carefully considering urea dosage, assessment timing, and measurement techniques for both tests to enhance diagnostic precision. Nevertheless, it is crucial for researchers and clinicians to evaluate the strengths and limitations of our findings before implementing them in practice.
The classification of functional data has drawn much attention in recent years. The main challenge is representing infinite-dimensional functional data by finite-dimensional features while utilizing those features to achieve better classification accuracy. In this paper, we propose a mean-variance-based (MV) feature weighting method for classifying functional data or functional curves. In the feature extraction stage, each sample curve is approximated by B-splines to transfer features to the coefficients of the spline basis. After that, a feature weighting approach based on statistical principles is introduced by comprehensively considering the between-class differences and within-class variations of the coefficients. We also introduce a scaling parameter to adjust the gap between the weights of features. The new feature weighting approach can adaptively enhance noteworthy local features while mitigating the impact of confusing features. Algorithms for feature-weighted K-nearest neighbor and support vector machine classifiers are both provided. Moreover, the new approach can be well integrated into existing functional data classifiers, such as the generalized functional linear model and functional linear discriminant analysis, resulting in more accurate classification. The performance of the mean-variance-based classifiers is evaluated by simulation studies and real data. The results show that the new feature weighting approach significantly improves the classification accuracy for complex functional data.
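The weighting idea described above, between-class differences over within-class variations with a scaling parameter, can be sketched as follows, assuming each curve has already been reduced to a vector of B-spline coefficients. The function name `mv_weights` and the exponent `gamma` are illustrative stand-ins, not the paper's notation.

```python
# Sketch of mean-variance (MV) feature weighting: each B-spline coefficient
# is weighted by its between-class separation relative to its within-class
# variation, with a scaling exponent to widen or narrow the gap between
# weights. Illustrative, not the paper's exact formulation.

def mv_weights(features, labels, gamma=1.0):
    """features: one coefficient vector per curve; labels: class per curve."""
    classes = sorted(set(labels))
    n_feat = len(features[0])
    raw = []
    for j in range(n_feat):
        means, variances = [], []
        for c in classes:
            vals = [f[j] for f, y in zip(features, labels) if y == c]
            m = sum(vals) / len(vals)
            means.append(m)
            variances.append(sum((v - m) ** 2 for v in vals) / len(vals))
        grand = sum(means) / len(means)
        between = sum((m - grand) ** 2 for m in means)   # class separation
        within = sum(variances) / len(variances)         # average spread
        raw.append((between / (within + 1e-12)) ** gamma)
    total = sum(raw)
    return [r / total for r in raw]
```

The resulting weights can then rescale the coefficient vectors before a K-nearest neighbor or SVM classifier, so discriminative local features dominate the distance computation.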
Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when evaluated on new attacks. This has led to the utilization of deep learning techniques, which have showcased significant potential for processing large datasets and therefore improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including convolutional neural network (CNN) and deep neural network (DNN) models, for improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the base models. Combining the predictions allows the meta-model to distinguish different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% in the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than that of benchmark techniques, demonstrating its efficiency and robustness.
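The two-stage stacking scheme described above can be sketched with a plain logistic-regression meta-model trained on base-model outputs (binary case for brevity). The base models here are hypothetical callables returning a probability of attack; in the paper they are a CNN and a DNN trained on NSL-KDD, and the meta-model is likewise more elaborate than this sketch.

```python
import math

# Minimal two-stage stacking sketch (binary case). Stage 1: base models map
# a sample to P(attack). Stage 2: a logistic-regression meta-model is fit
# on the stacked base-model outputs via stochastic gradient descent.

def train_meta(base_models, xs, ys, epochs=500, lr=0.5):
    """Fit a logistic-regression meta-model on the base models' outputs."""
    w = [0.0] * len(base_models)
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            feats = [m(x) for m in base_models]        # stacked meta-features
            z = b + sum(wi * f for wi, f in zip(w, feats))
            p = 1.0 / (1.0 + math.exp(-z))             # sigmoid
            g = p - y                                  # log-loss gradient
            b -= lr * g
            w = [wi - lr * g * f for wi, f in zip(w, feats)]
    return w, b

def predict_meta(base_models, w, b, x):
    feats = [m(x) for m in base_models]
    z = b + sum(wi * f for wi, f in zip(w, feats))
    return 1 if z >= 0.0 else 0
```

Because the meta-model sees only the base predictions, it learns which base model to trust for which inputs; in practice the meta-model would be trained on held-out predictions to avoid leakage.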
The software development process depends largely on accurately identifying both essential and optional features. Initially, user needs are typically expressed in free-form language, requiring significant time and human resources to translate them into clear functional and non-functional requirements. To address this challenge, various machine learning (ML) methods have been explored to automate the understanding of these requirements, aiming to reduce time and human effort. However, existing techniques often struggle with complex instructions and large-scale projects. In our study, we introduce an innovative approach known as the Functional and Non-functional Requirements Classifier (FNRC). By combining the traditional random forest algorithm with the Accuracy Sliding Window (ASW) technique, we develop optimal sub-ensembles that surpass the initial classifier's accuracy while using fewer trees. Experimental results demonstrate that our FNRC methodology performs robustly across different datasets, achieving a balanced precision of 75% on the PROMISE dataset and an impressive recall of 85% on the CCHIT dataset. Both datasets consistently maintain an F-measure around 64%, highlighting FNRC's ability to effectively balance precision and recall in diverse scenarios. These findings contribute to more accurate and efficient software development processes, increasing the probability of achieving successful project outcomes.
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge in predicting depression from social media posts is that existing studies do not focus on predicting the intensity of depression in social media texts, but rather only perform binary classification of depression; moreover, noisy data make it difficult to predict the true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using the Twitter public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild', 'Moderate', and 'Severe', based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increases the depression classification performance to an 84% F1 score and 90% accuracy compared to the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost the model's performance by combining several other classifiers and assigning weights to the individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone. The proposed model better captures the contextual features of the relatively small sample class and aids in the detection of early depression intensity from tweets with impactful performance.
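The weighted soft voting step described above can be sketched as follows: each classifier contributes its class-probability vector scaled by a weight reflecting its standalone performance, and the ensemble predicts the argmax of the weighted average. The weights and probabilities here are illustrative, not the paper's values.

```python
# Weighted soft voting: average the classifiers' probability vectors with
# per-model weights (e.g. each model's standalone validation score), then
# predict the highest-probability class. All numbers are illustrative.

def weighted_soft_vote(prob_lists, weights):
    """prob_lists: one probability vector per model; weights: one per model."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [
        sum(w * probs[c] for w, probs in zip(weights, prob_lists)) / total
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=combined.__getitem__), combined
```

Normalizing by the weight total keeps the combined vector a valid probability distribution, so a strong model can outvote a weak one without either being discarded.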
Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification using the limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between the prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degenerate informative features because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intra-class FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in a low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, a dual correlation module that learns the hard and soft correlations was developed by the authors. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both the FA and cross-attention modules, our model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperformed related methods and resulted in a 3% performance boost in the 1-shot setting when the proposed module was inserted into the related methods.
Objective To assess the diagnostic accuracy of bowel sound analysis for irritable bowel syndrome (IBS) with a systematic review and meta-analysis. Methods We searched the MEDLINE, Embase, Cochrane Library, Web of Science, and IEEE Xplore databases until September 2023. Cross-sectional and case-control studies on the diagnostic accuracy of bowel sound analysis for IBS were identified. We estimated the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio with 95% confidence intervals (CI), plotted a summary receiver operating characteristic curve, and evaluated the area under the curve. Results Four studies were included. The pooled diagnostic sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.94 (95% CI, 0.87–0.97), 0.89 (95% CI, 0.81–0.94), 8.43 (95% CI, 4.81–14.78), 0.07 (95% CI, 0.03–0.15), and 118.86 (95% CI, 44.18–319.75), respectively, with an area under the curve of 0.97 (95% CI, 0.95–0.98). Conclusions Computerized bowel sound analysis is a promising tool for IBS. However, limited high-quality data make the results' validity and applicability questionable. There is a need for more diagnostic test accuracy studies and better wearable devices for the monitoring and analysis of IBS.
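For orientation, the pooled metrics above are linked by the standard point-estimate identities LR+ = sens/(1-spec), LR- = (1-sens)/spec, and DOR = LR+/LR-. Because the paper pools each metric with a bivariate random-effects model, plugging its pooled sensitivity and specificity into these identities does not exactly reproduce its pooled LR+ (8.43), LR- (0.07), or DOR (118.86):

```python
# Point-estimate identities linking diagnostic accuracy metrics. The
# meta-analysis pools each quantity separately, so its reported values
# differ slightly from what these formulas give for sens=0.94, spec=0.89.

def likelihood_ratios(sens, spec):
    lr_pos = sens / (1 - spec)   # how much a positive test raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative test lowers the odds
    dor = lr_pos / lr_neg        # diagnostic odds ratio
    return lr_pos, lr_neg, dor
```

For sens = 0.94 and spec = 0.89 this gives LR+ ≈ 8.55, LR- ≈ 0.067, and DOR ≈ 126.8, in the same range as the pooled estimates.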
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net has the best performance in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net has a good generalization effect.
Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs face challenges in describing long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multi-scale fusion and a transformer network for hyperspectral image classification. First, low-level spatial-spectral features are extracted by a multi-scale residual structure. Second, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land cover classification accuracy of hyperspectral images.
●AIM: To establish a classification for congenital cataracts that can facilitate individualized treatment and help identify individuals with a high likelihood of different visual outcomes. ●METHODS: Consecutive patients diagnosed with congenital cataracts and undergoing surgery between January 2005 and November 2021 were recruited. Data on visual outcomes and the phenotypic characteristics of ocular biometry and the anterior and posterior segments were extracted from the patients' medical records. A hierarchical cluster analysis was performed. The main outcome measure was the identification of distinct clusters of eyes with congenital cataracts. ●RESULTS: A total of 164 children (299 eyes) were divided into two clusters based on their ocular features. Cluster 1 (96 eyes) had a shorter axial length (mean±SD, 19.44±1.68 mm), a low prevalence of macular abnormalities (1.04%), and no retinal abnormalities or posterior cataracts. Cluster 2 (203 eyes) had a greater axial length (mean±SD, 20.42±2.10 mm) and a higher prevalence of macular abnormalities (8.37%), retinal abnormalities (98.52%), and posterior cataracts (4.93%). Compared with the eyes in Cluster 2 (57.14%), those in Cluster 1 (71.88%) had a 2.2 times higher chance of good best-corrected visual acuity [<0.7 logMAR; OR (95%CI), 2.20 (1.25–3.81); P=0.006]. ●CONCLUSION: This retrospective study categorizes congenital cataracts into two distinct clusters, each associated with a different likelihood of visual outcomes. This innovative classification may enable the personalization and prioritization of early interventions for patients who may gain the greatest benefit, thereby making strides toward precision medicine in the field of congenital cataracts.
Bitcoin is widely used as the most classic electronic currency for various electronic services such as exchanges, gambling, and marketplaces, and also for scams such as high-yield investment projects. Identifying the services operated by a Bitcoin address can help determine the risk level of that address and build an alert model accordingly. Feature engineering can also be used to flesh out labeled addresses and to analyze the current state of Bitcoin on a small scale. In this paper, we address the problem of identifying multiple classes of Bitcoin services, and, for the poor classification of individual addresses that do not have significant features, we propose a Bitcoin address identification scheme based on joint multi-model prediction using the mapping relationship between addresses and entities. The innovation of the method is to (1) extract as many valuable features as possible when an address is given, to facilitate the multi-class service identification task, and (2) unlike the general supervised model approach, propose a joint prediction scheme for multiple learners based on address-entity mapping relationships. Specifically, after obtaining the overall features, the address classification and entity clustering tasks are performed separately, and the results are subjected to graph-based maximization consensus. The final result is made to baseline the individual address classification results while satisfying the constraint of having similarly behaving entities as far as possible. By testing and evaluating over 26,000 Bitcoin addresses, our feature extraction method captures more useful features. In addition, the combined multi-learner model obtained results that exceeded the baseline classifier, reaching an accuracy of 77.4%.
Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous work has achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, due to the similarities between the information of the transitions and their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added to the network via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady and twelve transition states. Experimental results showed that the LREAL network could recognize locomotor intents with an average accuracy of 99.03% and 96.52% for the steady and transition states, respectively. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared to other state-of-the-art networks, while using data from fewer sensors.
We apply stochastic seismic inversion and Bayesian facies classification for porosity modeling and igneous rock identification in the presalt interval of the Santos Basin. This integration of seismic and well-derived information enhances reservoir characterization. Stochastic inversion and Bayesian classification are powerful tools because they permit addressing the uncertainties in the model. We used the ES-MDA algorithm to achieve realizations equivalent to the percentiles P10, P50, and P90 of acoustic impedance, a novel method for acoustic inversion in the presalt. The facies were divided into five classes: reservoir 1, reservoir 2, tight carbonates, clayey rocks, and igneous rocks. To deal with the overlaps in acoustic impedance values across facies, we included geological information using a priori probability, indicating that structural highs are reservoir-dominated. To illustrate our approach, we conducted porosity modeling using facies-related rock-physics models for rock-physics inversion in an area with a well drilled in a coquina bank, and we evaluated the thickness and extension of an igneous intrusion near the carbonate-salt interface. The modeled porosity and the classified seismic facies are in good agreement with those observed in the wells. Notably, the coquina bank presents an improvement in porosity towards the top. The a priori probability model was crucial for limiting the clayey rocks to the structural lows. In Well B, the hit rate of the igneous rock in the three scenarios is higher than 60%, showing excellent thickness-prediction capability.
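As a toy illustration of the Bayesian facies classification described above (not the authors' implementation; the Gaussian impedance models and the structural-high prior below are invented for the sketch), the posterior for each facies given an acoustic-impedance sample is simply likelihood times prior, normalized:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Likelihood of impedance x under a Gaussian facies model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def classify_facies(impedance, models, priors):
    """Return per-facies posterior P(facies | impedance) via Bayes' rule."""
    scores = {f: gaussian_pdf(impedance, mu, sd) * priors[f]
              for f, (mu, sd) in models.items()}
    total = sum(scores.values())
    return {f: s / total for f, s in scores.items()}

# Hypothetical impedance models (mean, std) and a prior favouring
# reservoir facies on structural highs, as the abstract suggests.
models = {"reservoir": (9500.0, 600.0),
          "tight":     (11500.0, 700.0),
          "igneous":   (13500.0, 800.0)}
priors_high = {"reservoir": 0.6, "tight": 0.3, "igneous": 0.1}

posterior = classify_facies(9800.0, models, priors_high)
best = max(posterior, key=posterior.get)
```

Swapping in a different prior for structural lows is what confines clayey rocks there in the study's spirit: the same likelihoods yield different posteriors under different priors.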
Funding: Supported by the National Key Research and Development Program of China (2022YFB3903503), the National Natural Science Foundation of China (U1901601), and the Science and Technology Project of the Department of Education of Jiangxi Province, China (GJJ210541).
Abstract: Various land use and land cover (LULC) products have been produced over the past decade with the development of remote sensing technology. Despite the differences in LULC classification schemes, there is a lack of research assessing the accuracy of their application to croplands within a unified framework. Thus, this study evaluated the spatial and area accuracies of cropland classification for four commonly used global LULC products (i.e., MCD12Q1 V6, GlobCover2009, FROM-GLC and GlobeLand30) based on the harmonised FAO criterion, and quantified the relationships between four factors (i.e., slope, elevation, field size and crop system) and cropland classification agreement. The validation results indicated that MCD12Q1 and GlobeLand30 performed well in cropland classification regarding spatial consistency, with overall accuracies of 94.90% and 93.52%, respectively. FROM-GLC showed the worst performance, with an overall accuracy of 83.17%. Overlaying the cropland layers generated by the four global LULC products, we found that the proportions of complete agreement and disagreement were 15.51% and 44.72% for the cropland classification, respectively. High consistency was mainly observed in the Northeast China Plain, the Huang-Huai-Hai Plain and the northern part of the Middle-Lower Yangtze Plain, China. In contrast, low consistency was detected primarily on the eastern edge of the northern arid and semiarid region, the Yunnan-Guizhou Plateau and southern China. Field size was the most important factor for mapping cropland. For area accuracy, compared with China Statistical Yearbook data at the provincial scale, the accuracies of the different products in descending order were: GlobeLand30, FROM-GLC, MCD12Q1, and GlobCover2009. The cropland classification schemes mainly caused the large area deviations among the four products, and they also resulted in the different ranks of spatial accuracy and area accuracy among the four products. Our results can provide valuable suggestions for selecting cropland products at the national or provincial scale and help cropland mapping and reconstruction, which is essential for food security and crop management, so they can also contribute to achieving the Sustainable Development Goals issued by the United Nations.
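The agreement overlay described above can be sketched as a per-pixel count of how many products label a pixel as cropland. Here "complete agreement" is read as all four products mapping cropland and "complete disagreement" as only a single product doing so (my reading of the abstract); the product names match the study but the tiny masks are illustrative, not its data:

```python
def agreement_levels(product_masks):
    """Given equal-length boolean cropland masks from several products,
    return, per pixel, how many products classify it as cropland."""
    return [sum(pix) for pix in zip(*product_masks)]

# Four hypothetical product masks over six pixels (True = cropland).
mcd12q1     = [True,  True,  False, True,  False, True]
globcover   = [True,  False, False, True,  False, False]
from_glc    = [True,  True,  False, False, False, False]
globeland30 = [True,  True,  True,  True,  False, False]

counts = agreement_levels([mcd12q1, globcover, from_glc, globeland30])

# Among pixels mapped as cropland by at least one product, compute the
# share with complete agreement (all 4 votes) and with only one vote.
mapped = [c for c in counts if c > 0]
complete = sum(c == 4 for c in mapped) / len(mapped)
single = sum(c == 1 for c in mapped) / len(mapped)
```

On real rasters the same logic runs over aligned arrays rather than Python lists, but the counting is identical.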
Abstract: Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and establish the relationship between the manifestations of the intrahepatic portal vein and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and 5 females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. The visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility revealed with DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein measured by DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA for revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, which originated from the imaging features of the portal vein revealed by DPV, may provide a new perspective on the diagnosis and treatment of CTPV.
Funding: Supported by the Korea Research Foundation, funded by the Ministry of Education of Korea in 2020 (No. 2020R1A6A1A03040583), and by Kyonggi University's Graduate Research Assistantship 2022.
Abstract: Artificial intelligence technologies are being studied to provide scientific evidence in the medical field and developed for use as diagnostic tools. This study focused on deep learning models to classify degenerative arthritis into Kellgren–Lawrence grades. Specifically, degenerative arthritis was assessed from X-ray radiographic images and classified into five classes. Subsequently, the use of various deep learning models was investigated for automating the degenerative arthritis classification process. Although research on the classification of osteoarthritis using deep learning has been conducted in previous studies, only local models have been used, and an ensemble of deep learning models has never been applied to obtain more accurate results. To address this issue, this study compared the classification performance of deep learning models, including VGGNet, DenseNet, ResNet, TinyNet, EfficientNet, MobileNet, Xception, and ViT, on a dataset commonly used for osteoarthritis classification tasks. Our experimental results verified that, even without applying a separate methodology, the performance of the ensemble was comparable to that of existing studies that only used the latest deep learning model and changed the learning method. From the trained models, two ensembles were created and evaluated: weight and specialist. The weight ensemble showed an improvement in accuracy of 1%, and the proposed specialist ensemble improved accuracy, precision, recall, and F1 score by 5%, 6%, 6%, and 6%, respectively, compared with the results of prior studies.
Funding: The Natural Science Foundation of China (Grant Numbers 72074014 and 72004012).
Abstract: Purpose: Many science, technology and innovation (STI) resources are attached with several different labels. To assign the resulting labels automatically to an instance of interest, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task in the literature. Furthermore, several open-source tools implementing these approaches have also been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to evaluate comprehensively seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in the following aspects: statement, data quality, and purposes. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn impact the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of conclusions by employing more rigorous control over variables through expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, it is expected that the efficacy of multi-label classification tasks will be significantly improved, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical structure of labels and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
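Since the evaluation above hinges on Macro F1 versus Micro F1, a small pure-Python scorer (toy labels, not the paper's datasets) makes the distinction concrete: Macro F1 averages per-label F1 scores, so rare labels count as much as frequent ones, while Micro F1 pools all true/false positives and negatives before computing a single F1:

```python
def f1(tp, fp, fn):
    """F1 = 2*tp / (2*tp + fp + fn); defined as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_micro_f1(y_true, y_pred, n_labels):
    """y_true/y_pred: lists of label-index sets, one set per document."""
    per_label = []
    TP = FP = FN = 0
    for lbl in range(n_labels):
        tp = sum(lbl in t and lbl in p for t, p in zip(y_true, y_pred))
        fp = sum(lbl not in t and lbl in p for t, p in zip(y_true, y_pred))
        fn = sum(lbl in t and lbl not in p for t, p in zip(y_true, y_pred))
        per_label.append(f1(tp, fp, fn))
        TP, FP, FN = TP + tp, FP + fp, FN + fn
    macro = sum(per_label) / n_labels
    micro = f1(TP, FP, FN)
    return macro, micro

# Toy corpus: label 2 is rare and always missed, so Macro F1
# punishes that failure far more than Micro F1 does.
y_true = [{0}, {0, 1}, {1}, {2}]
y_pred = [{0}, {0}, {1}, {0}]
macro, micro = macro_micro_f1(y_true, y_pred, 3)
```

This asymmetry is exactly why unbalanced document-label distributions move the two scores apart on real-world data.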
Funding: Supported by the Shandong Provincial Key Research Project of Undergraduate Teaching Reform (No. Z2022218), the Fundamental Research Funds for the Central Universities (No. 202113028), the Graduate Education Promotion Program of Ocean University of China (No. HDJG20006), and the Sailing Laboratory of Ocean University of China.
Abstract: The tell tail is usually placed on the triangular sail to display the running state of the airflow on the sail surface. Accurately judging the drift of the tell tail during sailing is of great significance for achieving the best sailing effect. It is normally difficult for sailors, affected by strong sunlight and visual fatigue, to keep an eye on the tell tail for long periods and accurately judge its changes. We therefore adopt computer vision technology to help sailors judge the changes of the tell tail with ease. This paper proposes, for the first time, a method to classify sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification data set built on expert interpretation of tell tail states under different sea wind conditions. Considering that the expressive capability of computational features varies across visual tasks, the paper focuses on five tell tail computing features, which are recoded by an autoencoder and classified by an SVM classifier, building on the deep features obtained through a ResNet network in the experiments. All experimental samples were randomly divided into five groups; in each run, four groups were used as the training set to train the classifier and the remaining group was used as the test set. The highest recognition accuracy achieved with the ResNet network was 80.26%. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RP23044).
Abstract: Lung cancer is a leading cause of global mortality. Early detection of pulmonary tumors can significantly enhance the survival rate of patients. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy. Nevertheless, the existing methodologies cannot obtain a high level of specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage utilizes the improved U-Net architecture to segment candidate nodules extracted from the lung lobes. Subsequently, the improved AlexNet architecture is employed to classify lung cancer. During the first stage, the proposed model demonstrates a Dice accuracy of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated using the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Across various evaluation parameters, the proposed technique exhibits remarkable performance compared with existing methods.
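The segmentation metrics quoted above (Dice, precision, recall) come straight from overlap counts between a predicted mask and a ground-truth mask; a minimal sketch with invented 10-pixel masks, not LIDC-IDRI data:

```python
def seg_metrics(pred, truth):
    """Dice, precision, recall from two equal-length 0/1 masks."""
    tp = sum(p and t for p, t in zip(pred, truth))          # predicted nodule, is nodule
    fp = sum(p and not t for p, t in zip(pred, truth))      # predicted nodule, is background
    fn = sum(t and not p for p, t in zip(pred, truth))      # missed nodule pixel
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return dice, precision, recall

# Illustrative masks: the prediction misses two true nodule pixels
# but raises no false alarms, so precision is perfect and recall is not.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1]
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
dice, precision, recall = seg_metrics(pred, truth)
```

On real scans the masks are 2-D or 3-D arrays, but the counting reduces to the same three totals.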
Funding: Supported by grants from the National Natural Science Foundation of China (Grant No. 82172660), the Hebei Province Graduate Student Innovation Project (Grant No. CXZZBS2023001), and the Baoding Natural Science Foundation (Grant No. H2272P015).
Abstract: Among central nervous system-associated malignancies, glioblastoma (GBM) is the most common and has the highest mortality rate. The high heterogeneity of GBM cell types and the complex tumor microenvironment frequently lead to tumor recurrence and sudden relapse in patients treated with temozolomide. In precision medicine, research on GBM treatment is increasingly focusing on molecular subtyping to precisely characterize the cellular and molecular heterogeneity, as well as the refractory nature of GBM toward therapy. A deep understanding of the different molecular expression patterns of GBM subtypes is critical. Researchers have recently proposed four-subtype or three-subtype schemes for classifying GBM molecular subtypes. The various molecular subtypes of GBM show significant differences in gene expression patterns and biological behaviors. These subtypes also exhibit high plasticity in their regulatory pathways, oncogene expression, tumor microenvironment alterations, and differential responses to standard therapy. Herein, we summarize the current molecular typing scheme of GBM and the major molecular/genetic characteristics of each subtype. Furthermore, we review the mesenchymal transition mechanisms of GBM under various regulators.
Funding: Supported by the Scientific Initiation Scholarship Programme (PIBIC) of the Bahia State Research Support Foundation, the Doctorate Scholarship Program of the Coordination for the Improvement of Higher Education Personnel, the Scientific Initiation Scholarship Programme (PIBIC) of the National Council for Scientific and Technological Development, and the CNPq Research Productivity Fellowship.
Abstract: BACKGROUND: Helicobacter pylori (H. pylori) infection has been well established as a significant risk factor for several gastrointestinal disorders. The urea breath test (UBT) has emerged as a leading non-invasive method for detecting H. pylori. Despite numerous studies confirming its substantial accuracy, the reliability of UBT results is often compromised by inherent limitations. These findings underscore the need for a rigorous statistical synthesis to clarify and reconcile the diagnostic accuracy of the UBT for the diagnosis of H. pylori infection. AIM: To determine and compare the diagnostic accuracy of ^(13)C-UBT and ^(14)C-UBT for H. pylori infection in adult patients with dyspepsia. METHODS: We conducted an independent search of the PubMed/MEDLINE, EMBASE, and Cochrane Central databases until April 2022. Our search included diagnostic accuracy studies that evaluated at least one of the index tests (^(13)C-UBT or ^(14)C-UBT) against a reference standard. We used the QUADAS-2 tool to assess the methodological quality of the studies. We utilized the bivariate random-effects model to calculate sensitivity, specificity, positive and negative test likelihood ratios (LR+ and LR-), as well as the diagnostic odds ratio (DOR), and their 95% confidence intervals. We conducted subgroup analyses based on urea dosing, time after urea administration, and assessment technique. To investigate a possible threshold effect, we conducted Spearman correlation analysis, and we generated summary receiver operating characteristic (SROC) curves to assess heterogeneity. Finally, we visually inspected a funnel plot and used Egger's test to evaluate publication bias. RESULTS: Both index tests showed high pooled diagnostic accuracy, endorsing both as reliable diagnostic tools in clinical practice. CONCLUSION: In summary, our study demonstrated that ^(13)C-UBT outperforms ^(14)C-UBT, making it the preferred diagnostic approach. Additionally, our results emphasize the significance of carefully considering urea dosage, assessment timing, and measurement techniques for both tests to enhance diagnostic precision. Nevertheless, it is crucial for researchers and clinicians to evaluate the strengths and limitations of our findings before implementing them in practice.
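The pooled measures in the meta-analysis above all derive from per-study 2×2 tables; a minimal sketch (the counts below are invented, not taken from the meta-analysis) shows how sensitivity, specificity, the likelihood ratios, and the DOR relate to one another:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity, LR+, LR-, and diagnostic odds ratio
    from a 2x2 table of an index test against a reference standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # how much a positive result raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative result lowers the odds
    dor = lr_pos / lr_neg        # equals (tp*tn)/(fp*fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical UBT-style study: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives.
sens, spec, lr_pos, lr_neg, dor = diagnostic_measures(90, 5, 10, 95)
```

The bivariate random-effects model used in the study pools these quantities across studies while modeling their correlation; the single-study arithmetic above is the building block it pools.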
基金the National Social Science Foundation of China(Grant No.22BTJ035).
Abstract: The classification of functional data has drawn much attention in recent years. The main challenge is representing infinite-dimensional functional data by finite-dimensional features while utilizing those features to achieve better classification accuracy. In this paper, we propose a mean-variance-based (MV) feature weighting method for classifying functional data or functional curves. In the feature extraction stage, each sample curve is approximated by B-splines to transfer the features to the coefficients of the spline basis. After that, a feature weighting approach based on statistical principles is introduced by comprehensively considering the between-class differences and within-class variations of the coefficients. We also introduce a scaling parameter to adjust the gap between the weights of features. The new feature weighting approach can adaptively enhance noteworthy local features while mitigating the impact of confusing features. Algorithms for feature-weighted K-nearest neighbor and support vector machine classifiers are both provided. Moreover, the new approach can be well integrated into existing functional data classifiers, such as the generalized functional linear model and functional linear discriminant analysis, resulting in more accurate classification. The performance of the mean-variance-based classifiers is evaluated in simulation studies and on real data. The results show that the new feature weighting approach significantly improves classification accuracy for complex functional data.
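A pure-Python sketch of the mean-variance weighting idea (my reading of the abstract, not the authors' exact formula): each basis coefficient gets a weight that grows with the between-class separation of its class means and shrinks with its pooled within-class variance, and a scaling exponent `alpha` plays the role of the paper's gap-adjusting parameter:

```python
def mv_weights(class_samples, alpha=1.0):
    """class_samples: {label: list of feature vectors (spline coefficients)}.
    Returns one normalized weight per feature: between-class spread of
    the class means over pooled within-class variance, raised to alpha."""
    n_feat = len(next(iter(class_samples.values()))[0])
    weights = []
    for j in range(n_feat):
        means = {c: sum(v[j] for v in vs) / len(vs)
                 for c, vs in class_samples.items()}
        grand = sum(means.values()) / len(means)
        between = sum((m - grand) ** 2 for m in means.values()) / len(means)
        within = sum((v[j] - means[c]) ** 2
                     for c, vs in class_samples.items() for v in vs) \
                 / sum(len(vs) for vs in class_samples.values())
        weights.append((between / (within + 1e-12)) ** alpha)
    total = sum(weights)
    return [w / total for w in weights]

# Feature 0 separates the two classes; feature 1 is pure noise,
# so it should receive a much smaller weight.
data = {"A": [[0.0, 5.1], [0.2, 4.9]],
        "B": [[1.0, 5.0], [1.2, 5.2]]}
w = mv_weights(data, alpha=1.0)
```

Raising `alpha` above 1 widens the gap between informative and confusing features; values below 1 flatten it, which is the adaptivity the abstract describes.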
Abstract: Intrusion detection is a predominant task that monitors and protects the network infrastructure. Therefore, many datasets have been published and investigated by researchers to analyze and understand the problem of intrusion prediction and detection. In particular, the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD) dataset is an extensively used benchmark for evaluating intrusion detection systems (IDSs), as it incorporates various network traffic attacks. It is worth mentioning that a large number of studies have tackled the problem of intrusion detection using machine learning models, but the performance of these models often decreases when they are evaluated on new attacks. This has led to the utilization of deep learning techniques, which have showcased significant potential for processing large datasets and thereby improving detection accuracy. For that reason, this paper focuses on the role of stacking deep learning models, including a convolutional neural network (CNN) and a deep neural network (DNN), in improving the intrusion detection rate on the NSL-KDD dataset. Each base model is trained on the NSL-KDD dataset to extract significant features. Once the base models have been trained, the stacking process proceeds to the second stage, where a simple meta-model is trained on the predictions generated by the proposed base models. Combining the predictions allows the meta-model to distinguish between different classes of attacks and increase the detection rate. Our experimental evaluations using the NSL-KDD dataset have shown the efficacy of stacking deep learning models for intrusion detection. The performance of the ensemble of base models, combined with the meta-model, exceeds the performance of the individual models. Our stacking model has attained an accuracy of 99% and an average F1-score of 93% in the multi-classification scenario. Besides, the training time of the proposed ensemble model is lower than that of benchmark techniques, demonstrating its efficiency and robustness.
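The two-stage stacking arrangement described above can be sketched generically. The lambdas below are toy stand-ins for the trained CNN/DNN base models, and the meta-model here merely averages probability blocks, whereas the paper trains a learned meta-model on the base predictions:

```python
def stack_predict(base_models, meta_model, x):
    """Level 0: each base model emits class probabilities for x.
    Level 1: the meta-model maps the concatenated probabilities to a class."""
    meta_features = []
    for model in base_models:
        meta_features.extend(model(x))
    return meta_model(meta_features)

# Toy stand-ins: two "base models" scoring 3 traffic classes
# (e.g. normal / DoS / probe), ignoring their input here.
base_a = lambda x: [0.7, 0.2, 0.1]
base_b = lambda x: [0.6, 0.3, 0.1]

def averaging_meta(features, n_classes=3):
    """Sum each class's probabilities across the base-model blocks."""
    pooled = [sum(features[i::n_classes]) for i in range(n_classes)]
    return pooled.index(max(pooled))

pred = stack_predict([base_a, base_b], averaging_meta, x=None)
```

The key structural point survives the simplification: the meta-model never sees raw inputs, only the base models' outputs, which is what lets it learn how the base models disagree across attack classes.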
Funding: This work is supported by the EIAS (Emerging Intelligent Autonomous Systems) Data Science Lab, Prince Sultan University, Kingdom of Saudi Arabia, which paid the APC.
Abstract: The software development process mostly depends on accurately identifying both essential and optional features. Initially, user needs are typically expressed in free-form language, requiring significant time and human resources to translate these into clear functional and non-functional requirements. To address this challenge, various machine learning (ML) methods have been explored to automate the understanding of these requirements, aiming to reduce time and human effort. However, existing techniques often struggle with complex instructions and large-scale projects. In our study, we introduce an innovative approach known as the Functional and Non-functional Requirements Classifier (FNRC). By combining the traditional random forest algorithm with the Accuracy Sliding Window (ASW) technique, we develop optimal sub-ensembles that surpass the initial classifier's accuracy while using fewer trees. Experimental results demonstrate that our FNRC methodology performs robustly across different datasets, achieving a balanced precision of 75% on the PROMISE dataset and an impressive recall of 85% on the CCHIT dataset. Both datasets consistently maintain an F-measure around 64%, highlighting FNRC's ability to effectively balance precision and recall in diverse scenarios. These findings contribute to more accurate and efficient software development processes, increasing the probability of achieving successful project outcomes.
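One plausible reading of the Accuracy Sliding Window idea (an interpretation for illustration, not the paper's implementation): slide a fixed-size window over the trained forest's trees, score each contiguous sub-ensemble on validation data, and keep the window that beats the full ensemble, thereby using fewer trees:

```python
def majority_vote(trees, x):
    """Each tree is a callable returning a class label for x."""
    votes = [t(x) for t in trees]
    return max(set(votes), key=votes.count)

def accuracy(trees, data):
    return sum(majority_vote(trees, x) == y for x, y in data) / len(data)

def best_sub_ensemble(trees, val_data, window):
    """Slide a fixed-size window over the tree list; return the
    sub-ensemble with the highest validation accuracy, defaulting
    to the full ensemble when no window improves on it."""
    best, best_acc = trees, accuracy(trees, val_data)
    for i in range(len(trees) - window + 1):
        sub = trees[i:i + window]
        acc = accuracy(sub, val_data)
        if acc > best_acc:
            best, best_acc = sub, acc
    return best, best_acc

# Toy "trees": threshold stumps on a scalar feature; two of them
# (thresholds 0, 0) are accurate, three (threshold 2) are not.
trees = [lambda x, t=t: int(x > t) for t in [0, 0, 2, 2, 2]]
val = [(0, 0), (1, 1), (2, 1), (3, 1)]
sub, acc = best_sub_ensemble(trees, val, window=3)
```

Here the full five-tree vote scores 0.5 on the toy validation set, while the first three-tree window scores 1.0, so the smaller sub-ensemble wins.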
Abstract: Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge in predicting depression using social media posts is that existing studies do not focus on predicting the intensity of depression in social media texts but rather perform only binary classification of depression; moreover, noisy data make it difficult to predict the true depression in social media text. This study began by collecting relevant tweets and generating a corpus of 210,000 public tweets using Twitter public application programming interfaces (APIs). A strategy was devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm was developed to annotate the data into three depression classes: 'Mild,' 'Moderate,' and 'Severe,' based on International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers were applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model was then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increased the depression classification performance to an 84% F1 score and 90% accuracy compared with the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost the model's performance by combining several other classifiers and assigning weights to the individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89%, 5% higher than FastText alone, and an accuracy of 93%, 3% higher than FastText alone. The proposed model better captures the contextual features of the relatively small sample classes and aids in early prediction of depression intensity from tweets with impactful performance.
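Weighted soft voting as described above can be sketched simply. The weights here are illustrative, set proportional to each model's standalone F1, which is one natural reading of "according to their individual performances"; the probabilities are invented:

```python
def weighted_soft_vote(prob_lists, weights):
    """Each model yields per-class probabilities; sum them weighted by
    per-model performance and pick the arg-max class."""
    n_classes = len(prob_lists[0])
    scores = [sum(w * probs[c] for probs, w in zip(prob_lists, weights))
              for c in range(n_classes)]
    return scores.index(max(scores)), scores

# Three models scoring the classes (Mild, Moderate, Severe);
# weights proportional to hypothetical F1 scores of 0.89, 0.84, 0.70.
probs = [
    [0.2, 0.5, 0.3],    # model 1 (strongest): leans Moderate
    [0.4, 0.35, 0.25],  # model 2: leans Mild
    [0.5, 0.3, 0.2],    # model 3 (weakest): leans Mild
]
weights = [0.89, 0.84, 0.70]
label, scores = weighted_soft_vote(probs, weights)
```

Note that two of the three models prefer Mild, yet Moderate wins: soft voting lets the strongest model's confident probability outweigh a numerical majority, which hard majority voting cannot do.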
Funding: Institute of Information & Communications Technology Planning & Evaluation, Grant/Award Number: 2022-0-00074.
Abstract: Few-shot image classification is the task of classifying novel classes using extremely limited labelled samples. To perform classification using the limited samples, one solution is to learn the feature alignment (FA) information between the labelled and unlabelled sample features. Most FA methods use the feature mean as the class prototype and calculate the correlation between the prototype and unlabelled features to learn an alignment strategy. However, mean prototypes tend to degenerate informative features because spatial features at the same position may not be equally important for the final classification, leading to inaccurate correlation calculations. Therefore, the authors propose an effective intraclass FA strategy that aggregates semantically similar spatial features from an adaptive reference prototype in low-dimensional feature space to obtain an informative prototype feature map for precise correlation computation. Moreover, a dual correlation module to learn the hard and soft correlations was developed by the authors. This module combines the correlation information between the prototype and unlabelled features in both the original and learnable feature spaces, aiming to produce a comprehensive cross-correlation between the prototypes and unlabelled features. Using both FA and cross-attention modules, our model can maintain informative class features and capture important shared features for classification. Experimental results on three few-shot classification benchmarks show that the proposed method outperformed related methods and resulted in a 3% performance boost in the 1-shot setting when the proposed module was inserted into the related methods.
Funding: Funded by the National Natural Science Foundation of China (No. 32170788), National High Level Hospital Clinical Research Funding (No. 2022-PUMCH-B-023), and the Beijing Natural Science Foundation (No. 7232123).
Abstract: Objective: To assess the diagnostic accuracy of bowel sound analysis for irritable bowel syndrome (IBS) with a systematic review and meta-analysis. Methods: We searched the MEDLINE, Embase, Cochrane Library, Web of Science, and IEEE Xplore databases until September 2023. Cross-sectional and case-control studies on the diagnostic accuracy of bowel sound analysis for IBS were identified. We estimated the pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio with 95% confidence intervals (CIs), plotted a summary receiver operating characteristic curve, and evaluated the area under the curve. Results: Four studies were included. The pooled diagnostic sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.94 (95%CI, 0.87-0.97), 0.89 (95%CI, 0.81-0.94), 8.43 (95%CI, 4.81-14.78), 0.07 (95%CI, 0.03-0.15), and 118.86 (95%CI, 44.18-319.75), respectively, with an area under the curve of 0.97 (95%CI, 0.95-0.98). Conclusions: Computerized bowel sound analysis is a promising tool for IBS. However, limited high-quality data make the results' validity and applicability questionable. There is a need for more diagnostic test accuracy studies and better wearable devices for the monitoring and analysis of IBS.
Funding: Supported by the Major Program of the National Natural Science Foundation of China (NSFC12292980, NSFC12292984, NSFC12031016), the National Key R&D Program of China (2023YFA1009000, 2023YFA1009004, 2020YFA0712203, 2020YFA0712201), the Beijing Natural Science Foundation (BNSFZ210003), and the Department of Science, Technology and Information of the Ministry of Education (8091B042240).
Abstract: Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: a global feature extraction layer, a local feature extraction layer, and a multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism with a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On a brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that HMAC-Net also generalizes well.
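To make the "linear sparse attention" idea concrete, here is a minimal sketch of one common linear-attention formulation, in which a positive feature map replaces the softmax so the key-value aggregate is computed once, giving O(N) rather than O(N²) cost. The `relu + eps` feature map and all shapes are illustrative assumptions; the paper's exact mechanism may differ.

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """O(N) attention with a positive feature map phi(x) = relu(x) + eps
    in place of softmax (a sketch of one 'linear' formulation, not
    necessarily HMAC-Net's exact design)."""
    phi_q, phi_k = np.maximum(Q, 0) + eps, np.maximum(K, 0) + eps
    kv = phi_k.T @ V                  # (d, d_v): aggregate keys/values once
    norm = phi_q @ phi_k.sum(axis=0)  # (n,): per-query normalisation
    return (phi_q @ kv) / norm[:, None]

rng = np.random.default_rng(1)
Q = rng.normal(size=(6, 4)); K = rng.normal(size=(6, 4)); V = rng.normal(size=(6, 4))
out = linear_attention(Q, K, V)
print(out.shape)  # (6, 4)
```

Because the per-query weights are non-negative and sum to one, each output row is a convex combination of the value rows, just as in softmax attention.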
Funding: Supported by the National Natural Science Foundation of China (No. 62201457) and the Natural Science Foundation of Shaanxi Province (Nos. 2022JQ-668, 2022JQ-588).
Abstract: Convolutional neural networks (CNNs) have an excellent ability to model locally contextual information. However, CNNs struggle to describe long-range semantic features, which leads to relatively low classification accuracy on hyperspectral images. To address this problem, this article proposes an algorithm based on multi-scale fusion and a transformer network for hyperspectral image classification. First, low-level spatial-spectral features are extracted by a multi-scale residual structure. Second, an attention module is introduced to focus on the more important spatial-spectral information. Finally, high-level semantic features are represented and learned by a token learner and an improved transformer encoder. The proposed algorithm is compared with six classical hyperspectral classification algorithms on real hyperspectral images. The experimental results show that the proposed algorithm effectively improves the land-cover classification accuracy of hyperspectral images.
Funding: Supported by the Municipal Government and School (Hospital) Joint Funding Programme of Guangzhou (Nos. 2023A03J0174, 2023A03J0188) and the State Key Laboratories' Youth Program of China (No. 83000-32030003).
Abstract: ●AIM: To establish a classification for congenital cataracts that can facilitate individualized treatment and help identify individuals with a high likelihood of different visual outcomes. ●METHODS: Consecutive patients diagnosed with congenital cataracts and undergoing surgery between January 2005 and November 2021 were recruited. Data on visual outcomes and the phenotypic characteristics of ocular biometry and the anterior and posterior segments were extracted from the patients' medical records. A hierarchical cluster analysis was performed. The main outcome measure was the identification of distinct clusters of eyes with congenital cataracts. ●RESULTS: A total of 164 children (299 eyes) were divided into two clusters based on their ocular features. Cluster 1 (96 eyes) had a shorter axial length (mean±SD, 19.44±1.68 mm), a low prevalence of macular abnormalities (1.04%), and no retinal abnormalities or posterior cataracts. Cluster 2 (203 eyes) had a greater axial length (mean±SD, 20.42±2.10 mm) and a higher prevalence of macular abnormalities (8.37%), retinal abnormalities (98.52%), and posterior cataracts (4.93%). Compared with the eyes in Cluster 2 (57.14%), those in Cluster 1 (71.88%) had a 2.2 times higher chance of good best-corrected visual acuity [<0.7 logMAR; OR (95% CI), 2.20 (1.25–3.81); P=0.006]. ●CONCLUSION: This retrospective study categorizes congenital cataracts into two distinct clusters, each associated with a different likelihood of visual outcomes. This innovative classification may enable the personalization and prioritization of early interventions for patients who may gain the greatest benefit, thereby making strides toward precision medicine in the field of congenital cataracts.
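For readers unfamiliar with the odds-ratio comparison in the RESULTS, a crude 2×2 computation can be sketched as follows. The counts are back-calculated from the reported percentages (71.88% of 96 eyes, 57.14% of 203 eyes) and are illustrative only; the reported OR of 2.20 was presumably estimated from a model with covariate adjustment, so the crude value differs.

```python
def odds_ratio(a, b, c, d):
    # a: group 1 with outcome, b: group 1 without,
    # c: group 2 with outcome, d: group 2 without
    return (a * d) / (b * c)

# Counts back-calculated from the abstract's percentages (illustrative only)
cluster1_good, cluster1_total = 69, 96    # 71.88% of 96 eyes with good BCVA
cluster2_good, cluster2_total = 116, 203  # 57.14% of 203 eyes with good BCVA
crude = odds_ratio(cluster1_good, cluster1_total - cluster1_good,
                   cluster2_good, cluster2_total - cluster2_good)
print(round(crude, 2))  # crude OR; the reported 2.20 is presumably adjusted
```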
Funding: Sponsored by the National Natural Science Foundation of China (Nos. 62172353, 62302114, and U20B2046), the Future Network Scientific Research Fund Project (No. FNSRFP-2021-YB-48), and the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education (No. 1221045).
Abstract: Bitcoin is widely used as the most classic electronic currency for various electronic services such as exchanges, gambling, and marketplaces, as well as for scams such as high-yield investment projects. Identifying the services operated by a Bitcoin address can help determine the risk level of that address and build an alert model accordingly. Feature engineering can also be used to flesh out labelled addresses and to analyze the current state of Bitcoin on a small scale. In this paper, we address the problem of identifying multiple classes of Bitcoin services; for individual addresses that lack significant features and are therefore poorly classified, we propose a Bitcoin address identification scheme based on joint multi-model prediction using the mapping relationship between addresses and entities. The innovations of the method are to (1) extract as many valuable features as possible when an address is given, to facilitate the multi-class service identification task; and (2) unlike the general supervised-model approach, propose a joint prediction scheme for multiple learners based on address-entity mapping relationships. Specifically, after obtaining the overall features, the address classification and entity clustering tasks are performed separately, and the results are subjected to graph-based maximization consensus. The final result is anchored to the individual address classification results while satisfying, as far as possible, the constraint that addresses within an entity behave similarly. By testing and evaluating over 26,000 Bitcoin addresses, we show that our feature extraction method captures more useful features. In addition, the combined multi-learner model obtained results that exceeded the baseline classifier, reaching an accuracy of 77.4%.
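A much-simplified stand-in for the consensus step can be sketched as follows: per-address class predictions are harmonised so that addresses mapped to the same entity share the entity's majority label. This is only an illustration of the address-entity constraint; the paper's graph-based maximization consensus is more elaborate, and the addresses, entities, and labels below are hypothetical.

```python
from collections import Counter

def entity_consensus(address_preds, address_to_entity):
    """Force addresses in the same entity to share the entity's majority
    label (a simplified stand-in for graph-based maximization consensus)."""
    by_entity = {}
    for addr, ent in address_to_entity.items():
        by_entity.setdefault(ent, []).append(address_preds[addr])
    majority = {ent: Counter(preds).most_common(1)[0][0]
                for ent, preds in by_entity.items()}
    return {addr: majority[ent] for addr, ent in address_to_entity.items()}

# Hypothetical addresses, entity mapping, and per-address predictions
preds = {"a1": "exchange", "a2": "gambling", "a3": "exchange", "a4": "scam"}
entities = {"a1": "E1", "a2": "E1", "a3": "E1", "a4": "E2"}
print(entity_consensus(preds, entities))
# a1-a3 collapse to E1's majority label "exchange"; a4 keeps "scam"
```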
Funding: Funded by the National Natural Science Foundation of China (Nos. 62072212, 62302218), the Development Project of Jilin Province of China (Nos. 20220508125RC, 20230201065GX, 20240101364JC), the National Key R&D Program (No. 2018YFC2001302), and the Jilin Provincial Key Laboratory of Big Data Intelligent Cognition (No. 20210504003GH).
Abstract: Locomotor intent classification has become a research hotspot due to its importance to the development of assistive robotics and wearable devices. Previous work has achieved impressive performance in classifying steady locomotion states. However, it remains challenging for these methods to attain high accuracy when facing transitions between steady locomotion states, owing to the similarities between the information of the transitions and that of their adjacent steady states. Furthermore, most of these methods rely solely on data and overlook the objective laws governing physical activities, resulting in lower accuracy, particularly when encountering complex locomotion modes such as transitions. To address these deficiencies, we propose the locomotion rule embedding long short-term memory (LSTM) network with attention (LREAL) for human locomotor intent classification, with a particular focus on transitions, using data from fewer sensors (two inertial measurement units and four goniometers). The LREAL network consists of two levels: one responsible for distinguishing between steady states and transitions, and the other for the accurate identification of locomotor intent. Each classifier in these levels is composed of multiple LSTM layers and an attention mechanism. To introduce real-world motion rules and apply constraints to the network, prior knowledge was added via a rule-modulating block. The method was tested on the ENABL3S dataset, which contains continuous locomotion data for seven steady states and twelve transition states. Experimental results showed that the LREAL network could recognize locomotor intents with average accuracies of 99.03% and 96.52% for the steady and transition states, respectively. It is worth noting that the LREAL network's accuracy for transition-state recognition improved by 0.18% compared with other state-of-the-art networks, while using data from fewer sensors.
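The idea of constraining a classifier with motion rules can be illustrated with a toy post-hoc mask: classes that violate an allowed-transition table are zeroed out before the argmax. The transition table, state names, and scores below are hypothetical, and the paper's rule-modulating block operates inside the network rather than as a post-processing step.

```python
import numpy as np

# Hypothetical rule table: states reachable from each current steady state
ALLOWED = {
    "walk":  {"walk", "stair_ascent", "ramp_ascent", "stand"},
    "stand": {"stand", "walk"},
}

def rule_modulated_prediction(scores, labels, current_state):
    """Zero out classes that violate the locomotion rules before argmax
    (a simplified stand-in for the paper's rule-modulating block)."""
    mask = np.array([1.0 if lab in ALLOWED[current_state] else 0.0
                     for lab in labels])
    return labels[int(np.argmax(scores * mask))]

labels = np.array(["walk", "stand", "stair_ascent", "ramp_ascent", "run"])
scores = np.array([0.10, 0.05, 0.20, 0.15, 0.50])  # raw classifier scores
print(rule_modulated_prediction(scores, labels, "stand"))
# -> "walk": "run" is masked out for a standing user, so the best allowed class wins
```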
Funding: Equinor financed the R&D project; the Institute of Science and Technology of Petroleum Geophysics of Brazil supported this research.
Abstract: We apply stochastic seismic inversion and Bayesian facies classification for porosity modeling and igneous rock identification in the presalt interval of the Santos Basin. This integration of seismic and well-derived information enhances reservoir characterization. Stochastic inversion and Bayesian classification are powerful tools because they permit addressing the uncertainties in the model. We used the ES-MDA algorithm to achieve the realizations equivalent to the percentiles P10, P50, and P90 of acoustic impedance, a novel method for acoustic inversion in presalt. The facies were divided into five classes: reservoir 1, reservoir 2, tight carbonates, clayey rocks, and igneous rocks. To deal with the overlaps in the acoustic impedance values of the facies, we included geological information through a priori probabilities, indicating that structural highs are reservoir-dominated. To illustrate our approach, we conducted porosity modeling using facies-related rock-physics models for rock-physics inversion in an area with a well drilled in a coquina bank, and evaluated the thickness and extension of an igneous intrusion near the carbonate-salt interface. The modeled porosity and the classified seismic facies are in good agreement with those observed in the wells. Notably, the coquina bank shows an improvement in porosity towards the top. The a priori probability model was crucial for limiting the clayey rocks to the structural lows. In Well B, the hit rate for igneous rock in the three scenarios is higher than 60%, showing excellent thickness-prediction capability.
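The role of the a priori probability can be sketched with a toy Bayesian classifier: the posterior for each facies is the Gaussian likelihood of the observed acoustic impedance times a location-dependent prior, so the same impedance value can classify differently on a structural high versus a structural low. All facies names, impedance statistics, and priors below are hypothetical, chosen only to make the overlap-resolution effect visible.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def classify_facies(impedance, facies, likelihood_params, prior):
    """Posterior ~ likelihood(impedance | facies) * prior(facies);
    the prior varies with structural position."""
    post = np.array([gaussian_pdf(impedance, *likelihood_params[f]) * prior[f]
                     for f in facies])
    return facies[int(np.argmax(post))]

facies = ["reservoir", "tight_carbonate", "clayey"]
# Hypothetical per-facies impedance mean and std (overlapping distributions)
params = {"reservoir": (8.0, 0.8), "tight_carbonate": (11.0, 1.0), "clayey": (9.0, 0.9)}
prior_high = {"reservoir": 0.6, "tight_carbonate": 0.3, "clayey": 0.1}  # structural high
prior_low = {"reservoir": 0.1, "tight_carbonate": 0.3, "clayey": 0.6}   # structural low

z = 8.8  # an observed impedance in the reservoir/clayey overlap zone
print(classify_facies(z, facies, params, prior_high),
      classify_facies(z, facies, params, prior_low))
# -> reservoir clayey: the prior flips the call between high and low positions
```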