Mueller matrix imaging is emerging for the quantitative characterization of pathological microstructures and is especially sensitive to fibrous structures. Liver fibrosis is a characteristic of many types of chronic liver diseases. The clinical diagnosis of liver fibrosis requires time-consuming multiple staining processes that specifically target fibrous structures. The staining proficiency of technicians and the subjective visualization of pathologists may introduce inconsistency into clinical diagnosis. Mueller matrix imaging can reduce the multiple staining processes and provide quantitative diagnostic indicators to characterize liver fibrosis tissues. In this study, a fiber-sensitive polarization feature parameter (PFP) was derived through forward sequential feature selection (SFS) and linear discriminant analysis (LDA) to target the identification of fibrous structures. Then, the Pearson correlation coefficients and the statistical T-tests between the fiber-sensitive PFP image textures and the liver fibrosis tissues were calculated. The results show that the gray level run length matrix (GLRLM)-based run entropy, which measures the heterogeneity of the PFP image, was the most correlated with the changes of liver fibrosis tissues across four stages, with a Pearson correlation of 0.6919. The results also indicate that the highest Pearson correlation of 0.9996 was achieved through linear regression predictions from a combination of the PFP image textures. This study demonstrates the potential of deriving a fiber-sensitive PFP to reduce the multiple staining process and provide texture-based quantitative diagnostic indicators for the staging of liver fibrosis.
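As a rough sketch of the pipeline this abstract describes (forward SFS with an LDA scorer, then Pearson correlation against fibrosis stage), the following Python fragment uses scikit-learn on synthetic data; the feature count, sample size, and stage labels are placeholders, not the study's data.

```python
# Hedged sketch, not the study's code: forward SFS with an LDA scorer,
# then Pearson correlation between the derived scalar feature and the
# fibrosis stage. Feature count, sample size, and labels are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))         # 16 candidate polarization parameters
stage = rng.integers(0, 4, size=200)   # fibrosis stage 0-3 (placeholder)

# Forward sequential selection of the most discriminative parameters
sfs = SequentialFeatureSelector(
    LinearDiscriminantAnalysis(), n_features_to_select=4, direction="forward"
)
sfs.fit(X, stage)

# Project the selected parameters onto one LDA axis -> a scalar "PFP"
lda = LinearDiscriminantAnalysis(n_components=1)
pfp = lda.fit_transform(X[:, sfs.get_support()], stage).ravel()

r, p = pearsonr(pfp, stage)
print(f"Pearson r = {r:.4f} (p = {p:.3g})")
```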
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: feature extraction and selection during feature engineering, and imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
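As a toy illustration of the kind of handcrafted texture feature radiomics pipelines extract (not any cited study's actual pipeline), a gray-level co-occurrence matrix (GLCM) feature can be computed with scikit-image:

```python
# Toy illustration of a handcrafted radiomic texture feature (not any
# cited study's pipeline): GLCM contrast/homogeneity via scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = np.random.default_rng(0).integers(0, 8, (64, 64), dtype=np.uint8)
glcm = graycomatrix(img, distances=[1], angles=[0], levels=8, normed=True)
print("contrast    =", graycoprops(glcm, "contrast")[0, 0])
print("homogeneity =", graycoprops(glcm, "homogeneity")[0, 0])
```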
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
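The 1D-to-2D conversion can be sketched as follows; this illustrates only the general FFT-based folding technique, since CAFFN's exact block design is not reproduced here, and the signal is synthetic.

```python
# Sketch of the FFT-based 1D -> 2D folding idea only; CAFFN's actual
# block design is not reproduced here, and the signal is synthetic.
import numpy as np

def series_to_2d(x: np.ndarray) -> np.ndarray:
    """Fold a 1D series into (n_cycles, period) using its dominant period."""
    amp = np.abs(np.fft.rfft(x))
    amp[0] = 0.0                         # ignore the DC component
    freq = max(int(np.argmax(amp)), 1)   # dominant frequency bin
    period = max(len(x) // freq, 1)
    n_cycles = len(x) // period
    return x[: n_cycles * period].reshape(n_cycles, period)

t = np.arange(512)
x = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.default_rng(1).normal(size=512)
print(series_to_2d(x).shape)             # -> (16, 32): 2D map for conv layers
```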
The technology of drilling tests makes it possible to obtain the strength parameters of rock accurately in situ. In this paper, a new rock cutting analysis model that considers the influence of the rock crushing zone (RCZ) is built. The formula for the ultimate cutting force is established based on the limit equilibrium principle. The relationship between digital drilling parameters (DDP) and the c-φ parameters (the DDP-cφ formula, where c refers to the cohesion and φ refers to the internal friction angle) is derived, and the response of drilling parameters and cutting ratio to the strength parameters is analyzed. A drilling-based measuring method for the c-φ parameters of rock is then constructed. A laboratory verification test is completed, and the difference in results between the drilling test and the compression test is less than 6%. On this basis, in-situ rock drilling tests in a traffic tunnel and a coal mine roadway are carried out, and the strength parameters of the surrounding rock are effectively tested. The average difference ratio of the results is less than 11%, which verifies the effectiveness of the proposed method for obtaining strength parameters based on digital drilling. This study provides methodological support for field testing of rock strength parameters.
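For context, c and φ are the Mohr-Coulomb strength parameters, which relate shear strength τ to the normal stress σ on the failure plane as

$$\tau = c + \sigma \tan\varphi,$$

where c is the cohesion and φ the internal friction angle; the paper's specific DDP-cφ formula is not reproduced here.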
Traumatic spinal cord injury is potentially catastrophic and can lead to permanent disability or even death. China has the largest population of patients with traumatic spinal cord injury. Previous studies of traumatic spinal cord injury in China have mostly been regional in scope; national-level studies have been rare. To the best of our knowledge, no national-level study of treatment status and economic burden has been performed. This retrospective study aimed to examine the epidemiological and clinical features, treatment status, and economic burden of traumatic spinal cord injury in China at the national level. We included 13,465 traumatic spinal cord injury patients who were injured between January 2013 and December 2018 and treated in 30 hospitals in 11 provinces/municipalities representing all geographical divisions of China. Patient epidemiological and clinical features, treatment status, and total and daily costs were recorded. Trends in the percentage of traumatic spinal cord injuries among all hospitalized patients and among patients hospitalized in the orthopedic department, and in the cost of care, were assessed by annual percentage change using the Joinpoint Regression Program. The percentage of traumatic spinal cord injuries among all hospitalized patients and among patients hospitalized in the orthopedic department did not significantly change overall (annual percentage change, -0.5% and 2.1%, respectively). A total of 10,053 (74.7%) patients underwent surgery. Only 2.8% of patients who underwent surgery did so within 24 hours of injury. A total of 2005 (14.9%) patients were treated with high-dose (≥500 mg) methylprednisolone sodium succinate/methylprednisolone (MPSS/MP); 615 (4.6%) received it within 8 hours. The total cost for acute traumatic spinal cord injury decreased over the study period (-4.7%), while the daily cost did not significantly change (1.0% increase). Our findings indicate that public health initiatives should aim at improving hospitals' ability to complete early surgery within 24 hours, which is associated with improved sensorimotor recovery, and at increasing awareness of clinical guidelines on high-dose MPSS/MP to reduce use of this treatment, for which evidence is insufficient.
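For reference, the annual percentage change (APC) statistic that Joinpoint reports can be sketched as a log-linear least-squares fit; the rates below are made-up numbers, not study data.

```python
# Sketch of the annual percentage change (APC) statistic Joinpoint
# reports: fit log(rate) = b0 + b1*year, then APC = 100*(exp(b1) - 1).
# The rates below are made-up numbers, not study data.
import numpy as np

year = np.arange(2013, 2019)
rate = np.array([1.32, 1.28, 1.30, 1.25, 1.22, 1.18])  # placeholder rates

b1, b0 = np.polyfit(year, np.log(rate), 1)
apc = 100.0 * (np.exp(b1) - 1.0)
print(f"APC = {apc:.1f}% per year")
```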
BACKGROUND Left ventricular (LV) remodeling and diastolic function in people with heart failure (HF) are correlated with iron status; however, the causality is uncertain. This Mendelian randomization (MR) study investigated the bidirectional causal relationship between systemic iron parameters and LV structure and function in a preserved ejection fraction population. METHODS Transferrin saturation (TSAT), total iron binding capacity (TIBC), and serum iron and ferritin levels were extracted as instrumental variables for iron parameters from meta-analyses of public genome-wide association studies. Individuals without a history of myocardial infarction, HF, or LV ejection fraction (LVEF) <50% (n=16,923) in the UK Biobank Cardiovascular Magnetic Resonance Imaging Study constituted the outcome dataset. The dataset included LV end-diastolic volume, LV end-systolic volume, LV mass (LVM), and the LVM-to-end-diastolic volume ratio (LVMVR). We used a two-sample bidirectional MR study with inverse variance weighting (IVW) as the primary analysis method, along with estimation methods using different algorithms to improve the robustness of the results. RESULTS In the IVW analysis, a one standard deviation (SD) increase in TSAT significantly correlated with decreased LVMVR (β=-0.1365; 95% confidence interval [CI]: -0.2092 to -0.0638; P=0.0002) after Bonferroni adjustment. Conversely, no significant relationships were observed between the other iron parameters and LV parameters. After Bonferroni correction, reverse MR analysis showed that a one SD increase in LVEF significantly correlated with decreased TSAT (β=-0.0699; 95% CI: -0.1087 to -0.0311; P=0.0004). No evidence of heterogeneity or pleiotropic effects was observed in the analysis. CONCLUSIONS We demonstrated a causal relationship between TSAT and LV remodeling and function in a preserved ejection fraction population.
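A minimal sketch of the fixed-effect IVW estimator used as the primary analysis: each variant's Wald ratio is combined with inverse-variance weights. All numbers below are illustrative, not the study's genetic association estimates.

```python
# Illustrative fixed-effect IVW estimate: combine per-SNP Wald ratios
# beta_Y/beta_X with weights beta_X^2/se_Y^2. Numbers are placeholders,
# not the study's genetic association estimates.
import numpy as np

beta_x = np.array([0.12, 0.09, 0.15, 0.08])           # SNP -> TSAT effects
beta_y = np.array([-0.020, -0.011, -0.027, -0.009])   # SNP -> LVMVR effects
se_y = np.array([0.006, 0.005, 0.008, 0.004])         # SEs of beta_y

w = beta_x**2 / se_y**2                               # inverse-variance weights
beta_ivw = np.sum(w * (beta_y / beta_x)) / np.sum(w)
se_ivw = np.sqrt(1.0 / np.sum(w))
print(f"IVW beta = {beta_ivw:.4f} +/- {1.96 * se_ivw:.4f} (95% CI half-width)")
```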
Recently, there have been some attempts to apply Transformers to 3D point cloud classification. In order to reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. The source code of this paper is available at https://github.com/yahuiliu99/PointConT.
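The content-based attention idea can be sketched as follows (not the authors' code): cluster points by feature similarity, then compute self-attention only within each cluster, so distant but similar points can still attend to one another; the clustering method and sizes here are assumptions.

```python
# Illustrative sketch (not the authors' code): cluster points by feature
# similarity, then run scaled dot-product self-attention only within
# each cluster, so distant but similar points can attend to each other.
import numpy as np
from scipy.cluster.vq import kmeans2

def content_attention(feat: np.ndarray, n_clusters: int = 4) -> np.ndarray:
    _, label = kmeans2(feat, n_clusters, minit="points", seed=0)
    out = np.empty_like(feat)
    for c in range(n_clusters):
        idx = label == c
        f = feat[idx]
        if f.size == 0:                     # kmeans2 may leave a cluster empty
            continue
        scores = f @ f.T / np.sqrt(f.shape[1])
        scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        out[idx] = (scores / scores.sum(axis=1, keepdims=True)) @ f
    return out

feat = np.random.default_rng(0).normal(size=(256, 32))
print(content_attention(feat).shape)        # -> (256, 32)
```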
BACKGROUND Gastric cystica profunda (GCP) represents a rare condition characterized by cystic dilation of gastric glands within the mucosal and/or submucosal layers. GCP is often linked to, or may progress into, early gastric cancer (EGC). AIM To provide a comprehensive evaluation of the endoscopic features of GCP while assessing the efficacy of endoscopic treatment, thereby offering guidance for diagnosis and treatment. METHODS This retrospective study involved 104 patients with GCP who underwent endoscopic resection. Alongside demographic and clinical data, regular patient follow-ups were conducted to assess local recurrence. RESULTS Among the 104 patients diagnosed with GCP who underwent endoscopic resection, 12.5% had a history of previous gastric procedures. The primary site predominantly affected was the cardia (38.5%, n=40). GCP commonly exhibited intraluminal growth (99%), regular presentation (74.0%), and ulcerative mucosa (61.5%). The leading endoscopic feature was the mucosal lesion type (59.6%, n=62). The average maximum diameter was 20.9±15.3 mm, with mucosal involvement in 60.6% (n=63). Procedures lasted 73.9±57.5 min, achieving complete resection in 91.3% (n=95). Recurrence (4.8%) was managed via either surgical intervention (n=1) or endoscopic resection (n=4). Final pathology confirmed that 59.6% of GCP cases were associated with EGC. Univariate analysis indicated that elderly males were more susceptible to GCP associated with EGC. Conversely, multivariate analysis identified lesion morphology and endoscopic features as significant risk factors. Survival analysis demonstrated no statistically significant difference in recurrence between GCP with and without EGC (P=0.72). CONCLUSION The findings suggested that endoscopic resection might serve as an effective and minimally invasive treatment for GCP with or without EGC.
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which has led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although the PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model can achieve the best scores for all datasets. The optimal number of extracted dimensions has been identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
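One common way to set the number of extracted dimensions that the paper says is often overlooked is to keep enough PCA components to reach a target explained-variance share; a hedged sketch on synthetic data:

```python
# Sketch of one way to set the often-overlooked number of extracted
# dimensions: keep enough PCA components to explain a target variance
# share. The dataset here is synthetic, not a NIDS benchmark.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.default_rng(0).normal(size=(1000, 40))
cum = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_dims = int(np.searchsorted(cum, 0.95) + 1)   # smallest k reaching 95%
print(f"keep {n_dims} of {X.shape[1]} dimensions")
```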
While single-modal visible light images or infrared images provide limited information, infrared light captures significant thermal radiation data, whereas visible light excels in presenting detailed texture information. Combining images obtained from both modalities allows for leveraging their respective strengths and mitigating individual limitations, resulting in high-quality images with enhanced contrast and rich texture details. Such capabilities hold promising applications in advanced visual tasks including target detection, instance segmentation, military surveillance, and pedestrian detection, among others. This paper introduces a novel approach: a dual-branch decomposition fusion network based on an AutoEncoder (AE), which decomposes multi-modal features into intensity and texture information for enhanced fusion. A local contrast enhancement module (CEM) and a texture detail enhancement module (DEM) are devised to process the decomposed images, followed by image fusion through the decoder. The proposed loss function ensures effective retention of key information from the source images of both modalities. Extensive comparisons and generalization experiments demonstrate the superior performance of our network in preserving pixel intensity distribution and retaining texture details. The qualitative results show the advantages of the fused details and local contrast. In the quantitative experiments, entropy (EN), mutual information (MI), structural similarity (SSIM), and other metrics improved and exceeded the SOTA (state-of-the-art) models overall.
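As a minimal sketch of one reported metric, the entropy (EN) of a fused image is the Shannon entropy of its gray-level histogram; the input below is synthetic, and higher EN indicates more retained information.

```python
# Minimal sketch of the entropy (EN) metric: Shannon entropy of the
# fused image's gray-level histogram. Input is synthetic 8-bit noise.
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

img = np.random.default_rng(0).integers(0, 256, (128, 128), dtype=np.uint8)
print(f"EN = {image_entropy(img):.3f} bits")   # near 8 bits for uniform noise
```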
In classification problems, datasets often contain a large number of features, but not all of them are relevant for accurate classification. In fact, irrelevant features may even hinder classification accuracy. Feature selection aims to alleviate this issue by minimizing the number of features in the subset while simultaneously minimizing the classification error rate. Single-objective optimization approaches employ an evaluation function designed as an aggregate function with a parameter, but the results obtained depend on the value of the parameter. To eliminate this parameter's influence, the problem can be reformulated as a multi-objective optimization problem. The Whale Optimization Algorithm (WOA) is widely used in optimization problems because of its simplicity and ease of implementation. In this paper, we propose a multi-strategy assisted multi-objective WOA (MSMOWOA) to address feature selection. To enhance the algorithm's search ability, we integrate multiple strategies, such as Levy flight, the Grey Wolf Optimizer, and adaptive mutation, into it. Additionally, we utilize an external repository to store non-dominated solution sets, and grid technology is used to maintain diversity. Results on fourteen University of California Irvine (UCI) datasets demonstrate that our proposed method effectively removes redundant features and improves classification performance. The source code can be accessed at https://github.com/zc0315/MSMOWOA.
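For background, the basic single-objective WOA position update (Mirjalili and Lewis, 2016) that MSMOWOA builds on combines a shrinking encircling move with a spiral move; the sketch below shows only this core, not the multi-strategy, multi-objective variant.

```python
# Sketch of the basic single-objective WOA position update (Mirjalili
# and Lewis, 2016) that MSMOWOA extends; the multi-objective archive,
# Levy flight, and other added strategies are not shown.
import numpy as np

rng = np.random.default_rng(0)

def woa_step(x, best, t, t_max, b=1.0):
    a = 2.0 * (1.0 - t / t_max)                 # decreases linearly 2 -> 0
    if rng.random() < 0.5:                      # shrinking encircling move
        A = 2 * a * rng.random(x.shape) - a
        C = 2 * rng.random(x.shape)
        return best - A * np.abs(C * best - x)
    l = rng.uniform(-1, 1, x.shape)             # spiral move toward the best
    return np.abs(best - x) * np.exp(b * l) * np.cos(2 * np.pi * l) + best

x, best = rng.random(10), rng.random(10)
print(woa_step(x, best, t=1, t_max=100))
```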
We propose a new method to generate a surface quadrilateral mesh by calculating a globally defined parameterization with feature constraints. In the field of quadrilateral generation with features, cross field methods are well-known because of their superior performance in feature preservation. Methods based on metrics are popular due to their sound theoretical basis, especially the Ricci flow algorithm. The major part of cross field methods, the Poisson equation, is challenging to solve in three dimensions directly. In cases with a large number of elements, its computational costs are expensive, whereas those of the metric-based methods are not. In addition, an appropriate initial value plays a positive role in the solution of the Poisson equation, and this initial value can be obtained from the Ricci flow algorithm. We therefore combine the metric-based methods with the cross field methods: we use the discrete dynamic Ricci flow algorithm to generate an initial value for the Poisson equation, which speeds up the solution of the equation and ensures the convergence of the computation. Numerical experiments show that our method is effective in generating quadrilateral meshes for models with features, and the quality of the quadrilateral mesh is reliable.
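For reference, the discrete Ricci flow that supplies the initial value evolves per-vertex conformal factors $u_i$ by the curvature error, in its standard gradient-descent form

$$u_i^{(t+1)} = u_i^{(t)} + \epsilon\,\big(\bar{K}_i - K_i^{(t)}\big),$$

where $K_i$ is the discrete Gaussian curvature (angle deficit) at vertex $i$ and $\bar{K}_i$ is its prescribed target; the step size and target curvatures here are generic, not the paper's specific settings.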
Cultural relic line graphics serve as a crucial form of traditional artifact information documentation; they are a simple and intuitive product that is cheaper to display than 3D models. Dimensionality reduction is undoubtedly necessary for line drawings. However, most existing methods for artifact drawing rely on the principles of orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of a cultural relic is complex. Therefore, conformal mapping was introduced as a dimensionality reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D and 3D factor that measures the importance of points in describing surface features, the vertex weight, was designed. The selection threshold for feature guideline extraction was then determined based on the differences between the vertex weight and shape index distributions. The feasibility and stability were verified through experiments conducted on real cultural relic surface data. The results demonstrate the ability of the method to address the challenges associated with the automatic generation of line drawings for complex surfaces. The extraction method and the obtained results will be useful for drawing, displaying, and publicizing line graphics of cultural relics.
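A hedged sketch of the shape index, the curvature descriptor paired with the vertex weight above: k1 and k2 are principal curvatures, and sign conventions vary with the choice of surface normal, so the mapping below (cup -1 through saddle 0 to cap +1) is one common variant rather than necessarily the paper's exact definition.

```python
# One common variant of the Koenderink shape index (sign conventions
# vary with normal orientation, so this is not necessarily the paper's
# exact definition): k1 >= k2 are principal curvatures; the output maps
# cup (-1) through saddle (0) to cap (+1).
import numpy as np

def shape_index(k1: np.ndarray, k2: np.ndarray) -> np.ndarray:
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

print(shape_index(np.array([1.0, 1.0]), np.array([1.0, -1.0])))  # cap, saddle
```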
Reviewing the empirical and theoretical relationships between various parameters is a good way to understand more about contact binary systems. In this investigation, two-dimensional (2D) relationships for P–M_V(system), P–L_1,2, M_1,2–L_1,2, and q–L_ratio were revisited. The sample comprises 118 contact binary systems with an orbital period shorter than 0.6 days whose absolute parameters were estimated based on the Gaia Data Release 3 parallax. We reviewed previous studies on 2D relationships and updated six parameter relationships. Markov chain Monte Carlo and machine learning methods were used, and the outcomes were compared. For comparison, we selected 22 contact binary systems from eight previous studies that had light curve solutions using spectroscopic data. The results show that these systems are in good agreement with the results of this study.
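A hedged sketch of fitting a linear P-L relation by MCMC with the emcee package, one of the methods named above; the six data points, priors, and noise model are placeholders, not the 118-system Gaia sample.

```python
# Hedged sketch: fit a linear P-L relation by MCMC with emcee, one of
# the methods named above. The six data points, priors, and noise model
# are placeholders, not the 118-system Gaia sample.
import numpy as np
import emcee

P = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55])      # period (days)
L = 1.5 + 2.0 * P + np.random.default_rng(0).normal(0, 0.05, P.size)

def log_prob(theta):
    a, b, log_s = theta
    if not -5.0 < log_s < 1.0:                            # flat prior bounds
        return -np.inf
    s2 = np.exp(2.0 * log_s)
    return -0.5 * np.sum((L - (a + b * P)) ** 2 / s2 + np.log(2 * np.pi * s2))

ndim, nwalkers = 3, 16
p0 = [1.0, 1.0, -2.0] + 1e-3 * np.random.default_rng(1).normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
a_med, b_med, _ = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
print(f"L = {a_med:.2f} + {b_med:.2f} P")
```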
BACKGROUND The severity of nonalcoholic fatty liver disease (NAFLD) and lipid metabolism are related to the occurrence of colorectal polyps. Liver-controlled attenuation parameters (liver-CAPs) have been established to predict the prognosis of hepatic steatosis patients. AIM To explore the risk factors associated with colorectal polyps in patients with NAFLD by analyzing liver-CAPs and establishing a diagnostic model. METHODS Patients who were diagnosed with colorectal polyps in the Department of Gastroenterology of our hospital between June 2021 and April 2022 composed the case group, and those with no important abnormalities composed the control group. The area under the receiver operating characteristic curve was used to assess diagnostic efficiency. Differences were considered statistically significant when P<0.05. RESULTS The median triglyceride (TG) and liver-CAP values in the case group were significantly greater than those in the control group (1.74 vs 1.05 mmol/L; 282 vs 254 dB/m, P<0.05). TG and liver-CAP were found to be independent risk factors for colorectal polyps, with ORs of 2.338 (95% CI: 1.154–4.733) and 1.019 (95% CI: 1.006–1.033), respectively (P<0.05). There was no difference in diagnostic efficacy between liver-CAP alone and TG combined with liver-CAP (TG+CAP) (P>0.05). When the liver-CAP was greater than 291 dB/m, colorectal polyps were more likely to occur. CONCLUSION The levels of TG and liver-CAP in patients with colorectal polyps are significantly greater than those of patients without polyps. Liver-CAP alone can be used to diagnose NAFLD with colorectal polyps.
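How such diagnostic-efficiency comparisons are typically computed can be sketched as below, with ROC AUC for liver-CAP alone versus a logistic model of TG plus liver-CAP; the labels and measurements are simulated, not patient data.

```python
# Illustrative comparison of diagnostic efficiency (ROC AUC) for
# liver-CAP alone vs TG combined with liver-CAP; labels and values are
# simulated, not patient data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)                        # 1 = colorectal polyps
cap = 254 + 28 * y + rng.normal(0, 25, 300)        # liver-CAP (dB/m)
tg = 1.05 + 0.7 * y + rng.normal(0, 0.6, 300)      # triglycerides (mmol/L)

X = np.c_[tg, cap]
combo = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
print(f"AUC CAP = {roc_auc_score(y, cap):.3f}, "
      f"AUC TG+CAP = {roc_auc_score(y, combo):.3f}")
```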
Heart disease is a primary cause of death worldwide and is notoriously difficult to cure without a proper diagnosis. Hence, machine learning (ML) can help reduce and better understand symptoms associated with heart disease. This study aims to develop a framework for the automatic and accurate classification of heart disease utilizing machine learning algorithms, grid search (GS), and the Aquila optimization algorithm. In the proposed approach, feature selection is used to identify characteristics of heart disease as a method of dimensionality reduction. First, feature selection is accomplished with the help of the Aquila algorithm. Then, the optimal combination of hyperparameters is selected using grid search. The experiments were conducted with three datasets from Kaggle: the Heart Failure Prediction Dataset, Heart Disease Binary Classification, and the Heart Disease Dataset. Two classes can be distinguished: diseased and healthy (i.e., unaffected). The Histogram Gradient Boosting (HGB) classifier produced the highest Weighted Sum Metric (WSM) score of 98.65% on the Heart Failure Prediction Dataset, while the Decision Tree (DT) classifier achieved the highest WSM score of 87.64% on the Heart Disease Health Indicators Dataset. Measures of accuracy, specificity, sensitivity, and other metrics are used to evaluate the proposed approach. The presented method demonstrates superior performance compared to various state-of-the-art algorithms.
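A minimal sketch (not the paper's exact pipeline) of grid search over a histogram gradient boosting classifier of the kind reported best on the Heart Failure Prediction Dataset; the data and parameter grid are assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): grid search over a
# histogram gradient boosting classifier of the kind reported best on
# the Heart Failure Prediction Dataset; data and grid are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=12, random_state=0)
grid = GridSearchCV(
    HistGradientBoostingClassifier(random_state=0),
    param_grid={"learning_rate": [0.05, 0.1, 0.2], "max_depth": [3, 5, None]},
    cv=5,
    scoring="accuracy",
)
grid.fit(X, y)
print(grid.best_params_, f"cv accuracy = {grid.best_score_:.3f}")
```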
In modern warfare, radar countermeasures are becoming increasingly fierce, and enemy jamming times and patterns change more and more randomly. It is challenging for radar to efficiently identify jamming and obtain precise parameter information, particularly in low signal-to-noise ratio (SNR) situations. In this paper, an approach to intelligent recognition and complex jamming parameter estimation based on joint time-frequency distribution features is proposed to address this challenging issue. First, a joint algorithm based on YOLOv5 convolutional neural networks (CNNs) is proposed, which is used to achieve jamming signal classification and preliminary parameter estimation. Furthermore, an accurate estimation algorithm for key jamming parameters is constructed by comprehensively utilizing the chi-square statistical test, feature region search, position regression, spectrum interpolation, etc., which realizes accurate estimation of the jamming carrier frequency, relative delay, Doppler frequency shift, and other parameters. Finally, the approach improves performance for complex jamming recognition and parameter estimation under low SNR; according to simulation and real-data verification results, the recognition rate can reach 98% at −15 dB SNR.
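Producing the joint time-frequency image that a YOLOv5-style recognizer would consume can be sketched with scipy's STFT; the noisy chirp below merely stands in for a jamming signal.

```python
# Sketch of building the joint time-frequency image a YOLOv5-style
# recognizer would consume; the noisy linear FM chirp below merely
# stands in for a jamming signal.
import numpy as np
from scipy.signal import stft

fs = 10_000                                    # sample rate (Hz)
t = np.arange(0, 0.2, 1 / fs)
sig = np.cos(2 * np.pi * (500 * t + 4000 * t**2))               # linear FM chirp
sig = sig + 0.5 * np.random.default_rng(0).normal(size=t.size)  # additive noise

f, tau, Z = stft(sig, fs=fs, nperseg=256)
tf_image = 20 * np.log10(np.abs(Z) + 1e-12)    # dB magnitude map
print(tf_image.shape)                          # (freq bins, time frames)
```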
With the rapid spread of Internet information and the proliferation of fake news, the detection of fake news is becoming more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlations between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. Domain features are then used as an enhancement feature, with an attention mechanism to fully capture the finer-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the suboptimal model; on the English dataset FakeNewsNet, the F1 value is 0.845, 1.8% higher than that of the suboptimal model, demonstrating that the approach is advanced and feasible.
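A hedged sketch of a Bi-LSTM semantic branch with attention pooling of the kind described; the vocabulary size, dimensions, and pooling choice are assumptions, not the paper's configuration.

```python
# Hedged sketch of a Bi-LSTM semantic branch with attention pooling of
# the kind described; vocabulary size, dimensions, and pooling are
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SemanticBranch(nn.Module):
    def __init__(self, vocab=5000, emb=128, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, tokens):                   # tokens: (batch, seq)
        h, _ = self.bilstm(self.embed(tokens))   # (batch, seq, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention over positions
        return (w * h).sum(dim=1)                # (batch, 2*hidden) feature

feat = SemanticBranch()(torch.randint(0, 5000, (4, 30)))
print(feat.shape)                                # torch.Size([4, 128])
```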
Multimodal lung tumor medical images can provide anatomical and functional information for the same lesion, for example Positron Emission Tomography (PET), Computed Tomography (CT), and PET-CT. How to effectively utilize the anatomical and functional information of lesions and improve network segmentation performance are key questions. To solve this problem, the Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guide-YNet) is proposed in this paper. First, a double-encoder single-decoder U-Net is used as the backbone of this model; a single-encoder single-decoder U-Net is used to generate the saliency guided feature from the PET image and transmit it into the skip connections of the backbone, and the high sensitivity of PET images to tumors is used to guide the network to accurately locate lesions. Second, a Cross Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Third, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimension local attention. The proposed method is verified on lung multimodal medical image datasets, and the results show that the Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (VoE), and Relative Volume Difference (RVD) of the proposed method on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively. It is of great significance for computer-aided diagnosis.
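For reference, the Dice similarity coefficient reported above can be computed on binary masks as Dice = 2|A∩B|/(|A|+|B|); the masks below are synthetic.

```python
# Minimal sketch of the Dice similarity coefficient on binary masks:
# Dice = 2|A & B| / (|A| + |B|). The masks below are synthetic squares.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.zeros((64, 64), bool); b[20:52, 20:52] = True
print(f"Dice = {dice(a, b):.3f}")
```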
Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, features of different sizes and different directions are not sufficiently extracted from lung X-ray images. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. First, the Multi-scale Residual Feature Extraction Module (MRFEM) is designed to effectively extract multi-scale features; the MRFEM uses dilated convolutions with different expansion rates to increase the receptive field. Second, the Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure of different-sized convolutions to transmit directional features layer by layer and focuses on the target region to enhance the feature information. Third, the Axial Compression Former Module (ACFM) is designed to perform global calculations to enhance the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative experiments and ablation experiments were carried out. On the COVID-19 RADIOGRAPHY DATABASE, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively. On the chest X-ray dataset, the Accuracy, Recall, Precision, F1 Score, and Specificity of MSD-Net are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%, respectively. This model effectively improves the accuracy of lung image recognition and provides an important clinical reference for pneumonia computer-aided diagnosis.
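The multi-scale idea in the MRFEM can be sketched with parallel dilated convolutions; the channel counts and dilation rates below are assumptions, not the paper's exact configuration.

```python
# Sketch of the multi-scale idea in the MRFEM: parallel 3x3 convolutions
# with different dilation rates enlarge the receptive field; channel
# counts and rates are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.fuse = nn.Conv2d(3 * ch, ch, 1)   # 1x1 conv fuses the branches

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

y = MultiScaleBlock()(torch.randn(1, 32, 56, 56))
print(y.shape)                                  # torch.Size([1, 32, 56, 56])
```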
基金supported by the National Natural Science Foundation of China(NSFC)(Nos.11974206 and 61527826).
文摘Mueller matrix imaging is emerging for the quantitative characterization of pathological microstructures and is especially sensitive to fibrous structures.Liver fibrosis is a characteristic of many types of chronic liver diseases.The clinical diagnosis of liver fibrosis requires time-consuming multiple staining processes that specifically target on fibrous structures.The staining proficiency of technicians and the subjective visualization of pathologists may bring inconsistency to clinical diagnosis.Mueller matrix imaging can reduce the multiple staining processes and provide quantitative diagnostic indicators to characterize liver fibrosis tissues.In this study,a fibersensitive polarization feature parameter(PFP)was derived through the forward sequential feature selection(SFS)and linear discriminant analysis(LDA)to target on the identification of fibrous structures.Then,the Pearson correlation coeffcients and the statistical T-tests between the fiber-sensitive PFP image textures and the liver fibrosis tissues were calculated.The results show the gray level run length matrix(GLRLM)-based run entropy that measures the heterogeneity of the PFP image was most correlated to the changes of liver fibrosis tissues at four stages with a Pearson correlation of 0.6919.The results also indicate the highest Pearson correlation of 0.9996 was achieved through the linear regression predictions of the combination of the PFP image textures.This study demonstrates the potential of deriving a fiber-sensitive PFP to reduce the multiple staining process and provide textures-based quantitative diagnostic indicators for the staging of liver fibrosis.
基金supported in part by the National Natural Science Foundation of China(82072019)the Shenzhen Basic Research Program(JCYJ20210324130209023)+5 种基金the Shenzhen-Hong Kong-Macao S&T Program(Category C)(SGDX20201103095002019)the Mainland-Hong Kong Joint Funding Scheme(MHKJFS)(MHP/005/20),the Project of Strategic Importance Fund(P0035421)the Projects of RISA(P0043001)from the Hong Kong Polytechnic University,the Natural Science Foundation of Jiangsu Province(BK20201441)the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research(SBGJ202103038,SBGJ202102056)the Henan Province Key R&D and Promotion Project(Science and Technology Research)(222102310015)the Natural Science Foundation of Henan Province(222300420575),and the Henan Province Science and Technology Research(222102310322).
文摘Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’anatomy.However,the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians.Moreover,some potentially useful quantitative information in medical images,especially that which is not visible to the naked eye,is often ignored during clinical practice.In contrast,radiomics performs high-throughput feature extraction from medical images,which enables quantitative analysis of medical images and prediction of various clinical endpoints.Studies have reported that radiomics exhibits promising performance in diagnosis and predicting treatment responses and prognosis,demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine.However,radiomics remains in a developmental phase as numerous technical challenges have yet to be solved,especially in feature engineering and statistical modeling.In this review,we introduce the current utility of radiomics by summarizing research on its application in the diagnosis,prognosis,and prediction of treatment responses in patients with cancer.We focus on machine learning approaches,for feature extraction and selection during feature engineering and for imbalanced datasets and multi-modality fusion during statistical modeling.Furthermore,we introduce the stability,reproducibility,and interpretability of features,and the generalizability and interpretability of models.Finally,we offer possible solutions to current challenges in radiomics research.
基金supported in part by the National Natural Science Foundation of China(Grants 62376172,62006163,62376043)in part by the National Postdoctoral Program for Innovative Talents(Grant BX20200226)in part by Sichuan Science and Technology Planning Project(Grants 2022YFSY0047,2022YFQ0014,2023ZYD0143,2022YFH0021,2023YFQ0020,24QYCX0354,24NSFTD0025).
文摘Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within the time series data.Due to the challenges associated with annotating anomaly events,time series reconstruction has become a prevalent approach for unsupervised anomaly detection.However,effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series.In this paper,we propose a cross-dimension attentive feature fusion network for time series anomaly detection,referred to as CAFFN.Specifically,a series and feature mixing block is introduced to learn representations in 1D space.Additionally,a fast Fourier transform is employed to convert the time series into 2D space,providing the capability for 2D feature extraction.Finally,a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection.Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
基金supported by the National Key Research and Development Program of China(No.2023YFC2907600)the National Natural Science Foundation of China(Nos.42077267,42277174 and 52074164)+2 种基金the Natural Science Foundation of Shandong Province,China(No.ZR2020JQ23)the Opening Project of State Key Laboratory of Explosion Science and Technology,Beijing Institute of Technology(No.KFJJ21-02Z)the Fundamental Research Funds for the Central Universities,China(No.2022JCCXSB03).
文摘The technology of drilling tests makes it possible to obtain the strength parameter of rock accurately in situ. In this paper, a new rock cutting analysis model that considers the influence of the rock crushing zone(RCZ) is built. The formula for an ultimate cutting force is established based on the limit equilibrium principle. The relationship between digital drilling parameters(DDP) and the c-φ parameter(DDP-cφ formula, where c refers to the cohesion and φ refers to the internal friction angle) is derived, and the response of drilling parameters and cutting ratio to the strength parameters is analyzed. The drillingbased measuring method for the c-φ parameter of rock is constructed. The laboratory verification test is then completed, and the difference in results between the drilling test and the compression test is less than 6%. On this basis, in-situ rock drilling tests in a traffic tunnel and a coal mine roadway are carried out, and the strength parameters of the surrounding rock are effectively tested. The average difference ratio of the results is less than 11%, which verifies the effectiveness of the proposed method for obtaining the strength parameters based on digital drilling. This study provides methodological support for field testing of rock strength parameters.
基金supported by the National Key Research and Development Project,No.2019YFA0112100(to SF).
文摘Traumatic spinal cord injury is potentially catastrophic and can lead to permanent disability or even death.China has the largest population of patients with traumatic spinal cord injury.Previous studies of traumatic spinal cord injury in China have mostly been regional in scope;national-level studies have been rare.To the best of our knowledge,no national-level study of treatment status and economic burden has been performed.This retrospective study aimed to examine the epidemiological and clinical features,treatment status,and economic burden of traumatic spinal cord injury in China at the national level.We included 13,465 traumatic spinal cord injury patients who were injured between January 2013 and December 2018 and treated in 30 hospitals in 11 provinces/municipalities representing all geographical divisions of China.Patient epidemiological and clinical features,treatment status,and total and daily costs were recorded.Trends in the percentage of traumatic spinal cord injuries among all hospitalized patients and among patients hospitalized in the orthopedic department and cost of care were assessed by annual percentage change using the Joinpoint Regression Program.The percentage of traumatic spinal cord injuries among all hospitalized patients and among patients hospitalized in the orthopedic department did not significantly change overall(annual percentage change,-0.5%and 2.1%,respectively).A total of 10,053(74.7%)patients underwent surgery.Only 2.8%of patients who underwent surgery did so within 24 hours of injury.A total of 2005(14.9%)patients were treated with high-dose(≥500 mg)methylprednisolone sodium succinate/methylprednisolone(MPSS/MP);615(4.6%)received it within 8 hours.The total cost for acute traumatic spinal cord injury decreased over the study period(-4.7%),while daily cost did not significantly change(1.0%increase).Our findings indicate that public health initiatives should aim at improving hospitals’ability to complete early surgery within 24 hours,which is associated with improved sensorimotor recovery,increasing the awareness rate of clinical guidelines related to high-dose MPSS/MP to reduce the use of the treatment with insufficient evidence.
基金funded by the Key Research and Development of the Gansu Province(No.20YF8FA 079)the Construction Project of the Gansu Clinical Medical Research Center(No.18JR2FA003).
文摘BACKGROUND Left ventricular(LV)remodeling and diastolic function in people with heart failure(HF)are correlated with iron status;however,the causality is uncertain.This Mendelian randomization(MR)study investigated the bidirectional causal relationship between systemic iron parameters and LV structure and function in a preserved ejection fraction population.METHODS Transferrin saturation(TSAT),total iron binding capacity(TIBC),and serum iron and ferritin levels were extracted as instrumental variables for iron parameters from meta-analyses of public genome-wide association studies.Individuals without myocardial infarction history,HF,or LV ejection fraction(LVEF)<50%(n=16,923)in the UK Biobank Cardiovascular Magnetic Resonance Imaging Study constituted the outcome dataset.The dataset included LV end-diastolic volume,LV endsystolic volume,LV mass(LVM),and LVM-to-end-diastolic volume ratio(LVMVR).We used a two-sample bidirectional MR study with inverse variance weighting(IVW)as the primary analysis method and estimation methods using different algorithms to improve the robustness of the results.RESULTS In the IVW analysis,one standard deviation(SD)increased in TSAT significantly correlated with decreased LVMVR(β=-0.1365;95%confidence interval[CI]:-0.2092 to-0.0638;P=0.0002)after Bonferroni adjustment.Conversely,no significant relationships were observed between other iron and LV parameters.After Bonferroni correction,reverse MR analysis showed that one SD increase in LVEF significantly correlated with decreased TSAT(β=-0.0699;95%CI:-0.1087 to-0.0311;P=0.0004).No heterogeneity or pleiotropic effects evidence was observed in the analysis.CONCLUSIONS We demonstrated a causal relationship between TSAT and LV remodeling and function in a preserved ejection fraction population.
基金supported in part by the Nationa Natural Science Foundation of China (61876011)the National Key Research and Development Program of China (2022YFB4703700)+1 种基金the Key Research and Development Program 2020 of Guangzhou (202007050002)the Key-Area Research and Development Program of Guangdong Province (2020B090921003)。
文摘Recently, there have been some attempts of Transformer in 3D point cloud classification. In order to reduce computations, most existing methods focus on local spatial attention,but ignore their content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space(content-based), which clusters the sampled points with similar features into the same class and computes the self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves a remarkable performance on point cloud shape classification. Especially, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectN N. Source code of this paper is available at https://github.com/yahuiliu99/PointC onT.
基金Supported by the 74th General Support of China Postdoctoral Science Foundation,No.2023M740675the National Natural Science Foundation of China,No.82170555+2 种基金Shanghai Academic/Technology Research Leader,No.22XD1422400Shuguang Program of Shanghai Education Development Foundation and Shanghai Municipal Education Commission,No.2022SG06Shanghai"Rising Stars of Medical Talent"Youth Development Program,No.20224Z0005.
文摘BACKGROUND Gastric cystica profunda(GCP)represents a rare condition characterized by cystic dilation of gastric glands within the mucosal and/or submucosal layers.GCP is often linked to,or may progress into,early gastric cancer(EGC).AIM To provide a comprehensive evaluation of the endoscopic features of GCP while assessing the efficacy of endoscopic treatment,thereby offering guidance for diagnosis and treatment.METHODS This retrospective study involved 104 patients with GCP who underwent endoscopic resection.Alongside demographic and clinical data,regular patient followups were conducted to assess local recurrence.RESULTS Among the 104 patients diagnosed with GCP who underwent endoscopic resection,12.5%had a history of previous gastric procedures.The primary site predominantly affected was the cardia(38.5%,n=40).GCP commonly exhibited intraluminal growth(99%),regular presentation(74.0%),and ulcerative mucosa(61.5%).The leading endoscopic feature was the mucosal lesion type(59.6%,n=62).The average maximum diameter was 20.9±15.3 mm,with mucosal involvement in 60.6%(n=63).Procedures lasted 73.9±57.5 min,achieving complete resection in 91.3%(n=95).Recurrence(4.8%)was managed via either surgical intervention(n=1)or through endoscopic resection(n=4).Final pathology confirmed that 59.6%of GCP cases were associated with EGC.Univariate analysis indicated that elderly males were more susceptible to GCP associated with EGC.Conversely,multivariate analysis identified lesion morphology and endoscopic features as significant risk factors.Survival analysis demonstrated no statistically significant difference in recurrence between GCP with and without EGC(P=0.72).CONCLUSION The findings suggested that endoscopic resection might serve as an effective and minimally invasive treatment for GCP with or without EGC.
文摘A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems(NIDSs).Consequently,network interruptions and loss of sensitive data have occurred,which led to an active research area for improving NIDS technologies.In an analysis of related works,it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction(FR)and Machine Learning(ML)techniques on NIDS datasets.However,these datasets are different in feature sets,attack types,and network design.Therefore,this paper aims to discover whether these techniques can be generalised across various datasets.Six ML models are utilised:a Deep Feed Forward(DFF),Convolutional Neural Network(CNN),Recurrent Neural Network(RNN),Decision Tree(DT),Logistic Regression(LR),and Naive Bayes(NB).The accuracy of three Feature Extraction(FE)algorithms is detected;Principal Component Analysis(PCA),Auto-encoder(AE),and Linear Discriminant Analysis(LDA),are evaluated using three benchmark datasets:UNSW-NB15,ToN-IoT and CSE-CIC-IDS2018.Although PCA and AE algorithms have been widely used,the determination of their optimal number of extracted dimensions has been overlooked.The results indicate that no clear FE method or ML model can achieve the best scores for all datasets.The optimal number of extracted dimensions has been identified for each dataset,and LDA degrades the performance of the ML models on two datasets.The variance is used to analyse the extracted dimensions of LDA and PCA.Finally,this paper concludes that the choice of datasets significantly alters the performance of the applied techniques.We believe that a universal(benchmark)feature set is needed to facilitate further advancement and progress of research in this field.
基金supported in part by the National Natural Science Foundation of China(Grant No.61971078)Chongqing Education Commission Science and Technology Major Project(No.KJZD-M202301901).
文摘While single-modal visible light images or infrared images provide limited information,infrared light captures significant thermal radiation data,whereas visible light excels in presenting detailed texture information.Com-bining images obtained from both modalities allows for leveraging their respective strengths and mitigating individual limitations,resulting in high-quality images with enhanced contrast and rich texture details.Such capabilities hold promising applications in advanced visual tasks including target detection,instance segmentation,military surveillance,pedestrian detection,among others.This paper introduces a novel approach,a dual-branch decomposition fusion network based on AutoEncoder(AE),which decomposes multi-modal features into intensity and texture information for enhanced fusion.Local contrast enhancement module(CEM)and texture detail enhancement module(DEM)are devised to process the decomposed images,followed by image fusion through the decoder.The proposed loss function ensures effective retention of key information from the source images of both modalities.Extensive comparisons and generalization experiments demonstrate the superior performance of our network in preserving pixel intensity distribution and retaining texture details.From the qualitative results,we can see the advantages of fusion details and local contrast.In the quantitative experiments,entropy(EN),mutual information(MI),structural similarity(SSIM)and other results have improved and exceeded the SOTA(State of the Art)model as a whole.
基金supported in part by the Natural Science Youth Foundation of Hebei Province under Grant F2019403207in part by the PhD Research Startup Foundation of Hebei GEO University under Grant BQ2019055+3 种基金in part by the Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing under Grant KLIGIP-2021A06in part by the Fundamental Research Funds for the Universities in Hebei Province under Grant QN202220in part by the Science and Technology Research Project for Universities of Hebei under Grant ZD2020344in part by the Guangxi Natural Science Fund General Project under Grant 2021GXNSFAA075029.
文摘In classification problems,datasets often contain a large amount of features,but not all of them are relevant for accurate classification.In fact,irrelevant features may even hinder classification accuracy.Feature selection aims to alleviate this issue by minimizing the number of features in the subset while simultaneously minimizing the classification error rate.Single-objective optimization approaches employ an evaluation function designed as an aggregate function with a parameter,but the results obtained depend on the value of the parameter.To eliminate this parameter’s influence,the problem can be reformulated as a multi-objective optimization problem.The Whale Optimization Algorithm(WOA)is widely used in optimization problems because of its simplicity and easy implementation.In this paper,we propose a multi-strategy assisted multi-objective WOA(MSMOWOA)to address feature selection.To enhance the algorithm’s search ability,we integrate multiple strategies such as Levy flight,Grey Wolf Optimizer,and adaptive mutation into it.Additionally,we utilize an external repository to store non-dominant solution sets and grid technology is used to maintain diversity.Results on fourteen University of California Irvine(UCI)datasets demonstrate that our proposed method effectively removes redundant features and improves classification performance.The source code can be accessed from the website:https://github.com/zc0315/MSMOWOA.
基金supported by NSFC Nos.61907005,61720106005,61936002,62272080.
文摘We propose a newmethod to generate surface quadrilateralmesh by calculating a globally defined parameterization with feature constraints.In the field of quadrilateral generation with features,the cross field methods are wellknown because of their superior performance in feature preservation.The methods based on metrics are popular due to their sound theoretical basis,especially the Ricci flow algorithm.The cross field methods’major part,the Poisson equation,is challenging to solve in three dimensions directly.When it comes to cases with a large number of elements,the computational costs are expensive while the methods based on metrics are on the contrary.In addition,an appropriate initial value plays a positive role in the solution of the Poisson equation,and this initial value can be obtained from the Ricci flow algorithm.So we combine the methods based on metric with the cross field methods.We use the discrete dynamic Ricci flow algorithm to generate an initial value for the Poisson equation,which speeds up the solution of the equation and ensures the convergence of the computation.Numerical experiments show that our method is effective in generating a quadrilateral mesh for models with features,and the quality of the quadrilateral mesh is reliable.
Funding: National Natural Science Foundation of China (Nos. 42071444, 42101444).
Abstract: The line graphic of a cultural relic is a crucial form of traditional artifact documentation: a simple and intuitive product that is inexpensive to display compared with a 3D model. Dimensionality reduction is therefore necessary for line drawings. However, most existing methods for artifact drawing rely on the principles of orthographic projection, which cannot avoid angle occlusion and data overlapping when the surface of the relic is complex. Conformal mapping is therefore introduced as a dimensionality reduction approach to compensate for the limitations of orthographic projection. Based on given criteria for assessing surface complexity, this paper proposes a three-dimensional feature guideline extraction method for complex cultural relic surfaces. A combined 2D and 3D factor, the vertex weight, is designed to measure the importance of each point in describing surface features. The selection threshold for feature guideline extraction is then determined from the differences between the vertex weight and shape index distributions. Feasibility and stability were verified through experiments on real cultural relic surface data. The results demonstrate that the method addresses the challenges of automatically generating line drawings for complex surfaces, and the extraction method and its results are useful for the drawing, display, and publicizing of cultural relic line graphics.
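As a small illustration of the shape-index side of this thresholding, the sketch below computes Koenderink's shape index from principal curvatures; the combined vertex weight itself is specific to the paper, so only dummy curvature values are used here.

```python
# Koenderink's shape index from principal curvatures k1 >= k2.
import numpy as np

def shape_index(k1, k2):
    # Maps local shape to [-1, 1]: -1 spherical cup, 0 saddle, +1 spherical cap.
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    return (2 / np.pi) * np.arctan2(k1 + k2, k1 - k2)  # arctan2 handles umbilics

# Dummy principal curvatures; on a real mesh these come from a geometry library.
print(shape_index(np.array([1.0, 1.0, 0.5]), np.array([1.0, -1.0, 0.1])))
# [1.0  0.0  0.63] -> cap, saddle, ridge-like point
```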
Funding: The Binary Systems of South and North (BSN) project (https://bsnp.info/).
Abstract: Reviewing the empirical and theoretical relationships between various parameters is a good way to better understand contact binary systems. In this investigation, two-dimensional (2D) relationships for P–M_V(system), P–L_{1,2}, M_{1,2}–L_{1,2}, and q–L_{ratio} were revisited. The sample comprises 118 contact binary systems with orbital periods shorter than 0.6 days whose absolute parameters were estimated from the Gaia Data Release 3 parallax. We reviewed previous studies on 2D relationships and updated six parameter relationships, using Markov chain Monte Carlo and machine learning methods and comparing their outcomes. For comparison, we selected 22 contact binary systems from eight previous studies that had light curve solutions based on spectroscopic data; these systems are in good agreement with the results of this study.
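As a toy illustration of refitting such a 2D relation, the snippet below performs an ordinary least-squares fit of a hypothetical P–M_V line; the data points and resulting coefficients are placeholders, not the published relation.

```python
# Least-squares fit of a hypothetical P-M_V relation; all values are dummies.
import numpy as np

P = np.array([0.30, 0.35, 0.42, 0.50, 0.58])   # orbital period (days), dummy
M_V = np.array([5.9, 5.4, 4.8, 4.3, 3.9])      # absolute magnitude, dummy

slope, intercept = np.polyfit(P, M_V, 1)
print(f"M_V = {slope:.2f} * P + {intercept:.2f}")
```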
Funding: Supported by the Special Research Project of the Capital's Health Development, No. 2024-3-7037, and the Beijing Clinical Key Specialty Project.
Abstract: BACKGROUND: The severity of nonalcoholic fatty liver disease (NAFLD) and lipid metabolism are related to the occurrence of colorectal polyps. The liver-controlled attenuation parameter (liver-CAP) has been established to predict the prognosis of patients with hepatic steatosis. AIM: To explore the risk factors associated with colorectal polyps in patients with NAFLD by analyzing liver-CAP and establishing a diagnostic model. METHODS: Patients diagnosed with colorectal polyps in the Department of Gastroenterology of our hospital between June 2021 and April 2022 composed the case group, and those with no important abnormalities composed the control group. The area under the receiver operating characteristic curve was used to assess diagnostic efficiency. Differences were considered statistically significant when P < 0.05. RESULTS: The median triglyceride (TG) level and liver-CAP in the case group were significantly greater than those in the control group (1.74 vs 1.05 mmol/L; 282 vs 254 dB/m, P < 0.05). TG and liver-CAP were independent risk factors for colorectal polyps, with odds ratios of 2.338 (95%CI: 1.154–4.733) and 1.019 (95%CI: 1.006–1.033), respectively (P < 0.05). There was no difference in diagnostic efficacy between liver-CAP alone and TG combined with liver-CAP (TG+CAP) (P > 0.05). When liver-CAP exceeded 291 dB/m, colorectal polyps were more likely to occur. CONCLUSION: TG and liver-CAP levels in patients with colorectal polyps are significantly greater than those in patients without polyps, and liver-CAP alone can be used to diagnose NAFLD with colorectal polyps.
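To illustrate the kind of diagnostic model the abstract describes, the sketch below fits a logistic regression on TG and liver-CAP and reports the ROC AUC; all data are synthetic placeholders, not the study's measurements.

```python
# Logistic-regression diagnostic model on synthetic TG / liver-CAP data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
tg = rng.lognormal(0.3, 0.4, n)          # triglycerides, mmol/L (synthetic)
cap = rng.normal(265, 25, n)             # liver-CAP, dB/m (synthetic)
# Synthetic polyp outcome loosely tied to both predictors.
p = 1 / (1 + np.exp(-(0.8 * (tg - 1.3) + 0.05 * (cap - 265))))
y = rng.random(n) < p

X = np.column_stack([tg, cap])
model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```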
Abstract: Heart disease is a leading cause of death worldwide and is notoriously difficult to treat without a proper diagnosis; machine learning (ML) can help reduce and better understand its symptoms. This study develops a framework for the automatic and accurate classification of heart disease using machine learning algorithms, grid search (GS), and the Aquila optimization algorithm. In the proposed approach, feature selection serves as a dimensionality reduction method to identify the characteristics of heart disease: first, feature selection is performed with the Aquila algorithm, and then the optimal combination of hyperparameters is selected via grid search. Experiments were conducted with three Kaggle datasets, the Heart Failure Prediction Dataset, Heart Disease Binary Classification, and the Heart Disease Dataset, each distinguishing two classes: diseased and healthy (i.e., unaffected). The Histogram Gradient Boosting (HGB) classifier produced the highest Weighted Sum Metric (WSM) score of 98.65% on the Heart Failure Prediction Dataset, while the Decision Tree (DT) classifier achieved the highest WSM score of 87.64% on the Heart Disease Health Indicators Dataset. Accuracy, specificity, sensitivity, and other metrics are used to evaluate the proposed approach, which demonstrates superior performance compared with various state-of-the-art algorithms.
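A minimal sketch of the grid-search stage is shown below, with the Aquila-based feature selection abstracted as a precomputed boolean column mask and a synthetic dataset standing in for the Kaggle data.

```python
# Grid search over a histogram gradient-boosting classifier; the Aquila
# feature-selection step is represented only by a placeholder mask.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=13, random_state=0)
mask = np.ones(13, dtype=bool)       # stand-in for the Aquila-selected subset

grid = GridSearchCV(
    HistGradientBoostingClassifier(random_state=0),
    param_grid={"learning_rate": [0.05, 0.1], "max_iter": [100, 200]},
    cv=5, scoring="accuracy")
grid.fit(X[:, mask], y)
print(grid.best_params_, grid.best_score_)
```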
Funding: supported by the Shandong Provincial Natural Science Foundation (ZR2020MF015) and the Aerospace Technology Group Stability Support Project (ZY0110020009).
Abstract: In modern warfare, radar countermeasures are becoming increasingly fierce, and enemy jamming times and patterns change ever more randomly. It is challenging for a radar to efficiently identify jamming and obtain precise parameter information, particularly in low signal-to-noise ratio (SNR) situations. In this paper, an approach to intelligent recognition and complex jamming parameter estimation based on joint time-frequency distribution features is proposed to address this issue. First, a joint algorithm based on YOLOv5 convolutional neural networks (CNNs) is proposed to achieve jamming signal classification and preliminary parameter estimation. Furthermore, an accurate estimation algorithm for key jamming parameters is constructed by comprehensively utilizing the chi-square statistical test, feature region search, position regression, spectrum interpolation, and other techniques, realizing accurate estimation of the jamming carrier frequency, relative delay, Doppler frequency shift, and other parameters. Simulation and real-data verification show that the approach improves complex jamming recognition and parameter estimation under low SNR, with a recognition rate reaching 98% at −15 dB SNR.
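As an illustration of the front end of such a pipeline, the snippet below builds the time-frequency image that a YOLOv5-style detector would consume from a noisy linear-FM signal; the chirp and noise parameters are arbitrary examples.

```python
# STFT-based time-frequency image of a synthetic linear-FM "jamming" signal.
import numpy as np
from scipy.signal import stft

fs = 1e6                                      # sample rate, Hz
t = np.arange(0, 1e-3, 1 / fs)
signal = np.cos(2 * np.pi * (1e4 * t + 5e7 * t ** 2))   # linear FM chirp
signal += 0.5 * np.random.default_rng(0).normal(size=t.size)  # low-SNR noise

f, tt, Z = stft(signal, fs=fs, nperseg=128)
tf_image = 20 * np.log10(np.abs(Z) + 1e-12)   # dB magnitude, detector input
print(tf_image.shape)                         # (freq bins, time frames)
```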
Funding: The authors are highly thankful to the National Social Science Foundation of China (20BXW101, 18XXW015), the Innovation Research Project for the Cultivation of High-Level Scientific and Technological Talents (Top-Notch Talents of the Discipline) (ZZKY2022303), the National Natural Science Foundation of China (Nos. 62102451, 62202496), and the Basic Frontier Innovation Project of Engineering University of People's Armed Police (WJX202316). This work is also supported by the National Natural Science Foundation of China (No. 62172436), Engineering University of PAP's Funding for Scientific Research Innovation Team, Engineering University of PAP's Funding for Basic Scientific Research, Engineering University of PAP's Funding for Education and Teaching, and the Natural Science Foundation of Shaanxi Province (No. 2023-JCYB-584).
Abstract: With the rapid spread of Internet information and of fake news, fake news detection is becoming ever more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To address the weak feature correlation between data from different domains, a model that integrates domain-specific emotional and semantic features for fake news detection is proposed. The method makes full use of the attention mechanism to capture the correlations between different features and effectively improves feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture contextual relevance. Senta-BiLSTM is then used to extract emotional features and predict the probabilities of positive and negative sentiment in the text. Domain features then serve as an enhancement feature, with the attention mechanism capturing the finer-grained emotional features associated with each domain. Finally, the fused features are fed to the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. Experimental results show that on the Chinese dataset Weibo21 the model's F1 score is 0.958, 4.9% higher than that of the sub-optimal model, and on the English dataset FakeNewsNet its F1 score is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the method is both advanced and feasible.
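The skeleton below sketches the fusion idea in PyTorch: Bi-LSTM semantic features, a stubbed two-dimensional sentiment vector standing in for Senta-BiLSTM's output, and a domain embedding combined through multi-head attention. All dimensions and the classifier head are illustrative assumptions.

```python
# Skeleton of semantic + sentiment + domain feature fusion via attention.
import torch
import torch.nn as nn

class FusionDetector(nn.Module):
    def __init__(self, vocab=5000, emb=64, hid=64, n_domains=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.domain_emb = nn.Embedding(n_domains, 2 * hid)
        self.sent_proj = nn.Linear(2, 2 * hid)   # pos/neg sentiment probabilities
        self.attn = nn.MultiheadAttention(2 * hid, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * hid, 2)        # real / fake logits

    def forward(self, tokens, sentiment, domain):
        h, _ = self.bilstm(self.embed(tokens))            # semantic features
        feats = torch.stack([h.mean(1),                   # pooled semantics
                             self.sent_proj(sentiment),   # emotional features
                             self.domain_emb(domain)],    # domain features
                            dim=1)
        fused, _ = self.attn(feats, feats, feats)         # cross-feature attention
        return self.head(fused.mean(1))

logits = FusionDetector()(torch.randint(0, 5000, (2, 30)),
                          torch.rand(2, 2), torch.tensor([0, 3]))
print(logits.shape)  # torch.Size([2, 2])
```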
Funding: supported in part by the National Natural Science Foundation of China (Grant No. 62062003) and the Natural Science Foundation of Ningxia (Grant No. 2023AAC03293).
Abstract: Multimodal lung tumor medical images, such as Positron Emission Tomography (PET), Computed Tomography (CT), and PET-CT, can provide both anatomical and functional information for the same lesion. How to effectively utilize this anatomical and functional information and improve segmentation performance are key questions. To address them, the Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guide-YNet) is proposed in this paper. First, a double-encoder single-decoder U-Net is used as the backbone; a single-encoder single-decoder U-Net generates a saliency-guided feature from the PET image and transmits it into the skip connections of the backbone, exploiting the high sensitivity of PET images to tumors to guide the network to accurately locate lesions. Second, a Cross-Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Third, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimensional local attention. The proposed method is verified on multimodal lung medical image datasets, and the results show that the Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (VOE), and Relative Volume Difference (RVD) of the proposed method on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively, which is of great significance for computer-aided diagnosis.
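A compact sketch of the double-encoder single-decoder idea is given below, with a PET-derived saliency gate on the CT skip connection; the depth, channel counts, and gating are illustrative simplifications, not Guide-YNet's actual modules.

```python
# Two-level double-encoder single-decoder network: CT and PET encoders,
# a PET-derived saliency gate on the CT skip, and one shared decoder.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DualEncoderUNet(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.ct1, self.ct2 = block(1, ch), block(ch, 2 * ch)
        self.pet1, self.pet2 = block(1, ch), block(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec1 = block(4 * ch, ch)   # fuse deep CT + PET features
        self.dec0 = block(3 * ch, ch)   # fuse upsampled + gated skip + PET skip
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, ct, pet):
        c1, p1 = self.ct1(ct), self.pet1(pet)
        c2 = self.ct2(self.pool(c1))
        p2 = self.pet2(self.pool(p1))
        d = self.dec1(torch.cat([c2, p2], dim=1))
        skip = c1 * torch.sigmoid(p1)   # PET saliency gates the CT skip
        d = self.dec0(torch.cat([self.up(d), skip, p1], dim=1))
        return torch.sigmoid(self.out(d))

mask = DualEncoderUNet()(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))
print(mask.shape)  # torch.Size([1, 1, 64, 64])
```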
Funding: supported in part by the National Natural Science Foundation of China (Grant No. 62062003) and the Natural Science Foundation of Ningxia (Grant No. 2023AAC03293).
Abstract: Computer-aided diagnosis of pneumonia based on deep learning is a research hotspot. However, existing approaches do not sufficiently extract features of different sizes and different directions from lung X-ray images. A pneumonia classification model based on multi-scale directional feature enhancement, MSD-Net, is proposed in this paper. The main innovations are as follows. First, a Multi-scale Residual Feature Extraction Module (MRFEM) is designed to effectively extract multi-scale features; it uses dilated convolutions with different dilation rates to enlarge the receptive field. Second, a Multi-scale Directional Feature Perception Module (MDFPM) is designed, which uses a three-branch structure of convolutions of different sizes to transmit directional features layer by layer and focuses on the target region to enhance the feature information. Third, an Axial Compression Former Module (ACFM) is designed to perform global computation and enhance the perception of global features in different directions. To verify the effectiveness of MSD-Net, comparative and ablation experiments were carried out. On the COVID-19 RADIOGRAPHY DATABASE, the accuracy, recall, precision, F1 score, and specificity of MSD-Net are 97.76%, 95.57%, 95.52%, 95.52%, and 98.51%, respectively; on the chest X-ray dataset, they are 97.78%, 95.22%, 96.49%, 95.58%, and 98.11%, respectively. The model effectively improves the accuracy of lung image recognition and provides an important clinical reference for pneumonia computer-aided diagnosis.
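In the spirit of the MRFEM, the sketch below shows a multi-scale dilated-convolution block in which parallel branches with increasing dilation rates enlarge the receptive field before fusion; the rates and channel widths are illustrative assumptions.

```python
# Multi-scale dilated-convolution block with a residual connection.
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    def __init__(self, cin=32, cout=32, rates=(1, 2, 4)):
        super().__init__()
        # Parallel 3x3 branches; padding = dilation keeps the spatial size.
        self.branches = nn.ModuleList(
            nn.Conv2d(cin, cout, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(len(rates) * cout, cout, 1)  # 1x1 fusion
        self.act = nn.ReLU()

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(self.fuse(feats) + x)              # residual connection

y = MultiScaleDilatedBlock()(torch.rand(1, 32, 56, 56))
print(y.shape)  # torch.Size([1, 32, 56, 56])
```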