The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information-content criterion. The informativeness of an etalon descriptor is estimated by the difference between the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical models of the classifier with different options for establishing the correspondence between object and etalon descriptors are considered. The results of experimental modeling of the proposed methods on a database of museum jewelry images are presented. The test sample is formed as a set of images from the etalon database and outside it, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the vote-count threshold on which the classification decision is based has been researched. Modeling revealed that descriptions can be reduced tenfold with full preservation of classification accuracy, while a twentyfold reduction leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. Reduction by the informativeness criterion confirmed that the most significant subset of features for classification can be obtained while guaranteeing a decent level of accuracy.
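The informativeness criterion described above can be sketched in a few lines: an etalon descriptor scores highly when its nearest neighbour among other etalons' descriptors is far away while its nearest neighbour within its own description is close. The Euclidean metric, the data layout, and the helper names below are illustrative assumptions, not the authors' implementation.

```python
import math

def euclid(a, b):
    """Euclidean distance between two descriptor vectors (assumed metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def informativeness(descriptor, own_rest, other_descriptors):
    """Score = nearest distance to OTHER etalon descriptions minus nearest
    distance to the remaining descriptors of the descriptor's OWN etalon.
    Larger scores mean the descriptor separates classes better."""
    d_own = min(euclid(descriptor, d) for d in own_rest)
    d_other = min(euclid(descriptor, d) for d in other_descriptors)
    return d_other - d_own

def reduce_description(own, others, keep):
    """Keep the `keep` most informative descriptors of one etalon."""
    scored = []
    for i, d in enumerate(own):
        rest = own[:i] + own[i + 1:]
        scored.append((informativeness(d, rest, others), i))
    scored.sort(reverse=True)
    return [own[i] for _, i in scored[:keep]]
```

A descriptor that happens to lie near another etalon's cloud gets a negative score and is dropped first, which is exactly the behaviour the reduction relies on.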
With the increasing proportion of encrypted traffic in cyberspace, the classification of encrypted traffic has become a core technology in network supervision, and many different solutions have emerged in this field in recent years. Most methods identify and classify traffic by extracting spatiotemporal characteristics of data flows or byte-level features of packets. However, owing to changes in transmission media such as fiber optics and satellites, temporal features can vary significantly with the communication link and transmission quality, and some spatial features can change because of data reordering and retransmission. Faced with these challenges, identifying encrypted traffic solely from packet byte-level features is significantly difficult. To address this, we propose ComboPacket, a universal packet-level encrypted traffic identification method. It uses convolutional neural networks to extract deep features of the current packet and its contextual information, and employs spatial and channel attention mechanisms to select and locate effective features. Experiments show that ComboPacket can effectively distinguish both encrypted traffic service categories (e.g., File Transfer Protocol, FTP, and Peer-to-Peer, P2P) and encrypted traffic application categories (e.g., BitTorrent and Skype). Validated on the ISCX VPN-nonVPN dataset, it achieves classification accuracies of 97.0% and 97.1% for service and application categories, respectively, with shorter training times and higher recognition speeds. The performance and recognition capabilities of ComboPacket are significantly superior to those of the existing classification methods considered.
In response to the inadequate utilization of local information when Vision Transformer is applied to PolSAR image classification in existing studies, this paper proposes LIViT, a Vision Transformer method that considers local information. The method replaces the image patch sequence with a polarimetric feature sequence in the feature embedding and uses convolution for the mapping to preserve spatial detail. In addition, a wavelet transform branch makes the network pay more attention to the shape and edge information of the target and improves the extraction of local edge information. Results for Wuhan, China and Flevoland, the Netherlands show that considering local information when using Vision Transformer for PolSAR image classification effectively improves classification accuracy.
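To illustrate the kind of edge-sensitive information a wavelet branch contributes, here is a minimal single-level 2D Haar decomposition; the choice of the Haar wavelet is an assumption for illustration, since the abstract does not name the wavelet used in LIViT.

```python
def haar2d(img):
    """Single-level 2D Haar decomposition of an even-sized grayscale image
    (list of lists). Returns (LL, LH, HL, HH) sub-bands; the high-frequency
    bands respond to edges, which is the information a wavelet branch can
    feed to the network."""
    h, w = len(img), len(img[0])
    LL = [[0.0] * (w // 2) for _ in range(h // 2)]
    LH = [[0.0] * (w // 2) for _ in range(h // 2)]
    HL = [[0.0] * (w // 2) for _ in range(h // 2)]
    HH = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            LL[i // 2][j // 2] = (a + b + c + d) / 4.0  # local average
            LH[i // 2][j // 2] = (a + b - c - d) / 4.0  # horizontal edges
            HL[i // 2][j // 2] = (a - b + c - d) / 4.0  # vertical edges
            HH[i // 2][j // 2] = (a - b - c + d) / 4.0  # diagonal detail
    return LL, LH, HL, HH
```

A vertical intensity step shows up only in the HL band, while a flat region produces zeros in all three detail bands, which is why such sub-bands highlight feature-target boundaries.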
Objective: Accurate detection and classification of breast lesions at an early stage is crucial for timely formulating effective treatments for patients. We aim to develop a fully automatic system to detect and classify breast lesions using multiple contrast-enhanced mammography (CEM) images. Methods: In this study, a total of 1,903 females who underwent CEM examination at three hospitals were enrolled as the training set, internal testing set, pooled external testing set and prospective testing set. We developed a CEM-based multiprocess detection and classification system (MDCS) to perform the task of detecting and classifying breast lesions. In this system, we introduced an innovative auxiliary feature fusion (AFF) algorithm that intelligently incorporates multiple types of information from CEM images. The average free-response receiver operating characteristic score (AFROC-Score) was used to validate the system's detection performance, and classification performance was evaluated by the area under the receiver operating characteristic curve (AUC). Furthermore, we assessed the diagnostic value of MDCS through visual analysis of disputed cases, comparing its performance and efficiency with those of radiologists and exploring whether it could augment radiologists' performance. Results: On the pooled external and prospective testing sets, MDCS maintained a high standalone performance, with AFROC-Scores of 0.953 and 0.963 for the detection task, and AUCs for classification of 0.909 [95% confidence interval (95% CI): 0.822-0.996] and 0.912 (95% CI: 0.840-0.985), respectively. It also achieved higher sensitivity than all senior radiologists and higher specificity than all junior radiologists on both testing sets. Moreover, MDCS offered superior diagnostic efficiency, with an average reading time of 5 seconds compared to the radiologists' average of 3.2 min. The average performance of all radiologists also improved to varying degrees with MDCS assistance. Conclusions: MDCS demonstrated excellent performance in the detection and classification of breast lesions and greatly enhanced the overall performance of radiologists.
It is common for datasets to contain both categorical and continuous variables, yet many feature screening methods designed for high-dimensional classification assume that the variables are continuous, which limits their applicability to this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. The proposed method uses the Maximal Information Coefficient (MIC) to assess the predictive power of the variables. Under certain regularity conditions, we prove that the screening procedure possesses the sure screening and ranking consistency properties. To validate the approach, we conduct simulation studies and provide real data examples demonstrating its finite-sample performance. In summary, the proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
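As a rough illustration of how an MIC-style score can rank variables regardless of whether they are categorical or continuous, the sketch below maximizes normalized mutual information over equal-width grids. True MIC optimizes the grid partitions themselves under a bin budget, so this is a cheap proxy for intuition, not the authors' screening statistic.

```python
import math
from collections import Counter

def _mi(xb, yb):
    """Mutual information (nats) of two discrete sequences."""
    n = len(xb)
    pxy, px, py = Counter(zip(xb, yb)), Counter(xb), Counter(yb)
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def _bin(v, k):
    """Equal-width binning of a numeric sequence into k bins."""
    lo, hi = min(v), max(v)
    if hi == lo:
        return [0] * len(v)
    return [min(int((x - lo) / (hi - lo) * k), k - 1) for x in v]

def mic_score(x, y, max_bins=6):
    """Simplified MIC: maximize MI over equal-width grids, normalized by
    log(min(grid sizes)) so a perfect dependence scores 1."""
    best = 0.0
    for kx in range(2, max_bins + 1):
        for ky in range(2, max_bins + 1):
            mi = _mi(_bin(x, kx), _bin(y, ky))
            best = max(best, mi / math.log(min(kx, ky)))
    return best
```

In a screening loop, each covariate would be scored against the class label and the top-ranked covariates retained, which is the sure-screening logic the abstract describes.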
Recently, there have been several attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome this limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification; in particular, it reaches 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code is available at https://github.com/yahuiliu99/PointConT.
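The content-based attention idea can be sketched as follows: group points by feature-space proximity to a set of centers, then run self-attention only inside each group, so two distant points with similar content still attend to each other. The clustering rule, the single-head unprojected attention, and all names here are simplifying assumptions relative to the actual PointConT blocks.

```python
import math

def softmax(v):
    m = max(v)
    e = [math.exp(x - m) for x in v]
    s = sum(e)
    return [x / s for x in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def content_clusters(feats, centers):
    """Assign each point feature to its nearest center (content-based
    locality: similar features group together, regardless of position)."""
    groups = [[] for _ in centers]
    for i, f in enumerate(feats):
        j = min(range(len(centers)),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
        groups[j].append(i)
    return groups

def cluster_attention(feats, centers):
    """Self-attention computed independently inside each content cluster,
    so cost scales with cluster size rather than the whole cloud."""
    out = [None] * len(feats)
    for idx in content_clusters(feats, centers):
        for i in idx:
            w = softmax([dot(feats[i], feats[j]) for j in idx])
            out[i] = [sum(wk * feats[j][d] for wk, j in zip(w, idx))
                      for d in range(len(feats[i]))]
    return out
```

Because attention never crosses cluster boundaries, its cost is quadratic in the cluster size rather than in the number of points, which is the trade-off the abstract highlights.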
When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance biases the trained classification model toward the majority class (usually defined as the negative class), which may harm the accuracy of the minority class (usually defined as the positive class) and lead to poor overall performance. This article proposes MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). MSHR measures the separability of each negative sample through its Silhouette value computed with the Mahalanobis distance between samples; on this basis, so-called pseudo-negative samples are screened out, new positive samples are generated from them through linear interpolation (over-sampling step), and the pseudo-negatives are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with newly generated positive samples one by one to clear up the inter-class overlap on the borderline without changing the overall scale of the dataset. FCSSVM is an improved version of the traditional CS-SVM: it simultaneously considers the influence of both sample-number imbalance and class distribution on classification, and finely tunes the class cost weights with the efficient rime-ice (RIME) optimization algorithm, using cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that MSHR-FCSSVM outperforms the comparison methods in most cases, with both MSHR and FCSSVM playing significant roles.
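The replacement step of MSHR, where each screened-out pseudo-negative is deleted and one synthetic positive is interpolated between existing positives so the dataset size is preserved, can be sketched as below. The Silhouette/Mahalanobis screening itself is omitted, and the random pair-sampling policy is an assumption.

```python
import random

def interpolate_positive(p1, p2, rng):
    """New synthetic positive on the segment between two positives."""
    t = rng.random()
    return [a + t * (b - a) for a, b in zip(p1, p2)]

def mshr_resample(positives, negatives, pseudo_negative_idx, rng=None):
    """Sketch of the MSHR replacement step: each pseudo-negative is
    deleted (under-sampling) and one synthetic positive is generated by
    linear interpolation (over-sampling), keeping the dataset size fixed."""
    rng = rng or random.Random(0)
    kept_neg = [n for i, n in enumerate(negatives)
                if i not in set(pseudo_negative_idx)]
    new_pos = list(positives)
    for _ in pseudo_negative_idx:
        p1, p2 = rng.sample(positives, 2)
        new_pos.append(interpolate_positive(p1, p2, rng))
    return new_pos, kept_neg
```

Because one positive is added for every negative removed, the class ratio shifts toward balance while the total sample count, and therefore the training cost, stays constant.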
BACKGROUND Acute liver failure (ALF) has a high mortality, with widespread hepatocyte death involving ferroptosis and pyroptosis. Silent information regulator sirtuin 1 (SIRT1)-mediated deacetylation affects multiple biological processes, including cellular senescence, apoptosis, sugar and lipid metabolism, oxidative stress, and inflammation. AIM To investigate the association between ferroptosis and pyroptosis and the upstream regulatory mechanisms. METHODS This study included 30 patients with ALF and 30 healthy individuals who underwent serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) testing. C57BL/6 mice were intraperitoneally pretreated with inducers and inhibitors of SIRT1, p53, or glutathione peroxidase 4 (GPX4) and injected with lipopolysaccharide (LPS)/D-galactosamine (D-GalN) to induce ALF. Gasdermin D (GSDMD)^(-/-) mice were used as an experimental group. Histological changes in liver tissue were monitored by hematoxylin and eosin staining. ALT, AST, glutathione, reactive oxygen species, and iron levels were measured using commercial kits. Ferroptosis- and pyroptosis-related protein and mRNA expression was detected by western blot and quantitative real-time polymerase chain reaction. SIRT1, p53, and GSDMD were assessed by immunofluorescence analysis. RESULTS Serum AST and ALT levels were elevated in patients with ALF. SIRT1, solute carrier family 7a member 11 (SLC7A11), and GPX4 protein expression was decreased, while acetylated p53, GSDMD, and acyl-CoA synthetase long-chain family member 4 (ACSL4) protein levels were elevated in human ALF liver tissue. In the p53 inhibitor-treated, ferroptosis inhibitor-treated, and GSDMD^(-/-) groups, serum interleukin (IL)-1β, tumour necrosis factor alpha, IL-6, IL-2 and C-C motif ligand 2 levels were decreased and hepatic impairment was mitigated. In GSDMD-knockout mice, p53 was reduced, GPX4 was increased, and ferroptotic events (depletion of SLC7A11, elevation of ACSL4, and iron accumulation) were attenuated. In vitro, knockdown of p53 and overexpression of GPX4 reduced AST and ALT levels, the cell death rate, and GSDMD expression, restoring SLC7A11 levels. Moreover, a SIRT1 agonist and SIRT1 overexpression alleviated acute liver injury and decreased iron deposition compared with the model group, accompanied by reduced p53, GSDMD, and ACSL4 and increased SLC7A11 and GPX4. Inactivation of SIRT1 exacerbated ferroptotic and pyroptotic cell death and aggravated liver injury in LPS/D-GalN-induced in vitro and in vivo models. CONCLUSION SIRT1 activation attenuates LPS/D-GalN-induced ferroptosis and pyroptosis by inhibiting the p53/GPX4/GSDMD signaling pathway in ALF.
BACKGROUND As one of the fatal diseases with high incidence, lung cancer seriously endangers public health and safety. Elderly patients usually have poor self-care ability and are more likely to develop a series of psychological problems. AIM To investigate the effectiveness of the initial check, information exchange, final accuracy check, reaction (IIFAR) information care model on the mental health status of elderly patients with lung cancer. METHODS This is a single-centre study. We randomly recruited 60 elderly patients with lung cancer who attended our hospital from January 2021 to January 2022. These patients were randomly divided into two groups, with the control group receiving conventional health education and the observation group receiving the IIFAR information care model on top of the conventional care protocol. Differences in psychological distress, anxiety and depression, quality of life, fatigue, and psychological locus of control were compared between the two groups, and the causes of psychological distress were analyzed. RESULTS After the intervention, Distress Thermometer, Hospital Anxiety and Depression Scale (HADS) anxiety and depression, Revised Piper's Fatigue Scale, and Chance Health Locus of Control scores were lower in the observation group than before the intervention in the same group and were significantly lower than those of the control group (P<0.05). After the intervention, Quality of Life Questionnaire Core 30 (QLQ-C30), Internal Health Locus of Control, and Powerful Others Health Locus of Control scores were significantly higher in both the observation and control groups than before the intervention, and QLQ-C30 scores were significantly higher in the observation group than in the control group (P<0.05). CONCLUSION The IIFAR information care model can help elderly patients with lung cancer by reducing their anxiety, depression, psychological distress, and fatigue, improving their psychological locus of control tendencies, and enhancing their quality of life.
Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and establish the relationship between these manifestations and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and 5 females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. Visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility on DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein on DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA for revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, derived from the imaging features of the portal vein on DPV, may provide a new perspective for the diagnosis and treatment of CTPV.
The subversive nature of information war lies not only in the information itself, but also in the circulation and application of information. It has always been a challenge to quantitatively analyze the function and effect of information flow through a command, control, communications, computer, kill, intelligence, surveillance, reconnaissance (C4KISR) system. In this work, we propose a framework of force of information influence, together with methods for calculating the force of information influence between C4KISR nodes of sensing, intelligence processing, decision making and fire attack. Specifically, the basic concept of force of information influence between nodes in a C4KISR system is formally proposed and given a mathematical definition. Then, based on information entropy theory, a model of the force of information influence between C4KISR system nodes is constructed. Finally, simulation experiments are performed under an air defense and attack scenario. The experimental results show that, with the proposed framework, we can effectively evaluate the contribution of information circulation through different C4KISR system nodes to the corresponding tasks. Our framework of force of information influence can also serve as an effective tool for the design and dynamic reconfiguration of C4KISR system architecture.
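One entropy-based way to quantify a node's information influence, consistent in spirit with the entropy-theoretic model above though not necessarily the authors' exact formulation, is the uncertainty about the tactical state that a node's output removes:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def information_influence(prior, posterior):
    """Toy influence measure: the uncertainty about the battlefield state
    removed by a sensing/processing node, H(prior) - H(posterior).
    The distributions themselves are hypothetical inputs."""
    return entropy(prior) - entropy(posterior)
```

For example, a sensor report that collapses four equally likely target states down to one removes two bits of uncertainty, so its influence on the downstream decision node would be 2.0 under this toy measure.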
Purpose: Many science, technology and innovation (STI) resources are tagged with several different labels. To assign these labels automatically to an instance of interest, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task, and several open-source tools implementing them have been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark datasets. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in statement, data quality, and purpose. Additionally, open-source tools designed for multi-label classification have intrinsic differences in their approaches to data processing and feature selection, which in turn affect the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of our conclusions by exercising more rigorous control over variables through expanded parameter settings. Practical implications: The Macro F1 and Micro F1 scores observed on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, the efficacy of multi-label classification is expected to improve significantly, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical label structure and a more balanced document-label distribution. (3) The ML-kNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
Positional information encoded in spatial concentration patterns is crucial for the development of multicellular organisms. However, it is still unclear how such information is affected by the physically dissipative diffusion process. Here we study one-dimensional patterning systems with analytical derivation and numerical simulations. We find that the diffusion constant of the patterning molecules has a nonmonotonic effect on the readout of positional information from the concentration patterns; specifically, there exists an optimal diffusion constant that maximizes the positional information. Moreover, we find that the energy dissipation due to physical diffusion imposes a fundamental upper limit on the positional information.
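Positional information is commonly formalized as the mutual information between a cell's position and its local concentration readout; assuming the abstract follows that standard formalization (the exact definition is not stated there), the quantity maximized over the diffusion constant would read:

```latex
% Positional information as the mutual information between position x
% and the noisy local concentration readout c of the pattern:
I(x;c) \;=\; \int\! dx \int\! dc \; p(x)\, p(c \mid x)\,
  \log_2 \frac{p(c \mid x)}{p(c)},
\qquad
p(c) \;=\; \int\! dx\, p(x)\, p(c \mid x).
```

Under this reading, the nonmonotonic dependence on the diffusion constant reflects a trade-off: too little diffusion leaves the pattern noisy, while too much diffusion flattens the gradient that distinguishes positions.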
[Objective] This study aimed to improve the accuracy of remote sensing classification for the Dongting Lake wetland. [Method] Based on TM data and ground GIS information of Dongting Lake, a decision tree classification method was established through an expert classification knowledge base. Images of the Dongting Lake wetland were classified into water area, mudflat, protection forest beach, Carem spp beach, Phragmites beach, Carex beach and other water bodies according to the decision tree layers. [Result] The accuracy of the decision tree classification reached 80.29%, much higher than that of the traditional method, and the total Kappa coefficient was 0.883 9, indicating that the accuracy of this method could fulfill the requirements of actual practice. In addition, the knowledge-based classification results could resolve some classification mistakes. [Conclusion] Compared with the traditional method, rule-based decision tree classification could classify the images using various conditions, which reduced data processing time and improved classification accuracy.
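A knowledge-based decision tree of this kind is just a cascade of expert rules applied per pixel. The sketch below shows the shape of such a rule cascade; the features (NDVI, NDWI, canopy height) and every threshold are hypothetical stand-ins, not values from the actual Dongting Lake expert knowledge base.

```python
def classify_pixel(ndvi, ndwi, height):
    """Hypothetical rule layers in the spirit of a knowledge-based
    decision tree. Each rule fires in order, mirroring the layered
    structure of an expert classification knowledge base."""
    if ndwi > 0.3:          # strong water signal
        return "water area"
    if ndvi < 0.1:          # bare, wet sediment
        return "mudflat"
    if height > 5.0:        # tall woody vegetation
        return "protection forest beach"
    if ndvi > 0.6:          # dense tall grass
        return "Phragmites beach"
    return "Carex beach"    # remaining low vegetation
```

Because each class is reached by an explicit, inspectable chain of conditions, misclassifications can be traced to a specific rule and corrected, which is the practical advantage the abstract reports over per-pixel statistical classifiers.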
As a novel paradigm, semantic communication provides an effective solution for breaking through the development dilemma facing classical communication systems. However, how to measure the information transmission capability of a given semantic communication method, and then compare it with classical communication methods, remains an unsolved problem. In this paper, we first review the semantic communication system, including its system model and the two typical coding and transmission methods for its implementation. To address the unsolved issue of measuring the information transmission capability of semantic communication methods, we propose a new universal performance measure called Information Conductivity. We provide its definition and physical significance to establish its effectiveness in representing the information transmission capabilities of semantic communication systems, and present elaborations including its measurement methods, degrees of freedom, and progressive analysis. Experimental results in image transmission scenarios validate its practical applicability.
To solve the problem of delayed updates of spectrum information (SI) in database-assisted dynamic spectrum management (DB-DSM), this paper studies a novel dynamic SI update scheme for DB-DSM. Firstly, a dynamic update mechanism based on spectrum opportunity incentives is established, in which spectrum users are encouraged to actively assist the database in updating SI in real time. Secondly, the information update contribution (IUC) of a spectrum opportunity is defined to describe the cost of accessing the spectrum opportunity for heterogeneous spectrum users and the profit the database obtains from spectrum allocation for updating SI. The process by which the database determines the IUC of a spectrum opportunity and a spectrum user selects a spectrum opportunity is mapped to a Hotelling model. Thirdly, the process of determining the IUC of spectrum opportunities is further modelled as a Stackelberg game by establishing multiple virtual spectrum resource providers (VSRPs) in the database, and it is proved that the game of determining the IUC by VSRPs has a Nash equilibrium. Finally, an algorithm based on a genetic algorithm is designed to find the optimal IUC. Theoretical analysis and simulation results show that the proposed method can quickly find the optimal IUC and ensure that the spectrum resource provider obtains the optimal profit from updating SI.
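The final step, searching for the IUC value that maximizes the database's profit with a genetic algorithm, can be sketched with a minimal real-valued GA. The profit function, the operators (tournament selection, blend crossover, Gaussian mutation), and all parameters below are illustrative assumptions, not the scheme's actual fitness model.

```python
import random

def genetic_maximize(profit, lo, hi, pop_size=30, gens=60, seed=1):
    """Minimal real-valued GA over a single decision variable (the IUC),
    using tournament selection, blend crossover, and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            p1 = a if profit(a) >= profit(b) else b   # tournament pick 1
            c, d = rng.sample(pop, 2)
            p2 = c if profit(c) >= profit(d) else d   # tournament pick 2
            t = rng.random()
            child = t * p1 + (1 - t) * p2             # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))   # Gaussian mutation
            nxt.append(min(max(child, lo), hi))       # clamp to bounds
        pop = nxt
    return max(pop, key=profit)

# Toy profit: a very high IUC deters users from reporting, a very low one
# earns the database little per access, so the optimum lies in between.
best = genetic_maximize(lambda u: u * (1.0 - u), 0.0, 1.0)
```

With this deliberately simple concave profit model the GA should settle near the interior optimum, mirroring how the scheme's algorithm converges to the IUC that balances user incentive against database profit.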
Small-scale farming accounts for 78% of total agricultural production in Kenya and contributes 23.5% of the country's GDP. Small-scale crop production is mostly rainfed and subsistence-oriented, with any surplus sold to bring in some income. Timely decisions on farm practices such as farm preparation and planting are critical determinants of seasonal outcomes. In Kenya, most small-scale farmers have no reliable source of information to help them make timely and accurate decisions. County governments have extension officers mandated to give farmers advisory services, but they cannot reach most farmers due to facilitation constraints. The mode and format of sharing information are also critical, since it is important to ensure that information is timely, well understood and usable. This study sought to assess access to geospatially derived and other crop production information by farmers in four selected counties of Kenya. The specific objectives were to determine the profile of small-scale farmers in terms of age, education and farm size; to determine the type of information made available to them by County and Sub-County extension officers, including its format and mode of provision; and to determine whether the information provided was useful in terms of accuracy, timeliness and adequacy. The results indicated that over 80% of the farmers were over 35 years of age and over 56% were male. The majority had attained primary education (34%) or secondary education (29%), and most farmers in all the counties grew maize (71%). Notably, fellow farmers were a source of information (71%), with information mostly shared seasonally (37%) or whenever it was available (43%). Over 66% of interviewed farmers indicated that they faced challenges when using the provided information. The results of the study are insightful and helpful for determining effective ways of providing farmers with useful information to ensure maximum benefits.
The tell tail is usually placed on a triangular sail to display the state of the air flow over the sail surface. Accurately judging the drift of the tell tail during sailing is of great significance for achieving the best sailing effect. Normally it is difficult for sailors to keep watching the tell tail for a long time and accurately judge its changes, as they are affected by strong sunlight and visual fatigue. We therefore adopt computer vision technology to help sailors judge changes in the tell tail with ease. This paper proposes, for the first time, a method for classifying sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification dataset built on expert interpretation of tell tail states under different sea wind conditions. Considering that the expressive capability of computational features varies across visual tasks, the paper focuses on five tell tail features, which are re-encoded by an autoencoder and classified by an SVM classifier. All experimental samples were randomly divided into five groups; four groups were used as the training set to train the classifier and the remaining group was used as the test set. The highest accuracy, obtained with deep features from the ResNet network, was 80.26%. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
Lung cancer is a leading cause of mortality worldwide. Early detection of pulmonary tumors can significantly enhance the survival rate of patients. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to enhance the detection of pulmonary nodules with high accuracy. Nevertheless, the existing methodologies cannot achieve both high specificity and high sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures, namely an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage uses the improved U-Net architecture to segment candidate nodules extracted from the lung lobes. Subsequently, the improved AlexNet architecture is employed to classify lung cancer. In the first stage, the proposed model achieves a Dice score of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The LCSC model is tested and evaluated on the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). Across various evaluation parameters, the proposed technique exhibits remarkable performance compared with existing methods.
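The Dice score of 0.855 reported for the segmentation stage corresponds to the standard Dice coefficient. As a minimal illustration (not the authors' code), it can be computed for a pair of binary masks as:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

In practice the score is averaged over all images in the test set.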
Among central nervous system-associated malignancies, glioblastoma (GBM) is the most common and has the highest mortality rate. The high heterogeneity of GBM cell types and the complex tumor microenvironment frequently lead to tumor recurrence and sudden relapse in patients treated with temozolomide. In precision medicine, research on GBM treatment is increasingly focusing on molecular subtyping to precisely characterize the cellular and molecular heterogeneity, as well as the refractory nature of GBM toward therapy. A deep understanding of the different molecular expression patterns of GBM subtypes is critical. Researchers have recently proposed tetra-fractional or tripartite methods for detecting GBM molecular subtypes. The various molecular subtypes of GBM show significant differences in gene expression patterns and biological behaviors. These subtypes also exhibit high plasticity in their regulatory pathways, oncogene expression, tumor microenvironment alterations, and differential responses to standard therapy. Herein, we summarize the current molecular typing scheme of GBM and the major molecular/genetic characteristics of each subtype. Furthermore, we review the mesenchymal transition mechanisms of GBM under various regulators.
基金This research was funded by Prince Sattam bin Abdulaziz University(Project Number PSAU/2023/01/25387).
文摘The research aims to improve the performance of image recognition methods based on a description in the form of a set of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance of object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information content criterion. The informativeness of an etalon descriptor is estimated as the difference between the closest distances to its own description and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical models of the classifier, with different options for establishing the correspondence between object and etalon descriptors, are considered. Results of experimental modeling of the proposed methods on a database of museum jewelry images are presented. The test sample is formed from images both inside and outside the etalon database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the vote-count threshold on which the classification decision is based has been investigated. Modeling revealed that descriptions can be reduced tenfold with full preservation of classification accuracy, while a twentyfold reduction leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. Reduction by the informativeness criterion confirmed that the most significant subset of features for classification can be obtained while guaranteeing a decent level of accuracy.
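The informativeness criterion described above can be sketched as follows. This is a hypothetical reading of the abstract (descriptor informativeness taken as the nearest distance to other etalons' descriptors minus the nearest distance within its own etalon), not the authors' implementation:

```python
import numpy as np

def informativeness(etalons):
    """For each descriptor of each etalon, score = nearest distance to
    descriptors of OTHER etalons minus nearest distance within its own
    etalon. Higher score = more distinctive (assumed reading)."""
    scores = []
    for i, E in enumerate(etalons):
        others = np.vstack([e for j, e in enumerate(etalons) if j != i])
        s = []
        for k, d in enumerate(E):
            own = np.delete(E, k, axis=0)
            d_own = np.linalg.norm(own - d, axis=1).min()
            d_other = np.linalg.norm(others - d, axis=1).min()
            s.append(d_other - d_own)
        scores.append(np.array(s))
    return scores

def reduce_descriptions(etalons, keep_ratio=0.1):
    """Keep only the top keep_ratio most informative descriptors
    of each etalon (e.g. 0.1 for the tenfold reduction)."""
    reduced = []
    for E, s in zip(etalons, informativeness(etalons)):
        k = max(1, int(len(E) * keep_ratio))
        idx = np.argsort(s)[::-1][:k]
        reduced.append(E[idx])
    return reduced
```

Matching a full object description against such reduced etalon descriptions then costs roughly keep_ratio times the original comparison work, consistent with the reported proportional speed-up.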
基金the National Natural Science Foundation of China Youth Project(62302520).
文摘With the increasing proportion of encrypted traffic in cyberspace, the classification of encrypted traffic has become a core technology in network supervision. In recent years, many different solutions have emerged in this field. Most methods identify and classify traffic by extracting spatiotemporal characteristics of data flows or byte-level features of packets. However, owing to changes in data transmission media, such as fiber optics and satellites, temporal features can vary significantly with communication links and transmission quality. Additionally, some spatial features can change because of data reordering and retransmission. Faced with these challenges, identifying encrypted traffic solely from packet byte-level features is significantly difficult. To address this, we propose a universal packet-level encrypted traffic identification method, ComboPacket. The method uses convolutional neural networks to extract deep features of the current packet and its contextual information, and employs spatial and channel attention mechanisms to select and locate effective features. Experimental results show that ComboPacket can effectively distinguish between encrypted traffic service categories (e.g., File Transfer Protocol, FTP, and Peer-to-Peer, P2P) and encrypted traffic application categories (e.g., BitTorrent and Skype). Validated on the ISCX VPN-nonVPN dataset, it achieves classification accuracies of 97.0% and 97.1% for service and application categories, respectively, with shorter training times and higher recognition speeds. The performance and recognition capabilities of ComboPacket are significantly superior to those of the existing classification methods discussed.
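The channel attention mentioned above is commonly realized as squeeze-and-excitation style gating. A minimal NumPy sketch of that general mechanism follows; the weights `w1`, `w2` and the shapes are illustrative assumptions, not ComboPacket's actual architecture:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention over a (C, H, W)
    feature map: global average pool -> 2-layer MLP -> sigmoid gates."""
    squeeze = feat.mean(axis=(1, 2))               # (C,) global descriptor
    hidden = np.maximum(0, w1 @ squeeze)           # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # per-channel gate in (0, 1)
    return feat * gates[:, None, None]             # reweight channels
```

Each channel is scaled by a learned gate, so uninformative channels are suppressed before classification.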
文摘In response to the inadequate utilization of local information when applying Vision Transformer to PolSAR image classification in existing studies, this paper proposes LIViT, a Vision Transformer method that takes local information into account. The method replaces the image patch sequence with a polarimetric feature sequence in the feature embedding and uses convolution for the mapping to preserve spatial detail. In addition, a wavelet transform branch enables the network to pay more attention to the shape and edge information of the target and improves the extraction of local edge information. Results for Wuhan, China and Flevoland, the Netherlands show that considering local information when using Vision Transformer for PolSAR image classification effectively improves classification accuracy and offers clear advantages in PolSAR image classification.
基金supported by the National Natural Science Foundation of China (No.82001775, 82371933); the Natural Science Foundation of Shandong Province of China (No.ZR2021MH120); the Special Fund for Breast Disease Research of Shandong Medical Association (No.YXH2021ZX055); the Taishan Scholar Foundation of Shandong Province of China (No.tsgn202211378).
文摘Objective: Accurate detection and classification of breast lesions at an early stage is crucial to timely formulating effective treatments for patients. We aim to develop a fully automatic system to detect and classify breast lesions using multiple contrast-enhanced mammography (CEM) images. Methods: In this study, a total of 1,903 females who underwent CEM examination at three hospitals were enrolled as the training set, internal testing set, pooled external testing set and prospective testing set. We developed a CEM-based multiprocess detection and classification system (MDCS) to perform the detection and classification of breast lesions. In this system, we introduced an innovative auxiliary feature fusion (AFF) algorithm that intelligently incorporates multiple types of information from CEM images. The average free-response receiver operating characteristic score (AFROC-Score) was used to validate the system's detection performance, and classification performance was evaluated by the area under the receiver operating characteristic curve (AUC). Furthermore, we assessed the diagnostic value of MDCS through visual analysis of disputed cases, comparing its performance and efficiency with those of radiologists and exploring whether it could augment radiologists' performance. Results: On the pooled external and prospective testing sets, MDCS maintained a high standalone performance, with AFROC-Scores of 0.953 and 0.963 for the detection task; AUCs for classification were 0.909 [95% confidence interval (95% CI): 0.822-0.996] and 0.912 (95% CI: 0.840-0.985), respectively. It also achieved higher sensitivity than all senior radiologists and higher specificity than all junior radiologists on these sets. Moreover, MDCS showed superior diagnostic efficiency, with an average reading time of 5 seconds compared with the radiologists' average of 3.2 minutes. The average performance of all radiologists also improved to varying degrees with MDCS assistance. Conclusions: MDCS demonstrated excellent performance in the detection and classification of breast lesions and greatly enhanced the overall performance of radiologists.
文摘It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous. This limits the applicability of existing methods in handling this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. Our proposed feature screening method utilizes the Maximal Information Coefficient to assess the predictive power of the variables. By satisfying certain regularity conditions, we have proven that our screening procedure possesses the sure screening property and ranking consistency properties. To validate the effectiveness of our approach, we conduct simulation studies and provide real data analysis examples to demonstrate its performance in finite samples. In summary, our proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
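The screening idea above ranks each candidate variable by its dependence on the class label. The paper uses the Maximal Information Coefficient; as a rough stand-in that conveys the same mechanism, a histogram-based mutual information estimate can be used to rank features. This substitutes plain mutual information for MIC and is not the authors' procedure:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of MI between a (possibly continuous)
    feature x and an integer class label y."""
    x_binned = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    joint = np.zeros((x_binned.max() + 1, int(y.max()) + 1))
    for xi, yi in zip(x_binned, y.astype(int)):
        joint[xi, yi] += 1
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def screen_features(X, y, top_k=2):
    """Keep the top_k columns of X most dependent on y."""
    scores = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:top_k]
```

Because the score is computed from binned counts, the same routine handles categorical and continuous covariates uniformly, which is the point the abstract emphasizes.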
基金supported in part by the National Natural Science Foundation of China (61876011); the National Key Research and Development Program of China (2022YFB4703700); the Key Research and Development Program 2020 of Guangzhou (202007050002); the Key-Area Research and Development Program of Guangdong Province (2020B090921003).
文摘Recently, there have been attempts to apply Transformers to 3D point cloud classification. To reduce computation, most existing methods focus on local spatial attention, but they ignore point content and fail to establish relationships between distant but relevant points. To overcome the limitation of local spatial attention, we propose a point content-based Transformer architecture, called PointConT for short. It exploits the locality of points in the feature space (content-based), clustering sampled points with similar features into the same class and computing self-attention within each class, thus enabling an effective trade-off between capturing long-range dependencies and computational complexity. We further introduce an inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately. Extensive experiments show that our PointConT model achieves remarkable performance on point cloud shape classification. In particular, our method exhibits 90.3% Top-1 accuracy on the hardest setting of ScanObjectNN. Source code of this paper is available at https://github.com/yahuiliu99/PointConT.
基金supported by the Yunnan Major Scientific and Technological Projects(Grant No.202302AD080001)the National Natural Science Foundation,China(No.52065033).
文摘When building a classification model, the scenario where the samples of one class significantly outnumber those of the other class is called data imbalance. Data imbalance causes the trained classification model to favor the majority class (usually defined as the negative class), which may harm accuracy on the minority class (usually defined as the positive class) and lead to poor overall performance of the model. This article proposes a method called MSHR-FCSSVM for imbalanced data classification, based on a new hybrid resampling approach (MSHR) and a new fine cost-sensitive support vector machine (CS-SVM) classifier (FCSSVM). The MSHR measures the separability of each negative sample through its Silhouette value, calculated using the Mahalanobis distance between samples; on this basis, so-called pseudo-negative samples are screened out to generate new positive samples through linear interpolation (over-sampling step) and are finally deleted (under-sampling step). This approach replaces pseudo-negative samples with generated new positive samples one by one to clear up the inter-class overlap on the borderline, without changing the overall scale of the dataset. The FCSSVM is an improved version of the traditional CS-SVM. It simultaneously considers the influence of both the imbalance in sample numbers and the class distribution on classification, and finely tunes the class cost weights using an efficient optimizer based on the rime-ice (RIME) algorithm, with cross-validation accuracy as the fitness function, to accurately adjust the classification borderline. To verify the effectiveness of the proposed method, a series of experiments is carried out on 20 imbalanced datasets, including both mildly and extremely imbalanced ones. The experimental results show that the MSHR-FCSSVM method performs better than the comparison methods in most cases, and that both the MSHR and the FCSSVM play significant roles.
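The over-sampling step, generating new positive samples by linear interpolation, resembles SMOTE. A minimal sketch under that assumption, interpolating between a positive sample and its nearest positive neighbour, could look like:

```python
import numpy as np

def interpolate_new_positives(positives, n_new, rng=None):
    """Generate n_new synthetic positive samples by linear interpolation
    between a randomly chosen positive sample and its nearest positive
    neighbour (SMOTE-style sketch of the over-sampling step)."""
    rng = np.random.default_rng(rng)
    new = []
    for _ in range(n_new):
        i = rng.integers(len(positives))
        p = positives[i]
        d = np.linalg.norm(positives - p, axis=1)
        d[i] = np.inf                      # exclude the point itself
        q = positives[d.argmin()]          # nearest positive neighbour
        t = rng.random()                   # interpolation coefficient in [0, 1)
        new.append(p + t * (q - p))
    return np.array(new)
```

In the MSHR scheme each generated positive replaces one deleted pseudo-negative, so the dataset size stays constant while the borderline overlap is cleared.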
基金Supported by National Natural Science Foundation of China, No.82060123; Doctoral Start-up Fund of Affiliated Hospital of Guizhou Medical University, No.gysybsky-2021-28; Fund Project of Guizhou Provincial Science and Technology Department, No.[2020]1Y299; Guizhou Provincial Health Commission, No.gzwjk2019-1-082.
文摘BACKGROUND Acute liver failure (ALF) has a high mortality, with widespread hepatocyte death involving ferroptosis and pyroptosis. Silent information regulator sirtuin 1 (SIRT1)-mediated deacetylation affects multiple biological processes, including cellular senescence, apoptosis, sugar and lipid metabolism, oxidative stress, and inflammation. AIM To investigate the association between ferroptosis and pyroptosis and the upstream regulatory mechanisms. METHODS This study included 30 patients with ALF and 30 healthy individuals who underwent serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) testing. C57BL/6 mice were intraperitoneally pretreated with SIRT1, p53, or glutathione peroxidase 4 (GPX4) inducers and inhibitors and injected with lipopolysaccharide (LPS)/D-galactosamine (D-GalN) to induce ALF. Gasdermin D (GSDMD)^(-/-) mice were used as an experimental group. Histological changes in liver tissue were monitored by hematoxylin and eosin staining. ALT, AST, glutathione, reactive oxygen species, and iron levels were measured using commercial kits. Ferroptosis- and pyroptosis-related protein and mRNA expression was detected by western blot and quantitative real-time polymerase chain reaction. SIRT1, p53, and GSDMD were assessed by immunofluorescence analysis. RESULTS Serum AST and ALT levels were elevated in patients with ALF. SIRT1, solute carrier family 7a member 11 (SLC7A11), and GPX4 protein expression was decreased, and acetylated p53, GSDMD, and acyl-CoA synthetase long-chain family member 4 (ACSL4) protein levels were elevated in human ALF liver tissue. In the p53 and ferroptosis inhibitor-treated and GSDMD^(-/-) groups, serum interleukin (IL)-1β, tumour necrosis factor alpha, IL-6, IL-2 and C-C motif ligand 2 levels were decreased and hepatic impairment was mitigated. In mice with GSDMD knockout, p53 was reduced, GPX4 was increased, and ferroptotic events (depletion of SLC7A11, elevation of ACSL4, and iron accumulation) were detected. In vitro, knockdown of p53 and overexpression of GPX4 reduced AST and ALT levels, the cytostatic rate, and GSDMD expression, restoring SLC7A11 depletion. Moreover, a SIRT1 agonist and overexpression of SIRT1 alleviated acute liver injury and decreased iron deposition compared with the model group, accompanied by reduced p53, GSDMD, and ACSL4 and increased SLC7A11 and GPX4. Inactivation of SIRT1 exacerbated ferroptotic and pyroptotic cell death and aggravated liver injury in LPS/D-GalN-induced in vitro and in vivo models. CONCLUSION SIRT1 activation attenuates LPS/D-GalN-induced ferroptosis and pyroptosis by inhibiting the p53/GPX4/GSDMD signaling pathway in ALF.
文摘BACKGROUND As one of the fatal diseases with high incidence, lung cancer seriously endangers public health and safety. Elderly patients usually have poor self-care abilities and are more likely to show a series of psychological problems. AIM To investigate the effectiveness of the initial check, information exchange, final accuracy check, reaction (IIFAR) information care model on the mental health status of elderly patients with lung cancer. METHODS This is a single-centre study. We randomly recruited 60 elderly patients with lung cancer who attended our hospital from January 2021 to January 2022. These patients were randomly divided into two groups, with the control group receiving conventional education and the observation group receiving the IIFAR information care model in addition to the conventional care protocol. Differences in psychological distress, anxiety and depression, quality of life, fatigue, and psychological locus of control were compared between the two groups, and the causes of psychological distress were analyzed. RESULTS After the intervention, Distress Thermometer, Hospital Anxiety and Depression Scale (HADS) anxiety and depression, Revised Piper's Fatigue Scale, and Chance Health Locus of Control scores were lower in the observation group compared with the pre-intervention period and were significantly lower than those of the control group (P<0.05). After the intervention, Quality of Life Questionnaire Core 30 (QLQ-C30), Internal Health Locus of Control, and Powerful Others Health Locus of Control scores were significantly higher in both groups compared with the pre-intervention period, and QLQ-C30 scores were significantly higher in the observation group than in the control group (P<0.05). CONCLUSION The IIFAR information care model can help elderly patients with lung cancer by reducing their anxiety, depression, psychological distress, and fatigue, improving their psychological locus of control tendencies, and enhancing their quality of life.
文摘Background: Cavernous transformation of the portal vein (CTPV) due to portal vein obstruction is a rare vascular anomaly defined as the formation of multiple collateral vessels in the hepatic hilum. This study aimed to investigate the imaging features of the intrahepatic portal vein in adult patients with CTPV and establish the relationship between the manifestations of the intrahepatic portal vein and the progression of CTPV. Methods: We retrospectively analyzed 14 CTPV patients in Beijing Tsinghua Changgung Hospital. All patients underwent both direct portal venography (DPV) and computed tomography angiography (CTA) to reveal the manifestations of the portal venous system. The vessels measured included the left portal vein (LPV), right portal vein (RPV), main portal vein (MPV) and the portal vein bifurcation (PVB). Results: Nine males and five females, with a median age of 40.5 years, were included in the study. No significant difference was found in the diameters of the LPV or RPV measured by DPV and CTA. The visualization of the LPV, RPV and PVB was higher with DPV than with CTA. There was a significant association between LPV/RPV and PVB/MPV in terms of visibility revealed with DPV (P = 0.01), while this association was not observed with CTA. According to the imaging features of the portal vein measured by DPV, CTPV was classified into three categories to facilitate diagnosis and treatment. Conclusions: DPV was more accurate than CTA in revealing the course of the intrahepatic portal vein in patients with CTPV. The classification of CTPV, which originated from the imaging features of the portal vein revealed by DPV, may provide a new perspective for the diagnosis and treatment of CTPV.
基金supported by the Natural Science Foundation Research Plan of Shanxi Province (2023JCQN0728)。
文摘The subversive nature of information war lies not only in the information itself but also in its circulation and application. It has always been a challenge to quantitatively analyze the function and effect of information flowing through command, control, communications, computer, kill, intelligence, surveillance, reconnaissance (C4KISR) systems. In this work, we propose a framework for the force of information influence and methods for calculating this force between C4KISR nodes of sensing, intelligence processing, decision making and fire attack. Specifically, the basic concept of the force of information influence between nodes in a C4KISR system is formally proposed and its mathematical definition is provided. Then, based on information entropy theory, a model of the force of information influence between C4KISR system nodes is constructed. Finally, simulation experiments have been performed under an air defense and attack scenario. The experimental results show that, with the proposed framework, we can effectively evaluate the contribution of information circulation through different C4KISR system nodes to the corresponding tasks. Our framework can also serve as an effective tool for the design and dynamic reconfiguration of C4KISR system architectures.
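Since the influence model is built on information entropy, one toy reading (an illustrative assumption, not the paper's actual model) is to measure the influence of an information flow as the entropy reduction it produces at the receiving node:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_influence(prior, posterior):
    """Toy 'force of information influence': uncertainty (entropy)
    removed at a receiving node after information flows in."""
    return shannon_entropy(prior) - shannon_entropy(posterior)
```

Under this reading, a sensing node that collapses a uniform belief over four hypotheses to certainty exerts 2 bits of influence on the downstream decision node.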
基金the Natural Science Foundation of China(Grant Numbers 72074014 and 72004012).
文摘Purpose: Many science, technology and innovation (STI) resources are attached with several different labels. To automatically assign the resulting labels to an interested instance, many approaches with good performance on benchmark datasets have been proposed for the multi-label classification task in the literature. Furthermore, several open-source tools implementing these approaches have been developed. However, the characteristics of real-world multi-label patent and publication datasets are not completely in line with those of benchmark ones. Therefore, the main purpose of this paper is to comprehensively evaluate seven multi-label classification methods on real-world datasets. Research limitations: The three real-world datasets differ in statement, data quality, and purposes. Additionally, open-source tools designed for multi-label classification also have intrinsic differences in their approaches to data processing and feature selection, which in turn impact the performance of a multi-label classification approach. In the near future, we will enhance experimental precision and reinforce the validity of conclusions by employing more rigorous control over variables through expanded parameter settings. Practical implications: The observed Macro F1 and Micro F1 scores on real-world datasets typically fall short of those achieved on benchmark datasets, underscoring the complexity of real-world multi-label classification tasks. Approaches leveraging deep learning techniques offer promising solutions by accommodating the hierarchical relationships and interdependencies among labels. With ongoing enhancements in deep learning algorithms and large-scale models, it is expected that the efficacy of multi-label classification tasks will be significantly improved, reaching a level of practical utility in the foreseeable future. Originality/value: (1) Seven multi-label classification methods are comprehensively compared on three real-world datasets. (2) The TextCNN and TextRCNN models perform better on small-scale datasets with a more complex hierarchical structure of labels and a more balanced document-label distribution. (3) The MLkNN method works better on the larger-scale dataset with a more unbalanced document-label distribution.
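The Macro F1 and Micro F1 measures referred to above differ only in where the averaging happens: Macro F1 averages per-label F1 scores (sensitive to rare labels), while Micro F1 pools counts over all labels (dominated by frequent labels). A small sketch for multi-label indicator matrices:

```python
import numpy as np

def f1_scores(y_true, y_pred):
    """Macro and Micro F1 for (n_samples, n_labels) binary indicator
    matrices. Macro = unweighted mean of per-label F1; Micro = F1 over
    globally pooled true/false positive/negative counts."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0).astype(float)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0).astype(float)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0).astype(float)
    per_label = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    macro = per_label.mean()
    micro = 2 * tp.sum() / max(2 * tp.sum() + fp.sum() + fn.sum(), 1e-12)
    return macro, micro
```

On unbalanced document-label distributions the two can diverge noticeably, which is why the paper reports both.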
基金Project supported by the National Natural Science Foundation of China (Grant Nos.32271293 and 11875076)。
文摘Positional information encoded in spatial concentration patterns is crucial for the development of multicellular organisms. However, it is still unclear how such information is affected by the physically dissipative diffusion process. Here we study one-dimensional patterning systems with analytical derivation and numerical simulations. We find that the diffusion constant of the patterning molecules exhibits a nonmonotonic effect on the readout of the positional information from the concentration patterns. Specifically, there exists an optimal diffusion constant that maximizes the positional information. Moreover, we find that the energy dissipation due to physical diffusion imposes a fundamental upper limit on the positional information.
文摘[Objective] This study aimed to improve the accuracy of remote sensing classification for the Dongting Lake wetland. [Method] Based on TM data and ground GIS information for Dongting Lake, a decision tree classification method was established from an expert classification knowledge base. Images of the Dongting Lake wetland were classified into water area, mudflat, protection forest beach, Carem spp beach, Phragmites beach, Carex beach and other water bodies according to the decision tree layers. [Result] The accuracy of decision tree classification reached 80.29%, much higher than that of the traditional method, and the total Kappa coefficient was 0.883 9, indicating that the accuracy of this method could fulfill the requirements of actual practice. In addition, the knowledge-based classification results could resolve some classification mistakes. [Conclusion] Compared with the traditional method, rule-based decision tree classification could classify the images using various conditions, which reduced the data processing time and improved the classification accuracy.
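Knowledge-based decision tree classification of this kind reduces to a chain of expert rules applied to per-pixel features. A hypothetical sketch follows; the feature names (`ndvi`, `ndwi`, `elevation`) and thresholds are invented for illustration and are not taken from the study:

```python
def classify_pixel(ndvi, ndwi, elevation):
    """Illustrative expert-rule chain in the spirit of knowledge-based
    decision tree classification of wetland cover types."""
    if ndwi > 0.3:                            # strong water signal first
        return "water area"
    if elevation < 0.5 and ndvi < 0.2:        # low, bare ground
        return "mudflat"
    if ndvi > 0.6:                            # dense vegetation classes
        return "Phragmites beach" if elevation > 2.0 else "Carex beach"
    return "other"
```

Each rule layer peels off one class before the next is tested, mirroring how the decision tree layers in the paper assign classes in sequence.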
基金supported by the National Natural Science Foundation of China(No.62293481,No.62071058)。
文摘As a novel paradigm, semantic communication provides an effective solution for breaking through the future development dilemma of classical communication systems. However, it remains an unsolved problem how to measure the information transmission capability of a given semantic communication method and subsequently compare it with classical communication methods. In this paper, we first present a review of the semantic communication system, including its system model and the two typical coding and transmission methods for its implementation. To address the unsolved issue of measuring the information transmission capability of semantic communication methods, we propose a new universal performance measure called Information Conductivity. We provide its definition and physical significance to demonstrate its effectiveness in representing the information transmission capabilities of semantic communication systems, and present elaborations including its measurement methods, degrees of freedom, and progressive analysis. Experimental results in image transmission scenarios validate its practical applicability.
文摘To solve the problem of delayed updates of spectrum information (SI) in database-assisted dynamic spectrum management (DB-DSM), this paper studies a novel dynamic SI update scheme in DB-DSM. Firstly, a dynamic update mechanism of SI based on spectrum opportunity incentives is established, in which spectrum users are encouraged to actively assist the database in updating SI in real time. Secondly, the information update contribution (IUC) of a spectrum opportunity is defined to describe the cost of accessing the spectrum opportunity for heterogeneous spectrum users, and the profit of SI updates obtained by the database from spectrum allocation. The process by which the database determines the IUC of a spectrum opportunity and a spectrum user selects a spectrum opportunity is mapped to a Hotelling model. Thirdly, the process of determining the IUC of spectrum opportunities is further modelled as a Stackelberg game by establishing multiple virtual spectrum resource providers (VSRPs) in the database. It is proved that there is a Nash equilibrium in the game in which VSRPs determine the IUC of spectrum opportunities. Finally, an algorithm based on a genetic algorithm is designed to achieve the optimal IUC. Theoretical analysis and simulation results show that the proposed method can quickly find the optimal IUC and ensure that the spectrum resource provider obtains the optimal profit from SI updates.
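The final step above searches for the IUC that maximizes profit with a genetic algorithm. A minimal real-coded GA over a one-dimensional decision variable is sketched below; the fitness function is a stand-in concave profit curve, not the paper's actual profit model:

```python
import numpy as np

def genetic_maximize(fitness, bounds, pop=30, gens=60, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Sketch of how an optimal IUC could be searched."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        f = np.array([fitness(v) for v in x])
        # tournament selection: fitter of two random candidates survives
        idx = rng.integers(0, pop, (pop, 2))
        parents = x[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # blend crossover with a shuffled partner, plus Gaussian mutation
        mates = rng.permutation(parents)
        t = rng.random(pop)
        x = parents + t * (mates - parents) + rng.normal(0, 0.05 * (hi - lo), pop)
        x = np.clip(x, lo, hi)
    f = np.array([fitness(v) for v in x])
    return x[f.argmax()]

# hypothetical example: profit peaks at an IUC of 2.0 on [0, 5]
best = genetic_maximize(lambda v: -(v - 2.0) ** 2, (0.0, 5.0))
```

The same loop generalizes to a vector of IUCs, one per virtual spectrum resource provider, with the Stackelberg profit as the fitness function.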
文摘Small-scale farming accounts for 78% of total agricultural production in Kenya and contributes 23.5% of the country's GDP. Crop production is mostly rainfed subsistence, with any surplus sold to bring in some income. Timely decisions on farm practices such as farm preparation and planting are critical determinants of seasonal outcomes. In Kenya, most small-scale farmers have no reliable source of information that would help them make timely and accurate decisions. County governments have extension officers who are mandated to give advisory services to farmers, but they are not able to reach most farmers due to facilitation constraints. The mode and format of sharing information are also critical, since it is important to ensure that information is timely, well understood and usable. This study sought to assess access to geospatially derived and other crop production information by farmers in four selected counties of Kenya. Specific objectives were to determine the profile of small-scale farmers in terms of age, education and farm size; to determine the type of information made available to them by County and Sub-County extension officers, including the format and mode of provision; and to determine whether the information provided was useful in terms of accuracy, timeliness and adequacy. The results indicated that over 80% of the farmers were over 35 years of age and over 56% were male. The majority had attained primary education (34%) or secondary education (29%), and most farmers in all the counties grew maize (71%). Notably, fellow farmers were a source of information (71%), with information mostly shared seasonally (37%) or when available (43%). Over 66% of interviewed farmers indicated that they faced challenges while using the provided information. The results from the study are insightful and helpful in determining effective ways of providing farmers with useful information to ensure maximum benefits.
Funding: Supported by the Shandong Provincial Key Research Project of Undergraduate Teaching Reform (No. Z2022218), the Fundamental Research Funds for the Central Universities (No. 202113028), and the Graduate Education Promotion Program of Ocean University of China (No. HDJG20006); also supported by the Sailing Laboratory of Ocean University of China.
Abstract: The tell tail is usually placed on a triangular sail to display the state of the air flow over the sail surface. Accurate judgement of the drift of the tell tail during sailing is of great significance for achieving the best sailing effect. Normally it is difficult for sailors, affected by strong sunlight and visual fatigue, to keep an eye on the tell tail for a long time and to judge its changes accurately. We therefore adopt computer vision technology in the hope of helping sailors judge the changes of the tell tail with ease. This paper proposes for the first time a method to classify sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification data set built on expert interpretation of tell tail states under different sea wind conditions. Considering that the expressive power of computational features varies across visual tasks, the paper focuses on five tell tail computing features, which are recoded by an autoencoder and classified by an SVM classifier. All experimental samples were randomly divided into five groups; four groups were selected as the training set to train the classifier, and the remaining group was used as the test set. The highest classification accuracy, achieved with the deep features obtained through the ResNet network, was 80.26%. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
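The evaluation protocol described in the abstract (samples split into five groups, four used to train an SVM on recoded features, one held out for testing) can be sketched as standard five-fold cross-validation. This is a minimal illustration with synthetic stand-in features, not the paper's actual pipeline; the feature dimensionality and SVM kernel are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 32))     # stand-in for autoencoder-recoded tell tail features
y = rng.integers(0, 2, size=100)   # stand-in class labels

# Five groups: each fold trains on four groups and tests on the remaining one.
accuracies = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    clf = SVC(kernel="rbf").fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"mean accuracy over 5 folds: {np.mean(accuracies):.3f}")
```

With real discriminative features in place of the random stand-ins, the per-fold scores would correspond to the accuracy figures reported in the abstract.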
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RP23044).
Abstract: Lung cancer is a leading cause of mortality worldwide. Early detection of pulmonary tumors can significantly enhance patient survival rates. Recently, various Computer-Aided Diagnostic (CAD) methods have been developed to detect pulmonary nodules with high accuracy. Nevertheless, existing methodologies cannot achieve a high level of both specificity and sensitivity. The present study introduces a novel model for Lung Cancer Segmentation and Classification (LCSC), which incorporates two improved architectures: an improved U-Net architecture and an improved AlexNet architecture. The LCSC model comprises two distinct stages. The first stage uses the improved U-Net architecture to segment candidate nodules extracted from the lung lobes; subsequently, the improved AlexNet architecture classifies the lung cancer. In the first stage, the proposed model demonstrates a Dice coefficient of 0.855, a precision of 0.933, and a recall of 0.789 for the segmentation of candidate nodules. The improved AlexNet architecture attains 97.06% accuracy, a true positive rate of 96.36%, a true negative rate of 97.77%, a positive predictive value of 97.74%, and a negative predictive value of 96.41% for classifying pulmonary cancer as either benign or malignant. The proposed LCSC model is tested and evaluated on the publicly available dataset furnished by the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI). The proposed technique exhibits remarkable performance compared with existing methods across various evaluation parameters.
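The metrics quoted above (Dice for the segmentation stage; accuracy, TPR, TNR, PPV and NPV for the benign-vs-malignant stage) all follow from raw true/false positive and negative counts. A minimal sketch of their definitions, with illustrative counts that are not taken from the paper:

```python
def dice(tp: int, fp: int, fn: int) -> float:
    """Dice coefficient: overlap measure used for segmentation quality."""
    return 2 * tp / (2 * tp + fp + fn)

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Binary-classification metrics reported in the abstract."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "TPR": tp / (tp + fn),   # sensitivity / true positive rate
        "TNR": tn / (tn + fp),   # specificity / true negative rate
        "PPV": tp / (tp + fp),   # positive predictive value
        "NPV": tn / (tn + fn),   # negative predictive value
    }

# Illustrative counts only (hypothetical, not the LIDC-IDRI results):
print(dice(tp=85, fp=5, fn=25))                        # 0.85
print(classification_metrics(tp=90, tn=95, fp=5, fn=10))
```

The trade-off the abstract highlights is exactly the tension between TPR (sensitivity) and TNR (specificity): raising one by shifting the decision threshold typically lowers the other.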
Funding: Supported by grants from the National Natural Science Foundation of China (Grant No. 82172660), the Hebei Province Graduate Student Innovation Project (Grant No. CXZZBS2023001), and the Baoding Natural Science Foundation (Grant No. H2272P015).
Abstract: Among malignancies of the central nervous system, glioblastoma (GBM) is the most common and has the highest mortality rate. The high heterogeneity of GBM cell types and the complex tumor microenvironment frequently lead to tumor recurrence and sudden relapse in patients treated with temozolomide. In precision medicine, research on GBM treatment increasingly focuses on molecular subtyping to precisely characterize the cellular and molecular heterogeneity of GBM, as well as its refractoriness to therapy. A deep understanding of the different molecular expression patterns of GBM subtypes is therefore critical. Researchers have recently proposed four-subtype and three-subtype schemes for classifying GBM molecular subtypes. The various molecular subtypes of GBM show significant differences in gene expression patterns and biological behaviors. These subtypes also exhibit high plasticity in their regulatory pathways, oncogene expression, tumor microenvironment alterations, and differential responses to standard therapy. Herein, we summarize the current molecular typing schemes of GBM and the major molecular/genetic characteristics of each subtype. Furthermore, we review the mesenchymal transition mechanisms of GBM under various regulators.