This paper advances new directions for cyber security using adversarial learning and conformal prediction to enhance network and computing services defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive learning framework used to design, develop, and deploy a multi-faceted, self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, atypicality, and surprise to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and, above all, adaptive Oracles. The motivation for using conformal prediction and its immediate offspring, semi-supervised learning and transduction, is twofold: they support discriminative and non-parametric methods characteristic of principled demarcation using cohorts and sensitivity analysis to hedge on prediction outcomes, including negative selection; and they provide credibility and confidence indices that assist meta-reasoning and information fusion.
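The credibility and confidence indices that conformal prediction supplies can be made concrete with a small sketch. The distance-to-mean nonconformity score and the friend/foe calibration pools below are illustrative assumptions, not the paper's intrusion-specific measures of affinity, atypicality, and surprise:

```python
# Transductive conformal prediction sketch (toy nonconformity measure).

def nonconformity(pool, candidate):
    # Toy nonconformity score: distance of the candidate to the pool mean.
    mean = sum(pool) / len(pool)
    return abs(candidate - mean)

def conformal_p_value(calibration, example):
    """Fraction of pooled nonconformity scores at least as large as the example's."""
    pool = calibration + [example]
    scores = [nonconformity(pool, x) for x in pool]
    return sum(1 for s in scores if s >= scores[-1]) / len(scores)

def credibility_and_confidence(p_values):
    """Credibility: the largest class p-value. Confidence: 1 minus the second largest."""
    ranked = sorted(p_values.values(), reverse=True)
    return ranked[0], 1.0 - ranked[1]

# Hedge on a new observation against two hypothetical calibration cohorts.
p_values = {"friend": conformal_p_value([1, 1, 2, 2], 2),
            "foe": conformal_p_value([10, 11, 12, 13], 2)}
credibility, confidence = credibility_and_confidence(p_values)
```

A typical example receives a p-value near 1 for its true cohort and a small p-value for the other, which is how a pattern would be labeled friend or foe while the credibility and confidence indices feed meta-reasoning and information fusion.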
With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, aimed specifically at literary texts. This focus distinguishes it from the majority of the research literature on the topic, which is primarily concerned with building ontologies from a vast array of structured and unstructured data sources to support various forms of AI, in particular the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on the mutually informing disciplines of philosophy and computer science, or more specifically the relationships among metaphysics, epistemology, ontology, computing, and AI. This is followed by a technically in-depth discussion of DEBRA, our dependency-tree-based concept hierarchy constructor, which, as its name alludes, constructs a concept map in the form of a directed graph illustrating the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK.
With this work we hope both to augment the Knowledge Representation literature with opportunities for intellectual advancement in AI through more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, and to open up new areas of research between Computer Science and the Humanities in applying the latest NLP tools and techniques to literature of cultural significance. In doing so we shed light on existing methods of computation over documents in semantic space, which allow, at the very least, the comparison and evolution of texts through time using vector space math.
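The directed-graph form of such a concept map can be illustrated with a minimal sketch. The triples below are hand-written stand-ins for what a dependency-based extractor like DEBRA would produce (in practice via spaCy's parse); only the graph-folding step is shown:

```python
# Toy concept-map builder: fold (head, relation, dependent) triples into a
# directed adjacency map. The triples are hypothetical hand-written examples.

from collections import defaultdict

def build_concept_map(triples):
    """Collect subject-relation-object triples into a directed graph."""
    graph = defaultdict(list)
    for head, relation, dependent in triples:
        graph[head].append((relation, dependent))
    return dict(graph)

triples = [
    ("whale", "is_a", "mammal"),
    ("mammal", "is_a", "animal"),
    ("whale", "lives_in", "ocean"),
]
concept_map = build_concept_map(triples)
```

The "is_a" edges form the implied ontological hierarchy (whale → mammal → animal), while other relations annotate the concepts laterally.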
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning’s applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and to explore the future perspectives these techniques can provide in combating this disabling disease.
The frequent occurrence of extreme weather events has made landslides a global natural disaster issue. Rapidly and accurately determining the boundaries of landslides is crucial for geohazard evaluation and emergency response. Therefore, the Skip Connection DeepLab neural network (SCDnn), a deep learning model trained on 770 optical remote sensing images of landslides, is proposed to improve the accuracy of landslide boundary detection. The SCDnn model addresses the over-segmentation issue that occurs in conventional deep learning models when topographic and geomorphic features are highly similar. SCDnn exhibits notable improvements in landslide feature extraction and semantic segmentation by combining an enhanced Atrous Spatial Pyramid Convolutional Block (ASPC) with a coding structure that reduces model complexity. The experimental results demonstrate that SCDnn identifies landslide boundaries with MIoU values between 0.8 and 0.9 in 119 images, and with MIoU values exceeding 0.9 in 52 images, surpassing the identification accuracy of existing techniques. This work offers a novel technique for the automatic large-scale identification of landslide boundaries in remote sensing images and establishes the groundwork for future investigations and applications in related domains.
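The MIoU figures above can be grounded with a small sketch of the metric. This is a simplification: MIoU for segmentation averages per-class IoU over the prediction map, whereas the toy masks below are flat binary vectors compared per image:

```python
# Per-image IoU for binary landslide masks, averaged into an MIoU-style score.
# Toy 0/1 masks stand in for real segmentation output.

def iou(pred, truth):
    """Intersection-over-union of two flat binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    union = sum(1 for p, t in zip(pred, truth) if p == 1 or t == 1)
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Average IoU over (prediction, ground truth) mask pairs."""
    return sum(iou(p, t) for p, t in pairs) / len(pairs)
```

An IoU of 0.9 means the predicted boundary region overlaps 90% of the union of predicted and true landslide pixels, which is why values above 0.9 indicate very tight boundary agreement.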
Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression promoted neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression compared with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that this impairment is correlated with lipid metabolism in poststroke cognitive dysfunction.
Reinforcement learning (RL) has roots in dynamic programming, and within the control community it is called adaptive/approximate dynamic programming (ADP). This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, surveying the main results for discrete-time systems and continuous-time systems, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, reviewing event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance ADP formulations. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
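The dynamic-programming core that ADP approximates can be shown on a toy problem. The two-state system below is purely illustrative (not from the survey); it runs exact value iteration, the Bellman backup that adaptive critic designs approximate with function approximators when the state space is large or continuous:

```python
# Exact value iteration on a toy two-state regulation problem.

def value_iteration(states, actions, step, reward, gamma=0.9, tol=1e-9):
    """Iterate the Bellman optimality backup until the value function settles."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy deterministic dynamics: "act" drives the system to the rewarded state.
states, actions = ["low", "high"], ["wait", "act"]
step = lambda s, a: "high" if a == "act" else s
reward = lambda s, a: 1.0 if s == "high" else 0.0
V = value_iteration(states, actions, step, reward)
```

With gamma = 0.9 the values converge to V(high) = 10 and V(low) = 9, reflecting the discounted return of steering the system into, and holding it at, the regulated state.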
CC (Cloud Computing) networks are distributed and dynamic, as signals appear, disappear, or lose significance. MLTs (Machine Learning Techniques) are trained on datasets that are sometimes inadequate, in terms of samples, for inferring information. DevMLOps (Development Machine Learning Operations), a dynamic strategy used for the automatic selection and tuning of MLTs, results in significant performance differences. However, the scheme has several disadvantages, including the need for continuous training, more samples and longer training times for feature selection, and increased classification execution times. RFEs (Recursive Feature Eliminations) are computationally very expensive, as they traverse each feature without considering correlations between features. This problem can be overcome by the use of wrappers, which select better features by accounting for both test and train datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed AKFA (Adaptive Kernel Firefly Algorithm) selects features for CNM (Cloud Network Monitoring) operations. The AKFA methodology is demonstrated on the CNSD (Cloud Network Security Dataset), with satisfactory results in the performance metrics used: precision, recall, F-measure, and accuracy.
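The wrapper idea can be sketched minimally. The greedy forward search below scores whole candidate subsets with an `evaluate` callback, a stand-in for training and testing any classifier, in contrast to RFE's one-at-a-time elimination that ignores feature correlations. The toy objective rewarding features "a" and "c" is an assumption for illustration:

```python
# Greedy forward wrapper feature selection: add the feature that most improves
# the (held-out) evaluation score, stopping when no addition helps.

def forward_wrapper(features, evaluate):
    selected, best_score = [], float("-inf")
    improved = True
    while improved:
        improved, best_f = False, None
        for f in features:
            if f in selected:
                continue
            score = evaluate(selected + [f])
            if score > best_score:
                best_score, best_f, improved = score, f, True
        if improved:
            selected.append(best_f)
    return selected, best_score

# Toy objective: "a" and "c" are informative; each extra feature costs 0.1
# (a stand-in for added training and classification time).
evaluate = lambda subset: len(set(subset) & {"a", "c"}) - 0.1 * len(subset)
selected, score = forward_wrapper(["a", "b", "c"], evaluate)
```

Because the subset is scored as a whole, a feature that is only useful in combination with another can still be selected, which is the advantage wrappers hold over per-feature elimination.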
The continuous development of cyberattacks is threatening digital transformation endeavors worldwide and leads to wide losses for various organizations. These dangers have proven that signature-based approaches are insufficient to prevent emerging and polymorphic attacks. Therefore, this paper proposes Robust Malicious Executable Detection (RMED), a host-based machine learning classifier that discovers malicious Portable Executable (PE) files on hosts running Windows operating systems by collecting PE headers and applying machine learning mechanisms to detect unknown infected files. The authors collected a novel, reliable dataset containing 116,031 benign files and 179,071 malware samples from diverse sources to ensure the efficiency of the RMED approach. The most effective PE headers for differentiating between benign and malware files were selected to train the model on 15 PE features, speeding up the classification process and achieving real-time detection of malicious executables. The evaluation results showed that RMED shrank the classification time to 91 milliseconds per file while reaching an accuracy of 98.42% with a false positive rate of 1.58%. In conclusion, this paper contributes to the field of cybersecurity by presenting a comprehensive framework that leverages Artificial Intelligence (AI) methods to proactively detect and prevent cyber-attacks.
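The reported accuracy and false positive rate follow directly from confusion-matrix counts; a quick sketch with toy counts (not the RMED dataset):

```python
# Accuracy and false-positive rate from confusion-matrix counts.

def accuracy(tp, tn, fp, fn):
    """Fraction of all files classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_rate(fp, tn):
    """Fraction of benign files wrongly flagged as malware."""
    return fp / (fp + tn)
```

For malware detection the false positive rate matters independently of accuracy, since every false positive is a benign file quarantined or blocked on a production host.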
Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software, developed based on deep learning, can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by up to 16.05%. Conclusion: Our experiments demonstrated that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that a contrastive learning method based on BI-RADS can improve the detection performance of the model.
Multimodal monitoring (MMM) in the intensive care unit (ICU) has become increasingly sophisticated with the integration of neurophysical principles. However, the challenge remains to select and interpret the most appropriate combination of neuromonitoring modalities to optimize patient outcomes. This manuscript reviews current neuromonitoring tools, focusing on intracranial pressure, cerebral electrical activity, metabolism, and invasive and noninvasive autoregulation monitoring. In addition, the integration of advanced machine learning and data science tools within the ICU is discussed. Invasive monitoring includes analysis of intracranial pressure waveforms, jugular venous oximetry, monitoring of brain tissue oxygenation, thermal diffusion flowmetry, electrocorticography, depth electroencephalography, and cerebral microdialysis. Noninvasive measures include transcranial Doppler, tympanic membrane displacement, near-infrared spectroscopy, optic nerve sheath diameter, positron emission tomography, and systemic hemodynamic monitoring including heart rate variability analysis. The neurophysical basis and clinical relevance of each method within the ICU setting are examined. Machine learning algorithms have shown promise in analyzing and interpreting data in real time from continuous MMM tools, helping clinicians make more accurate and timely decisions. These algorithms can integrate diverse data streams to generate predictive models for patient outcomes and optimize treatment strategies. MMM, grounded in neurophysics, offers a more nuanced understanding of cerebral physiology and disease in the ICU. Although each modality has its strengths and limitations, their integrated use, especially in combination with machine learning algorithms, can offer invaluable information for individualized patient care.
Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding the thyroid hormone thyroxine (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering beta-amyloid (Aβ) peptide deposition and protects the brain from trauma, ischemic stroke, and Alzheimer disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in the performance of spatial memory tasks. TTR regulates this process via interaction with the transcription factor CREB, and decreased expression leads to memory deficits. Through different signaling pathways, such as MAPK, AKT, and ERK via Src, TTR provides trophic support through the megalin receptor by promoting neurite outgrowth and protecting neurons from traumatic brain injury. TTR is also responsible for a transient rise in intracellular Ca2+ via the NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we shed light on how TTR is involved in maintaining normal cognitive processes; its role in learning and memory and under memory deficit conditions; the mechanisms by which it promotes neurite outgrowth; and how it protects the brain from Alzheimer disease (AD).
The current research was grounded in prior interdisciplinary research showing that cognitive ability (verbal ability for translating cognitions into oral language) and multiple working-memory endophenotypes (behavioral markers of genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that the verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability, but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real-word oral reading accuracy, and oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest handwriting in copying, and oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of the results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities, language learning, and endophenotype profiles are discussed, along with directions for future research.
Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, such as aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications to classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
The freshness of fruits is considered one of the essential characteristics by which consumers judge their quality, flavor, and nutritional value. The primary need for identifying rotten fruits is to ensure that only fresh, high-quality fruits are sold to consumers. Rotten fruits can foster harmful bacteria, molds, and other microorganisms that cause food poisoning and other illnesses in consumers. The overall purpose of the study is to classify rotten fruits, which can affect the taste, texture, and appearance of other fresh fruits, thereby reducing their shelf life. The agriculture and food industries are increasingly adopting computer vision technology to detect rotten fruits and forecast their shelf life. Hence, this research work mainly focuses on a Convolutional Neural Network (CNN) deep learning model for the classification of rotten fruits. The proposed methodology involves real-time analysis of a dataset of various types of fruits, including apples, bananas, oranges, papayas, and guavas. Similarly, machine learning models such as Gaussian Naïve Bayes (GNB) and random forest are used to predict the fruit’s shelf life. The results obtained from the various pre-trained models for rotten fruit detection are analysed based on accuracy score to determine the best model. In comparison to other pre-trained models, the Visual Geometry Group 16 (VGG16) model obtained a higher accuracy score of 95%. Likewise, the random forest model delivers a better accuracy score of 88% compared with GNB in forecasting the fruit’s shelf life. By developing an accurate classification model, only fresh and safe fruits reach consumers, reducing the risks associated with contaminated produce. Thereby, the proposed approach will have a significant impact on the food industry through efficient fruit distribution and will also benefit customers by helping them purchase fresh fruits.
Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images before and after the rainfall event, and 10 impact factors including elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall were selected as indicators. The WeightedEnsemble model, an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides mainly occurred in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the ensemble model WeightedEnsemble achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and for land use planning in Luhe County.
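The AUC reported for the WeightedEnsemble can be read as a rank statistic: the probability that a randomly chosen landslide location receives a higher hazard score than a randomly chosen non-landslide location. A minimal sketch (toy scores, not AutoGluon output):

```python
# AUC via the rank-sum (Mann-Whitney) identity: fraction of positive/negative
# score pairs where the positive outranks the negative, counting ties as half.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 92.36% therefore means the ensemble ranks a true landslide cell above a non-landslide cell about 92% of the time, independent of any particular hazard threshold.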
Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction performance of different machine learning models, Conghua District, the area of Guangzhou most prone to landslide disasters, was selected for landslide susceptibility evaluation. The evaluation factors were selected using correlation analysis and the variance inflation factor method. Landslide models were constructed by applying four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB). Comparative analysis and evaluation of the models were conducted using statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models all have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that ensemble algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson’s disease through deep brain stimulation. Originally, wearable and wireless systems for quantifying Parkinson’s disease involved the use of a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved along a pathway that has made it progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved with the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson’s disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to “On” and “Off” status.
Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings. Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. This research achievement establishes a progressive pathway to the future objective of deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters uniquely optimized to the individual, through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
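One simple way an acceleration trace can be reduced to a tremor magnitude feature ahead of machine learning classification is a root-mean-square amplitude. This is an illustrative assumption; the exact feature set fed to the WEKA multilayer perceptron is not detailed here:

```python
# RMS amplitude of a mean-removed acceleration trace: a single summary feature
# whose value should shrink when deep brain stimulation suppresses tremor.

import math

def rms(signal):
    """Root-mean-square deviation of a signal about its mean."""
    mean = sum(signal) / len(signal)
    return math.sqrt(sum((x - mean) ** 2 for x in signal) / len(signal))
```

A window of such features per trial, labeled "On" or "Off", would form the kind of attribute table a WEKA classifier consumes.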
Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection. Addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge the research gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K Nearest Neighbors (KNN), Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset's characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study's outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
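The evaluation phase described above can be sketched in a few lines of scikit-learn. The data below is a synthetic stand-in generated with `make_classification`; the study's actual malware dataset and features are not reproduced here.

```python
# Hedged sketch: comparing the four classifiers named in the abstract on
# synthetic data standing in for a Windows-malware feature set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Placeholder for PE-derived features (benign = 0, malware = 1).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "GaussianNB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SGDC": SGDClassifier(random_state=0),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
}
results = {}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred),
        "recall": recall_score(y_te, y_pred),
        "f1": f1_score(y_te, y_pred),
    }
for name, m in results.items():
    print(f"{name}: " + ", ".join(f"{k}={v:.3f}" for k, v in m.items()))
```

On real data, the relative ranking of the four classifiers would of course depend on the feature set and class balance.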
Software Defined Network (SDN) and Network Function Virtualization (NFV) technology promise several benefits to network operators, including reduced maintenance costs, increased network operational performance, a simplified network lifecycle, and easier policy management. Network vulnerabilities can modify services provided by Network Function Virtualization MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) lifecycle management of network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and promptly and adaptively implements and handles security functions in order to enhance the quality of experience for end users. An anomaly detector investigates these identified risks and provides secure network services. It enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing the proposed approach, an intrusion dataset is used that holds multiple malicious activities such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector is built on supervised Machine Learning (ML) techniques, namely Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier showed better outcomes (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment.
Using the latest available artificial intelligence (AI) technology, an advanced algorithm, LIVERFASt™, has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver damage. Prevalence of NAFLD (nonalcoholic fatty liver disease) and resulting NASH (nonalcoholic steatohepatitis) are constantly increasing worldwide, creating challenges for screening, as the diagnosis of NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analyzed using a proprietary ML algorithm, LIVERFASt™, developed with a database of 2862 unique medical assessments of biomarkers, where 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data of 13,068 patients who underwent the LIVERFASt™ test for evaluation of fatty liver disease were analyzed. Data evaluation revealed 11% of the patients exhibited significant fibrosis with fibrosis scores 0.6 - 1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients, 63%, whereas severe steatosis S3 was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained using the LIVERFASt™ algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings, while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1 - 2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3 - 4). The modified SAF scoring system generated by LIVERFASt™ provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians. This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of the clinical path of individuals at all stages of liver disease, and provision of an efficient system for therapeutic interventions.
Abstract: This paper advances new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive learning framework used to design, develop, and deploy a multi-faceted, self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, atypicality, and surprise to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and, above all, adaptive Oracles. The motivation for using conformal prediction and its immediate offspring, semi-supervised learning and transduction, is that they support, first and foremost, discriminative and non-parametric methods characteristic of principled demarcation using cohorts and sensitivity analysis to hedge on prediction outcomes, including negative selection, on one side, and, on the other, provide credibility and confidence indices that assist meta-reasoning and information fusion.
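As a minimal illustration of the conformal machinery this abstract invokes, the sketch below computes a transductive p-value from a toy nonconformity score (distance from the calibration mean). The score, the toy "traffic" values, and the function name are illustrative assumptions, not the paper's actual measures of affinity, atypicality, and surprise.

```python
# Hedged sketch of a conformal p-value: the fraction of calibration
# nonconformity scores at least as large as the test example's score
# (with the usual +1 smoothing). Low p-value = surprising = possible foe.

def conformal_p_value(calibration, x):
    mu = sum(calibration) / len(calibration)
    scores = [abs(c - mu) for c in calibration]   # toy nonconformity measure
    score_x = abs(x - mu)
    ge = sum(1 for s in scores if s >= score_x)
    return (ge + 1) / (len(scores) + 1)

# Hypothetical feature values of known-benign ("self") network traffic.
normal_traffic = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]

print(conformal_p_value(normal_traffic, 10.0))   # typical value: high p-value
print(conformal_p_value(normal_traffic, 25.0))   # surprising value: low p-value
```

Thresholding such p-values at a significance level is what yields the hedged, confidence-qualified predictions the abstract describes.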
Abstract: With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, aimed specifically at literary texts. It is thereby distinguished from the majority of the research literature on the topic, which focuses primarily on building ontologies from a vast array of data sources, both structured and unstructured, to support various forms of AI, in particular the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on the mutually informing disciplines of philosophy and computer science, or more specifically the relationship between metaphysics, epistemology, ontology, computing, and AI. This is followed by a technically in-depth discussion of DEBRA, our dependency-tree-based concept hierarchy constructor, which, as its name alludes to, constructs a conceptual map in the form of a directed graph illustrating the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK. With this work we hope both to augment the Knowledge Representation literature with opportunities for intellectual advancement in AI through more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, and to open up new areas of research between Computer Science and the Humanities with respect to the application of the latest NLP tools and techniques to literature of cultural significance, shedding light on existing methods of computation with respect to documents in semantic space that effectively allow for, at the very least, the comparison and evolution of texts through time using vector space math.
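A concept hierarchy of the kind described is just a directed graph from more general to more specific concepts. The sketch below shows only that output structure; DEBRA's internals are not reproduced. In practice spaCy's dependency parse would supply the (parent, child) concept pairs, and the Moby-Dick-flavored examples here are hypothetical hand-coded stand-ins.

```python
# Hedged sketch: a concept hierarchy as a directed graph (adjacency lists),
# with hypothetical edges a dependency parse of a literary text might yield.
from collections import defaultdict

edges = [
    ("whale", "white whale"),      # hypothetical specialization edges
    ("whale", "sperm whale"),
    ("white whale", "Moby Dick"),
]

graph = defaultdict(list)          # concept -> list of more specific concepts
for parent, child in edges:
    graph[parent].append(child)

def descendants(concept):
    """All concepts below `concept` in the hierarchy, depth-first."""
    out = []
    for child in graph.get(concept, []):
        out.append(child)
        out.extend(descendants(child))
    return out

print(descendants("whale"))  # ['white whale', 'Moby Dick', 'sperm whale']
```

Real dependency-derived edges would be noisier and would need the filtering and typing steps the paper describes.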
Abstract: Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This abstract provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine. Machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite the tremendous potential of machine learning in stroke medicine, several challenges must be addressed. These include the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review aims to summarize all the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation. At the same time, another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42090054, 41931295) and the Natural Science Foundation of Hubei Province of China (2022CFA002).
Abstract: The frequent occurrence of extreme weather events has made landslides a global natural disaster issue. It is crucial to rapidly and accurately determine the boundaries of landslides for geohazard evaluation and emergency response. Therefore, the Skip Connection DeepLab neural network (SCDnn), a deep learning model based on 770 optical remote sensing images of landslides, is proposed to improve the accuracy of landslide boundary detection. The SCDnn model is optimized for the over-segmentation issue which occurs in conventional deep learning models when there is a significant degree of similarity between topographical geomorphic features. SCDnn exhibits notable improvements in landslide feature extraction and semantic segmentation by combining an enhanced Atrous Spatial Pyramid Convolutional Block (ASPC) with a coding structure that reduces model complexity. The experimental results demonstrate that SCDnn can identify landslide boundaries in 119 images with MIoU values between 0.8 and 0.9, and in 52 images with MIoU values exceeding 0.9, which exceeds the identification accuracy of existing techniques. This work offers a novel technique for the automatic large-scale identification of landslide boundaries in remote sensing images, in addition to establishing the groundwork for future investigations and applications in related domains.
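The MIoU (mean Intersection over Union) values quoted above average the per-class IoU between predicted and reference masks. A minimal sketch, with tiny illustrative arrays standing in for real per-pixel landslide masks:

```python
# Hedged sketch of the MIoU segmentation metric: for each class, IoU =
# |prediction ∩ truth| / |prediction ∪ truth|; MIoU is the mean over classes.
def miou(pred, truth, classes=(0, 1)):
    ious = []
    for c in classes:
        inter = sum(1 for p, t in zip(pred, truth) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, truth) if p == c or t == c)
        ious.append(inter / union if union else 1.0)  # empty class counts as perfect
    return sum(ious) / len(ious)

# Toy flattened masks: 1 = landslide pixel, 0 = background.
pred  = [1, 1, 0, 0, 1, 0, 1, 1]
truth = [1, 0, 0, 0, 1, 1, 1, 1]
print(miou(pred, truth))  # (0.5 + 2/3) / 2
```

On real imagery the same computation runs over two-dimensional masks, typically vectorized with NumPy rather than Python loops.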
Funding: Financially supported by the National Natural Science Foundation of China, Nos. 81303115, 81774042 (both to XC); the Pearl River S&T Nova Program of Guangzhou, No. 201806010025 (to XC); the Specialty Program of Guangdong Province Hospital of Chinese Medicine of China, No. YN2018ZD07 (to XC); the Natural Science Foundation of Guangdong Province of China, No. 2023A1515012174 (to JL); the Science and Technology Program of Guangzhou of China, Nos. 202102010268 (to XC), 202102010339 (to JS); and the Guangdong Provincial Key Laboratory of Research on Emergency in TCM, Nos. 2018-75, 2019-140 (to JS).
Abstract: Vascular etiology is the second most prevalent cause of cognitive impairment globally. Endothelin-1, which is produced and secreted by endothelial cells and astrocytes, is implicated in the pathogenesis of stroke. However, the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood. Here, using mice in which astrocytic endothelin-1 was overexpressed, we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia (1 hour of ischemia; 7 days, 28 days, or 3 months of reperfusion). We also revealed that astrocytic endothelin-1 overexpression promoted neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion. Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6, which were differentially expressed in the brain, were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke. Moreover, the levels of the enriched differentially expressed proteins were closely related to lipid metabolism, as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis. Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine, sphingomyelin, and phosphatidic acid. Overall, this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results for discrete-time systems and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, where event-based design, robust stabilization, and game design are reviewed. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they promote ADP formulation significantly. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey on ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era. In addition, they also play a vital role in promoting environmental protection and industrial intelligence.
Abstract: Cloud Computing (CC) networks are distributed and dynamic, as signals appear, disappear, or lose significance. Machine Learning Techniques (MLTs) train on datasets that are sometimes inadequate in sample size for inferring information. A dynamic strategy, DevMLOps (Development Machine Learning Operations), used for automatic selection and tuning of MLTs, results in significant performance differences. However, the scheme has many disadvantages, including the need for continuous training, more samples and training time for feature selection, and increased classification execution times. RFEs (Recursive Feature Eliminations) are computationally very expensive, as they traverse each feature without considering correlations between features. This problem can be overcome by the use of Wrappers, as they select better features by accounting for test and train datasets. The aim of this paper is to use DevQLMLOps for automated tuning and selection based on orchestration and messaging between containers. The proposed AKFA (Adaptive Kernel Firefly Algorithm) selects features for CNM (Cloud Network Monitoring) operations. The AKFA methodology is demonstrated using the CNSD (Cloud Network Security Dataset), with satisfactory results in the performance metrics used: precision, recall, F-measure, and accuracy.
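For reference, the RFE baseline the abstract criticizes can be sketched with scikit-learn's `RFE`, which recursively drops the weakest features of a fitted estimator. The synthetic data below stands in for cloud-network-monitoring features; the paper's AKFA firefly search is not reproduced here.

```python
# Hedged sketch: recursive feature elimination (the baseline discussed in the
# abstract) on synthetic stand-in data. A wrapper method would instead score
# candidate feature subsets by held-out model performance.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=1)  # placeholder for CNM features

est = LogisticRegression(max_iter=1000)
rfe = RFE(est, n_features_to_select=5).fit(X, y)   # drop one feature per step

selected = [i for i, keep in enumerate(rfe.support_) if keep]
print("RFE kept feature indices:", selected)
```

Because RFE refits the estimator once per eliminated feature, its cost grows with the feature count, which is the expense the abstract's wrapper/metaheuristic alternatives aim to reduce.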
Abstract: The continuous development of cyberattacks is threatening digital transformation endeavors worldwide and leads to wide losses for various organizations. These dangers have proven that signature-based approaches are insufficient to prevent emerging and polymorphic attacks. Therefore, this paper proposes Robust Malicious Executable Detection (RMED) using a host-based machine learning classifier to discover malicious Portable Executable (PE) files in hosts using Windows operating systems, by collecting PE headers and applying machine learning mechanisms to detect unknown infected files. The authors have collected a novel, reliable dataset containing 116,031 benign files and 179,071 malware samples from diverse sources to ensure the efficiency of the RMED approach. The most effective PE headers that can highly differentiate between benign and malware files were selected to train the model on 15 PE features, to speed up the classification process and achieve real-time detection of malicious executables. The evaluation results showed that RMED succeeded in shrinking the classification time to 91 milliseconds per file while reaching an accuracy of 98.42% with a false positive rate of 1.58%. In conclusion, this paper contributes to the field of cybersecurity by presenting a comprehensive framework that leverages Artificial Intelligence (AI) methods to proactively detect and prevent cyber-attacks.
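The two headline figures are standard confusion-matrix quantities. The counts below are hypothetical, chosen only so the arithmetic reproduces values close to those reported; they are not the paper's actual confusion matrix.

```python
# Illustrative arithmetic only: accuracy and false-positive rate from
# hypothetical confusion-matrix counts (tp/tn/fp/fn are made up).
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def false_positive_rate(fp, tn):
    return fp / (fp + tn)

tp, tn, fp, fn = 9842, 9842, 158, 158   # hypothetical counts
print(f"accuracy = {accuracy(tp, tn, fp, fn):.4f}")     # 0.9842
print(f"FPR      = {false_positive_rate(fp, tn):.4f}")  # 0.0158
```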
Funding: Macao Polytechnic University Grants (RP/FCSD-01/2022, RP/FCA-05/2022) and the Science and Technology Development Fund of Macao (0105/2022/A).
Abstract: Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analyses. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed based on deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning (a transfer learning method based on cross-organ cancer detection) and a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results: When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion: Our experiments have demonstrated that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that a contrastive learning method based on BI-RADS can improve the detection performance of the model.
Abstract: Multimodal monitoring (MMM) in the intensive care unit (ICU) has become increasingly sophisticated with the integration of neurophysical principles. However, the challenge remains to select and interpret the most appropriate combination of neuromonitoring modalities to optimize patient outcomes. This manuscript reviews current neuromonitoring tools, focusing on intracranial pressure, cerebral electrical activity, metabolism, and invasive and noninvasive autoregulation monitoring. In addition, the integration of advanced machine learning and data science tools within the ICU is discussed. Invasive monitoring includes analysis of intracranial pressure waveforms, jugular venous oximetry, monitoring of brain tissue oxygenation, thermal diffusion flowmetry, electrocorticography, depth electroencephalography, and cerebral microdialysis. Noninvasive measures include transcranial Doppler, tympanic membrane displacement, near-infrared spectroscopy, optic nerve sheath diameter, positron emission tomography, and systemic hemodynamic monitoring including heart rate variability analysis. The neurophysical basis and clinical relevance of each method within the ICU setting are examined. Machine learning algorithms have shown promise in analyzing and interpreting data in real time from continuous MMM tools, helping clinicians make more accurate and timely decisions. These algorithms can integrate diverse data streams to generate predictive models for patient outcomes and optimize treatment strategies. MMM, grounded in neurophysics, offers a more nuanced understanding of cerebral physiology and disease in the ICU. Although each modality has its strengths and limitations, their integrated use, especially in combination with machine learning algorithms, can offer invaluable information for individualized patient care.
Abstract: Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding the thyroid hormone thyroxine (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering beta-amyloid (Aβ) peptide deposition and protects the brain from trauma, ischemic stroke, and Alzheimer disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in simulation of spatial memory tasks. TTR regulates this process via interaction with the transcription factor CREB, and decreased expression leads to memory deficits. Through different signaling pathways, such as MAPK, AKT, and ERK via Src, TTR provides trophic support through the megalin receptor by promoting neurite outgrowth and protecting neurons from traumatic brain injury. TTR is also responsible for the transient rise in intracellular Ca2+ via the NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we try to shed light on how TTR is involved in maintaining normal cognitive processes, its role in learning and memory and under memory-deficit conditions, the mechanisms by which it promotes neurite outgrowth, and how it protects the brain from Alzheimer disease (AD).
Abstract: The current research was grounded in prior interdisciplinary research that showed cognitive ability (verbal ability for translating cognitions into oral language) and multiple working-memory endophenotypes (behavioral markers of genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that the verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real-word oral reading accuracy, oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest handwriting in copying, oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of the results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities and language-learning and endophenotype profiles are discussed, along with directions for future research.
Abstract: Recently, there has been a notable surge of interest in scientific research regarding spectral images. The potential of these images to revolutionize the digital photography industry, like aerial photography through Unmanned Aerial Vehicles (UAVs), has captured considerable attention. One encouraging aspect is their combination with machine learning and deep learning algorithms, which have demonstrated remarkable outcomes in image classification. As a result of this powerful amalgamation, the adoption of spectral images has experienced exponential growth across various domains, with agriculture being one of the prominent beneficiaries. This paper presents an extensive survey encompassing multispectral and hyperspectral images, focusing on their applications for classification challenges in diverse agricultural areas, including plants, grains, fruits, and vegetables. By meticulously examining primary studies, we delve into the specific agricultural domains where multispectral and hyperspectral images have found practical use. Additionally, our attention is directed towards utilizing machine learning techniques for effectively classifying hyperspectral images within the agricultural context. The findings of our investigation reveal that deep learning and support vector machines have emerged as widely employed methods for hyperspectral image classification in agriculture. Nevertheless, we also shed light on the various issues and limitations of working with spectral images. This comprehensive analysis aims to provide valuable insights into the current state of spectral imaging in agriculture and its potential for future advancements.
Abstract: The freshness of fruits is considered one of the essential characteristics by which consumers determine their quality, flavor, and nutritional value. The primary need for identifying rotten fruits is to ensure that only fresh and high-quality fruits are sold to consumers. Rotten fruits can foster harmful bacteria, molds, and other microorganisms that can cause food poisoning and other illnesses in consumers. The overall purpose of the study is to classify rotten fruits, which can affect the taste, texture, and appearance of other fresh fruits, thereby reducing their shelf life. The agriculture and food industries are increasingly adopting computer vision technology to detect rotten fruits and forecast their shelf life. Hence, this research work mainly focuses on a Convolutional Neural Network (CNN) deep learning model, which helps in the classification of rotten fruits. The proposed methodology involves real-time analysis of a dataset of various types of fruits, including apples, bananas, oranges, papayas, and guavas. Similarly, machine learning models such as Gaussian Naïve Bayes (GNB) and random forest are used to predict the fruit's shelf life. The results obtained from the various pre-trained models for rotten fruit detection are analysed based on an accuracy score to determine the best model. In comparison to other pre-trained models, the Visual Geometry Group 16 (VGG16) model obtained a higher accuracy score of 95%. Likewise, the random forest model delivers a better accuracy score of 88% when compared with GNB in forecasting the fruit's shelf life. By developing an accurate classification model, only fresh and safe fruits reach consumers, reducing the risks associated with contaminated produce. Thereby, the proposed approach will have a significant impact on the food industry for efficient fruit distribution and also benefit customers by enabling the purchase of fresh fruits.
Funding: Supported by the State Administration of Science, Technology and Industry for National Defence, PRC (KJSP2020020303) and the National Institute of Natural Hazards, Ministry of Emergency Management of China (ZDJ2021-12).
Abstract: Landslide hazard mapping is essential for regional landslide hazard management. The main objective of this study is to construct a rainfall-induced landslide hazard map of Luhe County, China based on an automated machine learning framework (AutoGluon). A total of 2241 landslides were identified from satellite images before and after the rainfall event, and 10 impact factors, including elevation, slope, aspect, normalized difference vegetation index (NDVI), topographic wetness index (TWI), lithology, land cover, distance to roads, distance to rivers, and rainfall, were selected as indicators. The WeightedEnsemble model, which is an ensemble of 13 basic machine learning models weighted together, was used to output the landslide hazard assessment results. The results indicate that landslides mainly occurred in the central part of the study area, especially in Hetian and Shanghu. In total, 102.44 s were spent training all the models, and the ensemble model WeightedEnsemble achieved an Area Under the Curve (AUC) value of 92.36% on the test set. In addition, 14.95% of the study area was determined to be at very high hazard, with a landslide density of 12.02 per square kilometer. This study serves as a significant reference for the prevention and mitigation of geological hazards and land use planning in Luhe County.
Funding: Supported by projects of the China Geological Survey (DD20221729, DD20190291) and the Zhuhai Urban Geological Survey (including informatization) (MZCD-2201-008).
Abstract: Machine learning is currently one of the research hotspots in the field of landslide prediction. To clarify and evaluate the differences in characteristics and prediction performance of different machine learning models, Conghua District, which is the most landslide-prone area of Guangzhou, was selected for landslide susceptibility evaluation. The evaluation factors were selected by using correlation analysis and the variance inflation factor method. Applying four machine learning methods, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machines (SVM), and Extreme Gradient Boosting (XGB), landslide models were constructed. Comparative analysis and evaluation of the models were conducted through statistical indices and receiver operating characteristic (ROC) curves. The results showed that the LR, RF, SVM, and XGB models have good predictive performance for landslide susceptibility, with area under the curve (AUC) values of 0.752, 0.965, 0.996, and 0.998, respectively. The XGB model had the highest predictive ability, followed by the RF, SVM, and LR models. The frequency ratio (FR) accuracy of the LR, RF, SVM, and XGB models was 0.775, 0.842, 0.759, and 0.822, respectively. The RF and XGB models were superior to the LR and SVM models, indicating that ensemble algorithms have better predictive ability than single classification algorithms in regional landslide classification problems.
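A four-model AUC comparison of this kind can be sketched with scikit-learn. The data below is synthetic (the study's ten evaluation factors are not reproduced), and `GradientBoostingClassifier` is used as a stand-in for XGBoost so the sketch needs only scikit-learn.

```python
# Hedged sketch: comparing LR, RF, SVM, and a gradient-boosting stand-in for
# XGB by test-set AUC on synthetic landslide-susceptibility-like data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for 10 evaluation factors (slope, NDVI, lithology, ...).
X, y = make_classification(n_samples=1000, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(random_state=2),
    "SVM": SVC(probability=True, random_state=2),
    "GB (XGB stand-in)": GradientBoostingClassifier(random_state=2),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
for name, auc in sorted(aucs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: AUC = {auc:.3f}")
```

With real terrain factors, the ranking would depend on the data; the study's reported ordering (XGB > RF > SVM > LR) is specific to its Conghua District dataset.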
Abstract: The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson's disease through deep brain stimulation. Originally, wearable and wireless systems for quantifying Parkinson's disease involved the use of a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved in a pathway that has made it progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved by the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical-grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson's disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to "On" and "Off" status. Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings.
Machine learning classification was performed in the Waikato Environment for Knowledge Analysis (WEKA) using a multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. This research achievement establishes a progressive pathway toward the future objective of deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters uniquely optimized to the individual through extrinsic means: a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
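The study performed the multilayer perceptron classification in WEKA; as a hedged sketch of the equivalent step, the code below uses scikit-learn's MLPClassifier on simulated two-feature acceleration summaries (e.g., signal mean and standard deviation). The feature choice and the cluster centers are illustrative assumptions, not the study's actual quantified tremor data.

```python
# Sketch: MLP classification of "On" vs "Off" stimulation status from
# simulated acceleration-signal features (scikit-learn stands in for WEKA).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Simulated feature vectors: "Off" = pronounced tremor, "On" = attenuated.
off = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(40, 2))
on = rng.normal(loc=[0.5, 0.2], scale=0.3, size=(40, 2))
X = np.vstack([off, on])
y = np.array([0] * 40 + [1] * 40)  # 0 = "Off", 1 = "On"

mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
scores = cross_val_score(mlp, X, y, cv=5)  # 5-fold cross-validation accuracy
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With well-separated tremor signatures, even a small hidden layer classifies the two stimulation states reliably, which mirrors the "considerable classification accuracy" reported above.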
Funding: This research work is supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R411), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Malware attacks on Windows machines pose significant cybersecurity threats, necessitating effective detection and prevention mechanisms. Supervised machine learning classifiers have emerged as promising tools for malware detection. However, there remains a need for comprehensive studies that compare the performance of different classifiers specifically for Windows malware detection; addressing this gap can provide valuable insights for enhancing cybersecurity strategies. While numerous studies have explored malware detection using machine learning techniques, there is a lack of systematic comparison of supervised classifiers for Windows malware detection. Understanding the relative effectiveness of these classifiers can inform the selection of optimal detection methods and improve overall security measures. This study aims to bridge that gap by conducting a comparative analysis of supervised machine learning classifiers for detecting malware on Windows systems. The objectives include: investigating the performance of various classifiers, such as Gaussian Naïve Bayes, K-Nearest Neighbors (KNN), the Stochastic Gradient Descent Classifier (SGDC), and Decision Tree, in detecting Windows malware; evaluating the accuracy, efficiency, and suitability of each classifier for real-world malware detection scenarios; identifying the strengths and limitations of different classifiers to provide insights for cybersecurity practitioners and researchers; and offering recommendations for selecting the most effective classifier for Windows malware detection based on empirical evidence. The study employs a structured methodology consisting of several phases: exploratory data analysis, data preprocessing, model training, and evaluation. Exploratory data analysis involves understanding the dataset’s characteristics and identifying preprocessing requirements. Data preprocessing includes cleaning, feature encoding, dimensionality reduction, and optimization to prepare the data for training. Model training utilizes
various supervised classifiers, and their performance is evaluated using metrics such as accuracy, precision, recall, and F1 score. The study’s outcomes comprise a comparative analysis of supervised machine learning classifiers for Windows malware detection. Results reveal the effectiveness and efficiency of each classifier in detecting different types of malware. Additionally, insights into their strengths and limitations provide practical guidance for enhancing cybersecurity defenses. Overall, this research contributes to advancing malware detection techniques and bolstering the security posture of Windows systems against evolving cyber threats.
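The evaluation phase described above can be sketched as follows: the four named classifiers are trained and scored with accuracy, precision, recall, and F1. Synthetic feature vectors stand in for the study's (unspecified) Windows malware dataset, so the numbers are illustrative only.

```python
# Sketch of the comparative evaluation: four supervised classifiers scored
# with accuracy, precision, recall, and F1 on synthetic placeholder data.
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score)
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder for preprocessed malware features (label 1 = malicious).
X, y = make_classification(n_samples=600, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

classifiers = {
    "GaussianNB": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SGDC": SGDClassifier(random_state=1),
    "DecisionTree": DecisionTreeClassifier(random_state=1),
}
results = {}
for name, clf in classifiers.items():
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "precision": precision_score(y_te, pred),
        "recall": recall_score(y_te, pred),
        "f1": f1_score(y_te, pred),
    }
    print(name, results[name])
```

Collecting all four metrics per classifier, as here, is what enables the strengths-and-limitations comparison the study reports.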
Funding: This work was funded by the Deanship of Scientific Research at Jouf University under Grant Number (DSR2022-RG-0102).
Abstract: Software Defined Network (SDN) and Network Function Virtualization (NFV) technology offer network operators several benefits, including reduced maintenance costs, increased network operational performance, a simplified network lifecycle, and easier policy management. Attacks exploiting network vulnerabilities attempt to modify services provided by NFV MANagement and Orchestration (NFV MANO), and malicious attacks in different scenarios disrupt the lifecycle management performed by the NFV Orchestrator (NFVO) and Virtualized Infrastructure Manager (VIM) for network services or individual Virtualized Network Functions (VNFs). This paper proposes an anomaly detection mechanism that monitors threats in NFV MANO and responds promptly and adaptively, implementing and managing security functions to enhance the quality of experience for end users. An anomaly detector investigates these identified risks and provides secure network services; it enables virtual network security functions and identifies anomalies in Kubernetes (a cloud-based platform). For training and testing of the proposed approach, an intrusion dataset is used that holds multiple malicious activities, such as Smurf, Neptune, Teardrop, Pod, Land, and IPsweep, categorized as Probing (Prob), Denial of Service (DoS), User to Root (U2R), and Remote to User (R2L) attacks. The anomaly detector is equipped with Machine Learning (ML) capabilities, making use of supervised learning techniques such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Naïve Bayes (NB), and Extreme Gradient Boosting (XGBoost). The proposed framework has been evaluated by deploying the identified ML algorithms on a Jupyter notebook in Kubeflow to simulate Kubernetes for validation purposes. The RF classifier showed better outcomes (99.90% accuracy) than the other classifiers in detecting anomalies/intrusions in the containerized environment.
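A hedged sketch of the best-performing detector described above: a Random Forest trained to label traffic records as normal or as one of the four attack categories (Prob, DoS, U2R, R2L). The feature vectors are synthetic placeholders, not the actual intrusion dataset, so the accuracy shown will differ from the paper's 99.90%.

```python
# Sketch: multiclass Random Forest anomaly detector labeling records as
# normal traffic or one of four attack categories. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

LABELS = ["normal", "Prob", "DoS", "U2R", "R2L"]

# Placeholder for preprocessed connection-record features.
X, y = make_classification(n_samples=1000, n_features=15, n_informative=10,
                           n_classes=5, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)

rf = RandomForestClassifier(n_estimators=300, random_state=7)
rf.fit(X_tr, y_tr)
pred = rf.predict(X_te)
acc = accuracy_score(y_te, pred)
print(f"RF accuracy: {acc:.3f}")
print("first predicted label:", LABELS[pred[0]])
```

In the paper's setting the same fit/predict cycle would run inside a Kubeflow notebook against records drawn from the containerized environment.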
Abstract: Using the latest available artificial intelligence (AI) technology, an advanced algorithm, LIVERFASt™, has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms in assessing liver damage. The prevalence of NAFLD (nonalcoholic fatty liver disease) and the resulting NASH (nonalcoholic steatohepatitis) are constantly increasing worldwide, creating challenges for screening, as the diagnosis of NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and the identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analyzed using a proprietary ML algorithm, LIVERFASt™, developed with a database of 2862 unique medical assessments of biomarkers, of which 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data from 13,068 patients who underwent the LIVERFASt™ test for evaluation of fatty liver disease were analyzed. Data evaluation revealed that 11% of the patients exhibited significant fibrosis, with fibrosis scores of 0.6 - 1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients (63%), whereas severe steatosis (S3) was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained with the LIVERFASt™ algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings, while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1 - 2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3 - 4). The modified SAF scoring system generated by LIVERFASt™ provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians.
This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of the clinical course of individuals at all stages of liver disease, and provision of an efficient system for therapeutic interventions.
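The modified SAF staging rules quoted in the abstract can be encoded as a small decision function. This is a hedged sketch: only the thresholds the abstract states unambiguously are implemented, the one garbled threshold is left out (those cases fall through to an indeterminate NAFLD category), and the function name and labels are illustrative, not part of the LIVERFASt™ product.

```python
# Sketch of the modified SAF staging rules as stated in the abstract.
# Cases not covered by a clearly stated threshold return an
# indeterminate NAFLD label rather than guessing.
def stage_saf(s: int, a: int, f: int) -> str:
    """Map (Steatosis, Activity, Fibrosis) grades to a coarse category."""
    if s == 0:
        return "no NAFLD"                      # no steatosis detected
    if a > 2 and 1 <= f <= 2:
        return "confirmed NASH"                # Sx > 0, Ay > 2, Fz = 1-2
    if a > 2 and 3 <= f <= 4:
        return "advanced NASH"                 # Sx > 0, Ay > 2, Fz = 3-4
    if a == 2 and f > 0:
        return "NAFLD or NASH"                 # Sx > 0, Ay = 2, Fz > 0
    return "NAFLD (indeterminate activity/fibrosis)"

print(stage_saf(2, 3, 1))  # confirmed NASH
print(stage_saf(1, 3, 4))  # advanced NASH
print(stage_saf(0, 0, 0))  # no NAFLD
```

Encoding the staging as an explicit rule table like this makes the cohort percentages in the abstract reproducible from per-patient (S, A, F) grades.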