Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our ho...Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study.They were divided into Group A(45,conventional teaching method)and Group B(45 cases,PBL independent learning model)by randomized numerical table method to compare the effects of the two groups.Results:The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A(P<0.05).Conclusion:The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.展开更多
This paper examines how cybersecurity is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends t...This paper examines how cybersecurity is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends that they are not the same. The concept of cyber security is explored, which goes beyond protecting information resources to include a wider variety of assets, including people [1]. Protecting information assets is the main goal of traditional information security, with consideration to the human element and how people fit into the security process. On the other hand, cyber security adds a new level of complexity, as people might unintentionally contribute to or become targets of cyberattacks. This aspect presents moral questions since it is becoming more widely accepted that society has a duty to protect weaker members of society, including children [1]. The study emphasizes how important cyber security is on a larger scale, with many countries creating plans and laws to counteract cyberattacks. Nevertheless, a lot of these sources frequently neglect to define the differences or the relationship between information security and cyber security [1]. The paper focus on differentiating between cybersecurity and information security on a larger scale. The study also highlights other areas of cybersecurity which includes defending people, social norms, and vital infrastructure from threats that arise from online in addition to information and technology protection. It contends that ethical issues and the human factor are becoming more and more important in protecting assets in the digital age, and that cyber security is a paradigm shift in this regard [1].展开更多
This paper advances new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services defenses against adaptive, malicious, persistent, and tactic...This paper advances new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive and learning framework used to design, develop, and deploy a multi-faceted?self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, a typicality, and surprise, to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and most important and above all around adaptive Oracles. The motivation for using conformal prediction and its immediate off-spring, those of semi-supervised learning and transduction, comes from them first and foremost supporting discriminative and non-parametric methods characteristic of principled demarcation using cohorts and sensitivity analysis to hedge on the prediction outcomes including negative selection, on one side, and providing credibility and confidence indices that assist meta-reasoning and information fusion.展开更多
With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, which is aimed specifically for use with literary texts, as such disti...With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, which is aimed specifically for use with literary texts, as such distinguishing itself from the majority of research literature on the topic which is primarily focused on building ontologies from a vast array of different types of data sources, both structured and unstructured, to support various forms of AI, in particular, the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on mutually informing disciplines of philosophy and computer science, or more specifically the relationship between metaphysics, epistemology, ontology, computing and AI, followed by a technically in-depth discussion of DEBRA, our dependency tree based concept hierarchy constructor, which as its name alludes to, constructs a conceptual map in the form of a directed graph which illustrates the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK. With this work we hope to both augment the Knowledge Representation literature with opportunities for intellectual advancement in AI with more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, as well as open up new areas of research between Computer Science and the Humanities with respect to the application of the latest in NLP tools and techniques upon literature of cultural significance, shedding light on existing methods of computation with respect to documents in semantic space that effectively allows for, at the very least, the comparison and evolution of texts through time, using vector space math.展开更多
IEEE 1012 [1] describes the SDLC phase activities for software independent verification and validation (IV & V) for nuclear power plant in truly general and conceptual manner, which requires the upward and/or down...IEEE 1012 [1] describes the SDLC phase activities for software independent verification and validation (IV & V) for nuclear power plant in truly general and conceptual manner, which requires the upward and/or downward tailoring on its interpretation for practical IV & V. It contains crucial and encompassing check points and guidelines to analyze the design integrity, without addressing the formalized and the specific criteria for IV & V activities confirming the technical integrity. It is necessary to list up the inspection viewpoint via interpretation of the standard that is practical review points checking design consistency. For fruitful IV & V of Control Element Driving Mechanism Control System (CEDMCS) software for Yonggwang Nuclear Power Plant unit 3 & 4, the specific viewpoints and approach are necessary based on the guidelines of IEEE 1012 to enhance the system quality by considering the level of implementation of the theoretical and the practical IV & V. Additionally IV & V guideline of IEEE 1012 does not specifically provide the concrete measure considering the system characteristics of CEDMCS. This paper provides the seven (7) characteristic criteria for CEDMCS IV & V, and by applying these viewpoints, the design analysis such as function, performance, interface and exception, backward and forward requirement traceability analysis has been conducted. The requirement, design, implementation, and test phase were only considered for IV & V in this project. This article also provides the translation of code to map theoretical verification and validation into practical verification and validation. This paper emphasizes the necessity of the intensive design inspection and walkthrough for requirement phase to resolve the design faults because the IV & V of early phase of SDLC obviously contributes to find out most of critical design inconsistency. Especially for test phase IV & V, it is strongly recommended to prepare the test plan document which is going to be the basis for the test coverage selection and test strategy. This test plan document should be based on the critical characteristics of function and performance of CEDMCS. Also to guarantee the independency of V & V organization participating in this project, and to acquire the full package of design details for IV & V, the systematic approach and efforts with an aspect of management is highlighted among the participants.展开更多
Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control sy...Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.展开更多
The aim of this work is mathematical education through the knowledge system and mathematical modeling. A net model of formation of mathematical knowledge as a deductive theory is suggested here. Within this model the ...The aim of this work is mathematical education through the knowledge system and mathematical modeling. A net model of formation of mathematical knowledge as a deductive theory is suggested here. Within this model the formation of deductive theory is represented as the development of a certain informational space, the elements of which are structured in the form of the orientated semantic net. This net is properly metrized and characterized by a certain system of coverings. It allows injecting net optimization parameters, regulating qualitative aspects of knowledge system under consideration. To regulate the creative processes of the formation and realization of mathematical know- edge, stochastic model of formation deductive theory is suggested here in the form of branching Markovian process, which is realized in the corresponding informational space as a semantic net. According to this stochastic model we can get correct foundation of criterion of optimization creative processes that leads to “great main points” strategy (GMP-strategy) in the process of realization of the effective control in the research work in the sphere of mathematics and its applications.展开更多
First of all it is necessary to point out that 'reciting' is the wrong term for what Chinese students are often asked to do when they are learning English. The correct terms are, 'learning by heart'... First of all it is necessary to point out that 'reciting' is the wrong term for what Chinese students are often asked to do when they are learning English. The correct terms are, 'learning by heart' or 'rote learning'. In this article the term 'rote learning' will be used.……展开更多
E-learning platforms support education systems worldwide, transferring theoretical knowledge as well as soft skills. In the present study high-school pupils’, and adult students’ opinions were evaluated through a mo...E-learning platforms support education systems worldwide, transferring theoretical knowledge as well as soft skills. In the present study high-school pupils’, and adult students’ opinions were evaluated through a modern structured MOODLE interactive course, designed for the needs of the laboratory course “Automotive Systems”. The study concerns Greek secondary vocational education pupils aged 18 and vocational training adult students aged 20 to 50 years. The multistage, equal size simple random cluster sample was used as a sampling method. Pupils and adult students of each cluster completed structured 10-question questionnaires both before and after attending the course. A total of 120 questionnaires were collected. In general, our findings disclosed that the majority of pupils and adult students had significantly improved their knowledge and skills from using MOODLE. They reported strengthening conventional teaching, using the new MOODLE technology. The satisfaction indices improved quite, with the differences in their mean values being statistically significant.展开更多
Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However...Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.展开更多
The frequent occurrence of extreme weather events has rendered numerous landslides to a global natural disaster issue.It is crucial to rapidly and accurately determine the boundaries of landslides for geohazards evalu...The frequent occurrence of extreme weather events has rendered numerous landslides to a global natural disaster issue.It is crucial to rapidly and accurately determine the boundaries of landslides for geohazards evaluation and emergency response.Therefore,the Skip Connection DeepLab neural network(SCDnn),a deep learning model based on 770 optical remote sensing images of landslide,is proposed to improve the accuracy of landslide boundary detection.The SCDnn model is optimized for the over-segmentation issue which occurs in conventional deep learning models when there is a significant degree of similarity between topographical geomorphic features.SCDnn exhibits notable improvements in landslide feature extraction and semantic segmentation by combining an enhanced Atrous Spatial Pyramid Convolutional Block(ASPC)with a coding structure that reduces model complexity.The experimental results demonstrate that SCDnn can identify landslide boundaries in 119 images with MIoU values between 0.8and 0.9;while 52 images with MIoU values exceeding 0.9,which exceeds the identification accuracy of existing techniques.This work can offer a novel technique for the automatic extensive identification of landslide boundaries in remote sensing images in addition to establishing the groundwork for future inve stigations and applications in related domains.展开更多
Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and ...Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results towards discrete-time systems and continuous-time systems are surveyed, respectively.Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environment is discussed, respectively, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environment attract enormous attention. The ADP architecture is revisited under the perspective of data-driven and RL frameworks,showing how they promote ADP formulation significantly.Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, the comprehensive survey on ADP and RL for advanced control applications has d emonstrated its remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.展开更多
Ransomware has emerged as a critical cybersecurity threat,characterized by its ability to encrypt user data or lock devices,demanding ransom for their release.Traditional ransomware detection methods face limitations ...Ransomware has emerged as a critical cybersecurity threat,characterized by its ability to encrypt user data or lock devices,demanding ransom for their release.Traditional ransomware detection methods face limitations due to their assumption of similar data distributions between training and testing phases,rendering them less effective against evolving ransomware families.This paper introduces TLERAD(Transfer Learning for Enhanced Ransomware Attack Detection),a novel approach that leverages unsupervised transfer learning and co-clustering techniques to bridge the gap between source and target domains,enabling robust detection of both known and unknown ransomware variants.The proposed method achieves high detection accuracy,with an AUC of 0.98 for known ransomware and 0.93 for unknown ransomware,significantly outperforming baseline methods.Comprehensive experiments demonstrate TLERAD’s effectiveness in real-world scenarios,highlighting its adapt-ability to the rapidly evolving ransomware landscape.The paper also discusses future directions for enhancing TLERAD,including real-time adaptation,integration with lightweight and post-quantum cryptography,and the incorporation of explainable AI techniques.展开更多
The current research was grounded in prior interdisciplinary research that showed cognitive ability (verbal ability for translating cognitions into oral language) and multiple-working memory endophenotypes (behavioral...The current research was grounded in prior interdisciplinary research that showed cognitive ability (verbal ability for translating cognitions into oral language) and multiple-working memory endophenotypes (behavioral markers of genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real word oral reading accuracy, oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest hand-writing in copying, oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities and language learning and endophenotype profiles are discussed along with directions for future research.展开更多
Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligen...Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligence.Among its various applications,it has proven groundbreaking in healthcare as well,both in clinical practice and research.In this editorial,we succinctly introduce ML applications and present a study,featured in the latest issue of the World Journal of Clinical Cases.The authors of this study conducted an analysis using both multiple linear regression(MLR)and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease(NAFLD).Their results implicated age as the most important determining factor in both groups,followed by lactic dehydrogenase,uric acid,forced expiratory volume in one second,and albumin.In addition,for the NAFLD-group,the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure,as compared to plasma calcium and body fat for the NAFLD+group.However,the study's distinctive contribution lies in its adoption of ML methodologies,showcasing their superiority over traditional statistical approaches(herein MLR),thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.展开更多
Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding thyroid hormone thyroxin (T4) and retinol in plasma and cerebrospinal fluid (CS...Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding thyroid hormone thyroxin (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering of beta-amyloid peptides Aβ deposition, and protects the brain from trauma, ischemic stroke and Alzheimer disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in simulation of spatial memory tasks. TTR via interacting with transcription factor CREB regulates this process and decreased expression leads to memory deficits. By different signaling pathways, like MAPK, AKT, and ERK via Src, TTR provides tropical support through megalin receptor by promoting neurite outgrowth and protecting the neurons from traumatic brain injury. TTR is also responsible for the transient rise in intracellular Ca2+ via NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we tried to shed light on how TTR is involved in maintaining normal cognitive processes, its role in learning and memory, under memory deficit conditions;by which mechanisms it promotes neurite outgrowth;and how it protects the brain from Alzheimer disease (AD).展开更多
Multimodal monitoring(MMM)in the intensive care unit(ICU)has become increasingly sophisticated with the integration of neurophysical principles.However,the challenge remains to select and interpret the most appropriat...Multimodal monitoring(MMM)in the intensive care unit(ICU)has become increasingly sophisticated with the integration of neurophysical principles.However,the challenge remains to select and interpret the most appropriate combination of neuromonitoring modalities to optimize patient outcomes.This manuscript reviewed current neuromonitoring tools,focusing on intracranial pressure,cerebral electrical activity,metabolism,and invasive and noninvasive autoregulation moni-toring.In addition,the integration of advanced machine learning and data science tools within the ICU were discussed.Invasive monitoring includes analysis of intracranial pressure waveforms,jugular venous oximetry,monitoring of brain tissue oxygenation,thermal diffusion flowmetry,electrocorticography,depth electroencephalography,and cerebral microdialysis.Noninvasive measures include transcranial Doppler,tympanic membrane displacement,near-infrared spectroscopy,optic nerve sheath diameter,positron emission tomography,and systemic hemodynamic monitoring including heart rate variability analysis.The neurophysical basis and clinical relevance of each method within the ICU setting were examined.Machine learning algorithms have shown promise by helping to analyze and interpret data in real time from continuous MMM tools,helping clinicians make more accurate and timely decisions.These algorithms can integrate diverse data streams to generate predictive models for patient outcomes and optimize treatment strategies.MMM,grounded in neurophysics,offers a more nuanced understanding of cerebral physiology and disease in the ICU.Although each modality has its strengths and limitations,its integrated use,especially in combination with machine learning algorithms,can offer invaluable information for individualized patient care.展开更多
The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Thera...The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson’s disease through deep brain stimulation. Originally wearable and wireless systems for quantifying Parkinson’s disease involved the use a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved in a pathway that has made the smartphone progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved with the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson’s disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to “On” and “Off” status. Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings. Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. The research achievement establishes a progressive pathway to the future objective of achieving deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters that are uniquely optimized to the individual through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.展开更多
Using the latest available artificial intelligence (AI) technology, an advanced algorithm LIVERFAStTM has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver dam...Using the latest available artificial intelligence (AI) technology, an advanced algorithm LIVERFAStTM has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver damage. Prevalence of NAFLD (Nonalcoholic fatty liver disease) and resulting NASH (nonalcoholic steatohepatitis) are constantly increasing worldwide, creating challenges for screening as the diagnosis for NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analyzed using a proprietary ML algorithm LIVERFAStTM developed with a database of 2862 unique medical assessments of biomarkers, where 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data of 13,068 patients who underwent the LIVERFAStTM test for evaluation of fatty liver disease were analysed. Data evaluation revealed 11% of the patients exhibited significant fibrosis with fibrosis scores 0.6 - 1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients, 63%, whereas severe steatosis S3 was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained using the LIVERFAStTM algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1 - 2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3 - 4). The modified SAF scoring system generated by LIVERFAStTM provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians. This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of clinical path of individuals at all stages of liver diseases, and provision of an efficient system for therapeutic interventions.展开更多
Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning technique...Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.展开更多
文摘Objective:To analyze the effect of using a problem-based(PBL)independent learning model in teaching cerebral ischemic stroke(CIS)first aid in emergency medicine.Methods:90 interns in the emergency department of our hospital from May 2022 to May 2023 were selected for the study.They were divided into Group A(45,conventional teaching method)and Group B(45 cases,PBL independent learning model)by randomized numerical table method to compare the effects of the two groups.Results:The teaching effect indicators and student satisfaction scores in Group B were higher than those in Group A(P<0.05).Conclusion:The use of the PBL independent learning model in the teaching of CIS first aid can significantly improve the teaching effect and student satisfaction.
文摘This paper examines how cybersecurity is developing and how it relates to more conventional information security. Although information security and cyber security are sometimes used synonymously, this study contends that they are not the same. The concept of cyber security is explored, which goes beyond protecting information resources to include a wider variety of assets, including people [1]. Protecting information assets is the main goal of traditional information security, with consideration to the human element and how people fit into the security process. On the other hand, cyber security adds a new level of complexity, as people might unintentionally contribute to or become targets of cyberattacks. This aspect presents moral questions since it is becoming more widely accepted that society has a duty to protect weaker members of society, including children [1]. The study emphasizes how important cyber security is on a larger scale, with many countries creating plans and laws to counteract cyberattacks. Nevertheless, a lot of these sources frequently neglect to define the differences or the relationship between information security and cyber security [1]. The paper focus on differentiating between cybersecurity and information security on a larger scale. The study also highlights other areas of cybersecurity which includes defending people, social norms, and vital infrastructure from threats that arise from online in addition to information and technology protection. It contends that ethical issues and the human factor are becoming more and more important in protecting assets in the digital age, and that cyber security is a paradigm shift in this regard [1].
文摘This paper advances new directions for cyber security using adversarial learning and conformal prediction in order to enhance network and computing services defenses against adaptive, malicious, persistent, and tactical offensive threats. Conformal prediction is the principled and unified adaptive and learning framework used to design, develop, and deploy a multi-faceted?self-managing defensive shield to detect, disrupt, and deny intrusive attacks, hostile and malicious behavior, and subterfuge. Conformal prediction leverages apparent relationships between immunity and intrusion detection using non-conformity measures characteristic of affinity, a typicality, and surprise, to recognize patterns and messages as friend or foe and to respond to them accordingly. The solutions proffered throughout are built around active learning, meta-reasoning, randomness, distributed semantics and stratification, and most important and above all around adaptive Oracles. The motivation for using conformal prediction and its immediate off-spring, those of semi-supervised learning and transduction, comes from them first and foremost supporting discriminative and non-parametric methods characteristic of principled demarcation using cohorts and sensitivity analysis to hedge on the prediction outcomes including negative selection, on one side, and providing credibility and confidence indices that assist meta-reasoning and information fusion.
文摘With this work, we introduce a novel method for the unsupervised learning of conceptual hierarchies, or concept maps as they are sometimes called, which is aimed specifically for use with literary texts, as such distinguishing itself from the majority of research literature on the topic which is primarily focused on building ontologies from a vast array of different types of data sources, both structured and unstructured, to support various forms of AI, in particular, the Semantic Web as envisioned by Tim Berners-Lee. We first elaborate on mutually informing disciplines of philosophy and computer science, or more specifically the relationship between metaphysics, epistemology, ontology, computing and AI, followed by a technically in-depth discussion of DEBRA, our dependency tree based concept hierarchy constructor, which as its name alludes to, constructs a conceptual map in the form of a directed graph which illustrates the concepts, their respective relations, and the implied ontological structure of the concepts as encoded in the text, decoded with standard Python NLP libraries such as spaCy and NLTK. With this work we hope to both augment the Knowledge Representation literature with opportunities for intellectual advancement in AI with more intuitive, less analytical, and well-known forms of knowledge representation from the cognitive science community, as well as open up new areas of research between Computer Science and the Humanities with respect to the application of the latest in NLP tools and techniques upon literature of cultural significance, shedding light on existing methods of computation with respect to documents in semantic space that effectively allows for, at the very least, the comparison and evolution of texts through time, using vector space math.
文摘IEEE 1012 [1] describes the SDLC phase activities for software independent verification and validation (IV & V) for nuclear power plant in truly general and conceptual manner, which requires the upward and/or downward tailoring on its interpretation for practical IV & V. It contains crucial and encompassing check points and guidelines to analyze the design integrity, without addressing the formalized and the specific criteria for IV & V activities confirming the technical integrity. It is necessary to list up the inspection viewpoint via interpretation of the standard that is practical review points checking design consistency. For fruitful IV & V of Control Element Driving Mechanism Control System (CEDMCS) software for Yonggwang Nuclear Power Plant unit 3 & 4, the specific viewpoints and approach are necessary based on the guidelines of IEEE 1012 to enhance the system quality by considering the level of implementation of the theoretical and the practical IV & V. Additionally IV & V guideline of IEEE 1012 does not specifically provide the concrete measure considering the system characteristics of CEDMCS. This paper provides the seven (7) characteristic criteria for CEDMCS IV & V, and by applying these viewpoints, the design analysis such as function, performance, interface and exception, backward and forward requirement traceability analysis has been conducted. The requirement, design, implementation, and test phase were only considered for IV & V in this project. This article also provides the translation of code to map theoretical verification and validation into practical verification and validation. This paper emphasizes the necessity of the intensive design inspection and walkthrough for requirement phase to resolve the design faults because the IV & V of early phase of SDLC obviously contributes to find out most of critical design inconsistency. Especially for test phase IV & V, it is strongly recommended to prepare the test plan document which is going to be the basis for the test coverage selection and test strategy. This test plan document should be based on the critical characteristics of function and performance of CEDMCS. Also to guarantee the independency of V & V organization participating in this project, and to acquire the full package of design details for IV & V, the systematic approach and efforts with an aspect of management is highlighted among the participants.
基金partly supported by the University of Malaya Impact Oriented Interdisci-plinary Research Grant under Grant IIRG008(A,B,C)-19IISS.
文摘Organizations are adopting the Bring Your Own Device(BYOD)concept to enhance productivity and reduce expenses.However,this trend introduces security challenges,such as unauthorized access.Traditional access control systems,such as Attribute-Based Access Control(ABAC)and Role-Based Access Control(RBAC),are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources.This paper proposes a method for enforcing access decisions that is adaptable and dynamic,based on multilayer hybrid deep learning techniques,particularly the Tabular Deep Neural Network Tabular DNN method.This technique transforms all input attributes in an access request into a binary classification(allow or deny)using multiple layers,ensuring accurate and efficient access decision-making.The proposed solution was evaluated using the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94%accuracy rate.Additionally,the proposed solution enhances the implementation of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point(PAP).This solution significantly improves the flexibility of access control systems,making themmore dynamic and adaptable to the evolving needs ofmodern organizations.Furthermore,it offers a scalable approach to manage the complexities associated with the BYOD environment,providing a robust framework for secure and efficient access management.
文摘The aim of this work is mathematical education through the knowledge system and mathematical modeling. A net model of formation of mathematical knowledge as a deductive theory is suggested here. Within this model the formation of deductive theory is represented as the development of a certain informational space, the elements of which are structured in the form of the orientated semantic net. This net is properly metrized and characterized by a certain system of coverings. It allows injecting net optimization parameters, regulating qualitative aspects of knowledge system under consideration. To regulate the creative processes of the formation and realization of mathematical know- edge, stochastic model of formation deductive theory is suggested here in the form of branching Markovian process, which is realized in the corresponding informational space as a semantic net. According to this stochastic model we can get correct foundation of criterion of optimization creative processes that leads to “great main points” strategy (GMP-strategy) in the process of realization of the effective control in the research work in the sphere of mathematics and its applications.
文摘 First of all it is necessary to point out that 'reciting' is the wrong term for what Chinese students are often asked to do when they are learning English. The correct terms are, 'learning by heart' or 'rote learning'. In this article the term 'rote learning' will be used.……
文摘E-learning platforms support education systems worldwide, transferring theoretical knowledge as well as soft skills. In the present study high-school pupils’, and adult students’ opinions were evaluated through a modern structured MOODLE interactive course, designed for the needs of the laboratory course “Automotive Systems”. The study concerns Greek secondary vocational education pupils aged 18 and vocational training adult students aged 20 to 50 years. The multistage, equal size simple random cluster sample was used as a sampling method. Pupils and adult students of each cluster completed structured 10-question questionnaires both before and after attending the course. A total of 120 questionnaires were collected. In general, our findings disclosed that the majority of pupils and adult students had significantly improved their knowledge and skills from using MOODLE. They reported strengthening conventional teaching, using the new MOODLE technology. The satisfaction indices improved quite, with the differences in their mean values being statistically significant.
基金financially supported by the National Natural Science Foundation of China,No.81303115,81774042 (both to XC)the Pearl River S&T Nova Program of Guangzhou,No.201806010025 (to XC)+3 种基金the Specialty Program of Guangdong Province Hospital of Chinese Medicine of China,No.YN2018ZD07 (to XC)the Natural Science Foundatior of Guangdong Province of China,No.2023A1515012174 (to JL)the Science and Technology Program of Guangzhou of China,No.20210201 0268 (to XC),20210201 0339 (to JS)Guangdong Provincial Key Laboratory of Research on Emergency in TCM,Nos.2018-75,2019-140 (to JS)
文摘Vascular etiology is the second most prevalent cause of cognitive impairment globally.Endothelin-1,which is produced and secreted by endothelial cells and astrocytes,is implicated in the pathogenesis of stroke.However,the way in which changes in astrocytic endothelin-1 lead to poststroke cognitive deficits following transient middle cerebral artery occlusion is not well understood.Here,using mice in which astrocytic endothelin-1 was overexpressed,we found that the selective overexpression of endothelin-1 by astrocytic cells led to ischemic stroke-related dementia(1 hour of ischemia;7 days,28 days,or 3 months of reperfusion).We also revealed that astrocytic endothelin-1 overexpression contributed to the role of neural stem cell proliferation but impaired neurogenesis in the dentate gyrus of the hippocampus after middle cerebral artery occlusion.Comprehensive proteome profiles and western blot analysis confirmed that levels of glial fibrillary acidic protein and peroxiredoxin 6,which were differentially expressed in the brain,were significantly increased in mice with astrocytic endothelin-1 overexpression in comparison with wild-type mice 28 days after ischemic stroke.Moreover,the levels of the enriched differentially expressed proteins were closely related to lipid metabolism,as indicated by Kyoto Encyclopedia of Genes and Genomes pathway analysis.Liquid chromatography-mass spectrometry nontargeted metabolite profiling of brain tissues showed that astrocytic endothelin-1 overexpression altered lipid metabolism products such as glycerol phosphatidylcholine,sphingomyelin,and phosphatidic acid.Overall,this study demonstrates that astrocytic endothelin-1 overexpression can impair hippocampal neurogenesis and that it is correlated with lipid metabolism in poststroke cognitive dysfunction.
基金supported by the National Natural Science Foundation of China(Grant Nos.42090054,41931295)the Natural Science Foundation of Hubei Province of China(2022CFA002)。
文摘The frequent occurrence of extreme weather events has rendered numerous landslides to a global natural disaster issue.It is crucial to rapidly and accurately determine the boundaries of landslides for geohazards evaluation and emergency response.Therefore,the Skip Connection DeepLab neural network(SCDnn),a deep learning model based on 770 optical remote sensing images of landslide,is proposed to improve the accuracy of landslide boundary detection.The SCDnn model is optimized for the over-segmentation issue which occurs in conventional deep learning models when there is a significant degree of similarity between topographical geomorphic features.SCDnn exhibits notable improvements in landslide feature extraction and semantic segmentation by combining an enhanced Atrous Spatial Pyramid Convolutional Block(ASPC)with a coding structure that reduces model complexity.The experimental results demonstrate that SCDnn can identify landslide boundaries in 119 images with MIoU values between 0.8and 0.9;while 52 images with MIoU values exceeding 0.9,which exceeds the identification accuracy of existing techniques.This work can offer a novel technique for the automatic extensive identification of landslide boundaries in remote sensing images in addition to establishing the groundwork for future inve stigations and applications in related domains.
基金supported in part by the National Natural Science Foundation of China(62222301, 62073085, 62073158, 61890930-5, 62021003)the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5)Beijing Natural Science Foundation (JQ19013)。
文摘Reinforcement learning(RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming(ADP) within the control community. This paper reviews recent developments in ADP along with RL and its applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are displayed, where the main results towards discrete-time systems and continuous-time systems are surveyed, respectively.Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environment is discussed, respectively, where event-based design, robust stabilization, and game design are reviewed. Moreover, the extensions of ADP for addressing control problems under complex environment attract enormous attention. The ADP architecture is revisited under the perspective of data-driven and RL frameworks,showing how they promote ADP formulation significantly.Finally, several typical control applications with respect to RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, the comprehensive survey on ADP and RL for advanced control applications has d emonstrated its remarkable potential within the artificial intelligence era. In addition, it also plays a vital role in promoting environmental protection and industrial intelligence.
文摘Ransomware has emerged as a critical cybersecurity threat,characterized by its ability to encrypt user data or lock devices,demanding ransom for their release.Traditional ransomware detection methods face limitations due to their assumption of similar data distributions between training and testing phases,rendering them less effective against evolving ransomware families.This paper introduces TLERAD(Transfer Learning for Enhanced Ransomware Attack Detection),a novel approach that leverages unsupervised transfer learning and co-clustering techniques to bridge the gap between source and target domains,enabling robust detection of both known and unknown ransomware variants.The proposed method achieves high detection accuracy,with an AUC of 0.98 for known ransomware and 0.93 for unknown ransomware,significantly outperforming baseline methods.Comprehensive experiments demonstrate TLERAD’s effectiveness in real-world scenarios,highlighting its adapt-ability to the rapidly evolving ransomware landscape.The paper also discusses future directions for enhancing TLERAD,including real-time adaptation,integration with lightweight and post-quantum cryptography,and the incorporation of explainable AI techniques.
文摘The current research was grounded in prior interdisciplinary research that showed cognitive ability (verbal ability for translating cognitions into oral language) and multiple-working memory endophenotypes (behavioral markers of genetic or brain bases of language learning) predict reading and writing achievement in students with and without specific learning disabilities in written language (SLDs-WL). Results largely replicated prior findings that verbally gifted with dyslexia score higher on reading and writing achievement than those with average verbal ability but not on endophenotypes. The current study extended that research by comparing those with and without SLDs-WL with assessed verbal ability held constant. The verbally gifted without SLDs-WL (n = 14) scored higher than the verbally gifted with SLDs-WL (n = 27) on six language skills (oral sentence construction, best and fastest handwriting in copying, single real word oral reading accuracy, oral pseudoword reading accuracy and rate) and four endophenotypes (orthographic and morphological coding, orthographic loop, and switching attention). The verbally average without SLDs-WL (n = 6) scored higher than the verbally average with SLDs-WL (n = 22) on four language skills (best and fastest hand-writing in copying, oral pseudoword reading accuracy and rate) and two endophenotypes (orthographic coding and orthographic loop). Implications of results for translating interdisciplinary research into flexible definitions for assessment and instruction to serve students with varying verbal abilities and language learning and endophenotype profiles are discussed along with directions for future research.
文摘Machine learning(ML)is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis,thus creating machines that can complete tasks otherwise requiring human intelligence.Among its various applications,it has proven groundbreaking in healthcare as well,both in clinical practice and research.In this editorial,we succinctly introduce ML applications and present a study,featured in the latest issue of the World Journal of Clinical Cases.The authors of this study conducted an analysis using both multiple linear regression(MLR)and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease(NAFLD).Their results implicated age as the most important determining factor in both groups,followed by lactic dehydrogenase,uric acid,forced expiratory volume in one second,and albumin.In addition,for the NAFLD-group,the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure,as compared to plasma calcium and body fat for the NAFLD+group.However,the study's distinctive contribution lies in its adoption of ML methodologies,showcasing their superiority over traditional statistical approaches(herein MLR),thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
文摘Transthyretin (TTR), a carrier protein present in the liver and choroid plexus of the brain, has been shown to be responsible for binding thyroid hormone thyroxin (T4) and retinol in plasma and cerebrospinal fluid (CSF). TTR aids in sequestering of beta-amyloid peptides Aβ deposition, and protects the brain from trauma, ischemic stroke and Alzheimer disease (AD). Accordingly, hippocampal gene expression of TTR plays a significant role in learning and memory as well as in simulation of spatial memory tasks. TTR via interacting with transcription factor CREB regulates this process and decreased expression leads to memory deficits. By different signaling pathways, like MAPK, AKT, and ERK via Src, TTR provides tropical support through megalin receptor by promoting neurite outgrowth and protecting the neurons from traumatic brain injury. TTR is also responsible for the transient rise in intracellular Ca2+ via NMDA receptor, playing a dominant role under excitotoxic conditions. In this review, we tried to shed light on how TTR is involved in maintaining normal cognitive processes, its role in learning and memory, under memory deficit conditions;by which mechanisms it promotes neurite outgrowth;and how it protects the brain from Alzheimer disease (AD).
文摘Multimodal monitoring(MMM)in the intensive care unit(ICU)has become increasingly sophisticated with the integration of neurophysical principles.However,the challenge remains to select and interpret the most appropriate combination of neuromonitoring modalities to optimize patient outcomes.This manuscript reviewed current neuromonitoring tools,focusing on intracranial pressure,cerebral electrical activity,metabolism,and invasive and noninvasive autoregulation moni-toring.In addition,the integration of advanced machine learning and data science tools within the ICU were discussed.Invasive monitoring includes analysis of intracranial pressure waveforms,jugular venous oximetry,monitoring of brain tissue oxygenation,thermal diffusion flowmetry,electrocorticography,depth electroencephalography,and cerebral microdialysis.Noninvasive measures include transcranial Doppler,tympanic membrane displacement,near-infrared spectroscopy,optic nerve sheath diameter,positron emission tomography,and systemic hemodynamic monitoring including heart rate variability analysis.The neurophysical basis and clinical relevance of each method within the ICU setting were examined.Machine learning algorithms have shown promise by helping to analyze and interpret data in real time from continuous MMM tools,helping clinicians make more accurate and timely decisions.These algorithms can integrate diverse data streams to generate predictive models for patient outcomes and optimize treatment strategies.MMM,grounded in neurophysics,offers a more nuanced understanding of cerebral physiology and disease in the ICU.Although each modality has its strengths and limitations,its integrated use,especially in combination with machine learning algorithms,can offer invaluable information for individualized patient care.
文摘The concept of Network Centric Therapy represents an amalgamation of wearable and wireless inertial sensor systems and machine learning with access to a Cloud computing environment. The advent of Network Centric Therapy is highly relevant to the treatment of Parkinson’s disease through deep brain stimulation. Originally wearable and wireless systems for quantifying Parkinson’s disease involved the use a smartphone to quantify hand tremor. Although originally novel, the smartphone has notable issues as a wearable application for quantifying movement disorder tremor. The smartphone has evolved in a pathway that has made the smartphone progressively more cumbersome to mount about the dorsum of the hand. Furthermore, the smartphone utilizes an inertial sensor package that is not certified for medical analysis, and the trial data access a provisional Cloud computing environment through an email account. These concerns are resolved with the recent development of a conformal wearable and wireless inertial sensor system. This conformal wearable and wireless system mounts to the hand with the profile of a bandage by adhesive and accesses a secure Cloud computing environment through a segmented wireless connectivity strategy involving a smartphone and tablet. Additionally, the conformal wearable and wireless system is certified by the FDA of the United States of America for ascertaining medical grade inertial sensor data. These characteristics make the conformal wearable and wireless system uniquely suited for the quantification of Parkinson’s disease treatment through deep brain stimulation. Preliminary evaluation of the conformal wearable and wireless system is demonstrated through the differentiation of deep brain stimulation set to “On” and “Off” status. Based on the robustness of the acceleration signal, this signal was selected to quantify hand tremor for the prescribed deep brain stimulation settings. Machine learning classification using the Waikato Environment for Knowledge Analysis (WEKA) was applied using the multilayer perceptron neural network. The multilayer perceptron neural network achieved considerable classification accuracy for distinguishing between the deep brain stimulation system set to “On” and “Off” status through the quantified acceleration signal data obtained by this recently developed conformal wearable and wireless system. The research achievement establishes a progressive pathway to the future objective of achieving deep brain stimulation capabilities that promote closed-loop acquisition of configuration parameters that are uniquely optimized to the individual through extrinsic means of a highly conformal wearable and wireless inertial sensor system and machine learning with access to Cloud computing resources.
文摘Using the latest available artificial intelligence (AI) technology, an advanced algorithm LIVERFAStTM has been used to evaluate the diagnostic accuracy of machine learning (ML) biomarker algorithms to assess liver damage. Prevalence of NAFLD (Nonalcoholic fatty liver disease) and resulting NASH (nonalcoholic steatohepatitis) are constantly increasing worldwide, creating challenges for screening as the diagnosis for NASH requires invasive liver biopsy. Key issues in NAFLD patients are the differentiation of NASH from simple steatosis and identification of advanced hepatic fibrosis. In this prospective study, the staging of three different lesions of the liver to diagnose fatty liver was analyzed using a proprietary ML algorithm LIVERFAStTM developed with a database of 2862 unique medical assessments of biomarkers, where 1027 assessments were used to train the algorithm and 1835 constituted the validation set. Data of 13,068 patients who underwent the LIVERFAStTM test for evaluation of fatty liver disease were analysed. Data evaluation revealed 11% of the patients exhibited significant fibrosis with fibrosis scores 0.6 - 1.00. Approximately 7% of the population had severe hepatic inflammation. Steatosis was observed in most patients, 63%, whereas severe steatosis S3 was observed in 20%. Using modified SAF (Steatosis, Activity and Fibrosis) scores obtained using the LIVERFAStTM algorithm, NAFLD was detected in 13.41% of the patients (Sx > 0, Ay 0). Approximately 1.91% (Sx > 0, Ay = 2, Fz > 0) of the patients showed NAFLD or NASH scorings while 1.08% had confirmed NASH (Sx > 0, Ay > 2, Fz = 1 - 2) and 1.49% had advanced NASH (Sx > 0, Ay > 2, Fz = 3 - 4). The modified SAF scoring system generated by LIVERFAStTM provides a simple and convenient evaluation of NAFLD and NASH in a cohort of Southeast Asians. This system may lead to the use of noninvasive liver tests in extended populations for more accurate diagnosis of liver pathology, prediction of clinical path of individuals at all stages of liver diseases, and provision of an efficient system for therapeutic interventions.
文摘Stroke is a leading cause of disability and mortality worldwide,necessitating the development of advanced technologies to improve its diagnosis,treatment,and patient outcomes.In recent years,machine learning techniques have emerged as promising tools in stroke medicine,enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches.This abstract provides a comprehensive overview of machine learning’s applications,challenges,and future directions in stroke medicine.Recently introduced machine learning algorithms have been extensively employed in all the fields of stroke medicine.Machine learning models have demonstrated remarkable accuracy in imaging analysis,diagnosing stroke subtypes,risk stratifications,guiding medical treatment,and predicting patient prognosis.Despite the tremendous potential of machine learning in stroke medicine,several challenges must be addressed.These include the need for standardized and interoperable data collection,robust model validation and generalization,and the ethical considerations surrounding privacy and bias.In addition,integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care.Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis,tailored treatment selection,and improved prognostication.Continued research and collaboration among clinicians,researchers,and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care,ultimately leading to enhanced patient outcomes and quality of life.This review aims to summarize all the current implications of machine learning in stroke diagnosis,treatment,and prognostic evaluation.At the same time,another purpose of this paper is to explore all the future perspectives these techniques can provide in combating this disabling disease.