The growing number of COVID-19 cases puts pressure on healthcare services and public institutions worldwide. The pandemic has brought much uncertainty to the global economy and the situation in general. Forecasting methods and modeling techniques are important tools for governments managing critical situations caused by pandemics, which have a negative impact on public health. The main purpose of this study is to obtain short-term forecasts of disease epidemiology that could help policymakers and public institutions make necessary short-term decisions. To evaluate the effectiveness of the proposed attention-based method, which combines certain data mining algorithms with the classical ARIMA model for short-term forecasts, data on the spread of the COVID-19 virus in Lithuania were used, forecasts of the epidemic dynamics were examined, and the results are presented in the study. Nevertheless, the presented approach might be applied to any country and to other pandemic situations. The COVID-19 outbreak started at different times in different countries, hence some countries have a longer history of the disease, with more historical data, than others. The paper proposes a novel approach to data registration and machine learning-based analysis, using data from attention-based countries for forecast validation, to predict trends in the spread of COVID-19 and assess risks.
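The short-term forecasting idea above can be made concrete with a minimal sketch. The snippet below fits a toy ARIMA(1,1,0)-style model (an AR(1) on the differenced series) and rolls it forward a few days; it is a stand-in for the classical component only, not the paper's attention-based hybrid, and the case counts are hypothetical.

```python
# Minimal ARIMA(1,1,0)-style short-term forecaster: an AR(1) fitted to the
# differenced series, rolled forward a few steps. Sketch of the classical
# component only; the paper's attention-based hybrid and the Lithuanian case
# data are not reproduced here (the counts below are hypothetical).

def fit_ar1(series):
    """Least-squares slope of diff_t on diff_{t-1}."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    x, y = diffs[:-1], diffs[1:]
    denom = sum(v * v for v in x)
    return sum(a * b for a, b in zip(x, y)) / denom if denom else 0.0

def forecast(series, steps, phi):
    """Extend the series by propagating the AR(1) model on differences."""
    history = list(series)
    last_diff = history[-1] - history[-2]
    for _ in range(steps):
        last_diff = phi * last_diff
        history.append(history[-1] + last_diff)
    return history[len(series):]

cases = [10, 14, 20, 28, 38, 50, 64]   # hypothetical daily case counts
phi = fit_ar1(cases)
preds = forecast(cases, 3, phi)
```

In practice a library implementation (e.g. a full ARIMA with order selection) would replace this hand-rolled fit; the sketch only shows why short-horizon forecasts follow from the most recent trend in the differences.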
BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic was perhaps the most severe global health crisis in living memory. Alongside respiratory symptoms, elevated liver enzymes, abnormal liver function, and even acute liver failure were reported in patients suffering from severe acute respiratory syndrome coronavirus 2 pneumonia. However, the precise triggers of these forms of liver damage, and how they affect the course and outcomes of COVID-19 itself, remain unclear.

AIM: To analyze the impact of liver enzyme abnormalities on the severity and outcomes of COVID-19 in hospitalized patients.

METHODS: In this study, 684 depersonalized medical records from patients hospitalized with COVID-19 during the 2020-2021 period were analyzed. COVID-19 was diagnosed according to the guidelines of the National Institutes of Health (2021). Patients were assigned to two groups: those with elevated liver enzymes (Group 1: 603 patients), in whom at least one of four liver enzymes was elevated (per the hospital laboratory's reference norms: alanine aminotransferase (ALT) ≥ 40, aspartate aminotransferase (AST) ≥ 40, gamma-glutamyl transferase ≥ 36, or alkaline phosphatase ≥ 150) at any point of hospitalization, from admission to discharge; and the control group (Group 2: 81 patients), with normal liver enzymes throughout hospitalization. COVID-19 severity was assessed according to the interim World Health Organization guidance (2022). Data on viral pneumonia complications, laboratory tests, and underlying diseases were also collected and analyzed.

RESULTS: In total, 603 (88.2%) patients produced abnormal liver test results. ALT and AST levels were elevated by a factor of less than 3 in 54.9% and 74.8% of cases with increased enzyme levels, respectively. Patients in Group 1 had almost double the odds of bacterial complications of viral pneumonia [odds ratio (OR) = 1.73, P = 0.0217], required oxygen supply more often, and displayed higher biochemical inflammation indices than those in Group 2. No differences in other COVID-19 complications or underlying diseases were observed between the groups. Preexisting hepatitis of differing etiologies was rarely documented (in only 3.5% of patients) and had no impact on the severity of COVID-19. Only 5 (0.73%) patients experienced acute liver failure, 4 of whom died. Overall, the majority of the deceased patients (17 out of 20) had elevated liver enzymes, and most were male. All deceased patients had at least one underlying disease or a combination thereof, and the deceased suffered significantly more often from heart diseases, hypertension, and urinary tract infections than those who recovered. Alongside male gender (OR = 1.72, P = 0.0161) and older age (OR = 1.02, P = 0.0234), diabetes (OR = 3.22, P = 0.0016) and hyperlipidemia (OR = 2.67, P = 0.0238), but not obesity, were confirmed as independent factors associated with more severe COVID-19 in our cohort.

CONCLUSION: In our study, the presence of liver impairment predicted more severe inflammation, with a higher risk of bacterial complications and worse COVID-19 outcomes. Therefore, patients with severe forms of the disease should have their liver tests monitored regularly, and the results should be considered when selecting treatment, to avoid further liver damage or even insufficiency.
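The odds ratios quoted above come from standard 2×2-table analysis. As a quick illustration, the sketch below computes an OR with a 95% Wald confidence interval from hypothetical counts; the numbers are made up for the example, not taken from the study.

```python
# Odds ratio with a 95% Wald confidence interval for a 2x2 table.
# Counts below are hypothetical, not the study's data.
import math

def odds_ratio(exp_event, exp_none, ctl_event, ctl_none):
    """OR = (a*d)/(b*c); CI on the log scale with the Wald standard error."""
    or_ = (exp_event * ctl_none) / (exp_none * ctl_event)
    se = math.sqrt(1 / exp_event + 1 / exp_none + 1 / ctl_event + 1 / ctl_none)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical counts: complications with vs. without elevated liver enzymes.
or_value, ci = odds_ratio(120, 483, 10, 71)
```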
DEAR EDITOR, Ancient DNA (aDNA) from mollusc shells is considered a potential archive of historical biodiversity and evolution. However, such information is currently lacking for mollusc shells from the deep ocean, especially those from acidic chemosynthetic environments theoretically unsuitable for long-term DNA preservation. Here, we report on the recovery of mitochondrial and nuclear gene markers by Illumina sequencing of aDNA from three shells of Archivesica nanshaensis, a hydrocarbon-seep vesicomyid clam previously known only from a pair of empty shells collected at a depth of 2626 m in the South China Sea.
Sentiment analysis is a method for identifying and understanding the emotion in text through NLP and text analysis. In the information age, there is often a gap between the comments on a movie website and the movie's actual score, and sentiment analysis offers a new way to address this problem. In this paper, Python is used to obtain movie review data from the Douban platform, and models are constructed and trained using naive Bayes and Bi-LSTM. Based on the evaluation metrics, the better-performing Bi-LSTM model is selected to classify the sentiment of users' movie reviews; the classified reviews are then scored, and the resulting scores are compared with the real ratings on the website. The error in this final comparison verifies the feasibility of applying the technique to scoring film reviews. By applying this technology, rating distortion in the information age can be curbed and the rights and interests of film and television works safeguarded.
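To give a flavor of the simpler of the two classifiers mentioned, here is a self-contained multinomial naive Bayes sketch on toy tokenized reviews. It is illustrative only: the paper trains its models on real Douban data, and its stronger model is a Bi-LSTM, which is not reproduced here; tokens and labels below are made up.

```python
# Tiny multinomial naive Bayes sentiment classifier on toy tokenized reviews.
import math
from collections import Counter

def train(docs):
    """docs: list of (tokens, label). Returns class priors and word counts."""
    priors, counts = Counter(), {}
    for tokens, label in docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(tokens)
    return priors, counts

def predict(tokens, priors, counts, vocab_size):
    best, best_lp = None, float("-inf")
    total = sum(priors.values())
    for label, prior in priors.items():
        n = sum(counts[label].values())
        lp = math.log(prior / total)
        for t in tokens:  # Laplace-smoothed log-likelihood per token
            lp += math.log((counts[label][t] + 1) / (n + vocab_size))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["great", "moving", "film"], "pos"),
        (["boring", "waste", "of", "time"], "neg"),
        (["great", "acting"], "pos"),
        (["boring", "plot"], "neg")]
vocab = {t for tokens, _ in docs for t in tokens}
priors, counts = train(docs)
label = predict(["great", "film"], priors, counts, len(vocab))
```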
Background: Pneumothorax is a medical emergency caused by the abnormal accumulation of air in the pleural space, the potential space between the lungs and chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum, and we refer to this area as "lung+space." While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach. These models directly map chest radiographs to clinician-annotated lesion areas, often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive.

Methods: We propose a novel approach that incorporates the lung+space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary task of lung segmentation. This approach generates a specific lung+space constraint for each chest radiograph. Furthermore, we incorporate a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets.

Results: Our results demonstrated considerable improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in intersection over union, Dice similarity coefficient, and Hausdorff distance, respectively. These results were consistent across six baseline models built on three architectures (U-Net, LinkNet, or PSPNet) and two backbones (VGG-11 or MobileOne-S0). We further conducted an ablation study to evaluate the contribution of each component of the proposed method and undertook several robustness studies on hyperparameter selection to validate the stability of our method.

Conclusions: The integration of domain knowledge into DL models for medical applications has often been underemphasized. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation and further bolster clinicians' trust in DL tools. Beyond pneumothorax, our approach is promising for other thoracic conditions with location-relevant characteristics.
Background: Federated learning (FL) holds promise for safeguarding data privacy in healthcare collaborations. While the term "FL" was originally coined by the engineering community, the statistical field has also developed privacy-preserving algorithms, though these are less recognized. Our goal was to bridge this gap with the first comprehensive comparison of FL frameworks from both domains.

Methods: We assessed 7 FL frameworks, encompassing both engineering-based and statistical FL algorithms, and compared them against local and centralized modeling with logistic regression and the least absolute shrinkage and selection operator (Lasso). Our evaluation used both simulated data and real-world emergency department data, comparing both the estimated model coefficients and the performance of model predictions.

Results: The findings reveal that statistical FL algorithms produce much less biased estimates of model coefficients. Conversely, engineering-based methods can yield models with slightly better prediction performance, occasionally outperforming both centralized and statistical FL models.

Conclusion: This study underscores the relative strengths and weaknesses of both types of methods and provides recommendations for their selection based on distinct study characteristics. Furthermore, we emphasize the critical need to raise awareness of these methods and to integrate them into future applications of FL within the healthcare domain.
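A minimal sketch of the engineering-style FL the review describes: each site fits a logistic regression locally, and a server averages the coefficients weighted by site size (one FedAvg-like round), on toy single-feature data. This illustrates the mechanism only; none of the seven frameworks compared in the paper is reproduced here.

```python
# One FedAvg-style round for logistic regression on toy single-feature data.
import math

def local_sgd(w, data, lr=0.5, epochs=50):
    """Plain SGD on the log-loss of y ~ sigmoid(w0 + w1 * x)."""
    w = list(w)
    for _ in range(epochs):
        for x, y in data:
            p = 1 / (1 + math.exp(-(w[0] + w[1] * x)))
            w[0] -= lr * (p - y)
            w[1] -= lr * (p - y) * x
    return w

def fedavg(sites, w0=(0.0, 0.0)):
    """Average locally trained coefficient vectors, weighted by site size."""
    models = [local_sgd(w0, data) for data in sites]
    total = sum(len(d) for d in sites)
    return [sum(len(d) * m[i] for d, m in zip(sites, models)) / total
            for i in range(len(w0))]

site_a = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]  # (feature, label) pairs
site_b = [(-1.5, 0), (-0.5, 0), (0.5, 1), (1.5, 1)]
w = fedavg([site_a, site_b])
```

The coefficient-averaging step is exactly where the bias discussed in the Results can arise: the average of per-site maximum-likelihood estimates is not, in general, the estimate one would obtain from the pooled data.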
Causal inference prevails in the field of laparoscopic surgery. Once the causality between an intervention and an outcome is established, the intervention can be applied to a target population to improve clinical outcomes. In many clinical scenarios, interventions are applied longitudinally in response to patients' conditions. Such longitudinal data comprise static variables, such as age, gender, and comorbidities, and dynamic variables, such as the treatment regime, laboratory variables, and vital signs. Some dynamic variables can act as both confounder and mediator for the effect of an intervention on the outcome; in such cases, simple adjustment with a conventional regression model will bias the effect sizes. To address this, numerous statistical methods have been developed for causal inference; these include, but are not limited to, the marginal structural Cox regression model, dynamic treatment regimes, and the Cox regression model with time-varying covariates. This technical note provides a gentle introduction to these models and illustrates their use with an example from the field of laparoscopic surgery.
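The time-varying-covariate Cox model mentioned above consumes data in a start-stop ("counting process") layout, in which each covariate update opens a new risk interval for the patient. The sketch below converts one patient's longitudinal measurements into that layout; the covariate name (`lactate`) and all values are hypothetical illustrations, not from the note.

```python
# Convert one patient's longitudinal measurements into start-stop rows for a
# Cox model with time-varying covariates. Covariate name and values are
# hypothetical.

def to_start_stop(patient_id, updates, event_time, event):
    """updates: [(time, value)] sorted by time; returns one row per interval."""
    rows = []
    for (t0, value), (t1, _) in zip(updates, updates[1:]):
        rows.append({"id": patient_id, "start": t0, "stop": t1,
                     "lactate": value, "event": 0})
    t_last, v_last = updates[-1]  # final interval runs to the event/censor time
    rows.append({"id": patient_id, "start": t_last, "stop": event_time,
                 "lactate": v_last, "event": int(event)})
    return rows

rows = to_start_stop("p1", [(0, 1.2), (2, 2.5), (5, 4.0)], event_time=8, event=True)
```

Survival libraries that support time-varying covariates generally expect exactly this long format, with the event indicator set only on the patient's final interval.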
This paper presents an efficient image feature representation method, the angle structure descriptor (ASD), which is built on the angle structures of images. According to the diversity of directions, angle structures are defined in local blocks. Combining color information in HSV color space, we use angle structures to describe images. The internal correlations between neighboring pixels in angle structures are explored to form a feature vector. With angle structures as bridges, ASD extracts image features by integrating multiple kinds of information as a whole, such as color, texture, shape, and spatial layout. In addition, the proposed algorithm is efficient for image retrieval, requiring no clustering implementation or model training. Experimental results demonstrate that ASD outperforms related algorithms.
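To make the notion of direction-based local structure concrete, here is a toy descriptor that histograms quantized gradient directions over a small intensity grid. This is a loose, assumption-based analogue of the angle component only; the actual ASD additionally fuses HSV color, texture, shape, and spatial layout, and is not reproduced here.

```python
# Toy angle descriptor: quantize each interior pixel's gradient direction into
# 4 bins and return the normalized histogram. Illustrative analogue only.
import math

def angle_histogram(img, bins=4):
    h = [0] * bins
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            gx = img[r][c + 1] - img[r][c - 1]  # central differences
            gy = img[r + 1][c] - img[r - 1][c]
            if gx == 0 and gy == 0:
                continue  # flat region: no direction
            ang = math.atan2(gy, gx) % math.pi   # direction modulo sign
            h[min(int(ang / (math.pi / bins)), bins - 1)] += 1
    total = sum(h)
    return [v / total for v in h] if total else h

img = [[0, 0, 0, 0],
       [0, 10, 10, 0],
       [0, 10, 10, 0],
       [0, 0, 0, 0]]
vec = angle_histogram(img)
```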
Considering the flexible attitude maneuvering and the narrow field of view of agile Earth observation satellites (AEOS) together, a comprehensive task clustering (CTC) method is proposed to improve the observation scheduling problem for AEOS (OSPFAS). Since the observation scheduling problem for AEOS with comprehensive task clustering (OSWCTC) is a dynamic combinatorial optimization problem, two optimization objectives, the loss rate (LR) of image quality and the energy consumption (EC), are proposed to formulate OSWCTC as a bi-objective optimization model. Harnessing the power of an adaptive large neighborhood search (ALNS) algorithm together with the non-dominated sorting genetic algorithm II (NSGA-II), a bi-objective optimization algorithm, ALNS+NSGA-II, is developed to solve OSWCTC. Using existing instances, the efficiency of ALNS+NSGA-II is analyzed from several aspects; meanwhile, extensive computational experiments show that OSPFAS with CTC produces superior outcomes.
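Bi-objective optimization over LR and EC, as described above, returns a Pareto front rather than a single best schedule. The sketch below shows the non-dominated filter that NSGA-II-style sorting builds on, applied to toy objective values (both objectives minimized); it is the dominance test only, not the paper's ALNS+NSGA-II algorithm.

```python
# Non-dominated (Pareto) filter for two minimization objectives, e.g.
# (loss rate LR, energy consumption EC). Toy values, not paper instances.

def pareto_front(points):
    """Keep points not dominated: q dominates p if q <= p in both objectives
    and q != p (so q is strictly better in at least one)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

schedules = [(0.10, 5.0), (0.08, 6.0), (0.12, 4.5), (0.10, 6.5)]
front = pareto_front(schedules)
```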
The excellent optical properties of MXene provide new opportunities for short-pulse lasers. A diode-pumped passively Q-switched laser at 1.3 μm wavelength with MXene Ti3C2Tx as the saturable absorber was achieved for the first time. The stable passively Q-switched laser delivered a 454 ns pulse width and a 162 kHz repetition rate at 4.5 W incident pump power. The experimental results show that the MXene Ti3C2Tx saturable absorber can be used as an optical modulator to generate short pulses in solid-state lasers.
Fast recognition of elevator buttons is a key step for service robots to ride elevators automatically. Although there are some studies in this field, none of them achieve real-time application, owing to problems such as recognition speed and algorithm complexity. Elevator button recognition is a compound problem: it requires not only detecting the positions of multiple buttons at the same time but also accurately identifying the characters on each button. The latest version 5 of the You Only Look Once algorithm (YOLOv5) has the fastest inference speed and can detect multiple objects in real time. These advantages make YOLOv5 an ideal choice for detecting the positions of multiple buttons in an elevator, but it is not well suited to recognizing specific characters. Optical character recognition (OCR) is a well-known technique for character recognition. This paper improves the YOLOv5 network, integrates OCR technology, and applies both to the elevator button recognition process. First, we changed the detection scales in the YOLOv5 network, retaining only the 40 × 40 and 80 × 80 scales, thus improving the overall object detection speed. Then, we appended a modified OCR branch after the YOLOv5 network to identify the numbers on the buttons. Finally, we verified this method on different datasets and compared it with other typical methods. The results show that the average recall and precision of this method are 81.2% and 92.4%, respectively. Compared with others, the accuracy of this method reaches a very high level, while recognition takes only 0.056 s, far faster than the other methods.
Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. In this approach, features are first extracted from the URLs, and different ML models are then trained on them. Its limitation is that it requires manual feature engineering and does not consider sequential patterns in the URL. Deep learning (DL) models address these issues, since they can perform featureless detection. Furthermore, DL models offer better accuracy and generalization to newly crafted URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of such models and demonstrate the importance of considering this susceptibility before deploying such detection systems in real-world solutions. We propose and demonstrate a black-box attack based on scoring functions with a greedy search for the minimum number of perturbations leading to a misclassification. The attack is examined against different types of convolutional neural network (CNN)-based URL classifiers, and it causes a tangible decrease in accuracy, with more than a 56% reduction in the accuracy of the best classifier (among those selected for this work). Moreover, adversarial training shows promising results, reducing the influence of the attack on the robustness of the model to less than 7% on average.
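The greedy, score-guided search described above can be sketched as follows: try every single-character edit, keep the one that lowers the classifier's malicious score most, and stop once the sample crosses the decision threshold. The substring-based scorer below is a toy stand-in for a CNN URL classifier; in the real black-box setting, the attacker only queries the model's score function.

```python
# Greedy black-box perturbation sketch: minimal single-character edits that
# drive a (toy) malicious-URL score below the decision threshold.

def toy_score(url):
    """Stand-in 'malicious' score: fraction of known-bad substrings present."""
    bad = ["login", "secure", "verify"]
    return sum(tok in url for tok in bad) / len(bad)

def greedy_attack(url, score, threshold=0.5, max_edits=10):
    current = url
    for _ in range(max_edits):
        if score(current) < threshold:
            break  # classified benign: attack succeeded
        candidates = [current[:i] + "x" + current[i + 1:]
                      for i in range(len(current))]
        current = min(candidates, key=score)  # greedy best single edit
    return current

url = "http://secure-login.example/verify"
adv = greedy_attack(url, toy_score)
```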
Dear Editor, Robust pose estimation and feature matching are main components of multi-view geometry and computer vision. This letter discusses how to recover two-view geometry and match features between a pair of images, and presents MCNet (a multiscale clustering network), an algorithm for extracting multiscale features. It can identify the true inliers among the established putative correspondences, where outliers may degrade the geometry estimation. In particular, the proposed MCNet is based on graph clustering.
We report the fabrication of a planar waveguide in a Nd:Bi12SiO20 crystal by multi-energy C ion implantation at room temperature. The waveguide was annealed at 200 °C, 260 °C, and 300 °C in succession, each for 30 min, in an open oven. The effective refractive index profiles at transverse electric (TE) polarization remain stable after the annealing treatments. The damage distribution for multi-energy C ion implantation in the Nd:Bi12SiO20 crystal was calculated with SRIM 2010. The Raman and fluorescence spectra of the Nd:Bi12SiO20 crystal were collected with excitation beams at 633 nm and 473 nm, respectively. The results indicate the stability of the optical waveguide in the Nd:Bi12SiO20 crystal.
A novel Nd,La:SrF2 disordered crystal is prepared, and its continuous-wave wavelength-tuning operation is demonstrated for the first time. Employing a surface plasmon resonance (SPR)-based gold nanobipyramid (G-NBP) saturable absorber, we obtain a compact diode-pumped passively Q-switched Nd,La:SrF2 laser. The stable Q-switched pulses operate with a shortest pulse duration of 1.15 μs and a maximum repetition rate of 41 kHz. The corresponding single-pulse energy is 2.24 μJ. The results indicate that G-NBPs could be a promising saturable absorber for diode-pumped solid-state lasers (DPSSLs).
Guastello's polynomial regression method for solving the cusp catastrophe model has been widely applied to analyze nonlinear behavioral outcomes. However, no statistical power analysis for this modeling approach has been reported, probably due to the complex nature of the cusp catastrophe model. Since statistical power analysis is essential for research design, we propose a novel method in this paper to fill this gap. The method is simulation-based and can be used to calculate statistical power and sample size when Guastello's polynomial regression method is used for cusp catastrophe modeling. With this approach, a power curve is first produced to depict the relationship between statistical power and sample size under different model specifications. This power curve is then used to determine the sample size required for a specified statistical power. We verify the method through four scenarios generated by Monte Carlo simulations, followed by an application to real published data modeling early sexual initiation among young adolescents. Our findings suggest that this simulation-based power analysis method can be used to estimate sample size and statistical power for Guastello's polynomial regression method in cusp catastrophe modeling.
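The simulation-based recipe is generic: simulate many datasets under an assumed effect size, fit the model each time, and record how often the effect is detected. The sketch below applies it to a simple regression slope rather than the cusp model itself, to show the mechanics; all parameters are illustrative.

```python
# Monte Carlo power estimation for detecting a regression slope: the generic
# simulate-fit-count recipe, applied to a toy model (not the cusp model).
import math
import random

def simulate_power(n, slope, sims=2000, alpha_cut=1.96, seed=7):
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        xs = [rng.gauss(0, 1) for _ in range(n)]
        ys = [slope * x + rng.gauss(0, 1) for x in xs]
        mx = sum(xs) / n
        my = sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
        resid = [y - my - b * (x - mx) for x, y in zip(xs, ys)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        if abs(b / se) > alpha_cut:  # normal approximation to the t cutoff
            hits += 1
    return hits / sims

power = simulate_power(n=50, slope=0.5)
```

Sweeping `n` while holding the effect fixed traces out the power curve described above, from which the required sample size for a target power can be read off.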
Graph neural networks have been shown to be very effective in utilizing pairwise relationships across samples. Recently, there have been several successful proposals to generalize graph neural networks to hypergraph neural networks to exploit more complex relationships. In particular, the hypergraph collaborative networks yield superior results compared to other hypergraph neural networks for various semi-supervised learning tasks. The collaborative network can provide high-quality vertex embeddings and hyperedge embeddings together by formulating them as a joint optimization problem and by using their consistency in reconstructing the given hypergraph. In this paper, we aim to establish the algorithmic stability of the core layer of the collaborative network and provide generalization guarantees. The analysis sheds light on the design of hypergraph filters in collaborative networks, for instance, how the data and hypergraph filters should be scaled to achieve uniform stability of the learning process. Some experimental results on real-world datasets are presented to illustrate the theory.
This paper focuses on fine-grained, secure access to FAIR data, for which we propose ontology-based data access policies. These policies take into account both the FAIR aspects of the data relevant to access (such as provenance and licence), expressed as metadata, and additional metadata describing users. With this tripartite approach (data, associated metadata expressing FAIR information, and additional metadata about users), secure and controlled access to object data can be obtained. This adds a security dimension to the "A" (accessible) in FAIR, which is clearly needed in domains like security and intelligence. These domains need data to be shared under tight controls, with widely varying individual access rights. In this paper, we propose an approach called Ontology-Based Access Control (OBAC), which utilizes concepts and relations from a data set's domain ontology. We argue that ontology-based access policies contribute to data reusability and can be reconciled with privacy-aware data access policies. We illustrate our OBAC approach through a proof of concept and propose that OBAC be adopted as a best practice for access management of FAIR data.
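A minimal sketch of an ontology-based access check in the spirit described above: a request is granted when the user's metadata matches the concepts required by the data set's metadata, with role matching resolved through a toy subsumption hierarchy. All class and property names here are hypothetical, not taken from the paper's ontology or proof of concept.

```python
# Toy ontology-based access check: user metadata vs. data metadata, with role
# matching via a small subsumption hierarchy. All names are hypothetical.

ONTOLOGY = {  # concept -> broader concept
    "IntelligenceAnalyst": "Analyst",
    "Analyst": "Staff",
}

def subsumes(broad, narrow):
    """True if `narrow` equals `broad` or is a (transitive) specialization."""
    while narrow is not None:
        if narrow == broad:
            return True
        narrow = ONTOLOGY.get(narrow)
    return False

def allowed(user_meta, data_meta):
    """Grant access iff the user's role specializes the required role and the
    data's licence is among the licences the user has accepted."""
    return (subsumes(data_meta["requiredRole"], user_meta["role"])
            and data_meta["licence"] in user_meta["acceptedLicences"])

user = {"role": "IntelligenceAnalyst", "acceptedLicences": {"restricted-v1"}}
data = {"requiredRole": "Analyst", "licence": "restricted-v1",
        "provenance": "agency-A"}
ok = allowed(user, data)
```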
The FAIR principles have been received with broad acceptance in several scientific communities. However, there is still some degree of uncertainty about how they should be implemented. Several self-report questionnaires have been proposed to assess the implementation of the FAIR principles. Moreover, the FAIRmetrics group released 14 general-purpose maturity indicators for representing FAIRness. Initially, these metrics were administered as open-answer questionnaires. Recently, they have been implemented in software that can automatically harvest metadata from metadata providers and generate a principle-specific FAIRness evaluation. With so many different approaches to FAIRness evaluation, we believe that further clarification of their limitations and advantages, as well as of their interpretation and interplay, should be considered.
Sepsis is a complex and heterogeneous syndrome that remains a serious challenge to healthcare worldwide. Patients afflicted by severe sepsis or septic shock are customarily placed under intensive care unit (ICU) supervision, where a multitude of apparatus is poised to produce high-granularity data. This reservoir of high-quality data forms the cornerstone for the integration of AI into clinical practice. However, existing reviews lack coverage of the latest advancements. This review examines the evolving integration of artificial intelligence (AI) in sepsis management. Applications of AI include early detection, subtyping analysis, precision treatment, and prognosis assessment. AI-driven early warning systems provide enhanced recognition and intervention capabilities, while subtyping analyses elucidate distinct sepsis manifestations for targeted therapy. Precision medicine harnesses the potential of AI for pathogen identification, antibiotic selection, and fluid optimization. In conclusion, the seamless amalgamation of AI into the domain of sepsis management heralds a transformative shift, opening novel prospects for elevating diagnostic precision, therapeutic efficacy, and prognostic acumen. As AI technologies develop, their impact on the future of sepsis care warrants ongoing research and thoughtful implementation.
Funding: This project has received funding from the Research Council of Lithuania (LMTLT), agreement No. S-COV-20-4.
Abstract: BACKGROUND: The coronavirus disease 2019 (COVID-19) pandemic was perhaps the most severe global health crisis in living memory. Alongside respiratory symptoms, elevated liver enzymes, abnormal liver function, and even acute liver failure were reported in patients suffering from severe acute respiratory syndrome coronavirus 2 pneumonia. However, the precise triggers of these forms of liver damage, and how they affect the course and outcomes of COVID-19 itself, remain unclear. AIM: To analyze the impact of liver enzyme abnormalities on the severity and outcomes of COVID-19 in hospitalized patients. METHODS: In this study, 684 depersonalized medical records from patients hospitalized with COVID-19 during the 2020-2021 period were analyzed. COVID-19 was diagnosed according to the guidelines of the National Institutes of Health (2021). Patients were assigned to two groups: those with elevated liver enzymes (Group 1: 603 patients), in whom at least one of four liver enzymes was elevated (per the hospital laboratory reference ranges: alanine aminotransferase (ALT) ≥ 40, aspartate aminotransferase (AST) ≥ 40, gamma-glutamyl transferase ≥ 36, or alkaline phosphatase ≥ 150) at any point of hospitalization, from admission to discharge; and the control group (Group 2: 81 patients), with normal liver enzymes throughout hospitalization. COVID-19 severity was assessed according to the interim World Health Organization guidance (2022). Data on viral pneumonia complications, laboratory tests, and underlying diseases were also collected and analyzed. RESULTS: In total, 603 (88.2%) patients produced abnormal liver test results. ALT and AST levels were elevated by a factor of less than 3 in 54.9% and 74.8% of cases with increased enzyme levels, respectively. Patients in Group 1 had almost double the odds of bacterial complications of viral pneumonia (odds ratio (OR) = 1.73, P = 0.0217), required oxygen supply more often, and displayed higher biochemical inflammation indices than those in Group 2. No differences in other COVID-19 complications or underlying diseases were observed between groups. Preexisting hepatitis of a different etiology was rarely documented (in only 3.5% of patients) and had no impact on the severity of COVID-19. Only 5 (0.73%) patients experienced acute liver failure, 4 of whom died. Overall, the majority of the deceased patients (17 out of 20) had elevated liver enzymes, and most were male. All deceased patients had at least one underlying disease or a combination thereof, and the deceased suffered significantly more often from heart diseases, hypertension, and urinary tract infections than those who recovered. Alongside male gender (OR = 1.72, P = 0.0161) and older age (OR = 1.02, P = 0.0234), diabetes (OR = 3.22, P = 0.0016) and hyperlipidemia (OR = 2.67, P = 0.0238), but not obesity, were confirmed as independent factors associated with more severe COVID-19 in our cohort. CONCLUSION: In our study, the presence of liver impairment predicted more severe inflammation, with a higher risk of bacterial complications and worse outcomes of COVID-19. Therefore, patients with severe forms of the disease should have their liver tests monitored regularly, and the results should be considered when selecting treatment, to avoid further liver damage or even insufficiency.
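Odds ratios such as those reported above come from a 2×2 contingency table. The sketch below shows the standard calculation together with a Wald 95% confidence interval; the counts are hypothetical, chosen only to illustrate the arithmetic, and are not the study's data:

```python
import math

def odds_ratio(exposed_event, exposed_no_event, unexposed_event, unexposed_no_event):
    """Odds ratio from a 2x2 table, with a 95% Wald confidence interval
    computed on the log-odds scale."""
    or_ = (exposed_event * unexposed_no_event) / (exposed_no_event * unexposed_event)
    # Standard error of log(OR) is sqrt of the summed reciprocal cell counts.
    se = math.sqrt(sum(1.0 / n for n in (exposed_event, exposed_no_event,
                                         unexposed_event, unexposed_no_event)))
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical table: complication yes/no among enzyme-elevated vs. control.
print(odds_ratio(200, 403, 18, 63))
```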
Funding: Supported by the Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou) (SMSEGL20SC02) and the University Grants Committee of Hong Kong (GRF12102222).
Abstract: DEAR EDITOR, Ancient DNA (aDNA) from mollusc shells is considered a potential archive of historical biodiversity and evolution. However, such information is currently lacking for mollusc shells from the deep ocean, especially those from acidic chemosynthetic environments theoretically unsuitable for long-term DNA preservation. Here, we report the recovery of mitochondrial and nuclear gene markers by Illumina sequencing of aDNA from three shells of Archivesica nanshaensis – a hydrocarbon-seep vesicomyid clam previously known only from a pair of empty shells collected at a depth of 2626 m in the South China Sea.
Abstract: Sentiment analysis is a method of identifying and understanding the emotion in text through NLP and text analysis. In the era of information technology, there is often a certain discrepancy between the comments on a movie website and the movie's actual score, and sentiment analysis technology provides a new way to address this problem. In this paper, Python is used to obtain movie review data from the Douban platform, and models are constructed and trained using naive Bayes and Bi-LSTM. Based on the evaluation metrics, the better-performing Bi-LSTM model is selected to classify the sentiment of users' movie reviews; scores are then derived from the classification results and compared with the real ratings on the website. The error in this final comparison verifies the feasibility of this technology for scoring film reviews. By applying this technology, the distortion of film ratings in the information age can be mitigated and the rights and interests of film and television works can be safeguarded.
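A minimal sketch of the naive Bayes baseline mentioned above, trained on a few invented toy reviews rather than the Douban data (the Bi-LSTM model is beyond a short example). Tokens, labels, and counts are all hypothetical:

```python
# Multinomial naive Bayes with add-one (Laplace) smoothing over a tiny
# hand-made corpus of tokenized reviews.
from collections import Counter
import math

def train_nb(docs):
    """docs: list of (token_list, label). Returns a classify(tokens) function."""
    labels = Counter(lbl for _, lbl in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for toks, lbl in docs:
        words[lbl].update(toks)
        vocab.update(toks)

    def score(toks, lbl):
        logp = math.log(labels[lbl] / len(docs))          # class prior
        total = sum(words[lbl].values()) + len(vocab)      # smoothed denominator
        for t in toks:
            logp += math.log((words[lbl][t] + 1) / total)  # smoothed likelihood
        return logp

    return lambda toks: max(labels, key=lambda lbl: score(toks, lbl))

train = [(["great", "moving", "film"], "pos"),
         (["loved", "the", "acting"], "pos"),
         (["boring", "plot"], "neg"),
         (["terrible", "waste"], "neg")]
classify = train_nb(train)
print(classify(["great", "acting"]))   # -> "pos" on this toy data
```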
Abstract: Background: Pneumothorax is a medical emergency caused by the abnormal accumulation of air in the pleural space, the potential space between the lungs and chest wall. On 2D chest radiographs, pneumothorax occurs within the thoracic cavity and outside of the mediastinum, and we refer to this area as "lung+space." While deep learning (DL) has increasingly been utilized to segment pneumothorax lesions in chest radiographs, many existing DL models employ an end-to-end approach. These models directly map chest radiographs to clinician-annotated lesion areas, often neglecting the vital domain knowledge that pneumothorax is inherently location-sensitive. Methods: We propose a novel approach that incorporates the lung+space as a constraint during DL model training for pneumothorax segmentation on 2D chest radiographs. To circumvent the need for additional annotations and to prevent potential label leakage on the target task, our method utilizes external datasets and an auxiliary task of lung segmentation. This approach generates a specific lung+space constraint for each chest radiograph. Furthermore, we incorporated a discriminator to eliminate unreliable constraints caused by the domain shift between the auxiliary and target datasets. Results: Our results demonstrated considerable improvements, with average performance gains of 4.6%, 3.6%, and 3.3% in intersection over union, Dice similarity coefficient, and Hausdorff distance, respectively. These results were consistent across six baseline models built on three architectures (U-Net, LinkNet, or PSPNet) and two backbones (VGG-11 or MobileOne-S0). We further conducted an ablation study to evaluate the contribution of each component of the proposed method and undertook several robustness studies on hyperparameter selection to validate the stability of our method. Conclusions: The integration of domain knowledge into DL models for medical applications has often been underemphasized. Our research underscores the significance of incorporating medical domain knowledge about the location-specific nature of pneumothorax to enhance DL-based lesion segmentation and to further bolster clinicians' trust in DL tools. Beyond pneumothorax, our approach is promising for other thoracic conditions that possess location-relevant characteristics.
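Two of the reported metrics can be computed directly from binary masks. The sketch below shows intersection over union and the Dice coefficient on tiny hand-made flat masks, and also illustrates the intuition of a lung+space constraint by zeroing predictions outside an allowed region (Hausdorff distance is omitted for brevity; all masks are invented):

```python
def iou_and_dice(pred, truth):
    """Intersection-over-union and Dice coefficient for flat binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return iou, dice

lung_space = [1, 1, 1, 1, 0, 0]               # allowed region (the "lung+space")
pred       = [1, 1, 1, 0, 1, 0]               # raw prediction; one pixel outside
truth      = [0, 1, 1, 1, 0, 0]
constrained = [p & m for p, m in zip(pred, lung_space)]  # mask out the stray pixel

print(iou_and_dice(pred, truth))          # raw prediction
print(iou_and_dice(constrained, truth))   # constrained prediction scores higher
```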
Funding: Supported by the Duke/Duke-NUS Collaboration grant.
Abstract: Background: Federated learning (FL) holds promise for safeguarding data privacy in healthcare collaborations. While the term "FL" was originally coined by the engineering community, the statistical field has also developed privacy-preserving algorithms, though these are less recognized. Our goal was to bridge this gap with the first comprehensive comparison of FL frameworks from both domains. Methods: We assessed 7 FL frameworks, encompassing both engineering-based and statistical FL algorithms, and compared them against local and centralized modeling with logistic regression and the least absolute shrinkage and selection operator (Lasso). Our evaluation utilized both simulated data and real-world emergency department data, focusing on comparing both the estimated model coefficients and the performance of the model predictions. Results: The findings reveal that statistical FL algorithms produce much less biased estimates of model coefficients. Conversely, engineering-based methods can yield models with slightly better prediction performance, occasionally outperforming both centralized and statistical FL models. Conclusion: This study underscores the relative strengths and weaknesses of both types of methods, providing recommendations for their selection based on distinct study characteristics. Furthermore, we emphasize the critical need to raise awareness of these methods and to integrate them into future applications of FL within the healthcare domain.
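As a toy illustration of the engineering-style FL idea, the sketch below fits logistic regression at two simulated "sites", averages the coefficients in one shot (a deliberate simplification; real frameworks such as FedAvg iterate over rounds), and compares the result with a centralized fit on the pooled data. All data and hyperparameters are invented for the demonstration:

```python
import math, random

def fit_logistic(X, y, lr=0.5, epochs=300):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = 1 / (1 + math.exp(-sum(a * b for a, b in zip(w, xi))))
            for j, xij in enumerate(xi):
                grad[j] += (p - yi) * xij
        w = [wj - lr * g / len(X) for wj, g in zip(w, grad)]
    return w

random.seed(0)

def make_site(n):
    """Synthetic site: outcome depends on one covariate with true weight ~2."""
    X, y = [], []
    for _ in range(n):
        x = random.uniform(-2, 2)
        p = 1 / (1 + math.exp(-2 * x))
        X.append([1.0, x])
        y.append(1 if random.random() < p else 0)
    return X, y

sites = [make_site(200), make_site(200)]
local = [fit_logistic(X, y) for X, y in sites]
fedavg = [sum(ws) / len(ws) for ws in zip(*local)]   # one-shot coefficient averaging
central = fit_logistic(sites[0][0] + sites[1][0], sites[0][1] + sites[1][1])
print("federated:", fedavg, "centralized:", central)
```

With identically distributed sites the averaged coefficients land close to the centralized fit; the biases the paper discusses arise under more realistic heterogeneity across sites.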
Funding: Supported by the National Natural Science Foundation of China (82272180); the Open Foundation of the Key Laboratory of Digital Technology in Medical Diagnostics of Zhejiang Province (SZZD202206); the Sichuan Medical Association Scientific Research Project (S21019); the Key Research and Development Project of Zhejiang Province (2021C03071); and the Zhejiang Medical and Health Science and Technology Project (2017ZD001).
Abstract: Causal inference prevails in the field of laparoscopic surgery. Once the causality between an intervention and an outcome is established, the intervention can be applied to a target population to improve clinical outcomes. In many clinical scenarios, interventions are applied longitudinally in response to patients' conditions. Such longitudinal data comprise static variables, such as age, gender, and comorbidities, and dynamic variables, such as the treatment regime, laboratory variables, and vital signs. Some dynamic variables can act as both confounder and mediator for the effect of an intervention on the outcome; in such cases, simple adjustment with a conventional regression model will bias the effect sizes. To address this, numerous statistical methods have been developed for causal inference; these include, but are not limited to, the marginal structural Cox regression model, the dynamic treatment regime, and the Cox regression model with time-varying covariates. This technical note provides a gentle introduction to such models and illustrates their use with an example from the field of laparoscopic surgery.
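A Cox model with time-varying covariates expects each patient's follow-up split into counting-process (start, stop] rows, one per covariate regime. The sketch below shows only that data-preparation step for one hypothetical patient; fitting the model itself would require a survival library, and the field names are assumptions made for the example:

```python
def to_counting_process(follow_up, event, covariate_changes):
    """Split one patient's follow-up into (start, stop, value, event) rows.

    covariate_changes: list of (time, new_value) pairs, assumed sorted with
    the first entry at time 0. Only the final interval can carry the event.
    """
    rows = []
    times = [t for t, _ in covariate_changes] + [follow_up]
    for (t0, value), t1 in zip(covariate_changes, times[1:]):
        rows.append({"start": t0, "stop": t1, "value": value,
                     "event": int(event and t1 == follow_up)})
    return rows

# A patient followed for 10 days whose treatment switches on day 4 and who
# experiences the event at the end of follow-up:
print(to_counting_process(10, True, [(0, "standard"), (4, "intervention")]))
```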
Funding: Supported by the National Natural Science Foundation of China (Nos. 61170145, 61373081, 61402268, 61401260, and 61572298); the Technology and Development Project of Shandong Province (No. 2013GGX10125); the Natural Science Foundation of Shandong Province, China (Nos. BS2014DX006 and ZR2014FM012); and the Taishan Scholar Project of Shandong Province, China.
Abstract: This paper presents an efficient image feature representation method, namely the angle structure descriptor (ASD), which is built on the angle structures of images. According to the diversity in directions, angle structures are defined in local blocks. Combining color information in the HSV color space, we use angle structures to describe images. The internal correlations between neighboring pixels in angle structures are explored to form a feature vector. With angle structures as bridges, ASD extracts image features by integrating multiple kinds of information as a whole, such as color, texture, shape, and spatial layout. In addition, the proposed algorithm is efficient for image retrieval, requiring no clustering implementation or model training. Experimental results demonstrate that ASD outperforms the other related algorithms.
Abstract: Considering both the flexible attitude maneuvering and the narrow field of view of agile Earth observation satellites (AEOS), comprehensive task clustering (CTC) is proposed to improve the observation scheduling problem for AEOS (OSPFAS). Since the observation scheduling problem for AEOS with comprehensive task clustering (OSWCTC) is a dynamic combinatorial optimization problem, two optimization objectives, the loss rate (LR) of image quality and the energy consumption (EC), are proposed to formulate OSWCTC as a bi-objective optimization model. Harnessing the power of an adaptive large neighborhood search (ALNS) algorithm with the non-dominated sorting genetic algorithm II (NSGA-II), a bi-objective optimization algorithm, ALNS+NSGA-II, is developed to solve OSWCTC. The efficiency of ALNS+NSGA-II is analyzed from several aspects on existing instances, and the results of extensive computational experiments disclose that OSPFAS considering CTC produces superior outcomes.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61475089 and 61435010); the Science and Technology Planning Project of Guangdong Province, China (Grant No. 2016B050501005); and the Science and Technology Innovation Commission of Shenzhen, China (Grant No. KQTD2015032416270385).
Abstract: The excellent optical properties of MXene provide new opportunities for short-pulse lasers. A diode-pumped passively Q-switched laser at a wavelength of 1.3 μm, with MXene Ti3C2Tx as the saturable absorber, was achieved for the first time. The stable passively Q-switched laser delivers a 454 ns pulse width and a 162 kHz repetition rate at 4.5 W incident pump power. The experimental results show that the MXene Ti3C2Tx saturable absorber can be used as an optical modulator to generate short-pulse lasers in the solid-state laser field.
Funding: Supported by the Research and Implementation of an Intelligent Driving Assistance System Based on Augmented Reality under the Hebei Science and Technology Support Plan (Grant No. 17210803D); the Science and Technology Research Project of Higher Education in Hebei Province (Grant No. ZD2020318); the Middle School Students' Science and Technology Innovation Ability Cultivation Special Project (Grant No. 22E50075D); and project Grant No. 1181480.
Abstract: Fast recognition of elevator buttons is a key step for service robots to ride elevators automatically. Although there are some studies in this field, none of them achieve real-time application, due to problems such as recognition speed and algorithm complexity. Elevator button recognition is a comprehensive problem: not only must the positions of multiple buttons be detected at the same time, but the characters on each button must also be accurately identified. The latest version 5 of the You Only Look Once algorithm (YOLOv5) has the fastest inference speed and can be used to detect multiple objects in real time. These advantages make YOLOv5 an ideal choice for detecting the positions of multiple buttons in an elevator, but it is not good at recognizing specific characters. Optical character recognition (OCR) is a well-known technique for character recognition. This paper innovatively improves the YOLOv5 network, integrates OCR technology, and applies them to the elevator button recognition process. First, we changed the detection scales in the YOLOv5 network, retaining only the 40 × 40 and 80 × 80 scales, thus improving the overall object detection speed. Then, we placed a modified OCR branch after the YOLOv5 network to identify the numbers on the buttons. Finally, we verified this method on different datasets and compared it with other typical methods. The results show that the average recall and precision of this method are 81.2% and 92.4%, respectively. Compared with the others, the accuracy of this method reaches a very high level, while its recognition time of 0.056 s is far faster than the other methods.
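The two-stage design described above can be sketched as a pipeline in which a detector proposes button boxes and an OCR step reads the label inside each crop. The detector and OCR below are hypothetical stand-ins returning canned results, not the paper's trimmed YOLOv5 or its OCR branch; only the pipeline structure is the point:

```python
# Two-stage recognition pipeline: detect button boxes, then read each label.
def detect_buttons(image):
    """Stand-in for the trimmed detector (40x40 and 80x80 scales only);
    returns (x1, y1, x2, y2) boxes."""
    return [(10, 10, 30, 30), (10, 40, 30, 60)]

def read_label(image, box):
    """Stand-in for the OCR branch; returns the character on the button."""
    return {(10, 10, 30, 30): "1", (10, 40, 30, 60): "2"}[box]

def recognize(image):
    """Chain detection and OCR: one (box, label) pair per detected button."""
    return [(box, read_label(image, box)) for box in detect_buttons(image)]

print(recognize(None))
```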
Funding: Supported by the Korea Electric Power Corporation (Grant No. R18XA02).
Abstract: Detecting malicious Uniform Resource Locators (URLs) is crucially important to prevent attackers from committing cybercrimes. Recent research has investigated the role of machine learning (ML) models in detecting malicious URLs. With ML algorithms, the features of URLs are first extracted, and then different ML models are trained. The limitation of this approach is that it requires manual feature engineering and does not consider the sequential patterns in the URL. Therefore, deep learning (DL) models are used to solve these issues, since they are able to perform featureless detection. Furthermore, DL models give better accuracy and generalization to newly designed URLs; however, the results of our study show that these models, like any other DL models, can be susceptible to adversarial attacks. In this paper, we examine the robustness of these models and demonstrate the importance of considering this susceptibility before applying such detection systems in real-world solutions. We propose and demonstrate a black-box attack based on scoring functions with a greedy search for the minimum number of perturbations leading to a misclassification. The attack is examined against different types of convolutional neural network (CNN)-based URL classifiers, and it causes a tangible decrease in accuracy, with more than a 56% reduction in the accuracy of the best classifier (among the classifiers selected for this work). Moreover, adversarial training shows promising results, reducing the influence of the attack on the robustness of the model to less than 7% on average.
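The greedy black-box search can be sketched as follows: repeatedly try single-character substitutions, keep the one that lowers the classifier's malicious score most, and stop once the score drops below the decision threshold. The scoring function here is a toy substring model standing in for the CNN classifiers, and the threshold and alphabet are assumptions made only for the demonstration:

```python
def toy_score(url):
    """Toy 'maliciousness' score: fraction of suspicious substrings present."""
    bad = ["login", "verify", "secure", "free"]
    return sum(url.count(w) for w in bad) / 4.0

def greedy_attack(url, score, threshold=0.25, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Greedily substitute one character at a time until the score drops
    below `threshold`; returns the perturbed URL and the number of edits."""
    perturbed, edits = url, 0
    while score(perturbed) >= threshold:
        best = None
        for i in range(len(perturbed)):
            for c in alphabet:
                cand = perturbed[:i] + c + perturbed[i + 1:]
                s = score(cand)
                if best is None or s < best[0]:
                    best = (s, cand)
        if best is None or best[0] >= score(perturbed):
            break   # no single edit helps any further
        perturbed, edits = best[1], edits + 1
    return perturbed, edits

adv, n = greedy_attack("free-login.example.com/verify", toy_score)
print(adv, "after", n, "edits")
```

Against a real classifier, `score` would be the model's predicted probability queried as a black box, which is exactly what makes the attack model-agnostic.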
Funding: Supported by the National Natural Science Foundation of China (61703260, 62173252).
Abstract: Dear Editor, The main components of multi-view geometry and computer vision are robust pose estimation and feature matching. This letter discusses how to recover two-view geometry and match features between a pair of images, and presents MCNet (a multiscale clustering network) as an algorithm for extracting multiscale features. It can identify the true inliers among the established putative correspondences, where outliers may degrade the geometry estimation. In particular, the proposed MCNet is based on graph clustering.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 11404194 and 11274188); the Promotive Research Fund for Excellent Young and Middle-aged Scientists of Shandong Province, China (Grant No. BS2015SF003); the China Postdoctoral Science Foundation (Grant Nos. 2015M582053 and 2016T90609); the Qingdao Municipal Postdoctoral Application Research Project, China (Grant No. 2015131); the State Key Laboratory of Nuclear Physics and Technology at Peking University, China; and the State Key Laboratory of Crystal Materials and the Key Laboratory of Particle Physics and Particle Irradiation (MOE) at Shandong University, China.
Abstract: We report the fabrication of a planar waveguide in a Nd:Bi_(12)SiO_(20) crystal by multi-energy C-ion implantation at room temperature. The waveguide is annealed at 200℃, 260℃, and 300℃ in succession, each for 30 min, in an open oven. The effective refractive index profiles for transverse electric (TE) polarization are stable after the annealing treatments. The damage distribution for multi-energy C ions implanted in the Nd:Bi_(12)SiO_(20) crystal is calculated with SRIM 2010. The Raman and fluorescence spectra of the Nd:Bi_(12)SiO_(20) crystal are collected with excitation beams at 633 nm and 473 nm, respectively. The results indicate the stability of the optical waveguide in the Nd:Bi_(12)SiO_(20) crystal.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 61475089, 51432007, and 61422511).
Abstract: A novel Nd,La:SrF_2 disordered crystal is prepared, and its continuous-wave wavelength-tuning operation is demonstrated for the first time. Employing a surface plasmon resonance (SPR)-based gold nanobipyramid (G-NBP) saturable absorber, we obtain a compact diode-pumped passively Q-switched Nd,La:SrF_2 laser. The stable Q-switched pulses operate with a shortest pulse duration of 1.15 μs and a maximum repetition rate of 41 kHz. The corresponding single-pulse energy is 2.24 μJ. The results indicate that G-NBPs could be a promising saturable absorber for diode-pumped solid-state lasers (DPSSLs).
Abstract: Guastello's polynomial regression method for solving the cusp catastrophe model has been widely applied to analyze nonlinear behavioral outcomes. However, no statistical power analysis for this modeling approach has been reported, probably due to the complex nature of the cusp catastrophe model. Since statistical power analysis is essential for research design, we propose a novel method in this paper to fill this gap. The method is simulation-based and can be used to calculate statistical power and sample size when Guastello's polynomial regression method is used for cusp catastrophe modeling analysis. With this approach, a power curve is first produced to depict the relationship between statistical power and sample size under different model specifications. This power curve is then used to determine the sample size required for a specified statistical power. We verify the method through four scenarios generated by Monte Carlo simulations, followed by an application of the method to real published data on modeling early sexual initiation among young adolescents. The findings of our study suggest that this simulation-based power analysis method can be used to estimate the sample size and statistical power for Guastello's polynomial regression method in cusp catastrophe modeling.
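Simulation-based power analysis follows a generic recipe: simulate data under the assumed model, refit, and count how often the effect is detected. The sketch below applies that recipe to a simple linear slope with a normal approximation for the test, a deliberately simplified stand-in for refitting Guastello's full polynomial model; all parameter values are invented:

```python
import math, random

def power_linear_slope(n, slope, sigma, n_sims=500, alpha=0.05, seed=1):
    """Monte-Carlo power for detecting a nonzero slope in simple linear
    regression: simulate, fit, test, and return the detection rate."""
    random.seed(seed)
    z_crit = 1.96                       # two-sided 5% normal critical value
    hits = 0
    for _ in range(n_sims):
        xs = [random.uniform(-1, 1) for _ in range(n)]
        ys = [slope * x + random.gauss(0, sigma) for x in xs]
        xbar, ybar = sum(xs) / n, sum(ys) / n
        sxx = sum((x - xbar) ** 2 for x in xs)
        b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
        resid = [y - ybar - b * (x - xbar) for x, y in zip(xs, ys)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        if abs(b / se) > z_crit:
            hits += 1
    return hits / n_sims

# Power grows with sample size at a fixed effect size, tracing a power curve:
print(power_linear_slope(30, 0.5, 1.0), power_linear_slope(120, 0.5, 1.0))
```

Evaluating the function over a grid of `n` values yields the power curve, which is then inverted to read off the sample size for a target power.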
Funding: Ng was supported in part by the Hong Kong Research Grants Council General Research Fund (GRF), China (Nos. 12300218, 12300519, 117201020, 17300021, CRF C1013-21GF, C7004-21GF, and Joint NSFC-RGC NHKU76921). Wu was supported by the National Natural Science Foundation of China (No. 62206111); the Young Talent Support Project of the Guangzhou Association for Science and Technology, China (No. QT-2023-017); the Guangzhou Basic and Applied Basic Research Foundation, China (No. 2023A04J1058); the Fundamental Research Funds for the Central Universities, China (No. 21622326); and the China Postdoctoral Science Foundation (No. 2022M721343).
Abstract: Graph neural networks have been shown to be very effective in utilizing pairwise relationships across samples. Recently, there have been several successful proposals to generalize graph neural networks to hypergraph neural networks to exploit more complex relationships. In particular, hypergraph collaborative networks yield superior results compared to other hypergraph neural networks on various semi-supervised learning tasks. The collaborative network can provide high-quality vertex embeddings and hyperedge embeddings together, by formulating them as a joint optimization problem and by using their consistency in reconstructing the given hypergraph. In this paper, we aim to establish the algorithmic stability of the core layer of the collaborative network and to provide generalization guarantees. The analysis sheds light on the design of hypergraph filters in collaborative networks, for instance, on how the data and the hypergraph filters should be scaled to achieve uniform stability of the learning process. Some experimental results on real-world datasets are presented to illustrate the theory.
Funding: Part of this work was supported by the Titanium Project (funded by the European Commission under grant agreement 740558). The work was also supported by TNO's internal research project "ERP AI".
Abstract: This paper focuses on fine-grained, secure access to FAIR data, for which we propose ontology-based data access policies. These policies take into account both the FAIR aspects of the data relevant to access (such as provenance and licence), expressed as metadata, and additional metadata describing users. With this tripartite approach (data, associated metadata expressing FAIR information, and additional metadata about users), secure and controlled access to object data can be obtained. This adds a security dimension to the "A" (accessible) in FAIR, which is clearly needed in domains like security and intelligence. These domains need data to be shared under tight controls, with widely varying individual access rights. In this paper, we propose an approach called Ontology-Based Access Control (OBAC), which utilizes concepts and relations from a dataset's domain ontology. We argue that ontology-based access policies contribute to data reusability and can be reconciled with privacy-aware data access policies. We illustrate our OBAC approach through a proof of concept and propose that OBAC be adopted as a best practice for the access management of FAIR data.
Funding: M. Dumontier was supported by grants from NWO (400.17.605 and 628.011.011); NIH (3OT3TR002027-01S1, 1OT3OD025467-01, and 1OT3OD025464-01); H2020-EU EOSC-Life (824087); and ELIXIR, the research infrastructure for life-science data. R. de Miranda Azevedo was supported by grants from H2020-EU EOSC-Life (824087) and ELIXIR, the research infrastructure for life-science data.
Abstract: The FAIR principles were received with broad acceptance in several scientific communities. However, there is still some degree of uncertainty about how they should be implemented. Several self-report questionnaires have been proposed to assess the implementation of the FAIR principles. Moreover, the FAIRmetrics group released 14 general-purpose maturity indicators for representing FAIRness. Initially, these metrics were administered as open-answer questionnaires. Recently, they have been implemented in software that can automatically harvest metadata from metadata providers and generate a principle-specific FAIRness evaluation. With so many different approaches to FAIRness evaluation, we believe that further clarification of their limitations and advantages, as well as of their interpretation and interplay, should be considered.
Abstract: Sepsis is a complex and heterogeneous syndrome that remains a serious challenge to healthcare worldwide. Patients afflicted by severe sepsis or septic shock are customarily placed under intensive care unit (ICU) supervision, where a multitude of apparatus is poised to produce high-granularity data. This reservoir of high-quality data forms the cornerstone for the integration of artificial intelligence (AI) into clinical practice, but existing reviews currently lack coverage of the latest advancements. This review examines the evolving integration of AI in sepsis management. Applications of AI include early detection, subtyping analysis, precise treatment, and prognosis assessment. AI-driven early warning systems provide enhanced recognition and intervention capabilities, while profiling analyses elucidate distinct sepsis manifestations for targeted therapy. Precision medicine harnesses the potential of AI for pathogen identification, antibiotic selection, and fluid optimization. In conclusion, the seamless amalgamation of AI into the domain of sepsis management heralds a transformative shift, ushering in novel prospects to elevate diagnostic precision, therapeutic efficacy, and prognostic acumen. As AI technologies develop, their impact on shaping the future of sepsis care warrants ongoing research and thoughtful implementation.