Objective To observe the value of self-supervised deep learning artificial intelligence(AI)noise reduction technology based on the nearest adjacent layer applicated in ultra-low dose CT(ULDCT)for urinary calculi.Metho...Objective To observe the value of self-supervised deep learning artificial intelligence(AI)noise reduction technology based on the nearest adjacent layer applicated in ultra-low dose CT(ULDCT)for urinary calculi.Methods Eighty-eight urinary calculi patients were prospectively enrolled.Low dose CT(LDCT)and ULDCT scanning were performed,and the effective dose(ED)of each scanning protocol were calculated.The patients were then randomly divided into training set(n=75)and test set(n=13),and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer constructed with ULDCT images in training set was used for reducing noise of ULDCT images in test set.In test set,the quality of ULDCT images before and after AI noise reduction were compared with LDCT images,i.e.Blind/Referenceless Image Spatial Quality Evaluator(BRISQUE)scores,image noise(SD ROI)and signal-to-noise ratio(SNR).Results The tube current,the volume CT dose index and the dose length product of abdominal ULDCT scanning protocol were all lower compared with those of LDCT scanning protocol(all P<0.05),with a decrease of ED for approximately 82.66%.For 13 patients with urinary calculi in test set,BRISQUE score showed that the quality level of ULDCT images before AI noise reduction reached 54.42%level but raised to 95.76%level of LDCT images after AI noise reduction.Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction(all adjusted P<0.05),whereas no significant difference was found between the former two(both adjusted P>0.05).Conclusion Self-supervised learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve image quality of urinary calculi ULDCT images,being conducive for clinical application of ULDCT.展开更多
Noise pollution tends to receive less awareness compared to other types of pollution,however,it greatly impacts the quality of life for humans such as causing sleep disruption,stress or hearing impairment.Profiling ur...Noise pollution tends to receive less awareness compared to other types of pollution,however,it greatly impacts the quality of life for humans such as causing sleep disruption,stress or hearing impairment.Profiling urban sound through the identification of noise sources in cities could help to benefit livability by reducing exposure to noise pollution through methods such as noise control,planning of the soundscape environment,or selection of safe living space.In this paper,we proposed a self-attention long short-term memory(LSTM)method that can improve sound classification compared to previous baselines.An attention mechanism will be designed solely to capture the key section of an audio data series.This is practical as we only need to process important parts of the data and can ignore the rest,making it applicable when gathering information with long-term dependencies.The dataset used is the Urbansound8k dataset which specifically pertains to urban environments and data augmentation was applied to overcome imbalanced data and dataset scarcity.All audio sources in the dataset were normalized to mono signals.From the dataset above,an experiment was conducted to confirm the suitability of the proposed model when applied to the mel-spectrogram and MFCC(Mel-Frequency Cepstral Coefficients)datasets transformed from the original dataset.Improving the classification accuracy depends on the machine learning models as well as the input data,therefore we have evaluated different class models and extraction methods to find the best performing.By combining data augmentation techniques and various extraction methods,our classification model has achieved state-of-the-art performance,each class accuracy is up to 98%.展开更多
Accurate determination of the optical properties of biological tissues enables quantitative understanding of light propagation in these tissues for optical diagnosis and treatment applications.The absorption(μa)and s...Accurate determination of the optical properties of biological tissues enables quantitative understanding of light propagation in these tissues for optical diagnosis and treatment applications.The absorption(μa)and scattering(μs)coe±cients of biological tissues are inversely analyzed from their diffuse re°ectance(R)and total transmittance(T),which are measured using a double integrating spheres(DIS)system.The inversion algorithms,for example,inverse adding doubling method and inverse Monte Carlo method,are sensitive to noise signals during the DIS measurements,resulting in reduced accuracy during determination.In this study,we propose an arti ficial neural network(ANN)to estimateμa andμs at a target wavelength from the R and T spectra measured via the DIS to reduce noise in the optical properties.Approximate models of the optical properties and Monte Carlo calculations that simulated the DIS measurements were used to generate spectral datasets comprisingμa,μs,R and T.Measurement noise signals were added to R and T,and the ANN model was then trained using the noise-added datasets.Numerical results showed that the trained ANN model reduced the effects of noise inμa andμs estimation.Experimental veri fication indicated noise-reduced estimation from the R and T values measured by the DIS with a small number of scans on average,resulting in measurement time reduction.The results demonstrated the noise robustness of the proposed ANN-based method for optical properties determination and will contribute to shorter DIS measurement times,thus reducing changes in the optical properties due to desiccation of the samples.展开更多
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on p...Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms.To investigate advances in the use of artificial intelligence in the diagnosis,rehabilitation,and scientific examination of peripheral nerve injury,we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023.We identified the following research hotspots in peripheral nerve injury and repair:(1)diagnosis,classification,and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques,such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy;(2)motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms,such as wearable devices and assisted wheelchair systems;(3)improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning,such as implantable peripheral nerve interfaces;(4)the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility,enabling them to control devices such as networked hand prostheses;(5)artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation,thereby reducing surgical risk and complications,and facilitating postoperative recovery.Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair,there are some limitations to this technology,such as the consequences of missing or imbalanced data,low data accuracy and reproducibility,and ethical issues(e.g.,privacy,data security,research transparency).Future research should address the issue of data collection,as large-scale,high-quality clinical datasets are required to establish effective artificial intelligence models.Multimodal data processing is also necessary,along with interdisciplinary collaboration,medical-industrial integration,and multicenter,large-sample clinical studies.展开更多
Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of ...Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of physics in climate science has occasionally been overlooked.Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics,rather than an“either/or”scenario.Scrutinizing controversies around current physical inconsistencies in large AI models,we stress the critical need for detailed dynamic diagnostics and physical constraints.Furthermore,we provide illustrative examples to guide future assessments and constraints for AI models.Regarding AI integration with numerical models,we argue that offline AI parameterization schemes may fall short of achieving global optimality,emphasizing the importance of constructing online schemes.Additionally,we highlight the significance of fostering a community culture and propose the OCR(Open,Comparable,Reproducible)principles.Through a better community culture and a deep integration of physics and AI,we contend that developing a learnable climate model,balancing AI and physics,is an achievable goal.展开更多
Reducing the aerodynamic drag and noise levels of high-speed pantographs is important for promoting environmentally friendly,energy efficient and rapid advances in train technology.Using computational fluid dynamics t...Reducing the aerodynamic drag and noise levels of high-speed pantographs is important for promoting environmentally friendly,energy efficient and rapid advances in train technology.Using computational fluid dynamics theory and the K-FWH acoustic equation,a numerical simulation is conducted to investigate the aerodynamic characteristics of high-speed pantographs.A component optimization method is proposed as a possible solution to the problemof aerodynamic drag and noise in high-speed pantographs.The results of the study indicate that the panhead,base and insulator are the main contributors to aerodynamic drag and noise in high-speed pantographs.Therefore,a gradual optimization process is implemented to improve the most significant components that cause aerodynamic drag and noise.By optimizing the cross-sectional shape of the strips and insulators,the drag and noise caused by airflow separation and vortex shedding can be reduced.The aerodynamic drag of insulator with circular cross section and strips with rectangular cross section is the largest.Ellipsifying insulators and optimizing the chamfer angle and height of the windward surface of the strips can improve the aerodynamic performance of the pantograph.In addition,the streamlined fairing attached to the base can eliminate the complex flow and shield the radiated noise.In contrast to the original pantograph design,the improved pantograph shows a 21.1%reduction in aerodynamic drag and a 1.65 dBA reduction in aerodynamic noise.展开更多
Background: The growth and use of Artificial Intelligence (AI) in the medical field is rapidly rising. AI is exhibiting a practical tool in the healthcare industry in patient care. The objective of this current review...Background: The growth and use of Artificial Intelligence (AI) in the medical field is rapidly rising. AI is exhibiting a practical tool in the healthcare industry in patient care. The objective of this current review is to assess and analyze the use of AI and its use in orthopedic practice, as well as its applications, limitations, and pitfalls. Methods: A review of all relevant databases such as EMBASE, Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science with keywords of AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopaedic practice were reviewed. A total of 3210 articles were included in the review. Results: The data from 351 studies were analyzed where in orthopedic surgery. AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also taken a chunk of share in assisted robotic orthopaedic surgery. Conclusions: AI has now become part of the orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI’s serious limitations and pitfalls and consider the drawbacks and errors in its use.展开更多
Introduction: Ultrafast latest developments in artificial intelligence (ΑΙ) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce...Introduction: Ultrafast latest developments in artificial intelligence (ΑΙ) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel AI commercially available tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OPENAI CHATGPT 4.0 with its corresponding image recognition plugin which was fed with a list of 100 laparoscopic selected snapshots from common surgical procedures. In order to score reliability of received responses from image-recognition bot, two corresponding scales were developed ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: AI was able to recognize correctly the context of surgical-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented in detail, after all successful interpretations. With rates 4 - 5/5, the chatbot was able to talk in detail about the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontend for further research by clinicians in parallel with evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn by small cohorts with commercially available software. Further development of medically-oriented AI software and clinical world awareness are expected to bring fruitful information on the topic in the years to come.展开更多
Orthogonal frequency division multiplexing passive optical network(OFDM-PON) has superior anti-dispersion property to operate in the C-band of fiber for increased optical power budget. However,the downlink broadcast e...Orthogonal frequency division multiplexing passive optical network(OFDM-PON) has superior anti-dispersion property to operate in the C-band of fiber for increased optical power budget. However,the downlink broadcast exposes the physical layer vulnerable to the threat of illegal eavesdropping. Quantum noise stream cipher(QNSC) is a classic physical layer encryption method and well compatible with the OFDM-PON. Meanwhile, it is indispensable to exploit forward error correction(FEC) to control errors in data transmission. However, when QNSC and FEC are jointly coded, the redundant information becomes heavier and thus the code rate of the transmitted signal will be largely reduced. In this work, we propose a physical layer encryption scheme based on polar-code-assisted QNSC. In order to improve the code rate and security of the transmitted signal, we exploit chaotic sequences to yield the redundant bits and utilize the redundant information of the polar code to generate the higher-order encrypted signal in the QNSC scheme with the operation of the interleaver.We experimentally demonstrate the encrypted 16/64-QAM, 16/256-QAM, 16/1024-QAM, 16/4096-QAM QNSC signals transmitted over 30-km standard single mode fiber. For the transmitted 16/4096-QAM QNSC signal, compared with the conventional QNSC method, the proposed method increases the code rate from 0.1 to 0.32 with enhanced security.展开更多
Controlling intracranial pressure,nerve cell regeneration,and microenvironment regulation are the key issues in reducing mortality and disability in acute brain injury.There is currently a lack of effective treatment ...Controlling intracranial pressure,nerve cell regeneration,and microenvironment regulation are the key issues in reducing mortality and disability in acute brain injury.There is currently a lack of effective treatment methods.Hibernation has the characteristics of low temperature,low metabolism,and hibernation rhythm,as well as protective effects on the nervous,cardiovascular,and motor systems.Artificial hibernation technology is a new technology that can effectively treat acute brain injury by altering the body’s metabolism,lowering the body’s core temperature,and allowing the body to enter a state similar to hibernation.This review introduces artificial hibernation technology,including mild hypothermia treatment technology,central nervous system regulation technology,and artificial hibernation-inducer technology.Upon summarizing the relevant research on artificial hibernation technology in acute brain injury,the research results show that artificial hibernation technology has neuroprotective,anti-inflammatory,and oxidative stress-resistance effects,indicating that it has therapeutic significance in acute brain injury.Furthermore,artificial hibernation technology can alleviate the damage of ischemic stroke,traumatic brain injury,cerebral hemorrhage,cerebral infarction,and other diseases,providing new strategies for treating acute brain injury.However,artificial hibernation technology is currently in its infancy and has some complications,such as electrolyte imbalance and coagulation disorders,which limit its use.Further research is needed for its clinical application.展开更多
This editorial provides commentary on an article titled"Potential and limitationsof ChatGPT and generative artificial intelligence(AI)in medical safety education"recently published in the World Journal of Cl...This editorial provides commentary on an article titled"Potential and limitationsof ChatGPT and generative artificial intelligence(AI)in medical safety education"recently published in the World Journal of Clinical Cases.AI has enormous potentialfor various applications in the field of Kawasaki disease(KD).One is machinelearning(ML)to assist in the diagnosis of KD,and clinical prediction models havebeen constructed worldwide using ML;the second is using a gene signalcalculation toolbox to identify KD,which can be used to monitor key clinicalfeatures and laboratory parameters of disease severity;and the third is using deeplearning(DL)to assist in cardiac ultrasound detection.The performance of the DLalgorithm is similar to that of experienced cardiac experts in detecting coronaryartery lesions to promoting the diagnosis of KD.To effectively utilize AI in thediagnosis and treatment process of KD,it is crucial to improve the accuracy of AIdecision-making using more medical data,while addressing issues related topatient personal information protection and AI decision-making responsibility.AIprogress is expected to provide patients with accurate and effective medicalservices that will positively impact the diagnosis and treatment of KD in thefuture.展开更多
An artificial neural network(ANN)method is introduced to predict drop size in two kinds of pulsed columns with small-scale data sets.After training,the deviation between calculate and experimental results are 3.8%and ...An artificial neural network(ANN)method is introduced to predict drop size in two kinds of pulsed columns with small-scale data sets.After training,the deviation between calculate and experimental results are 3.8%and 9.3%,respectively.Through ANN model,the influence of interfacial tension and pulsation intensity on the droplet diameter has been developed.Droplet size gradually increases with the increase of interfacial tension,and decreases with the increase of pulse intensity.It can be seen that the accuracy of ANN model in predicting droplet size outside the training set range is reach the same level as the accuracy of correlation obtained based on experiments within this range.For two kinds of columns,the drop size prediction deviations of ANN model are 9.6%and 18.5%and the deviations in correlations are 11%and 15%.展开更多
In recent years,the global surge of High-speed Railway(HSR)revolutionized ground transportation,providing secure,comfortable,and punctual services.The next-gen HSR,fueled by emerging services like video surveillance,e...In recent years,the global surge of High-speed Railway(HSR)revolutionized ground transportation,providing secure,comfortable,and punctual services.The next-gen HSR,fueled by emerging services like video surveillance,emergency communication,and real-time scheduling,demands advanced capabilities in real-time perception,automated driving,and digitized services,which accelerate the integration and application of Artificial Intelligence(AI)in the HSR system.This paper first provides a brief overview of AI,covering its origin,evolution,and breakthrough applications.A comprehensive review is then given regarding the most advanced AI technologies and applications in three macro application domains of the HSR system:mechanical manufacturing and electrical control,communication and signal control,and transportation management.The literature is categorized and compared across nine application directions labeled as intelligent manufacturing of trains and key components,forecast of railroad maintenance,optimization of energy consumption in railroads and trains,communication security,communication dependability,channel modeling and estimation,passenger scheduling,traffic flow forecasting,high-speed railway smart platform.Finally,challenges associated with the application of AI are discussed,offering insights for future research directions.展开更多
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture.Recent advancements in high-throughput phenotyping(HTP)technologies and artificial intelligence(AI)have rev...Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture.Recent advancements in high-throughput phenotyping(HTP)technologies and artificial intelligence(AI)have revolutionized the field,enabling rapid and accurate assessment of crop traits on a large scale.The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement.AI algorithms can analyze and interpret large datasets,and extract meaningful patterns and correlations between phenotypic traits and genetic factors.These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection,thereby reducing the time and cost required for variety development.However,further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement.By leveraging AI algorithms,researchers can efficiently analyze phenotypic data,uncover complex patterns,and establish predictive models that enable precise trait selection and crop breeding.The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement.This review will encompass an in-depth analysis of recent advances and applications,highlighting the numerous benefits and challenges associated with HTP and AI.展开更多
Background: Bilayer artificial dermis promotes wound healing and offers a treatment option for chronic wounds. Aim: Examine the clinical efficacy of bilayer artificial dermis combined with Vacuum Sealing Drainage (VSD...Background: Bilayer artificial dermis promotes wound healing and offers a treatment option for chronic wounds. Aim: Examine the clinical efficacy of bilayer artificial dermis combined with Vacuum Sealing Drainage (VSD) technology in the treatment of chronic wounds. Method: From June 2021 to December 2023, our hospital treated 24 patients with chronic skin tissue wounds on their limbs using a novel tissue engineering product, the bilayer artificial dermis, in combination with VSD technology to repair the wounds. The bilayer artificial dermis protects subcutaneous tissue, blood vessels, nerves, muscles, and tendons, and also promotes the growth of granulation tissue and blood vessels to aid in wound healing when used in conjunction with VSD technology for wound dressing changes in chronic wounds. Results: In this study, 24 cases of chronic wounds with exposed bone or tendon larger than 1.0 cm2 were treated with a bilayer artificial skin combined with VSD dressing after wound debridement. The wounds were not suitable for immediate skin grafting. At 2 - 3 weeks post-treatment, good granulation tissue growth was observed. Subsequent procedures included thick skin grafting or wound dressing changes until complete wound healing. Patients were followed up on average for 3 months (range: 1 - 12 months) post-surgery. Comparative analysis of the appearance, function, skin color, elasticity, and sensation of the healed chronic wounds revealed superior outcomes compared to traditional skin fl repairs, resulting in significantly higher satisfaction levels among patients and their families. Conclusion: The application of bilayer artificial dermis combined with VSD technology for the repair of chronic wounds proves to be a viable method, yielding satisfactory therapeutic effects compared to traditional skin flap procedures.展开更多
While emerging technologies such as the Internet of Things(IoT)have many benefits,they also pose considerable security challenges that require innovative solutions,including those based on artificial intelligence(AI),...While emerging technologies such as the Internet of Things(IoT)have many benefits,they also pose considerable security challenges that require innovative solutions,including those based on artificial intelligence(AI),given that these techniques are increasingly being used by malicious actors to compromise IoT systems.Although an ample body of research focusing on conventional AI methods exists,there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures.To contribute to this nascent research stream,a novel AI-driven security system denoted as“AI2AI”is presented in this work.AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework.We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks(GAADPSDNN)sys-tem that can be implemented to effectively identify,detect,and prevent cyberattacks targeting IoT devices.Notably,this system demonstrates adaptability to both federated and centralized learning environments,accommodating a wide array of IoT devices.Our evaluation of the GAADPSDNN system using the recently complied WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy.Achieving an impressive overall accuracy of 98.18%on the Edge-IIoT dataset,the GAADPSDNN outperforms the standard deep neural network(DNN)classifier with 94.11%accuracy.Furthermore,with the proposed enhancements,the accuracy of the unoptimized random forest classifier(80.89%)is improved to 93.51%,while the overall accuracy(98.18%)surpasses the results(93.91%,94.67%,94.94%,and 94.96%)achieved when alternative systems based on diverse optimization techniques and the same dataset are employed.The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.展开更多
Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’anatomy.However,the interpretation of medical images can be highly subjective and dependent on the expertise of...Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’anatomy.However,the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians.Moreover,some potentially useful quantitative information in medical images,especially that which is not visible to the naked eye,is often ignored during clinical practice.In contrast,radiomics performs high-throughput feature extraction from medical images,which enables quantitative analysis of medical images and prediction of various clinical endpoints.Studies have reported that radiomics exhibits promising performance in diagnosis and predicting treatment responses and prognosis,demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine.However,radiomics remains in a developmental phase as numerous technical challenges have yet to be solved,especially in feature engineering and statistical modeling.In this review,we introduce the current utility of radiomics by summarizing research on its application in the diagnosis,prognosis,and prediction of treatment responses in patients with cancer.We focus on machine learning approaches,for feature extraction and selection during feature engineering and for imbalanced datasets and multi-modality fusion during statistical modeling.Furthermore,we introduce the stability,reproducibility,and interpretability of features,and the generalizability and interpretability of models.Finally,we offer possible solutions to current challenges in radiomics research.展开更多
The Industrial Internet of Things(IIoT)has brought numerous benefits,such as improved efficiency,smart analytics,and increased automation.However,it also exposes connected devices,users,applications,and data generated...The Industrial Internet of Things(IIoT)has brought numerous benefits,such as improved efficiency,smart analytics,and increased automation.However,it also exposes connected devices,users,applications,and data generated to cyber security threats that need to be addressed.This work investigates hybrid cyber threats(HCTs),which are now working on an entirely new level with the increasingly adopted IIoT.This work focuses on emerging methods to model,detect,and defend against hybrid cyber attacks using machine learning(ML)techniques.Specifically,a novel ML-based HCT modelling and analysis framework was proposed,in which L1 regularisation and Random Forest were used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs.A grey relation analysis-based model was employed to construct the correlation between IIoT components and different threats.展开更多
Spinal cord injury is a serious disease of the central nervous system involving irreversible nerve injury and various organ system injuries.At present,no effective clinical treatment exists.As one of the artificial hi...Spinal cord injury is a serious disease of the central nervous system involving irreversible nerve injury and various organ system injuries.At present,no effective clinical treatment exists.As one of the artificial hibernation techniques,mild hypothermia has preliminarily confirmed its clinical effect on spinal cord injury.However,its technical defects and barriers,along with serious clinical side effects,restrict its clinical application for spinal cord injury.Artificial hibernation is a futureoriented disruptive technology for human life support.It involves endogenous hibernation inducers and hibernation-related central neuromodulation that activate particular neurons,reduce the central constant temperature setting point,disrupt the normal constant body temperature,make the body adapt"to the external cold environment,and reduce the physiological resistance to cold stimulation.Thus,studying the artificial hibernation mechanism may help develop new treatment strategies more suitable for clinical use than the cooling method of mild hypothermia technology.This review introduces artificial hibernation technologies,including mild hypothermia technology,hibernation inducers,and hibernation-related central neuromodulation technology.It summarizes the relevant research on hypothermia and hibernation for organ and nerve protection.These studies show that artificial hibernation technologies have therapeutic significance on nerve injury after spinal co rd injury through inflammatory inhibition,immunosuppression,oxidative defense,and possible central protection.It also promotes the repair and protection of res pirato ry and digestive,cardiovascular,locomoto r,urinary,and endocrine systems.This review provides new insights for the clinical treatment of nerve and multiple organ protection after spinal cord injury thanks to artificial hibernation.At present,artificial hibernation technology is not mature,and research fa ces various challenges.Neve rtheless,the effort is wo rthwhile for the future development of medicine.展开更多
Under the background of“artificial intelligence+X”,the development of landscape architecture industry ushers in new opportunities,and professional talents need to be updated to meet the social demand.This paper anal...Under the background of“artificial intelligence+X”,the development of landscape architecture industry ushers in new opportunities,and professional talents need to be updated to meet the social demand.This paper analyzes the cultivation demand of landscape architecture graduate students in the context of the new era,and identifies the problems by comparing the original professional graduate training mode.The new cultivation mode of graduate students in landscape architecture is proposed,including updating the target orientation of the discipline,optimizing the teaching system,building a“dualteacher”tutor team,and improving the“industry-university-research-utilization”integrated cultivation,so as to cultivate high-quality compound talents with disciplinary characteristics.展开更多
文摘Objective To observe the value of self-supervised deep learning artificial intelligence(AI)noise reduction technology based on the nearest adjacent layer applicated in ultra-low dose CT(ULDCT)for urinary calculi.Methods Eighty-eight urinary calculi patients were prospectively enrolled.Low dose CT(LDCT)and ULDCT scanning were performed,and the effective dose(ED)of each scanning protocol were calculated.The patients were then randomly divided into training set(n=75)and test set(n=13),and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer constructed with ULDCT images in training set was used for reducing noise of ULDCT images in test set.In test set,the quality of ULDCT images before and after AI noise reduction were compared with LDCT images,i.e.Blind/Referenceless Image Spatial Quality Evaluator(BRISQUE)scores,image noise(SD ROI)and signal-to-noise ratio(SNR).Results The tube current,the volume CT dose index and the dose length product of abdominal ULDCT scanning protocol were all lower compared with those of LDCT scanning protocol(all P<0.05),with a decrease of ED for approximately 82.66%.For 13 patients with urinary calculi in test set,BRISQUE score showed that the quality level of ULDCT images before AI noise reduction reached 54.42%level but raised to 95.76%level of LDCT images after AI noise reduction.Both ULDCT images after AI noise reduction and LDCT images had lower SD ROI and higher SNR than ULDCT images before AI noise reduction(all adjusted P<0.05),whereas no significant difference was found between the former two(both adjusted P>0.05).Conclusion Self-supervised learning AI noise reduction technology based on the nearest adjacent layer could effectively reduce noise and improve image quality of urinary calculi ULDCT images,being conducive for clinical application of ULDCT.
文摘Noise pollution tends to receive less awareness compared to other types of pollution,however,it greatly impacts the quality of life for humans such as causing sleep disruption,stress or hearing impairment.Profiling urban sound through the identification of noise sources in cities could help to benefit livability by reducing exposure to noise pollution through methods such as noise control,planning of the soundscape environment,or selection of safe living space.In this paper,we proposed a self-attention long short-term memory(LSTM)method that can improve sound classification compared to previous baselines.An attention mechanism will be designed solely to capture the key section of an audio data series.This is practical as we only need to process important parts of the data and can ignore the rest,making it applicable when gathering information with long-term dependencies.The dataset used is the Urbansound8k dataset which specifically pertains to urban environments and data augmentation was applied to overcome imbalanced data and dataset scarcity.All audio sources in the dataset were normalized to mono signals.From the dataset above,an experiment was conducted to confirm the suitability of the proposed model when applied to the mel-spectrogram and MFCC(Mel-Frequency Cepstral Coefficients)datasets transformed from the original dataset.Improving the classification accuracy depends on the machine learning models as well as the input data,therefore we have evaluated different class models and extraction methods to find the best performing.By combining data augmentation techniques and various extraction methods,our classification model has achieved state-of-the-art performance,each class accuracy is up to 98%.
基金supported by the Japan Society for the Promotion of Science KAKENHI(Grant numbers:20H04549 and 19K12822)the Japan Science and Technology Agency ACT–X(Grant Number:JPMJAX21K7).
文摘Accurate determination of the optical properties of biological tissues enables quantitative understanding of light propagation in these tissues for optical diagnosis and treatment applications.The absorption(μa)and scattering(μs)coe±cients of biological tissues are inversely analyzed from their diffuse re°ectance(R)and total transmittance(T),which are measured using a double integrating spheres(DIS)system.The inversion algorithms,for example,inverse adding doubling method and inverse Monte Carlo method,are sensitive to noise signals during the DIS measurements,resulting in reduced accuracy during determination.In this study,we propose an arti ficial neural network(ANN)to estimateμa andμs at a target wavelength from the R and T spectra measured via the DIS to reduce noise in the optical properties.Approximate models of the optical properties and Monte Carlo calculations that simulated the DIS measurements were used to generate spectral datasets comprisingμa,μs,R and T.Measurement noise signals were added to R and T,and the ANN model was then trained using the noise-added datasets.Numerical results showed that the trained ANN model reduced the effects of noise inμa andμs estimation.Experimental veri fication indicated noise-reduced estimation from the R and T values measured by the DIS with a small number of scans on average,resulting in measurement time reduction.The results demonstrated the noise robustness of the proposed ANN-based method for optical properties determination and will contribute to shorter DIS measurement times,thus reducing changes in the optical properties due to desiccation of the samples.
基金supported by the Capital’s Funds for Health Improvement and Research,No.2022-2-2072(to YG).
文摘Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury.Specifically,it can be used to analyze and process data regarding peripheral nerve injury and repair,while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms.To investigate advances in the use of artificial intelligence in the diagnosis,rehabilitation,and scientific examination of peripheral nerve injury,we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023.We identified the following research hotspots in peripheral nerve injury and repair:(1)diagnosis,classification,and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques,such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy;(2)motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms,such as wearable devices and assisted wheelchair systems;(3)improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning,such as implantable peripheral nerve interfaces;(4)the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility,enabling them to control devices such as networked hand prostheses;(5)artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation,thereby reducing surgical risk and complications,and facilitating postoperative recovery.Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair,there are some limitations to this technology,such as the consequences of missing or imbalanced data,low data accuracy and reproducibility,and ethical issues(e.g.,privacy,data security,research transparency).Future research should address the issue of data collection,as large-scale,high-quality clinical datasets are required to establish effective artificial intelligence models.Multimodal data processing is also necessary,along with interdisciplinary collaboration,medical-industrial integration,and multicenter,large-sample clinical studies.
基金supported by the National Natural Science Foundation of China(Grant Nos.42141019 and 42261144687)and STEP(Grant No.2019QZKK0102)supported by the Korea Environmental Industry&Technology Institute(KEITI)through the“Project for developing an observation-based GHG emissions geospatial information map”,funded by the Korea Ministry of Environment(MOE)(Grant No.RS-2023-00232066).
文摘Artificial intelligence(AI)models have significantly impacted various areas of the atmospheric sciences,reshaping our approach to climate-related challenges.Amid this AI-driven transformation,the foundational role of physics in climate science has occasionally been overlooked.Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics,rather than an“either/or”scenario.Scrutinizing controversies around current physical inconsistencies in large AI models,we stress the critical need for detailed dynamic diagnostics and physical constraints.Furthermore,we provide illustrative examples to guide future assessments and constraints for AI models.Regarding AI integration with numerical models,we argue that offline AI parameterization schemes may fall short of achieving global optimality,emphasizing the importance of constructing online schemes.Additionally,we highlight the significance of fostering a community culture and propose the OCR(Open,Comparable,Reproducible)principles.Through a better community culture and a deep integration of physics and AI,we contend that developing a learnable climate model,balancing AI and physics,is an achievable goal.
基金supported by National Natural Science Foundation of China(12372049)Science and Technology Program of China National Accreditation Service for Confor-mity Assessment(2022CNAS15)+1 种基金Sichuan Science and Technology Program(2023JDRC0062)Independent Project of State Key Laboratory of Rail Transit Vehicle System(2023TPL-T06).
文摘Reducing the aerodynamic drag and noise levels of high-speed pantographs is important for promoting environmentally friendly,energy efficient and rapid advances in train technology.Using computational fluid dynamics theory and the K-FWH acoustic equation,a numerical simulation is conducted to investigate the aerodynamic characteristics of high-speed pantographs.A component optimization method is proposed as a possible solution to the problemof aerodynamic drag and noise in high-speed pantographs.The results of the study indicate that the panhead,base and insulator are the main contributors to aerodynamic drag and noise in high-speed pantographs.Therefore,a gradual optimization process is implemented to improve the most significant components that cause aerodynamic drag and noise.By optimizing the cross-sectional shape of the strips and insulators,the drag and noise caused by airflow separation and vortex shedding can be reduced.The aerodynamic drag of insulator with circular cross section and strips with rectangular cross section is the largest.Ellipsifying insulators and optimizing the chamfer angle and height of the windward surface of the strips can improve the aerodynamic performance of the pantograph.In addition,the streamlined fairing attached to the base can eliminate the complex flow and shield the radiated noise.In contrast to the original pantograph design,the improved pantograph shows a 21.1%reduction in aerodynamic drag and a 1.65 dBA reduction in aerodynamic noise.
文摘Background: The growth and use of Artificial Intelligence (AI) in the medical field is rapidly rising. AI is exhibiting a practical tool in the healthcare industry in patient care. The objective of this current review is to assess and analyze the use of AI and its use in orthopedic practice, as well as its applications, limitations, and pitfalls. Methods: A review of all relevant databases such as EMBASE, Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science with keywords of AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopaedic practice were reviewed. A total of 3210 articles were included in the review. Results: The data from 351 studies were analyzed where in orthopedic surgery. AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also taken a chunk of share in assisted robotic orthopaedic surgery. Conclusions: AI has now become part of the orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI’s serious limitations and pitfalls and consider the drawbacks and errors in its use.
文摘Introduction: Ultrafast latest developments in artificial intelligence (ΑΙ) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel AI commercially available tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OPENAI CHATGPT 4.0 with its corresponding image recognition plugin which was fed with a list of 100 laparoscopic selected snapshots from common surgical procedures. In order to score reliability of received responses from image-recognition bot, two corresponding scales were developed ranging from 0 - 5. The set of images was divided into two groups: unlabeled (Group A) and labeled (Group B), and according to the type of surgical procedure or image resolution. Results: AI was able to recognize correctly the context of surgical-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled, it scored 2.905/5 (58.1%). Phases of the procedure were commented in detail, after all successful interpretations. With rates 4 - 5/5, the chatbot was able to talk in detail about the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation discussed. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontend for further research by clinicians in parallel with evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn by small cohorts with commercially available software. Further development of medically-oriented AI software and clinical world awareness are expected to bring fruitful information on the topic in the years to come.
基金supported in part by the National Natural Science Foundation of China Project under Grant 62075147the Suzhou Industry Technological Innovation Projects under Grant SYG202348.
文摘Orthogonal frequency division multiplexing passive optical network(OFDM-PON) has superior anti-dispersion property to operate in the C-band of fiber for increased optical power budget. However,the downlink broadcast exposes the physical layer vulnerable to the threat of illegal eavesdropping. Quantum noise stream cipher(QNSC) is a classic physical layer encryption method and well compatible with the OFDM-PON. Meanwhile, it is indispensable to exploit forward error correction(FEC) to control errors in data transmission. However, when QNSC and FEC are jointly coded, the redundant information becomes heavier and thus the code rate of the transmitted signal will be largely reduced. In this work, we propose a physical layer encryption scheme based on polar-code-assisted QNSC. In order to improve the code rate and security of the transmitted signal, we exploit chaotic sequences to yield the redundant bits and utilize the redundant information of the polar code to generate the higher-order encrypted signal in the QNSC scheme with the operation of the interleaver.We experimentally demonstrate the encrypted 16/64-QAM, 16/256-QAM, 16/1024-QAM, 16/4096-QAM QNSC signals transmitted over 30-km standard single mode fiber. For the transmitted 16/4096-QAM QNSC signal, compared with the conventional QNSC method, the proposed method increases the code rate from 0.1 to 0.32 with enhanced security.
基金supported by the National Defense Science and Technology Outstanding Youth Science Fund Project,No.2021-JCJQ-ZQ-035National Defense Innovation Special Zone Project,No.21-163-12-ZT-006-002-13Key Program of the National Natural Science Foundation of China,No.11932013(all to XuC).
文摘Controlling intracranial pressure,nerve cell regeneration,and microenvironment regulation are the key issues in reducing mortality and disability in acute brain injury.There is currently a lack of effective treatment methods.Hibernation has the characteristics of low temperature,low metabolism,and hibernation rhythm,as well as protective effects on the nervous,cardiovascular,and motor systems.Artificial hibernation technology is a new technology that can effectively treat acute brain injury by altering the body’s metabolism,lowering the body’s core temperature,and allowing the body to enter a state similar to hibernation.This review introduces artificial hibernation technology,including mild hypothermia treatment technology,central nervous system regulation technology,and artificial hibernation-inducer technology.Upon summarizing the relevant research on artificial hibernation technology in acute brain injury,the research results show that artificial hibernation technology has neuroprotective,anti-inflammatory,and oxidative stress-resistance effects,indicating that it has therapeutic significance in acute brain injury.Furthermore,artificial hibernation technology can alleviate the damage of ischemic stroke,traumatic brain injury,cerebral hemorrhage,cerebral infarction,and other diseases,providing new strategies for treating acute brain injury.However,artificial hibernation technology is currently in its infancy and has some complications,such as electrolyte imbalance and coagulation disorders,which limit its use.Further research is needed for its clinical application.
文摘This editorial provides commentary on an article titled"Potential and limitationsof ChatGPT and generative artificial intelligence(AI)in medical safety education"recently published in the World Journal of Clinical Cases.AI has enormous potentialfor various applications in the field of Kawasaki disease(KD).One is machinelearning(ML)to assist in the diagnosis of KD,and clinical prediction models havebeen constructed worldwide using ML;the second is using a gene signalcalculation toolbox to identify KD,which can be used to monitor key clinicalfeatures and laboratory parameters of disease severity;and the third is using deeplearning(DL)to assist in cardiac ultrasound detection.The performance of the DLalgorithm is similar to that of experienced cardiac experts in detecting coronaryartery lesions to promoting the diagnosis of KD.To effectively utilize AI in thediagnosis and treatment process of KD,it is crucial to improve the accuracy of AIdecision-making using more medical data,while addressing issues related topatient personal information protection and AI decision-making responsibility.AIprogress is expected to provide patients with accurate and effective medicalservices that will positively impact the diagnosis and treatment of KD in thefuture.
基金the support of the National Natural Science Foundation of China(22278234,21776151)。
文摘An artificial neural network(ANN)method is introduced to predict drop size in two kinds of pulsed columns with small-scale data sets.After training,the deviation between calculate and experimental results are 3.8%and 9.3%,respectively.Through ANN model,the influence of interfacial tension and pulsation intensity on the droplet diameter has been developed.Droplet size gradually increases with the increase of interfacial tension,and decreases with the increase of pulse intensity.It can be seen that the accuracy of ANN model in predicting droplet size outside the training set range is reach the same level as the accuracy of correlation obtained based on experiments within this range.For two kinds of columns,the drop size prediction deviations of ANN model are 9.6%and 18.5%and the deviations in correlations are 11%and 15%.
基金supported by the National Natural Science Foundation of China(62172033).
文摘In recent years,the global surge of High-speed Railway(HSR)revolutionized ground transportation,providing secure,comfortable,and punctual services.The next-gen HSR,fueled by emerging services like video surveillance,emergency communication,and real-time scheduling,demands advanced capabilities in real-time perception,automated driving,and digitized services,which accelerate the integration and application of Artificial Intelligence(AI)in the HSR system.This paper first provides a brief overview of AI,covering its origin,evolution,and breakthrough applications.A comprehensive review is then given regarding the most advanced AI technologies and applications in three macro application domains of the HSR system:mechanical manufacturing and electrical control,communication and signal control,and transportation management.The literature is categorized and compared across nine application directions labeled as intelligent manufacturing of trains and key components,forecast of railroad maintenance,optimization of energy consumption in railroads and trains,communication security,communication dependability,channel modeling and estimation,passenger scheduling,traffic flow forecasting,high-speed railway smart platform.Finally,challenges associated with the application of AI are discussed,offering insights for future research directions.
基金supported by a grant from the Standardization and Integration of Resources Information for Seed-cluster in Hub-Spoke Material Bank Program,Rural Development Administration,Republic of Korea(PJ01587004).
文摘Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture.Recent advancements in high-throughput phenotyping(HTP)technologies and artificial intelligence(AI)have revolutionized the field,enabling rapid and accurate assessment of crop traits on a large scale.The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement.AI algorithms can analyze and interpret large datasets,and extract meaningful patterns and correlations between phenotypic traits and genetic factors.These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection,thereby reducing the time and cost required for variety development.However,further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement.By leveraging AI algorithms,researchers can efficiently analyze phenotypic data,uncover complex patterns,and establish predictive models that enable precise trait selection and crop breeding.The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement.This review will encompass an in-depth analysis of recent advances and applications,highlighting the numerous benefits and challenges associated with HTP and AI.
文摘Background: Bilayer artificial dermis promotes wound healing and offers a treatment option for chronic wounds. Aim: Examine the clinical efficacy of bilayer artificial dermis combined with Vacuum Sealing Drainage (VSD) technology in the treatment of chronic wounds. Method: From June 2021 to December 2023, our hospital treated 24 patients with chronic skin tissue wounds on their limbs using a novel tissue engineering product, the bilayer artificial dermis, in combination with VSD technology to repair the wounds. The bilayer artificial dermis protects subcutaneous tissue, blood vessels, nerves, muscles, and tendons, and also promotes the growth of granulation tissue and blood vessels to aid in wound healing when used in conjunction with VSD technology for wound dressing changes in chronic wounds. Results: In this study, 24 cases of chronic wounds with exposed bone or tendon larger than 1.0 cm2 were treated with a bilayer artificial skin combined with VSD dressing after wound debridement. The wounds were not suitable for immediate skin grafting. At 2 - 3 weeks post-treatment, good granulation tissue growth was observed. Subsequent procedures included thick skin grafting or wound dressing changes until complete wound healing. Patients were followed up on average for 3 months (range: 1 - 12 months) post-surgery. Comparative analysis of the appearance, function, skin color, elasticity, and sensation of the healed chronic wounds revealed superior outcomes compared to traditional skin fl repairs, resulting in significantly higher satisfaction levels among patients and their families. Conclusion: The application of bilayer artificial dermis combined with VSD technology for the repair of chronic wounds proves to be a viable method, yielding satisfactory therapeutic effects compared to traditional skin flap procedures.
文摘While emerging technologies such as the Internet of Things(IoT)have many benefits,they also pose considerable security challenges that require innovative solutions,including those based on artificial intelligence(AI),given that these techniques are increasingly being used by malicious actors to compromise IoT systems.Although an ample body of research focusing on conventional AI methods exists,there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures.To contribute to this nascent research stream,a novel AI-driven security system denoted as“AI2AI”is presented in this work.AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework.We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks(GAADPSDNN)sys-tem that can be implemented to effectively identify,detect,and prevent cyberattacks targeting IoT devices.Notably,this system demonstrates adaptability to both federated and centralized learning environments,accommodating a wide array of IoT devices.Our evaluation of the GAADPSDNN system using the recently complied WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy.Achieving an impressive overall accuracy of 98.18%on the Edge-IIoT dataset,the GAADPSDNN outperforms the standard deep neural network(DNN)classifier with 94.11%accuracy.Furthermore,with the proposed enhancements,the accuracy of the unoptimized random forest classifier(80.89%)is improved to 93.51%,while the overall accuracy(98.18%)surpasses the results(93.91%,94.67%,94.94%,and 94.96%)achieved when alternative systems based on diverse optimization techniques and the same dataset are employed.The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
基金supported in part by the National Natural Science Foundation of China(82072019)the Shenzhen Basic Research Program(JCYJ20210324130209023)+5 种基金the Shenzhen-Hong Kong-Macao S&T Program(Category C)(SGDX20201103095002019)the Mainland-Hong Kong Joint Funding Scheme(MHKJFS)(MHP/005/20),the Project of Strategic Importance Fund(P0035421)the Projects of RISA(P0043001)from the Hong Kong Polytechnic University,the Natural Science Foundation of Jiangsu Province(BK20201441)the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research(SBGJ202103038,SBGJ202102056)the Henan Province Key R&D and Promotion Project(Science and Technology Research)(222102310015)the Natural Science Foundation of Henan Province(222300420575),and the Henan Province Science and Technology Research(222102310322).
文摘Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients’anatomy.However,the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians.Moreover,some potentially useful quantitative information in medical images,especially that which is not visible to the naked eye,is often ignored during clinical practice.In contrast,radiomics performs high-throughput feature extraction from medical images,which enables quantitative analysis of medical images and prediction of various clinical endpoints.Studies have reported that radiomics exhibits promising performance in diagnosis and predicting treatment responses and prognosis,demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine.However,radiomics remains in a developmental phase as numerous technical challenges have yet to be solved,especially in feature engineering and statistical modeling.In this review,we introduce the current utility of radiomics by summarizing research on its application in the diagnosis,prognosis,and prediction of treatment responses in patients with cancer.We focus on machine learning approaches,for feature extraction and selection during feature engineering and for imbalanced datasets and multi-modality fusion during statistical modeling.Furthermore,we introduce the stability,reproducibility,and interpretability of features,and the generalizability and interpretability of models.Finally,we offer possible solutions to current challenges in radiomics research.
文摘The Industrial Internet of Things(IIoT)has brought numerous benefits,such as improved efficiency,smart analytics,and increased automation.However,it also exposes connected devices,users,applications,and data generated to cyber security threats that need to be addressed.This work investigates hybrid cyber threats(HCTs),which are now working on an entirely new level with the increasingly adopted IIoT.This work focuses on emerging methods to model,detect,and defend against hybrid cyber attacks using machine learning(ML)techniques.Specifically,a novel ML-based HCT modelling and analysis framework was proposed,in which L1 regularisation and Random Forest were used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs.A grey relation analysis-based model was employed to construct the correlation between IIoT components and different threats.
基金supported by the Key Projects of the National Natural Science Foundation of China,No.11932013(to XC)Key Military Logistics Research Projects,No.B WJ21J002(to XC)+4 种基金the Key projects of the Special Zone for National Defence Innovation,No.21-163-12-ZT006002-13(to XC)the National Nature Science Foundation of China No.82272255(to XC)the National Defense Science and Technology Outstanding Youth Science Fund Program,No.2021-JCIQ-ZQ-035(to XC)the Scientific Research Innovation Team Project of Armed Police Characteristic Medical Center,No.KYCXTD0104(to ZL)the National Natural Science Foundation of China Youth Fund,No.82004467(to BC)。
文摘Spinal cord injury is a serious disease of the central nervous system involving irreversible nerve injury and various organ system injuries.At present,no effective clinical treatment exists.As one of the artificial hibernation techniques,mild hypothermia has preliminarily confirmed its clinical effect on spinal cord injury.However,its technical defects and barriers,along with serious clinical side effects,restrict its clinical application for spinal cord injury.Artificial hibernation is a futureoriented disruptive technology for human life support.It involves endogenous hibernation inducers and hibernation-related central neuromodulation that activate particular neurons,reduce the central constant temperature setting point,disrupt the normal constant body temperature,make the body adapt"to the external cold environment,and reduce the physiological resistance to cold stimulation.Thus,studying the artificial hibernation mechanism may help develop new treatment strategies more suitable for clinical use than the cooling method of mild hypothermia technology.This review introduces artificial hibernation technologies,including mild hypothermia technology,hibernation inducers,and hibernation-related central neuromodulation technology.It summarizes the relevant research on hypothermia and hibernation for organ and nerve protection.These studies show that artificial hibernation technologies have therapeutic significance on nerve injury after spinal co rd injury through inflammatory inhibition,immunosuppression,oxidative defense,and possible central protection.It also promotes the repair and protection of res pirato ry and digestive,cardiovascular,locomoto r,urinary,and endocrine systems.This review provides new insights for the clinical treatment of nerve and multiple organ protection after spinal cord injury thanks to artificial hibernation.At present,artificial hibernation technology is not mature,and research fa ces various challenges.Neve rtheless,the effort is wo rthwhile for the future development of medicine.
基金University-level Graduate Education Reform Project of Yangtze University(YJY202329).
文摘Under the background of“artificial intelligence+X”,the development of landscape architecture industry ushers in new opportunities,and professional talents need to be updated to meet the social demand.This paper analyzes the cultivation demand of landscape architecture graduate students in the context of the new era,and identifies the problems by comparing the original professional graduate training mode.The new cultivation mode of graduate students in landscape architecture is proposed,including updating the target orientation of the discipline,optimizing the teaching system,building a“dualteacher”tutor team,and improving the“industry-university-research-utilization”integrated cultivation,so as to cultivate high-quality compound talents with disciplinary characteristics.