Journal articles: 156,697 results found
1. Self-supervised learning artificial intelligence noise reduction technology based on the nearest adjacent layer in ultra-low dose CT of urinary calculi
Authors: ZHOU Cheng, LIU Yang, QIU Yingwei, HE Daijun, YAN Yu, LUO Min, LEI Youyuan. 《中国医学影像技术》 (CSCD, Peking University Core), 2024, No. 8, pp. 1249-1253 (5 pages).
Objective: To observe the value of self-supervised deep learning artificial intelligence (AI) noise reduction technology based on the nearest adjacent layer applied in ultra-low dose CT (ULDCT) for urinary calculi. Methods: Eighty-eight patients with urinary calculi were prospectively enrolled. Low dose CT (LDCT) and ULDCT scanning were performed, and the effective dose (ED) of each scanning protocol was calculated. The patients were then randomly divided into a training set (n=75) and a test set (n=13), and a self-supervised deep learning AI noise reduction system based on the nearest adjacent layer, constructed with ULDCT images in the training set, was used to reduce noise in ULDCT images in the test set. In the test set, the quality of ULDCT images before and after AI noise reduction was compared with LDCT images using Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores, image noise (SD_ROI) and signal-to-noise ratio (SNR). Results: The tube current, volume CT dose index and dose-length product of the abdominal ULDCT scanning protocol were all lower than those of the LDCT scanning protocol (all P<0.05), with a decrease in ED of approximately 82.66%. For the 13 patients with urinary calculi in the test set, BRISQUE scores showed that the quality of ULDCT images reached 54.42% of the level of LDCT images before AI noise reduction and rose to 95.76% after AI noise reduction. Both ULDCT images after AI noise reduction and LDCT images had lower SD_ROI and higher SNR than ULDCT images before AI noise reduction (all adjusted P<0.05), whereas no significant difference was found between the former two (both adjusted P>0.05). Conclusion: Self-supervised learning AI noise reduction technology based on the nearest adjacent layer can effectively reduce noise and improve the image quality of urinary calculi ULDCT images, which is conducive to the clinical application of ULDCT.
Keywords: urinary calculi; tomography, X-ray computed; artificial intelligence; prospective studies
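The abstract above compares images using region-of-interest noise (SD_ROI) and signal-to-noise ratio (SNR). A minimal sketch of how such metrics are commonly computed from a CT slice is shown below; the array shapes, ROI coordinates, and the SNR definition (ROI mean divided by ROI standard deviation) are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def roi_metrics(ct_slice: np.ndarray, roi: tuple) -> tuple:
    """Compute image noise (SD_ROI) and SNR within a rectangular ROI.

    ct_slice : 2-D array of CT numbers (HU).
    roi      : (row_start, row_stop, col_start, col_stop) in pixels.
    """
    r0, r1, c0, c1 = roi
    patch = ct_slice[r0:r1, c0:c1]
    sd_roi = patch.std()             # image noise within the ROI
    snr = patch.mean() / sd_roi      # assumed SNR definition: mean / SD
    return sd_roi, snr

# Example on synthetic noisy slices (placeholder data, not from the study)
rng = np.random.default_rng(0)
slice_ldct = 40 + 10 * rng.standard_normal((512, 512))   # lower-noise stand-in
slice_uldct = 40 + 30 * rng.standard_normal((512, 512))  # higher-noise stand-in
for name, img in [("LDCT", slice_ldct), ("ULDCT", slice_uldct)]:
    sd, snr = roi_metrics(img, (200, 264, 200, 264))
    print(f"{name}: SD_ROI={sd:.1f} HU, SNR={snr:.2f}")
```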
2. Profiling of Urban Noise Using Artificial Intelligence
Authors: Le Quang Thao, Duong Duc Cuong, Tran Thi Tuong Anh, Tran Duc Luong. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, No. 5, pp. 1309-1321 (13 pages).
Noise pollution tends to receive less awareness than other types of pollution; however, it greatly impacts the quality of life for humans, for example by causing sleep disruption, stress or hearing impairment. Profiling urban sound through the identification of noise sources in cities could help improve livability by reducing exposure to noise pollution through methods such as noise control, planning of the soundscape environment, or selection of safe living space. In this paper, we propose a self-attention long short-term memory (LSTM) method that improves sound classification compared to previous baselines. An attention mechanism is designed solely to capture the key sections of an audio data series. This is practical because only the important parts of the data need to be processed while the rest can be ignored, making the approach applicable when gathering information with long-term dependencies. The dataset used is the UrbanSound8K dataset, which specifically pertains to urban environments, and data augmentation was applied to overcome imbalanced data and dataset scarcity. All audio sources in the dataset were normalized to mono signals. From this dataset, an experiment was conducted to confirm the suitability of the proposed model when applied to the mel-spectrogram and MFCC (Mel-Frequency Cepstral Coefficients) datasets transformed from the original dataset. Improving the classification accuracy depends on the machine learning models as well as the input data; therefore, we evaluated different classification models and extraction methods to find the best performing ones. By combining data augmentation techniques and various extraction methods, our classification model achieves state-of-the-art performance, with per-class accuracy of up to 98%.
Keywords: urban noise; noise classification; mel-spectrogram; MFCC; LSTM; self-attention
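The pipeline above feeds mel-spectrogram and MFCC representations of mono audio into a self-attention LSTM. A minimal sketch of the feature-extraction step using librosa is given below; the file path, sample rate, and feature sizes are illustrative assumptions rather than the paper's exact settings.

```python
import librosa
import numpy as np

def extract_features(path: str, sr: int = 22050, n_mels: int = 128, n_mfcc: int = 40):
    """Load a clip as a mono signal and return (log-mel, MFCC) feature matrices."""
    y, sr = librosa.load(path, sr=sr, mono=True)                     # normalize to mono
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)  # mel power spectrogram
    mel_db = librosa.power_to_db(mel, ref=np.max)                    # convert to log scale
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mel_db, mfcc

# Hypothetical UrbanSound8K clip path; both outputs are (features x frames) matrices
mel_db, mfcc = extract_features("UrbanSound8K/audio/fold1/example_clip.wav")
print(mel_db.shape, mfcc.shape)
```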
3. Artificial neural network-based determination of denoised optical properties in double integrating spheres measurement
Authors: Yusaku Takai, Takahiro Nishimura, Yu Shimojo, Kunio Awazu. 《Journal of Innovative Optical Health Sciences》 (SCIE, EI, CSCD), 2023, No. 6, pp. 105-116 (12 pages).
Accurate determination of the optical properties of biological tissues enables quantitative understanding of light propagation in these tissues for optical diagnosis and treatment applications. The absorption (μa) and scattering (μs) coefficients of biological tissues are inversely analyzed from their diffuse reflectance (R) and total transmittance (T), which are measured using a double integrating spheres (DIS) system. The inversion algorithms, for example the inverse adding-doubling method and the inverse Monte Carlo method, are sensitive to noise signals during the DIS measurements, resulting in reduced accuracy of the determination. In this study, we propose an artificial neural network (ANN) to estimate μa and μs at a target wavelength from the R and T spectra measured via the DIS, in order to reduce noise in the optical properties. Approximate models of the optical properties and Monte Carlo calculations that simulated the DIS measurements were used to generate spectral datasets comprising μa, μs, R and T. Measurement noise signals were added to R and T, and the ANN model was then trained using the noise-added datasets. Numerical results showed that the trained ANN model reduced the effects of noise in the μa and μs estimation. Experimental verification indicated noise-reduced estimation from the R and T values measured by the DIS with a small number of scans on average, resulting in a reduction of measurement time. The results demonstrate the noise robustness of the proposed ANN-based method for optical property determination and will contribute to shorter DIS measurement times, thus reducing changes in the optical properties due to desiccation of the samples.
Keywords: absorption coefficient; scattering coefficient; bio-tissue; tissue spectroscopy; noise reduction
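As a rough illustration of the inverse-mapping idea described above (estimating μa and μs from measured R and T), the sketch below trains a small multilayer perceptron on synthetic, noise-added data. The forward model here is a toy placeholder, not the Monte Carlo simulation of the DIS measurement used in the paper, and all parameter ranges are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def forward(mu_a, mu_s):
    """Toy forward model mapping (mu_a, mu_s) -> (R, T); a stand-in for the
    Monte Carlo simulation of the double-integrating-spheres measurement."""
    albedo = mu_s / (mu_a + mu_s)
    R = 0.5 * albedo**2
    T = np.exp(-(mu_a + 0.1 * mu_s))
    return R, T

mu_a = rng.uniform(0.01, 1.0, 5000)     # absorption coefficient (1/mm), assumed range
mu_s = rng.uniform(1.0, 20.0, 5000)     # scattering coefficient (1/mm), assumed range
R, T = forward(mu_a, mu_s)
X = np.column_stack([R, T]) + 0.01 * rng.standard_normal((5000, 2))  # add measurement noise
y = np.column_stack([mu_a, mu_s])

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))             # estimated (mu_a, mu_s) from noisy (R, T)
```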
4. Artificial intelligence-assisted repair of peripheral nerve injury: a new research hotspot and associated challenges (cited by 2)
Authors: Yang Guo, Liying Sun, Wenyao Zhong, Nan Zhang, Zongxuan Zhao, Wen Tian. 《Neural Regeneration Research》 (SCIE, CAS, CSCD), 2024, No. 3, pp. 663-670 (8 pages).
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994-2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications, and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, there are some limitations to this technology, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Keywords: artificial intelligence; artificial prosthesis; medical-industrial integration; brain-machine interface; deep learning; machine learning; networked hand prosthesis; neural interface; neural network; neural regeneration; peripheral nerve
5. Toward a Learnable Climate Model in the Artificial Intelligence Era (cited by 2)
Authors: Gang HUANG, Ya WANG, Yoo-Geun HAM, Bin MU, Weichen TAO, Chaoyang XIE. 《Advances in Atmospheric Sciences》 (SCIE, CAS, CSCD), 2024, No. 7, pp. 1281-1288 (8 pages).
Artificial intelligence (AI) models have significantly impacted various areas of the atmospheric sciences, reshaping our approach to climate-related challenges. Amid this AI-driven transformation, the foundational role of physics in climate science has occasionally been overlooked. Our perspective suggests that the future of climate modeling involves a synergistic partnership between AI and physics, rather than an "either/or" scenario. Scrutinizing controversies around current physical inconsistencies in large AI models, we stress the critical need for detailed dynamic diagnostics and physical constraints. Furthermore, we provide illustrative examples to guide future assessments and constraints for AI models. Regarding AI integration with numerical models, we argue that offline AI parameterization schemes may fall short of achieving global optimality, emphasizing the importance of constructing online schemes. Additionally, we highlight the significance of fostering a community culture and propose the OCR (Open, Comparable, Reproducible) principles. Through a better community culture and a deep integration of physics and AI, we contend that developing a learnable climate model, balancing AI and physics, is an achievable goal.
Keywords: artificial intelligence; deep learning; learnable climate model
6. Numerical Study on Reduction in Aerodynamic Drag and Noise of High-Speed Pantograph (cited by 1)
Authors: Deng Qin, Xing Du, Tian Li, Jiye Zhang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 5, pp. 2155-2173 (19 pages).
Reducing the aerodynamic drag and noise levels of high-speed pantographs is important for promoting environmentally friendly, energy-efficient and rapid advances in train technology. Using computational fluid dynamics theory and the K-FWH acoustic equation, a numerical simulation is conducted to investigate the aerodynamic characteristics of high-speed pantographs. A component optimization method is proposed as a possible solution to the problem of aerodynamic drag and noise in high-speed pantographs. The results of the study indicate that the panhead, base and insulator are the main contributors to aerodynamic drag and noise in high-speed pantographs. Therefore, a gradual optimization process is implemented to improve the most significant components that cause aerodynamic drag and noise. By optimizing the cross-sectional shape of the strips and insulators, the drag and noise caused by airflow separation and vortex shedding can be reduced. The aerodynamic drag of an insulator with a circular cross section and strips with a rectangular cross section is the largest. Giving the insulators an elliptical cross section and optimizing the chamfer angle and height of the windward surface of the strips can improve the aerodynamic performance of the pantograph. In addition, a streamlined fairing attached to the base can eliminate the complex flow and shield the radiated noise. In contrast to the original pantograph design, the improved pantograph shows a 21.1% reduction in aerodynamic drag and a 1.65 dBA reduction in aerodynamic noise.
Keywords: high-speed pantograph; aerodynamic drag; aerodynamic noise; reduction; optimization
7. Concept of Artificial Intelligence (AI) and Its Use in Orthopaedic Practice: Applications and Pitfalls: A Narrative Review (cited by 1)
Author: Mir Sadat-Ali. 《Open Journal of Orthopedics》, 2024, No. 1, pp. 32-40 (9 pages).
Background: The growth and use of Artificial Intelligence (AI) in the medical field is rising rapidly. AI is proving to be a practical tool for patient care in the healthcare industry. The objective of this review is to assess and analyze the use of AI in orthopedic practice, including its applications, limitations, and pitfalls. Methods: A review of all relevant databases, including EMBASE, the Cochrane Database of Systematic Reviews, MEDLINE, Science Citation Index, Scopus, and Web of Science, was conducted with the keywords AI, orthopedic surgery, applications, and drawbacks. All related articles on AI and orthopaedic practice were reviewed. A total of 3210 articles were included in the review. Results: Data from 351 studies were analyzed; in orthopedic surgery, AI is being used for diagnostic procedures, radiological diagnosis, models of clinical care, and utilization of hospital and bed resources. AI has also taken a substantial share in robot-assisted orthopaedic surgery. Conclusions: AI has become part of orthopedic practice and will further increase its stake in the healthcare industry. Nonetheless, clinicians should remain aware of AI's serious limitations and pitfalls and consider the drawbacks and errors in its use.
Keywords: artificial intelligence; healthcare; pitfalls; drawbacks
8. Artificial Intelligence and Computer Vision during Surgery: Discussing Laparoscopic Images with ChatGPT4—Preliminary Results (cited by 1)
Authors: Savvas Hirides, Petros Hirides, Kouloufakou Kalliopi, Constantinos Hirides. 《Surgical Science》, 2024, No. 3, pp. 169-181 (13 pages).
Introduction: Recent ultrafast developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. To score the reliability of the responses received from the image-recognition bot, two corresponding scales ranging from 0 to 5 were developed. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further according to the type of surgical procedure and image resolution. Results: The AI correctly recognized the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled ones it scored 2.905/5 (58.1%). After each successful interpretation, the phases of the procedure were commented on in detail. With scores of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically oriented AI software and clinical awareness are expected to bring fruitful information on the topic in the years to come.
Keywords: artificial intelligence; surgery; image recognition; autonomous surgery
9. Physical Layer Encryption of OFDM-PON Based on Quantum Noise Stream Cipher with Polar Code (cited by 1)
Authors: Xu Yinbo, Gao Mingyi, Zhu Huaqing, Chen Bowen, Xiang Lian, Shen Gangxiang. 《China Communications》 (SCIE, CSCD), 2024, No. 3, pp. 174-188 (15 pages).
Orthogonal frequency division multiplexing passive optical network (OFDM-PON) has superior anti-dispersion properties for operating in the C-band of fiber with an increased optical power budget. However, the downlink broadcast leaves the physical layer vulnerable to the threat of illegal eavesdropping. Quantum noise stream cipher (QNSC) is a classic physical layer encryption method and is well compatible with OFDM-PON. Meanwhile, it is indispensable to exploit forward error correction (FEC) to control errors in data transmission. However, when QNSC and FEC are jointly coded, the redundant information becomes heavier and the code rate of the transmitted signal is largely reduced. In this work, we propose a physical layer encryption scheme based on polar-code-assisted QNSC. To improve the code rate and security of the transmitted signal, we exploit chaotic sequences to yield the redundant bits and utilize the redundant information of the polar code to generate the higher-order encrypted signal in the QNSC scheme with the operation of the interleaver. We experimentally demonstrate encrypted 16/64-QAM, 16/256-QAM, 16/1024-QAM and 16/4096-QAM QNSC signals transmitted over 30 km of standard single-mode fiber. For the transmitted 16/4096-QAM QNSC signal, compared with the conventional QNSC method, the proposed method increases the code rate from 0.1 to 0.32 with enhanced security.
Keywords: physical layer encryption; polar code; quantum noise stream cipher
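The scheme above uses chaotic sequences to yield the redundant bits carried by the polar code. As a rough, self-contained illustration of that single idea, the sketch below derives a key-seeded bit stream from a logistic map; the map parameters and thresholding rule are assumptions for illustration, not the paper's construction.

```python
import numpy as np

def chaotic_bits(seed: float, n_bits: int, r: float = 3.99) -> np.ndarray:
    """Generate a pseudo-random bit stream from the logistic map x <- r*x*(1-x).

    seed   : initial value in (0, 1), assumed to be shared as part of the key.
    n_bits : number of redundant bits to produce.
    """
    x = seed
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x >= 0.5 else 0   # threshold the chaotic state to a bit
    return bits

redundant = chaotic_bits(seed=0.3141592653, n_bits=32)
print(redundant)
```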
10. Application of artificial hibernation technology in acute brain injury (cited by 1)
Authors: Xiaoni Wang, Shulian Chen, Xiaoyu Wang, Zhen Song, Ziqi Wang, Xiaofei Niu, Xiaochu Chen, Xuyi Chen. 《Neural Regeneration Research》 (SCIE, CAS, CSCD), 2024, No. 9, pp. 1940-1946 (7 pages).
Controlling intracranial pressure, nerve cell regeneration, and microenvironment regulation are the key issues in reducing mortality and disability in acute brain injury. There is currently a lack of effective treatment methods. Hibernation has the characteristics of low temperature, low metabolism, and hibernation rhythm, as well as protective effects on the nervous, cardiovascular, and motor systems. Artificial hibernation technology is a new technology that can effectively treat acute brain injury by altering the body's metabolism, lowering the body's core temperature, and allowing the body to enter a state similar to hibernation. This review introduces artificial hibernation technologies, including mild hypothermia treatment technology, central nervous system regulation technology, and artificial hibernation-inducer technology. Summarizing the relevant research on artificial hibernation technology in acute brain injury, the results show that artificial hibernation technology has neuroprotective, anti-inflammatory, and oxidative-stress-resistance effects, indicating that it has therapeutic significance in acute brain injury. Furthermore, artificial hibernation technology can alleviate the damage of ischemic stroke, traumatic brain injury, cerebral hemorrhage, cerebral infarction, and other diseases, providing new strategies for treating acute brain injury. However, artificial hibernation technology is currently in its infancy and has some complications, such as electrolyte imbalance and coagulation disorders, which limit its use. Further research is needed for its clinical application.
Keywords: acute brain injury; artificial hibernation; hypothermia; low metabolism; mild hypothermia
11. Application of artificial intelligence in the diagnosis and treatment of Kawasaki disease (cited by 1)
Authors: Yan Pan, Fu-Yong Jiao. 《World Journal of Clinical Cases》 (SCIE), 2024, No. 23, pp. 5304-5307 (4 pages).
This editorial provides commentary on an article titled "Potential and limitations of ChatGPT and generative artificial intelligence (AI) in medical safety education" recently published in the World Journal of Clinical Cases. AI has enormous potential for various applications in the field of Kawasaki disease (KD). The first is machine learning (ML) to assist in the diagnosis of KD, and clinical prediction models have been constructed worldwide using ML; the second is using a gene-signal calculation toolbox to identify KD, which can be used to monitor key clinical features and laboratory parameters of disease severity; and the third is using deep learning (DL) to assist in cardiac ultrasound detection. The performance of the DL algorithm is similar to that of experienced cardiac experts in detecting coronary artery lesions, promoting the diagnosis of KD. To effectively utilize AI in the diagnosis and treatment of KD, it is crucial to improve the accuracy of AI decision-making using more medical data, while addressing issues related to the protection of patients' personal information and responsibility for AI decision-making. AI progress is expected to provide patients with accurate and effective medical services that will positively impact the diagnosis and treatment of KD in the future.
Keywords: artificial intelligence; Kawasaki disease; diagnosis; prediction; image
12. A data-driven model of drop size prediction based on artificial neural networks using small-scale data sets (cited by 1)
Authors: Bo Wang, Han Zhou, Shan Jing, Qiang Zheng, Wenjie Lan, Shaowei Li. 《Chinese Journal of Chemical Engineering》 (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 71-83 (13 pages).
An artificial neural network (ANN) method is introduced to predict drop size in two kinds of pulsed columns with small-scale data sets. After training, the deviations between calculated and experimental results are 3.8% and 9.3%, respectively. Using the ANN model, the influence of interfacial tension and pulsation intensity on the droplet diameter was investigated. Droplet size gradually increases with increasing interfacial tension and decreases with increasing pulsation intensity. The accuracy of the ANN model in predicting droplet size outside the training set range reaches the same level as that of correlations obtained from experiments within this range. For the two kinds of columns, the drop size prediction deviations of the ANN model are 9.6% and 18.5%, while the deviations of the correlations are 11% and 15%.
Keywords: artificial neural network; drop size; solvent extraction; pulsed column; two-phase flow; hydrodynamics
13. A review of artificial intelligence applications in high-speed railway systems (cited by 2)
Authors: Xuehan Li, Minghao Zhu, Boyang Zhang, Xiaoxuan Wang, Zha Liu, Liang Han. 《High-Speed Railway》, 2024, No. 1, pp. 11-16 (6 pages).
In recent years, the global surge of high-speed railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. Next-generation HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerates the integration and application of artificial intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, railroad maintenance forecasting, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and high-speed railway smart platforms. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
Keywords: high-speed railway; artificial intelligence; intelligent distribution; intelligent control; intelligent scheduling
14. Integrating artificial intelligence and high-throughput phenotyping for crop improvement (cited by 1)
Authors: Mansoor Sheikh, Farooq Iqra, Hamadani Ambreen, Kumar A Pravin, Manzoor Ikra, Yong Suk Chung. 《Journal of Integrative Agriculture》 (SCIE, CAS, CSCD), 2024, No. 6, pp. 1787-1802 (16 pages).
Crop improvement is crucial for addressing the global challenges of food security and sustainable agriculture. Recent advancements in high-throughput phenotyping (HTP) technologies and artificial intelligence (AI) have revolutionized the field, enabling rapid and accurate assessment of crop traits on a large scale. The integration of AI and machine learning algorithms with HTP data has unlocked new opportunities for crop improvement. AI algorithms can analyze and interpret large datasets and extract meaningful patterns and correlations between phenotypic traits and genetic factors. These technologies have the potential to revolutionize plant breeding programs by providing breeders with efficient and accurate tools for trait selection, thereby reducing the time and cost required for variety development. However, further research and collaboration are needed to overcome the existing challenges and fully unlock the power of HTP and AI in crop improvement. By leveraging AI algorithms, researchers can efficiently analyze phenotypic data, uncover complex patterns, and establish predictive models that enable precise trait selection and crop breeding. The aim of this review is to explore the transformative potential of integrating HTP and AI in crop improvement. The review encompasses an in-depth analysis of recent advances and applications, highlighting the numerous benefits and challenges associated with HTP and AI.
Keywords: artificial intelligence; crop improvement; data analysis; high-throughput phenotyping; machine learning; precision agriculture; trait selection
15. The Application of Bilayer Artificial Dermis Combined with VSD Technology in Chronic Wounds (cited by 1)
Authors: Xianjin Dong, Huasong Luo. 《Journal of Biosciences and Medicines》, 2024, No. 3, pp. 238-244 (7 pages).
Background: Bilayer artificial dermis promotes wound healing and offers a treatment option for chronic wounds. Aim: To examine the clinical efficacy of bilayer artificial dermis combined with vacuum sealing drainage (VSD) technology in the treatment of chronic wounds. Methods: From June 2021 to December 2023, our hospital treated 24 patients with chronic skin tissue wounds on their limbs using a novel tissue engineering product, the bilayer artificial dermis, in combination with VSD technology to repair the wounds. The bilayer artificial dermis protects subcutaneous tissue, blood vessels, nerves, muscles, and tendons, and also promotes the growth of granulation tissue and blood vessels to aid in wound healing when used in conjunction with VSD technology for wound dressing changes in chronic wounds. Results: In this study, 24 cases of chronic wounds with exposed bone or tendon larger than 1.0 cm², which were not suitable for immediate skin grafting, were treated with bilayer artificial dermis combined with VSD dressing after wound debridement. At 2-3 weeks post-treatment, good granulation tissue growth was observed. Subsequent procedures included thick skin grafting or wound dressing changes until complete wound healing. Patients were followed up for an average of 3 months (range: 1-12 months) post-surgery. Comparative analysis of the appearance, function, skin color, elasticity, and sensation of the healed chronic wounds revealed superior outcomes compared with traditional skin flap repairs, resulting in significantly higher satisfaction among patients and their families. Conclusion: The application of bilayer artificial dermis combined with VSD technology for the repair of chronic wounds proves to be a viable method, yielding satisfactory therapeutic effects compared with traditional skin flap procedures.
Keywords: bilayer artificial dermis; vacuum sealing drainage (VSD); chronic wounds; wound healing; application
16. Advanced Optimized Anomaly Detection System for IoT Cyberattacks Using Artificial Intelligence (cited by 1)
Authors: Ali Hamid Farea, Omar H. Alhazmi, Kerem Kucuk. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 2, pp. 1525-1545 (21 pages).
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Networks (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which achieves 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on diverse optimization techniques and the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
Keywords: Internet of Things; security; anomaly detection and prevention system; artificial intelligence; optimization techniques
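The GAADPSDNN system described above couples a genetic algorithm with a neural classifier and adaptively selects active features. A minimal, generic sketch of genetic-algorithm feature selection is shown below, with a random-forest fitness function standing in for the deep network; the synthetic dataset, population size, and fitness definition are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Cross-validated accuracy using only the features where mask == 1."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

# Simple genetic algorithm over binary feature masks
pop = rng.integers(0, 2, size=(12, X.shape[1]))
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]                 # keep the fittest half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])                  # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05               # random bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents] + children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```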
17. Artificial intelligence-driven radiomics study in cancer: the role of feature engineering and modeling (cited by 1)
Authors: Yuan-Peng Zhang, Xin-Yun Zhang, Yu-Ting Cheng, Bing Li, Xin-Zhi Teng, Jiang Zhang, Saikit Lam, Ta Zhou, Zong-Rui Ma, Jia-Bao Sheng, Victor CW Tam, Shara WY Lee, Hong Ge, Jing Cai. 《Military Medical Research》 (SCIE, CAS, CSCD), 2024, No. 1, pp. 115-147 (33 pages).
Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we introduce the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Keywords: artificial intelligence; radiomics; feature extraction; feature selection; modeling; interpretability; multimodalities; head and neck cancer
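The review above highlights feature selection and imbalanced datasets as central modeling issues in radiomics. Below is a minimal, generic sketch combining L1-penalised logistic regression for feature selection with class weighting for imbalance; the synthetic data and parameter choices are assumptions for illustration only and do not reproduce any pipeline from the review.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a radiomic feature matrix: many features, few positive cases
X, y = make_classification(n_samples=300, n_features=100, n_informative=8,
                           weights=[0.9, 0.1], random_state=0)

model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear",
                       class_weight="balanced",   # compensate for class imbalance
                       C=0.1),                    # stronger L1 penalty -> sparser model
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs != 0)             # features kept by the L1 penalty
print(f"{selected.size} of {X.shape[1]} features selected:", selected)
```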
18. A Review of Hybrid Cyber Threats Modelling and Detection Using Artificial Intelligence in IIoT (cited by 1)
Authors: Yifan Liu, Shancang Li, Xinheng Wang, Li Xu. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 8, pp. 1233-1261 (29 pages).
The Industrial Internet of Things (IIoT) has brought numerous benefits, such as improved efficiency, smart analytics, and increased automation. However, it also exposes connected devices, users, applications, and the data they generate to cyber security threats that need to be addressed. This work investigates hybrid cyber threats (HCTs), which now operate on an entirely new level with the increasing adoption of the IIoT. The focus is on emerging methods to model, detect, and defend against hybrid cyber attacks using machine learning (ML) techniques. Specifically, a novel ML-based HCT modelling and analysis framework is proposed, in which L1 regularisation and Random Forest are used to cluster features and analyse the importance and impact of each feature in both individual threats and HCTs. A grey relation analysis-based model is employed to construct the correlation between IIoT components and different threats.
Keywords: cyber security; Industrial Internet of Things; artificial intelligence; machine learning algorithms; hybrid cyber threats
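The framework above employs grey relation analysis to correlate IIoT components with different threats. A compact, generic implementation of grey relational analysis (reference sequence versus comparison sequences, with the usual distinguishing coefficient ρ = 0.5) is sketched below; the toy data and normalisation choice are assumptions, not the paper's model.

```python
import numpy as np

def grey_relational_grades(reference: np.ndarray, comparisons: np.ndarray, rho: float = 0.5):
    """Grey relational analysis: grade of each comparison sequence vs. the reference.

    reference   : shape (k,)   - the reference sequence.
    comparisons : shape (m, k) - m comparison sequences.
    """
    def norm(a):
        # Min-max normalise each sequence to [0, 1] to remove scale effects
        return (a - a.min(axis=-1, keepdims=True)) / (np.ptp(a, axis=-1, keepdims=True) + 1e-12)

    delta = np.abs(norm(comparisons) - norm(reference))      # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)        # grey relational coefficients
    return xi.mean(axis=1)                                    # grade = mean coefficient per sequence

# Toy example: how strongly three feature sequences track a threat-indicator sequence
threat = np.array([1.0, 3.0, 2.5, 4.0, 5.0])
features = np.array([[1.1, 2.9, 2.4, 4.2, 5.1],
                     [5.0, 4.0, 3.0, 2.0, 1.0],
                     [2.0, 2.0, 2.0, 2.0, 2.0]])
print(grey_relational_grades(threat, features))
```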
19. The future of artificial hibernation medicine: protection of nerves and organs after spinal cord injury (cited by 1)
Authors: Caiyun Liu, Haixin Yu, Zhengchao Li, Shulian Chen, Xiaoyin Li, Xuyi Chen, Bo Chen. 《Neural Regeneration Research》 (SCIE, CAS, CSCD), 2024, No. 1, pp. 22-28 (7 pages).
Spinal cord injury is a serious disease of the central nervous system involving irreversible nerve injury and various organ system injuries. At present, no effective clinical treatment exists. As one of the artificial hibernation techniques, mild hypothermia has preliminarily confirmed its clinical effect on spinal cord injury. However, its technical defects and barriers, along with serious clinical side effects, restrict its clinical application for spinal cord injury. Artificial hibernation is a future-oriented disruptive technology for human life support. It involves endogenous hibernation inducers and hibernation-related central neuromodulation that activate particular neurons, reduce the central constant-temperature set point, disrupt the normal constant body temperature, make the body adapt to the external cold environment, and reduce the physiological resistance to cold stimulation. Thus, studying the artificial hibernation mechanism may help develop new treatment strategies more suitable for clinical use than the cooling methods of mild hypothermia technology. This review introduces artificial hibernation technologies, including mild hypothermia technology, hibernation inducers, and hibernation-related central neuromodulation technology. It summarizes the relevant research on hypothermia and hibernation for organ and nerve protection. These studies show that artificial hibernation technologies have therapeutic significance for nerve injury after spinal cord injury through inflammatory inhibition, immunosuppression, oxidative defense, and possible central protection. They also promote the repair and protection of the respiratory, digestive, cardiovascular, locomotor, urinary, and endocrine systems. This review provides new insights into the clinical treatment of nerve and multiple-organ protection after spinal cord injury with artificial hibernation. At present, artificial hibernation technology is not mature, and research faces various challenges. Nevertheless, the effort is worthwhile for the future development of medicine.
Keywords: artificial hibernation; central thermostatic-resistant regulation; hypothermia; multi-system protection; neuroprotection; organ protection; spinal cord injury; synthetic torpor
20. Exploration of the Graduate Student Cultivation Mode of Landscape Architecture under the Background of "Artificial Intelligence+X" (cited by 1)
Authors: CAO Yangyang, ZENG Junfeng. 《Journal of Landscape Research》, 2024, No. 1, pp. 67-69, 76 (4 pages).
Under the background of "artificial intelligence+X", the development of the landscape architecture industry is ushering in new opportunities, and the cultivation of professional talent needs to be updated to meet social demand. This paper analyzes the cultivation needs of landscape architecture graduate students in the context of the new era and identifies problems by comparison with the original professional graduate training mode. A new cultivation mode for graduate students in landscape architecture is proposed, including updating the target orientation of the discipline, optimizing the teaching system, building a "dual-teacher" tutor team, and improving integrated "industry-university-research-application" cultivation, so as to cultivate high-quality, well-rounded talent with disciplinary characteristics.
Keywords: artificial intelligence+; landscape architecture; graduate training model; professional talent