Journal Literature
260,827 articles found
1. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2441-2468, 28 pages.
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis shows that the proposed enhanced random forest (ERF), enhanced decision tree (EDT), and enhanced logistic regression (ELR) models for BC detection outperformed most existing models.
Keywords: autoencoder, breast cancer, deep neural network, convolutional neural network, image processing, machine learning, deep learning
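The encoder-based feature extraction this entry describes can be sketched in a few lines: an autoencoder is trained to reconstruct the input, and its bottleneck activations become compressed features for a downstream classifier. The sketch below is illustrative only; the data, layer sizes, and the nearest-centroid stand-in for the paper's enhanced RF/DT/LR classifiers are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for mammography feature vectors: two classes with shifted means.
X0 = rng.normal(0.0, 1.0, size=(100, 16))
X1 = rng.normal(1.5, 1.0, size=(100, 16))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Single-hidden-layer autoencoder trained by gradient descent to reconstruct X.
n_in, n_hid = X.shape[1], 4
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for _ in range(500):
    H = sigmoid(X @ W1 + b1)          # encoder activations (bottleneck)
    R = H @ W2 + b2                   # linear decoder (reconstruction)
    err = R - X                       # reconstruction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1 - H)   # backprop through the sigmoid
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Compressed bottleneck features feed a downstream classifier
# (a nearest-centroid rule here, standing in for the enhanced models).
Z = sigmoid(X @ W1 + b1)
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

The point of the design is that the classifier sees the 4-dimensional encoded features `Z` rather than the 16 raw inputs.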
2. Feature extraction for machine learning-based intrusion detection in IoT networks
Authors: Mohanad Sarhan, Siamak Layeghy, Nour Moustafa, Marcus Gallagher, Marius Portmann. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 1, pp. 205-216, 12 pages.
A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). The resulting network interruptions and loss of sensitive data have made NIDS improvement an active research area. An analysis of related works shows that most researchers aim to obtain better classification results by applying untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques to NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. This paper therefore aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated on three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model achieves the best scores on all datasets. The optimal number of extracted dimensions is identified for each dataset, and LDA degrades the performance of the ML models on two datasets. Variance is used to analyse the extracted dimensions of LDA and PCA. Finally, the paper concludes that the choice of dataset significantly alters the performance of the applied techniques, and argues that a universal (benchmark) feature set is needed to facilitate further advancement of research in this field.
Keywords: feature extraction, machine learning, network intrusion detection system, IoT
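The "optimal number of extracted dimensions" question the paper raises can be made concrete with a small sketch: PCA via SVD on a synthetic flow-feature matrix, keeping the fewest components that explain a chosen variance fraction, next to LDA's hard cap of (number of classes − 1) dimensions. The data and the 95% threshold are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a NetFlow-style feature matrix (300 records, 10 features).
X = rng.normal(size=(300, 10))
X[:, 3] = 2.0 * X[:, 0] + 0.1 * rng.normal(size=300)  # a correlated column

# PCA via SVD on the centered data; keep the smallest dimensionality that
# explains >= 95% of the variance (one way to pick the number of dimensions).
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95) + 1)
Z = Xc @ Vt[:k].T   # extracted features

# LDA, by contrast, yields at most (n_classes - 1) dimensions, which is one
# reason it can degrade performance in binary-labelled NIDS settings.
n_classes = 2
lda_max_dims = n_classes - 1
```

Because one column is nearly a linear copy of another, PCA needs fewer than 10 components to reach the variance target, while binary LDA is always limited to a single dimension.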
3. ConvNeXt-UperNet-Based Deep Learning Model for Road Extraction from High-Resolution Remote Sensing Images
Authors: Jing Wang, Chen Zhang, Tianwen Lin. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 1907-1925, 19 pages.
When existing deep learning models are used for road extraction from high-resolution images, they are easily affected by noise such as tree and building occlusion and complex backgrounds, resulting in incomplete road extraction and low accuracy. We propose introducing spatial and channel attention modules into the convolutional neural network ConvNeXt. ConvNeXt is then used as the backbone network, cooperating with the perceptual analysis network UPerNet and retaining the semantic segmentation detection head, to build a new model, ConvNeXt-UPerNet, that suppresses noise interference. Training on the open-source DeepGlobe and CHN6-CUG datasets and adding DiceLoss on top of CrossEntropyLoss solves the imbalance between positive and negative samples. Experimental results show that the new model achieves, on the DeepGlobe dataset, 79.40% precision (Pre), 97.93% accuracy (Acc), 69.28% intersection over union (IoU), and 83.56% mean intersection over union (MIoU). On the CHN6-CUG dataset, the model achieves 78.17% Pre, 97.63% Acc, 65.4% IoU, and 81.46% MIoU. Compared with other models, the fused ConvNeXt-UPerNet extracts road information better in the presence of noise in high-resolution remote sensing images. It also unifies multiscale image feature information under one perception, ultimately improving the generalization ability of deep learning for extracting complex roads from high-resolution remote sensing imagery.
Keywords: deep learning, semantic segmentation, remote sensing imagery, road extraction
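The Dice-plus-cross-entropy combination used here is a standard remedy for class imbalance in segmentation: roads occupy few pixels, so plain cross-entropy rewards predicting "background" everywhere, while the Dice term scores overlap directly. A minimal numpy sketch (the mask, probabilities, and equal weights are invented for illustration):

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    """Soft Dice loss for a binary mask: 1 - 2|P∩T| / (|P| + |T|)."""
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

def bce_loss(probs, target, eps=1e-12):
    """Pixelwise binary cross-entropy."""
    p = np.clip(probs, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def combined_loss(probs, target, w_dice=1.0, w_ce=1.0):
    return w_ce * bce_loss(probs, target) + w_dice * dice_loss(probs, target)

# A thin "road" occupying one row of an 8x8 tile: a near-perfect prediction
# scores close to 0, while an inverted prediction is heavily penalized by the
# Dice term even though road pixels are rare.
target = np.zeros((8, 8)); target[4, :] = 1.0
good = np.where(target == 1, 0.99, 0.01)
bad = 1.0 - good
loss_good = combined_loss(good, target)
loss_bad = combined_loss(bad, target)
```

The Dice term is computed over the whole mask at once, so its gradient does not vanish when positives are scarce, which is exactly the imbalance the abstract mentions.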
4. Machine learning with active pharmaceutical ingredient/polymer interaction mechanism: Prediction for complex phase behaviors of pharmaceuticals and formulations
Authors: Kai Ge, Yiping Huang, Yuanhui Ji. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 2, pp. 263-272, 10 pages.
The high-throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients (APIs) with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations. In this work, a machine-learning model efficiently predicts the solubility of APIs in polymers by learning the phase equilibrium principle and using a few molecular descriptors. Under a few-shot learning framework, thermodynamic theory (perturbed-chain statistical associating fluid theory) was used for data augmentation, and computational chemistry was applied to screen molecular descriptors. The results show that the model predicts the API-polymer phase diagram accurately, broadens the solubility data of APIs in polymers, and successfully reproduces the relationship between API solubility and the API-polymer interaction mechanisms, providing efficient guidance for the development of pharmaceutical formulations.
Keywords: multi-task machine learning, density functional theory, hydrogen bond interaction, miscibility, solubility
5. Intelligent Power Grid Load Transferring Based on Safe Action-Correction Reinforcement Learning
Authors: Fuju Zhou, Li Li, Tengfei Jia, Yongchang Yin, Aixiang Shi, Shengrong Xu. Energy Engineering (EI), 2024, Issue 6, pp. 1697-1711, 15 pages.
When a line failure occurs in a power grid, a load transfer is implemented to reconfigure the network by changing the states of tie-switches and load demands. Computation speed is one of the major performance indicators in power grid load transfer, as a fast load transfer model can greatly reduce the economic loss of post-fault power grids. In this study, a reinforcement learning method is developed based on a deep deterministic policy gradient. The tedious training process of the reinforcement learning model can be conducted offline, so the model shows satisfactory performance in real-time operation, indicating that it is suitable for fast load transfer. Considering that the reinforcement learning model performs poorly in satisfying safety constraints, a safe action-correction framework is proposed to modify the learning model. In the framework, the load-shedding action is corrected according to sensitivity analysis results under a small discrete increment so as to match the constraints of line flow limits. Case study results indicate that the proposed method is practical for fast and safe power grid load transfer.
Keywords: load transfer, reinforcement learning, electrical power grid, safety constraints
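The action-correction idea, adjusting a proposed action in small discrete increments guided by sensitivities until line-flow limits hold, can be sketched independently of the RL policy. Everything below (the 3-line system, the sensitivity matrix `S`, flows, limits, and step size) is a made-up linearized example, not the paper's model.

```python
import numpy as np

# Hypothetical 3-line system: line flows respond linearly to load shedding
# through a sensitivity matrix S (flow change per unit of shed load at 2 buses).
S = np.array([[0.5, 0.1],
              [0.2, 0.4],
              [0.1, 0.3]])
base_flow = np.array([1.2, 0.9, 0.5])   # post-fault line flows (p.u.)
limit = np.array([1.0, 1.0, 1.0])       # line flow limits (p.u.)

def correct_action(action, step=0.05, max_iter=100):
    """Increase shedding in small discrete steps at the bus that is most
    effective for the worst-violated line, until all limits are respected."""
    a = action.copy()
    for _ in range(max_iter):
        flow = base_flow - S @ a
        violation = flow - limit
        worst = int(np.argmax(violation))
        if violation[worst] <= 0:        # every line within its limit
            break
        a[int(np.argmax(S[worst]))] += step  # sensitivity picks the bus
    return a

raw_action = np.array([0.0, 0.0])        # the policy proposed no shedding
safe_action = correct_action(raw_action)
final_flow = base_flow - S @ safe_action
```

The correction leaves the learned policy untouched and only post-processes its output, which is why training can stay offline while safety is enforced online.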
6. HybridHR-Net: Action Recognition in Video Sequences Using Optimal Deep Learning Fusion Assisted Framework (Cited: 1)
Authors: Muhammad Naeem Akbar, Seemab Khan, Muhammad Umar Farooq, Majed Alhaisoni, Usman Tariq, Muhammad Usman Akram. Computers, Materials & Continua (SCIE, EI), 2023, Issue 9, pp. 3275-3295, 21 pages.
The combination of spatiotemporal videos and essential features can improve the performance of human action recognition (HAR); however, individual feature types usually degrade performance due to similar actions and complex backgrounds. Deep convolutional neural networks have improved performance in recent years for several computer vision applications thanks to their spatial information. This article proposes a new framework for video-surveillance human action recognition, dubbed HybridHR-Net. The EfficientNet-b0 deep learning model is pre-trained via deep transfer learning on a few selected datasets, and Bayesian optimization is employed to tune the hyperparameters of the fine-tuned deep model. Instead of fully connected layer features, average pooling layer features are considered, and two feature selection techniques are performed: an improved artificial bee colony algorithm and an entropy-based approach. The selected features are combined into a single vector using a serial technique, and the results are categorized by machine learning classifiers. Five publicly accessible datasets were utilized, obtaining notable accuracies of 97%, 98.7%, 100%, 99.7%, and 96.8%, respectively. Additionally, a comparison of the proposed framework with contemporary methods demonstrates the increase in accuracy.
Keywords: action recognition, entropy, deep learning, transfer learning, artificial bee colony, feature fusion
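Two pieces of this pipeline are easy to sketch: entropy-based feature selection (keep the columns whose values are most spread out, estimated by histogram entropy) and serial fusion (concatenate the surviving blocks along the feature axis). The feature blocks, sizes, and the keep-half rule below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def entropy_scores(F, bins=10):
    """Shannon entropy of each feature column, estimated by histogram."""
    scores = []
    for col in F.T:
        p, _ = np.histogram(col, bins=bins)
        p = p[p > 0] / p.sum()
        scores.append(float(-(p * np.log2(p)).sum()))
    return np.array(scores)

def select(F, keep):
    """Keep the `keep` highest-entropy columns, preserving column order."""
    idx = np.argsort(entropy_scores(F))[::-1][:keep]
    return F[:, np.sort(idx)]

# Two pooled deep-feature blocks for the same 50 clips (sizes are made up).
F_a = rng.normal(size=(50, 32))
F_b = rng.normal(size=(50, 24))

# Entropy-select within each block, then fuse serially (concatenate).
fused = np.concatenate([select(F_a, 16), select(F_b, 12)], axis=1)
```

Serial fusion grows the vector length (here 16 + 12 = 28 per clip), which is exactly why the abstract pairs it with a selection step.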
7. Research on Q-Learning-Based Aircraft Taxiing Path Planning
Authors: Wang Xinglong, Wang Ruifeng. Journal of Civil Aviation University of China (CAS), 2024, Issue 3, pp. 28-33, 6 pages.
To address the low accuracy of traditional algorithms in planning aircraft taxiing paths and their inability to plan paths according to overall surface operating conditions, a Q-Learning-based path planning method is proposed. Based on an analysis of the airport flight-area network structure model and the reinforcement learning simulation environment, state and action spaces are defined, and a reward function is designed according to path compliance and reasonableness; the path-reasonableness score is defined as the reciprocal of the product of taxiing path length and the flight area's average taxiing time. Finally, the influence of the action-selection policy parameters on the path planning model is analyzed. Results show that, compared with the A* and Floyd algorithms, Q-Learning-based planning achieves the shortest taxiing distance while avoiding relatively busy areas, yielding a high path-reasonableness score.
Keywords: taxiing path planning, airport flight area, reinforcement learning, Q-learning
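The core mechanism, tabular Q-learning on a taxiway graph where the reward penalizes busy areas so the greedy policy trades distance against congestion, fits in a short sketch. The 5-node graph, penalties, and hyperparameters below are invented; the paper's reward additionally scores full paths by the reciprocal of length times average taxiing time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical taxiway graph as an adjacency list; node 2 is a "busy" area.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
busy_penalty = {2: -5.0}
goal = 4

Q = np.zeros((5, 5))                     # Q[state, next_node]
alpha, gamma, eps = 0.5, 0.9, 0.2

for _ in range(500):
    s = 0
    for _ in range(20):
        nbrs = adj[s]
        if rng.random() < eps:           # epsilon-greedy exploration
            a = nbrs[rng.integers(len(nbrs))]
        else:
            a = max(nbrs, key=lambda n: Q[s, n])
        # Step cost, congestion penalty, and a terminal bonus at the goal.
        r = -1.0 + busy_penalty.get(a, 0.0) + (10.0 if a == goal else 0.0)
        Q[s, a] += alpha * (r + gamma * max(Q[a, n] for n in adj[a]) - Q[s, a])
        s = a
        if s == goal:
            break

# Greedy rollout of the learned policy from the start node.
path, s = [0], 0
while s != goal and len(path) < 10:
    s = max(adj[s], key=lambda n: Q[s, n])
    path.append(s)
```

Routes 0-1-3-4 and 0-2-3-4 have equal length, but the congestion penalty makes the learned policy prefer the route around node 2, mirroring the paper's finding that the planner avoids busy areas at no extra distance.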
8. Reinforcement learning for wind-farm flow control: Current state and future actions (Cited: 1)
Authors: Mahdi Abkar, Navid Zehtabiyan-Rezaie, Alexandros Iosifidis. Theoretical & Applied Mechanics Letters (CAS, CSCD), 2023, Issue 6, pp. 455-464, 10 pages.
Wind-farm flow control stands at the forefront of grand challenges in wind-energy science. The central issue is that current algorithms are based on simplified models and thus fall short of capturing the complex physics of wind farms associated with the high-dimensional nature of turbulence and multiscale wind-farm-atmosphere interactions. Reinforcement learning (RL), a subset of machine learning, has demonstrated its effectiveness in solving high-dimensional problems in various domains, and the studies performed in the last decade show that it can be exploited in developing the next generation of wind-farm flow control algorithms. This review has two main objectives. First, it provides an up-to-date overview of works on wind-farm flow control schemes that utilize RL methods, offering a comprehensive understanding of the advancements made through the application of RL techniques. Second, it sheds light on the obstacles researchers face when implementing RL-based wind-farm flow control, identifying areas requiring further exploration and potential opportunities for future research.
Keywords: wind-farm flow control, turbine wakes, power losses, reinforcement learning, machine learning
9. Deep Learning-Based Action Classification Using One-Shot Object Detection (Cited: 1)
Authors: Hyun Yoo, Seo-El Lee, Kyungyong Chung. Computers, Materials & Continua (SCIE, EI), 2023, Issue 8, pp. 1343-1359, 17 pages.
Deep learning-based action classification technology has been applied to various fields, such as social safety, medical services, and sports. Analyzing actions on a practical level requires tracking multiple human bodies in an image in real time and simultaneously classifying their actions. There are various related studies on real-time classification of actions in an image; however, existing deep learning-based action classification models have slow response speeds, limiting real-time analysis. They also show low per-object action accuracy when multiple objects appear in the image, and they carry a memory overhead in processing image data. Deep learning-based action classification using one-shot object detection is proposed to overcome the limitations of multi-frame-based analysis technology. The proposed method uses a one-shot object detection model and a multi-object tracking algorithm to detect and track multiple objects in the image. A deep learning-based pattern classification model then classifies the body action of each object by reducing its data to an action vector. Compared to existing studies, the constructed model shows higher accuracy of 74.95%, and in terms of speed it performs better than current studies at 0.234 s per frame. The proposed model can classify some actions through action vector learning alone, without additional image learning, because of the vector learning feature of the posterior neural network. It is therefore expected to contribute significantly to commercializing realistic streaming-data analysis technologies, such as CCTV.
Keywords: human action classification, artificial intelligence, deep neural network, pattern analysis, video analysis
10. Machine learning applications in stroke medicine: advancements, challenges, and future prospectives (Cited: 2)
Authors: Mario Daidone, Sergio Ferrantelli, Antonino Tuttolomondo. Neural Regeneration Research (SCIE, CAS, CSCD), 2024, Issue 4, pp. 769-773, 5 pages.
Stroke is a leading cause of disability and mortality worldwide, necessitating the development of advanced technologies to improve its diagnosis, treatment, and patient outcomes. In recent years, machine learning techniques have emerged as promising tools in stroke medicine, enabling efficient analysis of large-scale datasets and facilitating personalized and precision medicine approaches. This review provides a comprehensive overview of machine learning's applications, challenges, and future directions in stroke medicine. Recently introduced machine learning algorithms have been extensively employed in all fields of stroke medicine, and machine learning models have demonstrated remarkable accuracy in imaging analysis, diagnosing stroke subtypes, risk stratification, guiding medical treatment, and predicting patient prognosis. Despite this tremendous potential, several challenges must be addressed, including the need for standardized and interoperable data collection, robust model validation and generalization, and the ethical considerations surrounding privacy and bias. In addition, integrating machine learning models into clinical workflows and establishing regulatory frameworks are critical for ensuring their widespread adoption and impact in routine stroke care. Machine learning promises to revolutionize stroke medicine by enabling precise diagnosis, tailored treatment selection, and improved prognostication. Continued research and collaboration among clinicians, researchers, and technologists are essential for overcoming challenges and realizing the full potential of machine learning in stroke care, ultimately leading to enhanced patient outcomes and quality of life. This review summarizes the current implications of machine learning in stroke diagnosis, treatment, and prognostic evaluation, and explores the future perspectives these techniques can provide in combating this disabling disease.
Keywords: cerebrovascular disease, deep learning, machine learning, reinforcement learning, stroke, stroke therapy, supervised learning, unsupervised learning
11. Action Recognition and Detection Based on Deep Learning: A Comprehensive Summary
Authors: Yong Li, Qiming Liang, Bo Gan, Xiaolong Cui. Computers, Materials & Continua (SCIE, EI), 2023, Issue 10, pp. 1-23, 23 pages.
Action recognition and detection is an important research topic in computer vision that can be divided into action recognition and action detection. At present the distinction between the two is not clear, and the relevant reviews are not comprehensive. This paper therefore summarizes deep learning-based action recognition and detection methods and datasets to accurately present the research status in this field. First, according to how temporal and spatial features are extracted, commonly used action recognition models are divided by architecture into two-stream models, temporal models, spatiotemporal models, and transformer models. The characteristics of the four model families are briefly analyzed, and the accuracy of various algorithms on common datasets is reported. Then, from the perspective of the task to be completed, action detection is further divided into temporal action detection and spatiotemporal action detection, and commonly used datasets are introduced. Algorithms for temporal action detection are reviewed from the two-stage and one-stage perspectives, and algorithms for spatiotemporal action detection are summarized in detail. Finally, the relationship between the different parts of action recognition and detection is discussed, the difficulties faced by current research are summarized, and future development is anticipated.
Keywords: action recognition, action detection, deep learning, convolutional neural networks, dataset
12. Enhancing Iterative Learning Control With Fractional Power Update Law (Cited: 1)
Authors: Zihan Li, Dong Shen, Xinghuo Yu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2023, Issue 5, pp. 1137-1149, 13 pages.
The P-type update law has been the mainstream technique in iterative learning control (ILC) systems; it resembles linear feedback control with asymptotic convergence. In recent years, finite-time control strategies such as terminal sliding mode control have been shown to ramp up convergence speed by introducing fractional powers into the feedback. In this paper, we show that this mechanism can equally ramp up the learning speed in ILC systems. We first propose a fractional power update rule for ILC of single-input-single-output linear systems. A nonlinear error dynamics is constructed along the iteration axis to illustrate the evolutionary converging process. Using a nonlinear mapping approach, fast convergence towards the limit cycles of tracking errors inherently existing in ILC systems is proven, and the limit cycles are shown to be tunable to determine the steady states. Numerical simulations verify the theoretical results.
Keywords: asymptotic convergence, convergence rate, finite-iteration tracking, fractional power learning rule, limit cycles
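The contrast between the P-type law and a fractional power law can be seen on a toy system. Assuming the update takes the form u_{k+1} = u_k + beta * |e_k|^alpha * sign(e_k) with 0 < alpha < 1 (versus the P-type u_{k+1} = u_k + L * e_k), a static SISO plant already exhibits the paper's qualitative behavior: faster early error reduction, followed by a non-vanishing limit cycle. The plant, gains, and trajectory below are invented for illustration.

```python
import numpy as np

T = 50
t = np.linspace(0, 1, T)
y_ref = np.sin(2 * np.pi * t)            # desired trajectory

def plant(u):
    """Toy static SISO map y = 0.8*u, standing in for the controlled system."""
    return 0.8 * u

def run_ilc(update, iters=30):
    """Iterate an ILC law along the iteration axis; record max tracking error."""
    u = np.zeros(T)
    errs = []
    for _ in range(iters):
        e = y_ref - plant(u)
        errs.append(float(np.max(np.abs(e))))
        u = u + update(e)
    return errs

p_type = run_ilc(lambda e: 0.5 * e)                                  # P-type law
frac = run_ilc(lambda e: 0.5 * np.abs(e) ** 0.5 * np.sign(e))        # fractional
```

The P-type error contracts geometrically (factor 1 - 0.8*0.5 = 0.6 per iteration) all the way to zero, while the fractional law shrinks large errors faster but overshoots near zero and settles into a small oscillation, the tunable limit cycle the abstract describes.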
13. Internet of things intrusion detection model and algorithm based on cloud computing and multi-feature extraction extreme learning machine (Cited: 1)
Authors: Haifeng Lin, Qilin Xue, Jiayin Feng, Di Bai. Digital Communications and Networks (SCIE, CSCD), 2023, Issue 1, pp. 111-124, 14 pages.
With the rapid development of the Internet of Things (IoT), several security challenges have arisen in IoT applications. Compared with the traditional Internet, the IoT has many problems, such as large numbers of assets, complex and diverse structures, and a lack of computing resources, so traditional network intrusion detection systems cannot meet the security needs of IoT applications. In view of this situation, this study applies cloud computing and machine learning to IoT intrusion detection to improve detection performance. Traditional intrusion detection algorithms usually require considerable training time and are not suitable for cloud computing because of the limited computing power and storage capacity of cloud nodes; it is therefore necessary to study intrusion detection algorithms with low weights, short training times, and high detection accuracy for deployment on cloud nodes. An appropriate classification algorithm is a primary factor in deploying cloud computing intrusion prevention systems and a prerequisite for responding to intrusions and reducing intrusion threats. This paper discusses problems related to IoT intrusion prevention in cloud computing environments. Based on an analysis of cloud computing security threats, the study explores IoT intrusion detection, cloud node monitoring, and intrusion response in cloud environments using cloud computing, an improved extreme learning machine, and other methods. We use the Multi-Feature Extraction Extreme Learning Machine (MFE-ELM) algorithm, which adds a multi-feature extraction process on cloud servers, and deploy it on cloud nodes to detect and discover network intrusions against those nodes. In simulation experiments, a classical intrusion detection dataset is used as a test, with steps including data preprocessing, feature engineering, model training, and result analysis. The results show that the proposed algorithm can effectively detect and identify most network data packets with good model performance and achieve efficient intrusion detection for heterogeneous IoT data from cloud nodes. Furthermore, it enables the cloud server to discover nodes with serious security threats in the cloud cluster in real time, so that further protection measures can be taken to obtain the optimal intrusion response strategy for the cluster.
Keywords: Internet of Things, cloud computing, intrusion prevention, intrusion detection, extreme learning machine
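The extreme learning machine at the core of MFE-ELM is appealing for resource-limited cloud nodes precisely because of how it trains: the hidden layer is random and fixed, and only the output weights are solved, in closed form, by least squares. A minimal sketch of a plain ELM (the data and sizes are invented; the paper's multi-feature extraction stage is not modeled here):

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearly separable toy "traffic" data: two classes of 2-D feature vectors.
X = np.vstack([rng.normal(-2, 1, (80, 2)), rng.normal(2, 1, (80, 2))])
y = np.array([0] * 80 + [1] * 80)

# Extreme learning machine: random, untrained hidden layer...
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    return np.tanh(X @ W + b)

# ...and output weights solved in one least-squares step (no iterative
# training, hence the short training time the abstract emphasizes).
H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y.astype(float), rcond=None)
pred = (hidden(X) @ beta > 0.5).astype(int)
train_acc = float((pred == y).mean())
```

The single `lstsq` call replaces the entire backpropagation loop of a conventional network, which is the trade-off that makes ELM-family models lightweight enough for cloud-node deployment.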
14. Research on an Improved Q-Learning Path Planning Algorithm
Authors: Song Lijun, Zhou Ziyu, Li Yunlong, Hou Jiajie, He Xing. Journal of Chinese Computer Systems (CSCD, Peking University Core), 2024, Issue 4, pp. 823-829, 7 pages.
To address Q-Learning's low learning efficiency, slow convergence, and poor path planning in environments with dynamic obstacles, this paper proposes an improved Q-Learning path planning algorithm for mobile robots. The algorithm introduces an exploration factor based on probability mutation to balance exploration and exploitation and speed up learning; a deep learning factor is designed into the update function to guarantee the exploration probability; a genetic algorithm is fused in to avoid local path optima while exploring the optimal number of iteration steps stage by stage, reducing repeated exploration of dynamic maps; finally, key nodes of the output optimal path are smoothed with Bezier curves to further ensure path smoothness and feasibility. Maps are built with the grid method, and comparative experiments show that the improved algorithm substantially outperforms the traditional algorithm in both iteration count and path quality, and handles path planning on dynamic maps well, verifying the effectiveness and practicality of the proposed method.
Keywords: mobile robot, path planning, Q-learning algorithm, smoothing, dynamic obstacle avoidance
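The final smoothing step is standard: the sharp-cornered grid path's key nodes become control points of a Bezier curve, which starts at the first point and ends at the last. A minimal de Casteljau sketch (the waypoints are hypothetical; the paper may smooth per-segment rather than with one global curve):

```python
import numpy as np

def bezier(points, n=50):
    """Evaluate a Bezier curve from its control points via de Casteljau."""
    ts = np.linspace(0.0, 1.0, n)
    out = []
    for t in ts:
        P = np.asarray(points, dtype=float)
        while len(P) > 1:                      # repeated linear interpolation
            P = (1 - t) * P[:-1] + t * P[1:]
        out.append(P[0])
    return np.array(out)

# Key nodes of a grid-planned path with two 90-degree turns (made-up values).
waypoints = [(0, 0), (2, 0), (2, 2), (4, 2)]
curve = bezier(waypoints)
```

The curve interpolates the endpoints exactly and only approximates the interior waypoints, trading exact corner visits for a continuously differentiable path a wheeled robot can follow.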
15. Two-Stream Deep Learning Architecture-Based Human Action Recognition
Authors: Faheem Shehzad, Muhammad Attique Khan, Muhammad Asfand E. Yar, Muhammad Sharif, Majed Alhaisoni, Usman Tariq, Arnab Majumdar, Orawit Thinnukool. Computers, Materials & Continua (SCIE, EI), 2023, Issue 3, pp. 5931-5949, 19 pages.
Human action recognition (HAR) based on artificial intelligence reasoning is an important research area in computer vision. Big breakthroughs in this field have been observed in the last few years, and research interest keeps evolving in areas such as understanding of actions and scenes, studying human joints, and human posture recognition. Many HAR techniques have been introduced in the literature; nonetheless, redundant and irrelevant features reduce recognition accuracy, and further challenges include differing perspectives, environmental conditions, and temporal variations. In this work, a framework based on deep learning and an improved whale optimization algorithm is proposed for HAR. The framework consists of a few core stages: initial frame preprocessing, fine-tuning pre-trained deep learning models through transfer learning (TL), feature fusion using a modified serial-based approach, and improved whale optimization-based best-feature selection for final classification. Two pre-trained deep learning models, InceptionV3 and ResNet101, are fine-tuned and trained on action recognition datasets via TL. Because the fusion process increases the length of the feature vectors, an improved whale optimization algorithm is proposed to select the best features, which are finally classified using machine learning (ML) classifiers. Four publicly accessible datasets, UT-Interaction, Hollywood, Free Viewpoint Action Recognition using Motion History Volumes (IXMAS), and UCF Sports, are employed, achieving testing accuracies of 100%, 99.9%, 99.1%, and 100%, respectively. Comparison with state-of-the-art (SOTA) techniques shows the improved accuracy of the proposed method.
Keywords: human action recognition, deep learning, transfer learning, fusion of multiple features, feature optimization
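Whale optimization, the metaheuristic behind this paper's feature selector, maintains a population of candidate solutions that alternately encircle and spiral around the current best. The sketch below is a simplified continuous version on a test function (it omits the canonical random-agent search phase and the paper's improvements; feature-selection use would binarize the positions into keep/drop masks):

```python
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    """Test objective: sum of squares, minimized at the origin."""
    return float(np.sum(x**2))

def woa(f, dim=5, n=20, iters=100, lb=-5.0, ub=5.0):
    """Simplified whale optimization: shrink-encircle or spiral around the best."""
    X = rng.uniform(lb, ub, (n, dim))
    best = min(X, key=f).copy()
    for it in range(iters):
        a = 2.0 - 2.0 * it / iters               # linearly decreasing coefficient
        for i in range(n):
            r = rng.random(dim)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:
                X[i] = best - A * np.abs(C * best - X[i])        # encircle/shrink
            else:
                l = rng.uniform(-1, 1)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best  # spiral move
            X[i] = np.clip(X[i], lb, ub)
            if f(X[i]) < f(best):
                best = X[i].copy()
    return best, f(best)

best, best_val = woa(sphere)
```

As the coefficient `a` decays, moves shrink toward the best agent, shifting the search from exploration to exploitation.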
16. Recognition of Human Actions through Speech or Voice Using Machine Learning Techniques
Authors: Oscar Peña-Cáceres, Henry Silva-Marchan, Manuela Albert, Miriam Gil. Computers, Materials & Continua (SCIE, EI), 2023, Issue 11, pp. 1873-1891, 19 pages.
The development of artificial intelligence (AI) and smart home technologies has driven the need for speech recognition-based solutions. This demand stems from the quest for more intuitive and natural interaction between users and smart devices in their homes. Speech recognition allows users to control devices and perform everyday actions through spoken commands, eliminating the need for physical interfaces or touch screens and enabling specific tasks such as turning the light on or off, controlling the heating, or lowering the blinds. The purpose of this study is to develop a speech-based classification model for recognizing human actions in the smart home and to demonstrate the effectiveness and feasibility of using machine learning techniques to predict categories, subcategories, and actions from sentences. A dataset labeled with categories, subcategories, and actions related to human actions in the smart home is used. The methodology applies machine learning techniques implemented in Python, extracting features with CountVectorizer to convert sentences into numerical representations. The results show that the classification model accurately predicts categories, subcategories, and actions from sentences, with 82.99% accuracy for category, 76.19% for subcategory, and 90.28% for action. The study concludes that machine learning techniques are effective for recognizing and classifying human actions in the smart home, supporting their feasibility in various scenarios and opening new possibilities for advanced natural language processing systems in AI and smart homes.
Keywords: AI, machine learning, smart home, human action recognition
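The feature step here is bag-of-words counting: each sentence becomes a vector of token counts over a vocabulary, and any standard classifier can work on those vectors. A dependency-free sketch with invented commands and a cosine-similarity nearest-neighbor rule standing in for the paper's classifiers (the counting mirrors what scikit-learn's CountVectorizer produces):

```python
from collections import Counter
import math

# Tiny labeled corpus of smart-home commands (invented examples).
data = [
    ("turn on the light", "light_on"),
    ("switch on the lamp", "light_on"),
    ("turn off the light", "light_off"),
    ("lower the blinds", "blinds_down"),
    ("raise the blinds please", "blinds_up"),
]

vocab = sorted({w for s, _ in data for w in s.split()})

def vectorize(sentence):
    """Bag-of-words count vector over the training vocabulary."""
    c = Counter(sentence.split())
    return [c.get(w, 0) for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)); nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(sentence):
    """Label of the most similar training sentence."""
    x = vectorize(sentence)
    return max(data, key=lambda item: cosine(x, vectorize(item[0])))[1]

label = predict("please switch on the light")
```

Words outside the training vocabulary simply contribute nothing to the vector, which is also how CountVectorizer behaves at transform time.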
Feature Fusion-Based Deep Learning Network to Recognize Table Tennis Actions
17
作者 Chih-Ta Yen Tz-Yun Chen +1 位作者 Un-Hung Chen Guo-Chang WangZong-Xian Chen 《Computers, Materials & Continua》 SCIE EI 2023年第1期83-99,共17页
A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study.The wearable device consisted of a six-axis sensor,Raspberry Pi 3,and a power bank.M... A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study.The wearable device consisted of a six-axis sensor,Raspberry Pi 3,and a power bank.Multiple kernel sizes were used in convolutional neural network(CNN)to evaluate their performance for extracting features.Moreover,a multiscale CNN with two kernel sizes was used to perform feature fusion at different scales in a concatenated manner.The CNN achieved recognition of the four table tennis strokes.Experimental data were obtained from20 research participants who wore sensors on the back of their hands while performing the four table tennis strokes in a laboratory environment.The data were collected to verify the performance of the proposed models for wearable devices.Finally,the sensor and multi-scale CNN designed in this study achieved accuracy and F1 scores of 99.58%and 99.16%,respectively,for the four strokes.The accuracy for five-fold cross validation was 99.87%.This result also shows that the multi-scale convolutional neural network has better robustness after fivefold cross validation. 展开更多
Keywords: wearable devices; deep learning; six-axis sensor; feature fusion; multi-scale convolutional neural networks; action recognition
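The multiscale fusion idea — running the same sensor trace through convolutions with two kernel sizes and concatenating the resulting feature maps — can be sketched with plain NumPy. The kernel values and the synthetic one-channel signal below are illustrative; the paper's network also includes learned weights, pooling, and dense layers:

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """1-D valid convolution of a single-channel signal."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def multiscale_features(signal, kernels):
    """Convolve the signal at each scale, then concatenate the feature maps."""
    return np.concatenate([conv1d_valid(signal, k) for k in kernels])

# Toy stand-in for one channel of the six-axis sensor trace.
signal = np.sin(np.linspace(0, 4 * np.pi, 32))

# Two kernel sizes, mirroring the paper's two-scale design (weights are illustrative).
small_kernel = np.array([1.0, -1.0, 1.0])             # size 3
large_kernel = np.array([0.5, 0.5, -0.5, -0.5, 1.0])  # size 5

fused = multiscale_features(signal, [small_kernel, large_kernel])
# Feature map lengths: (32-3+1) + (32-5+1) = 30 + 28 = 58
print(fused.shape)
```

The concatenated vector is what a downstream classifier would consume; smaller kernels respond to sharp local changes in the stroke, larger ones to slower motion components.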
Deep Learning-Based Semantic Feature Extraction: A Literature Review and Future Directions (Cited: 1)
18
Authors: DENG Letian, ZHAO Yanru 《ZTE Communications》 2023, No. 2, pp. 11-17 (7 pages)
Semantic communication, as a critical component of artificial intelligence (AI), has gained increasing attention in recent years due to its significant impact on various fields. In this paper, we focus on the applications of semantic feature extraction, a key step in semantic communication, in several areas of artificial intelligence, including natural language processing, medical imaging, remote sensing, autonomous driving, and other image-related applications. Specifically, we discuss how semantic feature extraction can enhance the accuracy and efficiency of natural language processing tasks, such as text classification, sentiment analysis, and topic modeling. In the medical imaging field, we explore how semantic feature extraction can be used for disease diagnosis, drug development, and treatment planning. In addition, we investigate the applications of semantic feature extraction in remote sensing and autonomous driving, where it can facilitate object detection, scene understanding, and other tasks. By providing an overview of the applications of semantic feature extraction in various fields, this paper aims to provide insights into the potential of this technology to advance the development of artificial intelligence.
Keywords: semantic feature extraction; semantic communication; deep learning
Deep Learning Accelerates the Discovery of Two-Dimensional Catalysts for Hydrogen Evolution Reaction (Cited: 1)
19
Authors: Sicheng Wu, Zhilong Wang, Haikuo Zhang, Junfei Cai, Jinjin Li 《Energy & Environmental Materials》 SCIE EI CAS CSCD 2023, No. 1, pp. 138-144 (7 pages)
Two-dimensional materials with active sites are expected to replace platinum as large-scale hydrogen production catalysts. However, the rapid discovery of excellent two-dimensional hydrogen evolution reaction catalysts is seriously hindered by the long experiment cycle and the huge cost of high-throughput calculations of adsorption energies. Considering that traditional regression models cannot consider all the potential sites on the surface of catalysts, we use a deep learning method with crystal graph convolutional neural networks to accelerate the discovery of high-performance two-dimensional hydrogen evolution reaction catalysts from a two-dimensional materials database, with a prediction accuracy as high as 95.2%. The proposed method considers all active sites, screens out 38 high-performance catalysts from 6,531 two-dimensional materials, predicts their adsorption energies at different active sites, and determines the potential strongest adsorption sites. The prediction accuracy of the two-dimensional hydrogen evolution reaction catalyst screening strategy proposed in this work is at the density-functional-theory level, but the prediction speed is 10.19 years ahead of high-throughput screening, demonstrating the capability of the crystal graph convolutional neural network deep learning method for efficiently discovering high-performance new structures over a wide catalytic materials space.
Keywords: crystal graph convolutional neural network; deep learning; hydrogen evolution reaction; two-dimensional (2D) material
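The screening logic this abstract describes — predict the hydrogen adsorption energy at every active site of each candidate, take the strongest-binding (most negative) site, and keep materials whose value lies near the optimum — can be sketched independently of the crystal graph network itself. The material names, per-site energies, and threshold below are illustrative placeholders, not values from the paper:

```python
# Hypothetical per-site adsorption energies (eV); in the paper these come
# from a crystal graph convolutional neural network, not hand-written numbers.
predictions = {
    "material_A": {"site_1": -0.35, "site_2": 0.42},
    "material_B": {"site_1": 0.05, "site_2": -0.08},
    "material_C": {"site_1": 1.10, "site_2": 0.95},
}

THRESHOLD = 0.2  # window around the thermoneutral optimum (illustrative choice)

def strongest_site(sites):
    """The most negative adsorption energy marks the strongest-binding site."""
    return min(sites, key=sites.get)

def screen(preds, threshold):
    """Keep materials whose strongest site falls within the energy window."""
    hits = {}
    for material, sites in preds.items():
        site = strongest_site(sites)
        if abs(sites[site]) <= threshold:
            hits[material] = (site, sites[site])
    return hits

print(screen(predictions, THRESHOLD))  # only material_B passes: site_2 at -0.08 eV
```

Evaluating every site per material is exactly what makes this approach differ from regression models tied to a single descriptor per structure.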
Significant risk factors for intensive care unit-acquired weakness: A processing strategy based on repeated machine learning (Cited: 5)
20
Authors: Ling Wang, Deng-Yan Long 《World Journal of Clinical Cases》 SCIE 2024, No. 7, pp. 1235-1242 (8 pages)
BACKGROUND: Intensive care unit-acquired weakness (ICU-AW) is a common complication that significantly impacts the patient's recovery process, even leading to adverse outcomes. Currently, there is a lack of effective preventive measures. AIM: To identify significant risk factors for ICU-AW through iterative machine learning techniques and offer recommendations for its prevention and treatment. METHODS: Patients were categorized into ICU-AW and non-ICU-AW groups on the 14th day post-ICU admission. Relevant data from the initial 14 days of ICU stay, such as age, comorbidities, sedative dosage, vasopressor dosage, duration of mechanical ventilation, length of ICU stay, and rehabilitation therapy, were gathered. The relationships between these variables and ICU-AW were examined. Utilizing iterative machine learning techniques, a multilayer perceptron neural network model was developed, and its predictive performance for ICU-AW was assessed using the receiver operating characteristic curve. RESULTS: Within the ICU-AW group, age, duration of mechanical ventilation, lorazepam dosage, adrenaline dosage, and length of ICU stay were significantly higher than in the non-ICU-AW group. Additionally, the ratios of sepsis, multiple organ dysfunction syndrome, hypoalbuminemia, acute heart failure, respiratory failure, acute kidney injury, anemia, stress-related gastrointestinal bleeding, shock, hypertension, coronary artery disease, malignant tumors, and rehabilitation therapy were significantly higher in the ICU-AW group, demonstrating statistical significance. The most influential factors contributing to ICU-AW were identified as the length of ICU stay (100.0%) and the duration of mechanical ventilation (54.9%). The neural network model predicted ICU-AW with an area under the curve of 0.941, sensitivity of 92.2%, and specificity of 82.7%. CONCLUSION: The main factors influencing ICU-AW are the length of ICU stay and the duration of mechanical ventilation. A primary preventive strategy, when feasible, involves minimizing both ICU stay and mechanical ventilation duration.
Keywords: Intensive care unit-acquired weakness; Risk factors; Machine learning; Prevention strategies
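The modeling step this abstract describes — a multilayer perceptron trained on clinical variables and evaluated with the area under the ROC curve — can be sketched with scikit-learn. The synthetic features below merely stand in for the study's real variables (ICU stay, ventilation duration, age, and so on), so the AUC obtained here says nothing about the reported 0.941:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for the clinical features; the fake label is made to
# depend mainly on length of stay and ventilation duration, echoing the findings.
n = 400
icu_stay = rng.uniform(1, 30, n)     # days
ventilation = rng.uniform(0, 20, n)  # days
age = rng.uniform(30, 90, n)         # years
risk = 0.15 * icu_stay + 0.2 * ventilation + rng.normal(0, 1.5, n)
y = (risk > np.median(risk)).astype(int)  # 1 = ICU-AW (synthetic label)

X = np.column_stack([icu_stay, ventilation, age])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling before the MLP keeps the day/year-scaled features comparable.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
clf.fit(X_train, y_train)

auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC on synthetic data: {auc:.3f}")
```

Sensitivity and specificity at a chosen probability threshold would come from the same predicted probabilities, via a confusion matrix on the held-out set.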