Abstract: This paper presents a novel multiclass system designed to detect pleural effusion and pulmonary edema on chest X-ray images, addressing the critical need for early detection in healthcare. A new comprehensive dataset was formed by combining 28,309 samples from the ChestX-ray14, PadChest, and CheXpert databases, with 10,287, 6,022, and 12,000 samples representing pleural effusion, pulmonary edema, and normal cases, respectively. In the preprocessing step, the Contrast Limited Adaptive Histogram Equalization (CLAHE) method is applied to boost the local contrast of the X-ray samples, the images are then resized to 380×380 pixels, and data augmentation is applied. The classification task employs a deep learning model based on the EfficientNet-V1-B4 architecture, trained with the AdamW optimizer. The proposed multiclass system achieved an accuracy (ACC) of 98.3%, recall of 98.3%, precision of 98.7%, and F1-score of 98.7%. Moreover, the robustness of the model was confirmed by Receiver Operating Characteristic (ROC) analysis, which showed an Area Under the Curve (AUC) of 1.00 for the edema and normal classes and 0.99 for effusion. The experimental results demonstrate the superiority of the proposed multiclass system, which has the potential to assist clinicians in timely and accurate diagnosis, leading to improved patient outcomes. Notably, Ablation-CAM visualization at the last convolutional layer provides heat maps on the X-ray images, which will help clinicians interpret and localize abnormalities more effectively.
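The following is a minimal sketch of the pipeline this abstract describes: CLAHE contrast enhancement, resizing to 380×380, and an EfficientNet-B4 classifier trained with AdamW. The CLAHE clipLimit/tileGridSize values, the learning rate, and the use of torchvision's pretrained EfficientNet-B4 are illustrative assumptions, not parameters reported in the paper.

```python
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def preprocess_xray(path: str, size: int = 380) -> torch.Tensor:
    """Load a grayscale chest X-ray, apply CLAHE, resize, and return a 3-channel tensor."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed settings
    img = clahe.apply(img)                                       # local contrast boost
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    img = np.stack([img] * 3, axis=0).astype(np.float32) / 255.0  # replicate to 3 channels
    return torch.from_numpy(img)

# EfficientNet-B4 backbone with a 3-class head (effusion, edema, normal).
model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)

# AdamW optimizer as stated in the abstract; lr and weight decay are assumed values.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-2)
criterion = nn.CrossEntropyLoss()
```

Data augmentation (e.g., random flips or rotations on the resized images) would be applied before batching in a typical training loop; its exact configuration is not specified in the abstract and is therefore omitted here.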
Abstract: In recent years, wearable-device-based Human Activity Recognition (HAR) models have received significant attention. Previously developed HAR models use hand-crafted features to recognize human activities, which yields only basic features. The images captured by wearable sensors contain richer features, allowing them to be analyzed by deep learning algorithms to enhance the detection and recognition of human actions. Poor lighting and limited sensor capabilities can degrade data quality, making the recognition of human actions a challenging task. Unimodal HAR approaches are not suitable for real-time environments. Therefore, an updated HAR model is developed using multiple types of data and an advanced deep learning approach. First, the required signals and sensor data are collected from standard databases, and wave features are extracted from these signals. The extracted wave features and sensor data are then given as input to recognize the human activity. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by combining a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is used to fine-tune the network parameters and enhance the recognition process. An experimental evaluation against various deep learning networks and heuristic algorithms confirms the effectiveness of the proposed HAR model. The EAHO-based HAR model outperforms traditional deep learning networks with an accuracy of 95.36, a recall of 95.25, a specificity of 95.48, and a precision of 95.47. The results show that the developed model recognizes human actions effectively while requiring less time, and that the optimization approach reduces computational complexity and overfitting.
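Below is a minimal PyTorch sketch of the 1DCNN-plus-GRU hybrid this abstract describes. The layer widths, kernel size, input channel count, and the simple attention pooling used as a stand-in for the "attentive" component of AHDAN are assumptions for illustration; the paper's exact AHDAN architecture and the EAHO hyperparameter search are not reproduced here.

```python
import torch
import torch.nn as nn

class CNNGRUHar(nn.Module):
    def __init__(self, in_channels: int = 6, num_classes: int = 6):
        super().__init__()
        # 1D convolutions extract local patterns from raw sensor windows.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Bidirectional GRU models temporal dependencies over the convolutional features.
        self.gru = nn.GRU(128, 64, batch_first=True, bidirectional=True)
        # Simple additive attention over time steps (illustrative stand-in for AHDAN's attention).
        self.attn = nn.Linear(128, 1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, x):                            # x: (batch, channels, time)
        feats = self.cnn(x).transpose(1, 2)          # (batch, time, 128)
        out, _ = self.gru(feats)                     # (batch, time, 128)
        weights = torch.softmax(self.attn(out), dim=1)
        context = (weights * out).sum(dim=1)         # attention-weighted summary, (batch, 128)
        return self.fc(context)

# Example: a batch of 8 windows, 6 sensor channels, 128 time steps.
logits = CNNGRUHar()(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])
```

In the paper's setting, EAHO would tune hyperparameters such as these layer sizes rather than gradient descent alone selecting them; that outer optimization loop is omitted from this sketch.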
Abstract: Based on an optimization method, a new modified GM(1,1) model is presented, which provides more accurate predictions in grey modeling.
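For reference, the sketch below implements the standard GM(1,1) procedure that such a modified model builds on: accumulate the series (AGO), fit the grey development and control coefficients by least squares, and forecast via the inverse AGO. This is the textbook formulation, not the paper's optimization-based variant.

```python
import numpy as np

def gm11_forecast(x0: np.ndarray, steps: int = 1) -> np.ndarray:
    """Fit GM(1,1) to a positive series x0 and forecast `steps` values ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                            # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones(n - 1)])    # design matrix
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # grey development / control coefficients
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])   # inverse AGO restores the original scale
    x0_hat[0] = x0[0]
    return x0_hat[n:]                             # forecasts beyond the observed series

# Example usage on a short, smoothly growing series.
print(gm11_forecast(np.array([2.8, 3.1, 3.4, 3.8, 4.2]), steps=2))
```

A modified GM(1,1) typically adjusts the background-value construction or the initial condition via an optimization criterion; those refinements are the paper's contribution and are not shown here.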