The intrinsic heterogeneity of metabolic dysfunction-associated fatty liver disease (MASLD) and its intricate pathogenesis have impeded the advancement and clinical implementation of therapeutic interventions, underscoring the critical demand for novel treatments. A recent publication by Li et al proposes mesenchymal stem cells as promising effectors for the treatment of MASLD. This editorial is a continuation of the article published by Jiang et al and focuses on strategies to enhance the functionality of mesenchymal stem cells and thereby improve efficacy in treating MASLD, including physical pretreatment, drug or chemical pretreatment, pretreatment with bioactive substances, and genetic engineering.
Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells, and this workflow requires considerable experience and skill from doctors and technicians. In our study, we focused on the 3D dose prediction problem in radiotherapy by applying a deep-learning approach to computed tomography (CT) images of cancer patients. Medical image data has more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We proposed four strategies to examine our hypothesis from different aspects of applying data preprocessing and augmentation. In these strategies, we trained a custom convolutional neural network whose structure is inspired by the U-net, with residual blocks also applied to the architecture. The output of the network is passed through a rectified linear unit (ReLU) for each pixel to ensure there are no negative values, which would be physically meaningless for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, which was collected from head and neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is rational when evaluated in terms of the Dose-score and the Dose-volume histogram score (DVH-score). In the best training cases, the Dose-score is 3.08 and the DVH-score is 1.78. In addition, we also conducted a comparison with the results of another study in the same context of using the loss function.
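The non-negativity step described above, a ReLU applied to every output voxel so that no predicted dose is negative, can be sketched as follows; the array below is a hypothetical stand-in for raw network output, not the paper's model:

```python
import numpy as np

def clamp_dose(raw_dose: np.ndarray) -> np.ndarray:
    """Element-wise ReLU: negative predicted doses are physically meaningless,
    so they are clipped to zero while positive values pass through unchanged."""
    return np.maximum(raw_dose, 0.0)

# Hypothetical raw output of a 3D dose-prediction network (values in Gy).
raw = np.array([[-0.3, 1.2],
                [2.5, -0.1]])
dose = clamp_dose(raw)
```

In a full model this operation would be the final layer of the network, applied identically to every voxel of the 3D dose volume.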
Network intrusion detection systems need to be updated due to the rise in cyber threats. In order to improve detection accuracy, this research presents a strong strategy that makes use of a stacked ensemble method, which combines the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research follows a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract useful attributes from network traffic data, which are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps circumvent the drawbacks of individual models. The model's evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating its efficacy in identifying malicious activity. Cross-validation is used to make sure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for the implementation of flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has considerable potential for improving cyberattack prevention, lowering the likelihood of successful attacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
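The stacking scheme described above, base models whose predictions feed a Logistic Regression meta-learner, can be illustrated with a small numpy sketch; the base models and data here are toy stand-ins, not the study's classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "network traffic" data: two features, label depends on their sum.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Stand-ins for base models (DT, KNN, NB, ...): each maps features to a probability.
base_models = [
    lambda A: 1 / (1 + np.exp(-3 * A[:, 0])),          # leans on feature 0
    lambda A: 1 / (1 + np.exp(-3 * A[:, 1])),          # leans on feature 1
    lambda A: 1 / (1 + np.exp(-(A[:, 0] + A[:, 1]))),  # uses both
]

# Level-1 design matrix: base-model predictions become meta-features.
Z = np.column_stack([m(X) for m in base_models])

# Logistic-regression meta-learner fitted by plain gradient descent.
w, b = np.zeros(Z.shape[1]), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(Z @ w + b)))
    w -= 0.5 * (Z.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

pred = (1 / (1 + np.exp(-(Z @ w + b))) > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The meta-learner effectively learns how much to trust each base model; in practice the level-1 matrix should be built from out-of-fold predictions to avoid leakage.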
In order to reduce the risk of non-performing loans and associated losses, and to improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model with a 1DCNN-attention network and enhanced preprocessing techniques is proposed for loan approval prediction. Our proposed model consists of enhanced data preprocessing and the stacking of multiple hybrid modules. Initially, the enhanced preprocessing combines standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV), and principal component analysis (PCA), which not only eliminates the effects of data jitter and class imbalance but also removes redundant features while improving the representation of features. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. Our proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
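The imbalance-handling part of the preprocessing above can be sketched as standardization plus a SMOTE-style interpolation step. Note that real SMOTE interpolates toward one of a point's k nearest minority neighbours; this sketch picks a random minority partner, and the data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Imbalanced toy "loan" data: 90 approved (class 0) vs 10 risky (class 1).
X = np.vstack([rng.normal(0, 1, (90, 3)), rng.normal(2, 1, (10, 3))])
y = np.array([0] * 90 + [1] * 10)

# Standardization: zero mean, unit variance per feature.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# SMOTE-style oversampling: synthesize minority samples by interpolating
# between a minority point and another randomly chosen minority point.
minority = X_std[y == 1]
needed = (y == 0).sum() - (y == 1).sum()
synthetic = []
for _ in range(needed):
    i, j = rng.choice(len(minority), size=2, replace=False)
    lam = rng.random()
    synthetic.append(minority[i] + lam * (minority[j] - minority[i]))

X_bal = np.vstack([X_std, np.array(synthetic)])
y_bal = np.concatenate([y, np.ones(needed)])
```

After this step both classes contribute equally to training, which is the "non-equilibrium" effect the abstract refers to.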
Cardiac diseases are one of the greatest global health challenges. Due to the high annual mortality rates, cardiac diseases have attracted the attention of numerous researchers in recent years. This article proposes a hybrid fuzzy fusion classification model for cardiac arrhythmia diseases. The fusion model is utilized to optimally select the highest-ranked features generated by a variety of well-known feature-selection algorithms. An ensemble of classifiers is then applied to the fusion's results. The proposed model classifies the arrhythmia dataset from the University of California, Irvine into normal/abnormal classes as well as 16 classes of arrhythmia. Initially, in the preprocessing steps, for attributes with missing values we used the class-wise average for the linear attributes and the most frequent value for nominal attributes. In addition, to ensure model optimality, we eliminated all attributes with zero or constant values that might bias the results of the utilized classifiers. The preprocessing step retained 161 out of 279 attributes (features). Thereafter, a fuzzy-based feature-selection fusion method is applied to fuse high-ranked features obtained from different heuristic feature-selection algorithms. In short, our study comprises three main blocks: (1) sensing data and preprocessing; (2) feature queuing, selection, and extraction; and (3) the predictive model. Our proposed method improves classification performance in terms of accuracy, F1-measure, recall, and precision when compared to state-of-the-art techniques. It achieves 98.5% accuracy for the binary class mode and 98.9% accuracy for the categorized class mode.
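The class-wise imputation and constant-attribute removal described above can be sketched for the numeric case (nominal-mode imputation is analogous); the matrix is a toy example, not the UCI arrhythmia data:

```python
import numpy as np

# Toy attribute matrix with NaNs marking missing numeric values.
X = np.array([
    [1.0, np.nan, 5.0],
    [3.0, 4.0,    5.0],
    [np.nan, 8.0, 5.0],
    [7.0, 6.0,    5.0],
])
cls = np.array([0, 0, 1, 1])  # class label of each row

# Impute each missing value with the mean of the same column within the same class.
X_imp = X.copy()
for c in np.unique(cls):
    block = X_imp[cls == c]              # fancy indexing returns a copy
    col_means = np.nanmean(block, axis=0)
    nan_rows, nan_cols = np.where(np.isnan(block))
    block[nan_rows, nan_cols] = col_means[nan_cols]
    X_imp[cls == c] = block              # write the imputed block back

# Drop zero-variance (constant) columns, which would only bias the classifiers.
keep = X_imp.std(axis=0) > 0
X_clean = X_imp[:, keep]
```

Here the missing value in class 0 becomes the class-0 column mean (4.0), the one in class 1 becomes 7.0, and the constant third column is removed.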
IoT usage in healthcare is one of the fastest growing domains worldwide, applying to every age group. The Internet of Medical Things (IoMT) bridges the gap between the medical and IoT fields, allowing medical devices to communicate with each other through a wireless communication network. Advancements in IoMT make human lives easier and better. This paper provides a comprehensive, detailed literature survey investigating different IoMT-driven applications, methodologies, and techniques to ensure the sustainability of IoMT-driven systems. The limitations of existing IoMT frameworks are also analyzed concerning their applicability in real-time systems or applications. In addition, various issues (gaps), challenges, and needs in the context of such systems are highlighted. The purpose of this paper is to present a rigorous review of IoMT and highlight significant contributions in the field across the research fraternity. Lastly, this paper discusses the opportunities and prospects of IoMT and various open research problems.
In this paper, a systematic description of the artificial intelligence (AI)-based channel estimation track of the 2nd Wireless Communication AI Competition (WAIC) is provided, which is hosted by the IMT-2020 (5G) Promotion Group 5G+AI Work Group. First, the system model of the demodulation reference signal (DMRS)-based channel estimation problem and its corresponding dataset are introduced. Then the potential approaches for enhancing the performance of AI-based channel estimation are discussed from the viewpoints of data analysis, pre-processing, key components, and backbone network structures. Finally, the competition results of the different solutions are summarized. It is expected that the AI-based channel estimation track of the 2nd WAIC will provide insightful guidance for both academia and industry.
Expanding internet-connected services has increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security since it helps to protect the network from vulnerabilities and attacks. Although extensive research has been reported on IDS, detecting novel intrusions with optimal features and reducing false alarm rates remain challenging. Therefore, we developed a novel fusion-based feature importance method to reduce the high-dimensional feature space, which helps to identify attacks accurately with a low false alarm rate. Initially, to improve training data quality, various preprocessing techniques are utilized. The Adaptive Synthetic oversampling technique generates synthetic samples for minority classes. In the proposed fusion-based feature importance, we use different approaches from the filter, wrapper, and embedded methods, including mutual information, random forest importance, permutation importance, Shapley Additive exPlanations (SHAP)-based feature importance, and statistical feature importance methods such as the difference of mean and median and the standard deviation, to rank each feature. Then, by simple plurality voting, the most optimal features are retrieved. The optimal features are fed to various models: Extra Tree (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM). The hyperparameters of the classification models are then tuned with Halving Random Search cross-validation to enhance performance. The experiments were carried out on both the original imbalanced data and the balanced data. The outcomes demonstrate that the balanced data scenario outperformed the imbalanced one. Finally, the experimental analysis proved that our proposed fusion-based feature importance performed well with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4%, with 9, 7, and 8 features and training times of 1.5, 4.5, and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. In addition, the suggested technique has been examined and contrasted with state-of-the-art methods on the three datasets.
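The plurality-voting fusion step described above can be sketched as follows; the rankings and feature names are hypothetical, standing in for the outputs of mutual information, RF importance, permutation importance, and SHAP:

```python
from collections import Counter

def fuse_top_k(rankings, k):
    """Plurality vote: each ranking nominates its top-k features; the k
    most-nominated features win (ties broken alphabetically for determinism)."""
    votes = Counter()
    for ranking in rankings:
        votes.update(ranking[:k])
    # Stable sort: alphabetical first, then by descending vote count.
    return sorted(sorted(votes), key=lambda f: -votes[f])[:k]

# Hypothetical per-method rankings over the same feature set (best first).
rankings = [
    ["dur", "bytes", "flag", "proto"],   # e.g. mutual information
    ["bytes", "dur", "proto", "flag"],   # e.g. RF importance
    ["dur", "flag", "bytes", "proto"],   # e.g. permutation importance
    ["bytes", "proto", "dur", "flag"],   # e.g. SHAP
]
selected = fuse_top_k(rankings, k=2)
```

Here "dur" and "bytes" each collect three nominations and are retained, while "flag" and "proto" are dropped.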
Electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It can reflect the heart's electrical activity and provide valuable diagnostic clues about the health of the entire body. Therefore, ECG has been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first reviewed 22 commonly used public ECG datasets and provided an overview of data preprocessing processes. Then we described some of the most widely used applications of ECG signals and analyzed the advanced methods involved in these applications. Finally, we elucidated some of the challenges in ECG analysis and provided suggestions for further research.
A chest radiology scan can significantly aid the early diagnosis and management of COVID-19 since the virus attacks the lungs. Chest X-ray (CXR) gained much interest after the COVID-19 outbreak thanks to its rapid imaging time, widespread availability, low cost, and portability. In radiological investigations, computer-aided diagnostic tools are implemented to reduce intra- and inter-observer variability. Using recently developed Artificial Intelligence (AI) algorithms and radiological techniques to diagnose and classify disease is advantageous. The current study develops an automatic identification and classification model for CXR images using Gaussian Filtering based Optimized Synergic Deep Learning with the Remora Optimization Algorithm (GF-OSDL-ROA). This method comprises preprocessing and optimization-based classification. The data is preprocessed using Gaussian filtering (GF) to remove extraneous noise from the image's edges. Then, the OSDL model is applied to classify the CXRs under different severity levels based on CXR data. The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis, showing the novelty of the work. The OSDL model applied in this study was validated using a COVID-19 dataset. In the experiments, the proposed OSDL model achieved a classification accuracy of 99.83%, while a conventional Convolutional Neural Network achieved a lower classification accuracy of 98.14%.
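The Gaussian-filtering preprocessing step can be sketched without image libraries by exploiting the separability of the Gaussian kernel; the noisy array below stands in for a CXR image:

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Normalized 1D Gaussian kernel of the given radius."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_filter2d(img, sigma=1.0):
    """Separable Gaussian blur with edge padding, as a noise-suppression step."""
    radius = int(3 * sigma)
    k = gaussian_kernel1d(sigma, radius)
    padded = np.pad(img, radius, mode="edge")
    # Filter rows, then columns (separability of the Gaussian).
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)

noisy = np.random.default_rng(0).normal(0.5, 0.2, (32, 32))
smooth = gaussian_filter2d(noisy, sigma=1.5)
```

Smoothing suppresses pixel-level noise (the variance of the filtered image is lower) while leaving the overall intensity level essentially unchanged.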
A brain tumor is an uncharacteristic progression of tissue in the brain. Brain tumors are very deadly, and if not diagnosed at an early stage, they may shorten the affected patient's life span. Hence, their classification and detection play a critical role in treatment. Traditional brain tumor detection is done by biopsy, which is quite challenging and usually not preferred at an early stage of the disease. Detection instead involves Magnetic Resonance Imaging (MRI), which is essential for evaluating the tumor. This paper aims to identify and detect brain tumors based on their location in the brain. To achieve this, the paper proposes a model that uses an extended deep Convolutional Neural Network (CNN) named Contour Extraction based Extended EfficientNet-B0 (CE-EEN-B0), a feed-forward neural network with the EfficientNet layers, three convolutional layers and max-pooling layers, and finally a global average pooling layer. The site of a tumor in the brain is one feature that determines its effect on an individual's functioning. Thus, this CNN architecture classifies brain tumors into four categories: no tumor, pituitary tumor, meningioma tumor, and glioma tumor. This network provides an accuracy of 97.24%, a precision of 96.65%, and an F1 score of 96.86%, which is better than existing pre-trained networks, and aims to help health professionals cross-diagnose an MRI image. This model will undoubtedly reduce complications in detection and aid radiologists without invasive steps.
Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. Over the past few years, advancements in the domain of machine learning, such as deep learning, have shown substantial success. However, they still face challenges such as similarity in disease symptoms and irrelevant feature extraction. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, the pre-trained DarkNet19 deep model is selected and fine-tuned, and later used for training through transfer learning. Deep features are extracted from the global pooling layer in the next step and refined using an improved Cuckoo Search algorithm. The best selected features are finally classified using machine learning classifiers such as SVM for the final classification results. The proposed architecture is tested using publicly available datasets: the Cucumber National Dataset and Plant Village. The proposed architecture achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieved improved accuracy while consuming less computational time.
In agricultural engineering, the main challenge lies in the methodologies used for disease detection. Manual methods depend on the experience of the personnel. Due to large variations in environmental conditions, disease diagnosis and classification become a challenging task. Apart from the disease itself, the leaves are affected by climate changes, which makes it hard for image processing methods to discriminate the disease from the background. In the Cucurbita gourd family, disease severity examination of leaf samples through computer vision and deep learning methodologies has gained popularity in recent years. In this paper, a hybrid method based on a Convolutional Neural Network (CNN) is proposed for automatic pumpkin leaf image classification. The proposed denoising and deep CNN method enhances pumpkin leaf preprocessing and diagnosis. A real-time database was used for training and testing of the proposed work. The existing pre-trained networks AlexNet and GoogLeNet were investigated to evaluate the performance of the proposed method. The system and computer simulations were performed using the MATLAB tool.
Wind power is one of the sustainable ways to generate renewable energy. In recent years, some countries have set renewable-energy targets to meet future energy needs, with the primary goal of reducing emissions and promoting sustainable growth, primarily through the use of wind and solar power. To predict wind power generation, several deep and machine learning models are constructed in this article as base models. These regression models are a deep neural network (DNN), k-nearest neighbor (KNN) regressor, long short-term memory (LSTM), an averaging model, a random forest (RF) regressor, a bagging regressor, and a gradient boosting (GB) regressor. In addition, data cleaning and preprocessing were applied to the data. The dataset used in this study includes 4 features and 50,530 instances. To accurately predict the wind power values, we propose a new optimization technique based on stochastic fractal search and particle swarm optimization (SFS-PSO) to optimize the parameters of the LSTM network. Five evaluation criteria were utilized to estimate the efficiency of the regression models: mean absolute error (MAE), Nash-Sutcliffe Efficiency (NSE), mean square error (MSE), coefficient of determination (R2), and root mean squared error (RMSE). The experimental results illustrate that the proposed optimization of LSTM using the SFS-PSO model achieved the best results, with R2 equal to 99.99% in predicting the wind power values.
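The five evaluation criteria listed above can be computed directly; note that when NSE is benchmarked against the observed mean it takes the same algebraic form as R2:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE, R2, and NSE for a set of predictions."""
    err = y_true - y_pred
    mae = np.mean(np.abs(err))
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    nse = 1 - ss_res / ss_tot  # Nash-Sutcliffe against the mean benchmark
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "R2": r2, "NSE": nse}

# Invented example values standing in for observed and predicted wind power.
m = regression_metrics(np.array([1.0, 2.0, 3.0, 4.0]),
                       np.array([1.1, 1.9, 3.2, 3.8]))
```

For these toy values MAE is 0.15, MSE is 0.025, and R2 (and hence NSE) is 0.98.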
The tendency toward achieving more sustainable and green buildings has turned several passive buildings into more dynamic ones. Mosques are a type of building with a unique energy usage pattern. Nevertheless, these buildings receive minimal consideration in ongoing energy efficiency applications. This is due to the unpredictability of the electrical consumption of mosques, which affects the stability of distribution networks. Therefore, this study addresses this issue by developing a framework for short-term electricity load forecasting for a mosque load located in Riyadh, Saudi Arabia. Using the load consumption of the mosque and meteorological datasets, the performance of four forecasting algorithms is investigated: an Artificial Neural Network and Support Vector Regression (SVR) based on three kernel functions, Radial Basis (RB), Polynomial, and Linear. In addition, this research work examines the impact of 13 different combinations of input attributes, since selecting the optimal features has a major influence on yielding precise forecasting outcomes. For the mosque load, the SVR-RB model with eleven features appeared to be the best forecasting model, with the lowest forecasting error metrics: RMSE, nRMSE, MAE, and nMAE values of 4.207 kW, 2.522%, 2.938 kW, and 1.761%, respectively.
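The RBF-kernel regression at the heart of the SVR-RB forecaster can be illustrated with kernel ridge regression, which shares the kernel but uses a squared loss instead of SVR's epsilon-insensitive loss; the data and parameters below are invented for the sketch:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Radial-basis kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.uniform(0, 6, (80, 1))                   # hypothetical single feature
y = np.sin(X[:, 0]) + rng.normal(0, 0.05, 80)    # hypothetical load curve + noise

# Closed-form kernel ridge fit: alpha = (K + lambda*I)^-1 y.
K = rbf_kernel(X, X, gamma=2.0)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(X)), y)

X_test = np.array([[1.0], [3.0], [5.0]])
y_hat = rbf_kernel(X_test, X, gamma=2.0) @ alpha
```

The kernel width (gamma) plays the same role here as in SVR-RB: it controls how locally the forecaster responds to each training point.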
Lung cancer is one of the leading cancers for both genders worldwide, and its occurrence has risen steadily since the early 19th century. In this manuscript, we discuss various data mining techniques that have been employed for cancer diagnosis. Exposure to air pollution has been related to various adverse health effects. This work analyzes various air pollutants and associated health hazards and intends to evaluate the contribution of air pollution to lung cancer. We apply data mining to lung cancer and air pollution data, and our approach includes preprocessing, data mining, testing and evaluation, and knowledge discovery. Initially, we remove noise and irrelevant data, and following that, we join the multiple informed sources into a common source. From that source, we select the information relevant to our investigation. We then convert the selected data into a form suitable for mining. Patterns are extracted using an association rule mining process. These patterns reveal information, and this information is categorized with the help of an Auto-Associative Neural Network (AANN) classification method. The proposed method is compared with existing methods on various factors. In conclusion, the proposed auto-associative neural network and association rule mining methods accomplish high accuracy.
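The rule-mining step can be illustrated with the support and confidence measures underlying association rules; the records below are toy pollutant/outcome flags, not the study's data:

```python
# Toy transactions linking pollutant-exposure flags and an outcome flag.
records = [
    {"pm25_high", "smoker", "lung_cancer"},
    {"pm25_high", "lung_cancer"},
    {"pm25_high", "smoker"},
    {"no2_high"},
    {"pm25_high", "smoker", "lung_cancer"},
]

def support(itemset):
    """Fraction of records containing every item in the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def confidence(antecedent, consequent):
    """Estimated P(consequent | antecedent) over the records."""
    return support(antecedent | consequent) / support(antecedent)

sup = support({"pm25_high", "lung_cancer"})        # both flags together
conf = confidence({"pm25_high"}, {"lung_cancer"})  # cancer given high PM2.5
```

Rules whose support and confidence clear chosen thresholds are the "patterns" that a classifier such as an AANN would then categorize.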
Miscanthus is an emerging dedicated energy crop which can provide excellent yield on marginal lands. However, this crop is more difficult to harvest than many conventional energy crops such as corn stover and switchgrass due to its tall and rigid stalks. Crop samples for laboratory studies were collected from the field, and the effects of roll spacing, roll speed, and crop input of a mechanical conditioning device on the physical condition of miscanthus were studied in a lab setting. Test results showed that mechanical conditioning is effective at changing the physical condition of miscanthus to make baling possible or easier. Results also showed that roll spacing had the most significant impact on the physical condition of miscanthus, shown by a 115% increase in conditioning over a 0.95 cm (75%) reduction in roll spacing. Increased roll spacing and speed were shown to decrease the torque required to condition the miscanthus.
Text classification is an essential task of natural language processing. Preprocessing, which determines the representation of text features, is one of the key steps of a text classification architecture. This paper proposes a novel, efficient, and effective preprocessing algorithm with three methods for text classification, combined with the Orthogonal Matching Pursuit algorithm to perform the classification. The main idea of the novel preprocessing strategy is that it combines stopword removal and/or regular filtering with tokenization and lowercase conversion, which can effectively reduce the feature dimension and improve the quality of the text feature matrix. Simulation tests on the 20 Newsgroups dataset show that, compared with the existing state-of-the-art method, the new method reduces the number of features by 19.85%, 34.35%, 26.25%, and 38.67%, improves accuracy by 7.36%, 8.8%, 5.71%, and 7.73%, and increases the speed of text classification by 17.38%, 25.64%, 23.76%, and 33.38% on the four datasets, respectively.
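The combined preprocessing strategy (lowercasing and tokenization with optional regular filtering and stopword removal) can be sketched as follows; the stopword list is a small illustrative subset, not the one used in the paper:

```python
import re

STOPWORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def preprocess(text, remove_stopwords=True, regex_filter=True):
    """Lowercase + tokenize, optionally with regular filtering and stopword
    removal, mirroring the combined strategies described above."""
    text = text.lower()
    if regex_filter:
        text = re.sub(r"[^a-z0-9\s]", " ", text)  # strip punctuation/symbols
    tokens = text.split()
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

tokens = preprocess("The U.S. market, in 2024, is GROWING!")
```

Toggling the two flags yields the different method variants whose feature counts and accuracies are compared in the abstract.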
Analyzing human facial expressions using machine vision systems is a challenging yet fascinating problem in the field of computer vision and artificial intelligence. Facial expressions are a primary means through which humans convey emotions, making their automated recognition valuable for various applications including human-computer interaction, affective computing, and psychological research. Pre-processing techniques are applied to every image with the aim of standardizing the images. Frequently used techniques include scaling, blurring, rotating, altering the contour of the image, converting the color to grayscale, and normalization. This is followed by feature extraction, and then traditional classifiers are applied to infer facial expressions. Increasing the performance of the system is difficult in the typical machine learning approach because the feature extraction and classification phases are separate, whereas in Deep Neural Networks (DNN) the two phases are combined into a single phase. Therefore, Convolutional Neural Network (CNN) models give better accuracy in facial expression recognition than traditional classifiers, but the performance of CNN is still hampered by noisy and deviated images in the dataset. This work utilized preprocessing methods such as resizing, grayscale conversion, and normalization. Motivated by these drawbacks, this research studies the use of image pre-processing techniques to enhance the performance of deep learning methods for facial expression recognition. It also aims to recognize emotions using deep learning and show the influence of data pre-processing on further processing of images. The accuracy of each pre-processing method is compared, combinations of them are analysed, and the appropriate preprocessing techniques are identified and implemented to observe the variability of accuracies in predicting facial expressions.
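The resizing, grayscale conversion, and normalization steps used in this work can be sketched in plain numpy; the random array stands in for a face image, and the nearest-neighbour resize is a dependency-free stand-in for library resizing:

```python
import numpy as np

def to_grayscale(rgb):
    """Luminance-weighted grayscale conversion (ITU-R BT.601 weights)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, h, w):
    """Nearest-neighbour resize of a 2D image to (h, w)."""
    rows = (np.arange(h) * img.shape[0] / h).astype(int)
    cols = (np.arange(w) * img.shape[1] / w).astype(int)
    return img[rows][:, cols]

def normalize(img):
    """Min-max scale pixel values to [0, 1]."""
    return (img - img.min()) / (img.max() - img.min())

# Stand-in for a 96x96 RGB face crop.
face = np.random.default_rng(3).integers(0, 256, (96, 96, 3)).astype(float)
pre = normalize(resize_nearest(to_grayscale(face), 48, 48))
```

Each step can be toggled independently, which is exactly how the accuracy of individual methods and their combinations would be compared.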
As one of the main methods of measuring microbial community functional diversity, the Biolog method is favored by many researchers for its simple operation, high sensitivity, strong resolution, and rich data. However, the preprocessing methods reported in the literature are not the same. In order to screen for the best preprocessing method, this paper used three typical treatments to explore the effect of different preprocessing methods on soil microbial community functional diversity. The results showed that method B's overall trend of AWCD values was better than those of A and C. Method B's microbial utilization of six carbon sources was higher, and the result was relatively stable. The Simpson index, Shannon richness index, and carbon source utilization richness index of the treatments ranked B > C > A, while the McIntosh index and Shannon evenness were not very stable; the difference in the variance analysis was not significant, and method B always had the smallest variance. Method B's principal component analysis was also better than those of A and C. In summary, the method using 250 r/min shaking for 30 minutes and cultivating at 28 °C was the best one, because it was simple, convenient, and highly repeatable.
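The diversity indices compared above have simple closed forms; for a perfectly even utilization profile, Shannon diversity reaches ln(S) and evenness equals 1. The counts below are an invented utilization profile, not the study's plate readings:

```python
import math

def shannon(counts):
    """Shannon diversity H = -sum(p * ln p) over utilization proportions."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Simpson diversity 1 - sum(p^2)."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

def evenness(counts):
    """Shannon evenness: H normalized by its maximum, ln(S)."""
    s = sum(1 for c in counts if c > 0)
    return shannon(counts) / math.log(s)

# Six carbon-source groups utilized equally (maximally even profile).
uniform = [10, 10, 10, 10, 10, 10]
```

With six equally used carbon-source groups, H = ln 6, evenness = 1, and Simpson diversity = 5/6.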
Funding: Sponsored by the Institute of Information Technology (Vietnam Academy of Science and Technology) under Project Code CS24.01.
Abstract: Cancer is one of the most dangerous diseases, with high mortality. One of the principal treatments is radiotherapy, which uses radiation beams to destroy cancer cells; this workflow requires considerable experience and skill from doctors and technicians. In our study, we focused on the 3D dose prediction problem in radiotherapy by applying a deep learning approach to computed tomography (CT) images of cancer patients. Medical image data has more complex characteristics than ordinary image data, and this research aims to explore the effectiveness of data preprocessing and augmentation in the context of the 3D dose prediction problem. We proposed four strategies to examine our hypothesis from different aspects of applying data preprocessing and augmentation. In each strategy, we trained our custom convolutional neural network, whose structure is inspired by the U-net with residual blocks added to the architecture. A rectified linear unit (ReLU) is applied to each pixel of the network output to ensure there are no negative values, which would be physically meaningless for radiation doses. Our experiments were conducted on the dataset of the Open Knowledge-Based Planning Challenge, collected from head and neck cancer patients treated with radiation therapy. The results of the four strategies show that our hypothesis is rational, as evaluated by the Dose-score and the Dose-volume histogram score (DVH-score). In the best training cases, the Dose-score is 3.08 and the DVH-score is 1.78. In addition, we conducted a comparison with the results of another study in the same context regarding the choice of loss function.
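As a hedged illustration of the non-negativity constraint described above (not the authors' code; the array values are hypothetical), clamping predicted dose voxels with a per-element ReLU can be sketched in a few lines:

```python
import numpy as np

def relu_clamp(pred_dose):
    """Zero out negative voxel values: a negative radiation dose is physically meaningless."""
    return np.maximum(pred_dose, 0.0)

# A raw network output may contain small negative values near zero-dose regions.
raw = np.array([[-0.3, 1.2],
                [0.0, 2.7]])
clamped = relu_clamp(raw)
```

In a deep learning framework the same effect is obtained by ending the network with a ReLU activation layer, so the constraint is enforced during training as well as at inference.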
Abstract: Network intrusion detection systems need to be updated due to the rise in cyber threats. To improve detection accuracy, this research presents a robust strategy that makes use of a stacked ensemble method, which combines the advantages of several machine learning models. The ensemble is made up of various base models, such as Decision Trees, K-Nearest Neighbors (KNN), Multi-Layer Perceptrons (MLP), and Naive Bayes, each of which offers a distinct perspective on the properties of the data. The research adheres to a methodical workflow that begins with thorough data preprocessing to guarantee the accuracy and applicability of the data. Feature engineering is used to extract useful attributes from network traffic data, which are essential for efficient model training. The ensemble approach combines these models by training a Logistic Regression meta-learner on the base models' predictions. In addition to increasing prediction accuracy, this tiered approach helps overcome the drawbacks of individual models. Evaluation on a network intrusion dataset shows high accuracy, precision, and recall, indicating the model's efficacy in identifying malicious activity. Cross-validation is used to ensure the models are reliable and generalize well to new, untested data. In addition to advancing cybersecurity, the research establishes a foundation for the implementation of flexible and scalable intrusion detection systems. This hybrid, stacked ensemble model has considerable potential for improving cyberattack prevention, lowering the likelihood of successful attacks, and offering a scalable solution that can be adjusted to meet new threats and technological advancements.
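A minimal sketch of the stacking scheme described above, using scikit-learn's `StackingClassifier` with the named base models and a Logistic Regression meta-learner (the synthetic dataset and hyperparameters are placeholders, not the study's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Stand-in for preprocessed network traffic features.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

base_models = [
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier()),
    ("nb", GaussianNB()),
    ("mlp", MLPClassifier(max_iter=500, random_state=0)),
]
# The meta-learner is trained on cross-validated base-model predictions.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X, y)
acc = stack.score(X, y)
```

By default `StackingClassifier` generates the meta-features with internal cross-validation, which keeps the meta-learner from simply memorizing base-model overfitting.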
Abstract: To reduce the risk of non-performing loans and losses and to improve loan approval efficiency, it is necessary to establish an intelligent loan risk and approval prediction system. A hybrid deep learning model with a 1DCNN-attention network and enhanced preprocessing techniques is proposed for loan approval prediction. Our proposed model consists of enhanced data preprocessing and the stacking of multiple hybrid modules. Initially, the enhanced data preprocessing combines methods such as standardization, SMOTE oversampling, feature construction, recursive feature elimination (RFE), information value (IV), and principal component analysis (PCA), which not only eliminates the effects of data jitter and class imbalance but also removes redundant features while improving the representation of the remaining ones. Subsequently, a hybrid module that combines a 1DCNN with an attention mechanism is proposed to extract local and global spatio-temporal features. Finally, comprehensive experiments validate that the proposed model surpasses state-of-the-art baseline models across various performance metrics, including accuracy, precision, recall, F1 score, and AUC. Our proposed model helps to automate the loan approval process and provides scientific guidance to financial institutions for loan risk control.
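A partial sketch of such a preprocessing chain in scikit-learn, covering the standardization, RFE, and PCA stages (SMOTE and IV scoring are omitted here because they require additional libraries; the dataset and feature counts are illustrative, not the paper's):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in for tabular loan application records.
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=1)

pipe = Pipeline([
    ("scale", StandardScaler()),                  # remove scale effects ("data jitter")
    ("rfe", RFE(LogisticRegression(max_iter=1000),
                n_features_to_select=12)),        # drop redundant features
    ("pca", PCA(n_components=6)),                 # compact representation
])
X_reduced = pipe.fit_transform(X, y)
```

Chaining the stages in one `Pipeline` ensures each transform is fit only on the training data when the pipeline is later cross-validated.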
Abstract: Cardiac diseases are one of the greatest global health challenges. Due to the high annual mortality rates, cardiac diseases have attracted the attention of numerous researchers in recent years. This article proposes a hybrid fuzzy fusion classification model for cardiac arrhythmia diseases. The fusion model is utilized to optimally select the highest-ranked features generated by a variety of well-known feature-selection algorithms. An ensemble of classifiers is then applied to the fusion's results. The proposed model classifies the arrhythmia dataset from the University of California, Irvine into normal/abnormal classes as well as 16 classes of arrhythmia. Initially, in the preprocessing step, missing values in numeric attributes are replaced with the average value within the same class, and missing nominal attributes with the most frequent value. To ensure model optimality, we eliminated all attributes with zero or constant values, which might bias the results of the utilized classifiers. The preprocessing step retained 161 out of 279 attributes (features). Thereafter, a fuzzy-based feature-selection fusion method is applied to fuse the high-ranked features obtained from different heuristic feature-selection algorithms. In short, our study comprises three main blocks: (1) sensing data and preprocessing; (2) feature queuing, selection, and extraction; and (3) the predictive model. Our proposed method improves classification performance in terms of accuracy, F1 measure, recall, and precision when compared to state-of-the-art techniques. It achieves 98.5% accuracy in binary class mode and 98.9% accuracy in categorized class mode.
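The class-wise mean imputation and constant-attribute removal described above can be sketched for numeric columns as follows (a toy array, not the UCI arrhythmia data; nominal-mode imputation is handled analogously with the per-class most frequent value):

```python
import numpy as np

def impute_and_prune(X, y):
    """Fill NaNs in each numeric column with the mean of the same class,
    then drop columns that are constant (zero variance)."""
    X = X.copy()
    for c in np.unique(y):
        rows = (y == c)
        col_means = np.nanmean(X[rows], axis=0)         # per-class column means
        nan_r, nan_c = np.where(np.isnan(X) & rows[:, None])
        X[nan_r, nan_c] = col_means[nan_c]
    keep = np.nanstd(X, axis=0) > 0                     # constant columns carry no signal
    return X[:, keep], keep

X = np.array([[1.0, 5.0, 7.0],
              [np.nan, 5.0, 8.0],
              [3.0, 5.0, np.nan],
              [4.0, 5.0, 9.0]])
y = np.array([0, 0, 1, 1])
X_clean, kept = impute_and_prune(X, y)
```

Here the middle column is constant (all 5.0) and is dropped, while each NaN is filled from its own class rather than the global mean, preserving class-conditional structure.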
Abstract: IoT usage in healthcare is one of the fastest growing domains worldwide and applies to every age group. The Internet of Medical Things (IoMT) bridges the gap between the medical and IoT fields, with medical devices communicating with each other through a wireless communication network. Advancement in IoMT makes human lives easier and better. This paper provides a comprehensive and detailed literature survey investigating different IoMT-driven applications, methodologies, and techniques for ensuring the sustainability of IoMT-driven systems. The limitations of existing IoMT frameworks are also analyzed with respect to their applicability in real-time systems and applications. In addition, various issues (gaps), challenges, and needs in the context of such systems are highlighted. The purpose of this paper is to present a rigorous review of IoMT and its significant contributions across the research fraternity. Lastly, this paper discusses the opportunities and prospects of IoMT and outlines various open research problems.
Abstract: In this paper, a systematic description of the artificial intelligence (AI)-based channel estimation track of the 2nd Wireless Communication AI Competition (WAIC) is provided; the competition is hosted by the IMT-2020 (5G) Promotion Group 5G+AI Work Group. First, the system model of the demodulation reference signal (DMRS)-based channel estimation problem and its corresponding dataset are introduced. Then, potential approaches for enhancing the performance of AI-based channel estimation are discussed from the viewpoints of data analysis, pre-processing, key components, and backbone network structures. Finally, the competition results, comprising different solutions, are summarized. It is expected that the AI-based channel estimation track of the 2nd WAIC can provide insightful guidance for both academia and industry.
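For context, the classical baseline that learned estimators are usually compared against is least-squares (LS) estimation at the pilot (DMRS) positions, obtained by dividing the received symbols by the known transmitted pilots. A toy single-snapshot sketch (all signal parameters are illustrative, not the competition's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known QPSK pilot symbols and a random per-subcarrier channel.
n_pilots = 64
pilots = (rng.choice([-1, 1], n_pilots) + 1j * rng.choice([-1, 1], n_pilots)) / np.sqrt(2)
h_true = rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)
noise = 0.01 * (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots))
y = h_true * pilots + noise                  # received pilot observations

# LS estimate at pilot positions: divide out the known symbol.
h_ls = y / pilots
mse = np.mean(np.abs(h_ls - h_true) ** 2)
```

The LS estimate passes the noise straight through, which is exactly the gap that denoising or learned post-processing of the pilot estimates tries to close.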
Abstract: Expanding internet-connected services have increased cyberattacks, many of which have grave and disastrous repercussions. An Intrusion Detection System (IDS) plays an essential role in network security, since it helps to protect the network from vulnerabilities and attacks. Although extensive research has been reported on IDS, detecting novel intrusions with optimal features and reducing false alarm rates are still challenging. Therefore, we developed a novel fusion-based feature importance method to reduce the high-dimensional feature space, which helps to identify attacks accurately with a lower false alarm rate. Initially, various preprocessing techniques are utilized to improve training data quality, and the Adaptive Synthetic oversampling technique generates synthetic samples for minority classes. In the proposed fusion-based feature importance, we use different approaches from the filter, wrapper, and embedded families, such as mutual information, random forest importance, permutation importance, Shapley Additive exPlanations (SHAP)-based feature importance, and statistical feature importance measures such as the difference of mean and median and the standard deviation, to rank each feature. Then, by simple plurality voting, the most optimal features are retrieved. The optimal features are fed to various models: Extra Tree (ET), Logistic Regression (LR), Support Vector Machine (SVM), Decision Tree (DT), and Extreme Gradient Boosting Machine (XGBM). The hyperparameters of the classification models are then tuned with Halving Random Search cross-validation to enhance performance. The experiments were carried out on both the original imbalanced data and the balanced data, and the outcomes demonstrate that the balanced-data scenario outperformed the imbalanced one. Finally, the experimental analysis proved that our proposed fusion-based feature importance performed well with XGBM, giving accuracies of 99.86%, 99.68%, and 92.4% with 9, 7, and 8 features and training times of 1.5, 4.5, and 5.5 s on the Network Security Laboratory-Knowledge Discovery in Databases (NSL-KDD), Canadian Institute for Cybersecurity (CIC-IDS 2017), and UNSW-NB15 datasets, respectively. In addition, the suggested technique has been examined and contrasted with state-of-the-art methods on the three datasets.
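The plurality-voting step over the individual rankings can be sketched as follows (the feature names and per-method rankings are hypothetical, not taken from the datasets above):

```python
from collections import Counter

def plurality_vote(rankings, k):
    """Each method contributes its top-k features; the features nominated
    by the most methods win. rankings: lists ordered best-first."""
    votes = Counter()
    for ranked in rankings:
        votes.update(ranked[:k])
    return [feat for feat, _ in votes.most_common(k)]

# Hypothetical rankings from three importance methods.
mi_rank   = ["dur", "bytes", "flag", "proto"]
rf_rank   = ["bytes", "dur", "proto", "flag"]
shap_rank = ["bytes", "flag", "dur", "srv"]
top = plurality_vote([mi_rank, rf_rank, shap_rank], k=3)
```

Because each method only casts equal-weight votes for its top-k features, a single method's idiosyncratic ranking cannot dominate the fused result.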
Funding: Supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization (U1909208), the Science and Technology Major Project of Changsha (kh2202004), and the Changsha Municipal Natural Science Foundation (kq2202106).
Abstract: The electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It reflects the heart's electrical activity and provides valuable diagnostic clues about the health of the entire body. Therefore, ECG has been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first review 22 commonly used public ECG datasets and provide an overview of data preprocessing processes. We then describe some of the most widely used applications of ECG signals and analyze the advanced methods involved in these applications. Finally, we elucidate some of the challenges in ECG analysis and provide suggestions for further research.
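A common ECG preprocessing step surveyed in such reviews is band-pass filtering to remove baseline wander and high-frequency noise. A hedged sketch on a synthetic signal (the sampling rate, cutoffs, and signal components are illustrative assumptions, not prescriptions from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 360  # Hz; a sampling rate commonly seen in public ECG datasets

def bandpass_ecg(sig, low=0.5, high=40.0):
    """Zero-phase Butterworth band-pass: suppress baseline wander (<0.5 Hz)
    and high-frequency noise (>40 Hz)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, sig)

t = np.arange(0, 10, 1 / fs)
drift = 0.5 * np.sin(2 * np.pi * 0.2 * t)   # baseline wander
hum = 0.2 * np.sin(2 * np.pi * 50 * t)      # mains interference
ecg_like = np.sin(2 * np.pi * 1.2 * t)      # crude in-band surrogate signal
clean = bandpass_ecg(ecg_like + drift + hum)
```

`filtfilt` applies the filter forward and backward so the QRS timing is not shifted, which matters for downstream beat detection.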
Abstract: A chest radiology scan can significantly aid the early diagnosis and management of COVID-19, since the virus attacks the lungs. Chest X-ray (CXR) gained much interest after the COVID-19 outbreak thanks to its rapid imaging time, widespread availability, low cost, and portability. In radiological investigations, computer-aided diagnostic tools are implemented to reduce intra- and inter-observer variability. Using recently developed Artificial Intelligence (AI) algorithms and radiological techniques to diagnose and classify disease is advantageous. The current study develops an automatic identification and classification model for CXR images using Gaussian Filtering based Optimized Synergic Deep Learning with the Remora Optimization Algorithm (GF-OSDL-ROA). This method comprises preprocessing and optimization-based classification. The data is preprocessed using Gaussian filtering (GF) to remove extraneous noise from the images. Then, the OSDL model is applied to classify the CXRs into different severity levels based on the CXR data. The learning rate of OSDL is optimized with the help of ROA for COVID-19 diagnosis, which constitutes the novelty of the work. The OSDL model applied in this study was validated using the COVID-19 dataset. In the experiments, the proposed OSDL model achieved a classification accuracy of 99.83%, while a conventional Convolutional Neural Network achieved a lower classification accuracy of 98.14%.
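The Gaussian filtering preprocessing step can be sketched with SciPy on a synthetic noisy image (the image, noise level, and `sigma` are illustrative placeholders, not the study's CXR data or settings):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0                        # a clean bright structure
noisy = img + rng.normal(scale=0.2, size=img.shape)

smoothed = gaussian_filter(noisy, sigma=1.5)   # suppress high-frequency noise

resid_noisy = np.std(noisy - img)              # error before smoothing
resid_smooth = np.std(smoothed - img)          # error after smoothing
```

The trade-off is the usual one: larger `sigma` removes more noise but blurs edges, so in practice the kernel width is tuned against the downstream classifier's accuracy.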
Abstract: A brain tumor is an uncharacteristic growth of tissue in the brain. Brain tumors are very deadly, and if not diagnosed at an early stage, they may shorten the affected patient's life span. Hence, their classification and detection play a critical role in treatment. Traditional brain tumor detection is done by biopsy, which is quite challenging and usually not preferred at an early stage of the disease. Detection instead involves Magnetic Resonance Imaging (MRI), which is essential for evaluating the tumor. This paper aims to identify and detect brain tumors based on their location in the brain. To achieve this, the paper proposes a model that uses an extended deep Convolutional Neural Network (CNN) named Contour Extraction based Extended EfficientNet-B0 (CE-EEN-B0), a feed-forward network comprising the EfficientNet layers, three convolutional layers with max-pooling layers, and finally a global average pooling layer. The site of a tumor in the brain is one feature that determines its effect on an individual's functioning. Thus, this CNN architecture classifies brain tumors into four categories: no tumor, pituitary tumor, meningioma tumor, and glioma tumor. The network achieves an accuracy of 97.24%, a precision of 96.65%, and an F1 score of 96.86%, which is better than existing pre-trained networks, and aims to help health professionals cross-check diagnoses from MRI images. This model will undoubtedly reduce complications in detection and aid radiologists without requiring invasive steps.
Abstract: Manual diagnosis of crop diseases is not an easy process; thus, computerized methods are widely used. In recent years, advancements in machine learning, such as deep learning, have shown substantial success. However, these methods still face challenges such as similarity in disease symptoms and the extraction of irrelevant features. In this article, we propose a new deep learning architecture with an optimization algorithm for cucumber and potato leaf disease recognition. The proposed architecture consists of five steps. In the first step, data augmentation is performed to increase the number of training samples. In the second step, the pre-trained DarkNet19 deep model is selected and fine-tuned, and the fine-tuned model is then trained through transfer learning. In the next step, deep features are extracted from the global pooling layer and refined using an Improved Cuckoo Search algorithm. The best selected features are finally classified using machine learning classifiers, such as SVM among others, for the final classification results. The proposed architecture is tested using publicly available datasets: the Cucumber National Dataset and Plant Village. The proposed architecture achieved accuracies of 100.0%, 92.9%, and 99.2%, respectively. A comparison with recent techniques is also performed, revealing that the proposed method achieves improved accuracy while consuming less computational time.
Abstract: In agricultural engineering, the main challenge lies in the methodologies used for disease detection. Manual methods depend on the experience of the personnel, and due to large variations in environmental conditions, disease diagnosis and classification become a challenging task. Apart from the disease itself, the leaves are affected by climate changes, which makes it hard for image processing methods to discriminate the disease from the background. For the Cucurbita gourd family, disease severity examination of leaf samples through computer vision and deep learning methodologies has gained popularity in recent years. In this paper, a hybrid method based on a Convolutional Neural Network (CNN) is proposed for automatic pumpkin leaf image classification. The proposed denoising and deep CNN method enhances pumpkin leaf preprocessing and diagnosis. A real-time database was used for training and testing of the proposed work. The existing pre-trained networks AlexNet and GoogLeNet were investigated to evaluate the performance of the proposed method. The system and computer simulations were performed using the Matlab tool.
Abstract: Wind power is one of the sustainable ways to generate renewable energy. In recent years, some countries have set renewables targets to meet future energy needs, with the primary goal of reducing emissions and promoting sustainable growth, primarily through the use of wind and solar power. To predict wind power generation, several deep and machine learning models are constructed in this article as base models. These regression models are a deep neural network (DNN), a k-nearest neighbor (KNN) regressor, long short-term memory (LSTM), an averaging model, a random forest (RF) regressor, a bagging regressor, and a gradient boosting (GB) regressor. In addition, data cleaning and preprocessing were performed on the data. The dataset used in this study includes 4 features and 50530 instances. To accurately predict the wind power values, we propose a new optimization technique based on stochastic fractal search and particle swarm optimization (SFS-PSO) to optimize the parameters of the LSTM network. Five evaluation criteria were utilized to estimate the efficiency of the regression models: mean absolute error (MAE), Nash-Sutcliffe Efficiency (NSE), mean square error (MSE), coefficient of determination (R2), and root mean squared error (RMSE). The experimental results illustrate that the proposed optimization of LSTM using the SFS-PSO model achieved the best results, with R2 equal to 99.99% in predicting wind power values.
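The PSO half of such a hyperparameter search can be sketched in pure NumPy; here a minimal swarm minimizes a stand-in objective in place of the LSTM's validation error (the objective, bounds, and swarm settings are illustrative assumptions, and the stochastic-fractal-search component is not included):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=12, iters=40, seed=0):
    """Minimal particle swarm: each particle tracks its personal best,
    and the swarm's global best pulls all particles along."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, n_particles)
    v = np.zeros(n_particles)
    pbest = x.copy()
    pbest_val = np.array([f(xi) for xi in x])
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(xi) for xi in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)]
    return g, f(g)

# Stand-in objective: pretend validation error is minimized at lr = 0.3.
best_lr, best_err = pso_minimize(lambda lr: (lr - 0.3) ** 2, bounds=(0.0, 1.0))
```

In the real setting, `f` would train the LSTM with the candidate parameters and return a validation error, making each evaluation expensive; the small swarm size reflects that cost.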
Funding: The author extends his appreciation to the Deputyship for Research & Innovation, Ministry of Education, and Qassim University, Saudi Arabia, for funding this research work through Project Number QU-IF-4-3-3-30013.
Abstract: The tendency toward achieving more sustainable and green buildings has turned several passive buildings into more dynamic ones. Mosques are a type of building with a unique energy usage pattern. Nevertheless, these buildings receive minimal consideration in ongoing energy efficiency applications, owing to the unpredictability of mosques' electrical consumption, which affects the stability of distribution networks. Therefore, this study addresses this issue by developing a framework for short-term electricity load forecasting for a mosque load located in Riyadh, Saudi Arabia. Using the mosque's load consumption and meteorological datasets, the performance of four forecasting algorithms is investigated: an Artificial Neural Network and Support Vector Regression (SVR) based on three kernel functions, Radial Basis (RB), Polynomial, and Linear. In addition, this work examines the impact of 13 different combinations of input attributes, since selecting the optimal features has a major influence on yielding precise forecasting outcomes. For the mosque load, SVR with the RB kernel (SVR-RB) and eleven features appeared to be the best forecasting model, with the lowest error metrics: RMSE, nRMSE, MAE, and nMAE values of 4.207 kW, 2.522%, 2.938 kW, and 1.761%, respectively.
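An SVR model with a radial-basis kernel, as in the winning SVR-RB configuration, can be sketched with scikit-learn (the synthetic features and the load function are hypothetical stand-ins for the meteorological and consumption data; `C` and `epsilon` are illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(7)
# Hypothetical stand-ins for weather/occupancy features and a load target (kW).
X = rng.uniform(0, 1, size=(200, 3))
load = 5.0 + 3.0 * X[:, 0] + 2.0 * np.sin(2 * np.pi * X[:, 1]) \
       + 0.1 * rng.normal(size=200)

# Scaling matters for RBF kernels: distances drive the kernel values.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, load)
pred = model.predict(X)
rmse = float(np.sqrt(np.mean((pred - load) ** 2)))
```

Swapping `kernel="rbf"` for `"poly"` or `"linear"` reproduces the other two SVR variants compared in the study, which makes the kernel comparison a one-line change.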
Funding: The authors acknowledge support from Taif University Researchers Supporting Project Number (TURSP-2020/215), Taif University, Taif, Saudi Arabia.
Abstract: One of the leading cancers for both genders worldwide is lung cancer, and its occurrence has risen markedly since the early 19th century. In this manuscript, we discuss various data mining techniques that have been employed for cancer diagnosis. Exposure to air pollution has been related to various adverse health effects; this work analyzes various air pollutants and their associated health hazards and intends to evaluate the impact of air pollution on lung cancer. We apply data mining to relate lung cancer to air pollution, and our approach includes preprocessing, data mining, testing and evaluation, and knowledge discovery. Initially, we eradicate noise and irrelevant data, and then join the multiple informing sources into a common source. From that source, we designate the information relevant to our investigation to be retrieved from that collection, and then convert the selected data into a form suitable for the mining process. Patterns are extracted by utilizing a relational suggestion rule mining process. These patterns reveal information, and this information is categorized with the help of an Auto-Associative Neural Network (AANN) classification method. The proposed method is compared with existing methods on various factors. In conclusion, the proposed auto-associative neural network and relational suggestion rule mining methods achieve high accuracy.
Abstract: Miscanthus is an emerging dedicated energy crop which can provide excellent yield on marginal lands. However, this crop is more difficult to harvest than many conventional energy crops, such as corn stover and switchgrass, due to its tall and rigid stalks. Crop samples for laboratory studies were collected from the field, and the effects of roll spacing, roll speed, and crop input of a mechanical conditioning device on the physical condition of miscanthus were studied in a lab setting. Test results showed that mechanical conditioning is effective in changing the physical condition of miscanthus to make baling possible or easier. Results also showed that roll spacing had the most significant impact on the physical condition of miscanthus, shown by a 115% increase in conditioning over a 0.95 cm (75%) reduction in roll spacing. Increased roll spacing and speed were shown to decrease the amount of torque required to condition the miscanthus.
Abstract: Text classification is an essential task of natural language processing. Preprocessing, which determines the representation of text features, is one of the key steps of a text classification architecture. This paper proposes a novel, efficient, and effective preprocessing algorithm with three methods for text classification, combined with the Orthogonal Matching Pursuit algorithm to perform the classification. The main idea of the novel preprocessing strategy is to combine stopword removal and/or regular filtering with tokenization and lowercase conversion, which can effectively reduce the feature dimension and improve the quality of the text feature matrix. Simulation tests on the 20 Newsgroups dataset show that, compared with the existing state-of-the-art method, the new method reduces the number of features by 19.85%, 34.35%, 26.25%, and 38.67%, improves accuracy by 7.36%, 8.8%, 5.71%, and 7.73%, and increases the speed of text classification by 17.38%, 25.64%, 23.76%, and 33.38% on the four datasets, respectively.
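The combination of lowercase conversion, tokenization, and optional stopword removal described above can be sketched as a single function (the stopword list and sample sentence are illustrative, not the paper's):

```python
import re

# A tiny illustrative stopword list; real lists (e.g., NLTK's) are far longer.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "is", "in"}

def preprocess(text, remove_stopwords=True):
    """Lowercase, tokenize on word characters, and optionally drop stopwords."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    if remove_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

tokens = preprocess("The quick brown fox jumps over the lazy dog, AND runs to a den.")
```

Toggling `remove_stopwords` gives the "and/or" variants of the strategy, so their effect on feature dimension can be measured one switch at a time.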
Abstract: Analyzing human facial expressions using machine vision systems is a challenging yet fascinating problem in computer vision and artificial intelligence. Facial expressions are a primary means through which humans convey emotions, making their automated recognition valuable for various applications, including human-computer interaction, affective computing, and psychological research. Pre-processing techniques are applied to every image with the aim of standardizing the images; frequently used techniques include scaling, blurring, rotating, altering the contour of the image, grayscale conversion, and normalization. This is followed by feature extraction, and then traditional classifiers are applied to infer facial expressions. Increasing the performance of such a system is difficult in the typical machine learning approach because the feature extraction and classification phases are separate, whereas in Deep Neural Networks (DNN) the two phases are combined into a single phase. Therefore, Convolutional Neural Network (CNN) models give better accuracy in facial expression recognition than traditional classifiers, but the performance of a CNN is still hampered by noisy and deviated images in the dataset. This work utilized preprocessing methods such as resizing, grayscale conversion, and normalization. Motivated by these drawbacks, this research studies the use of image pre-processing techniques to enhance the performance of deep learning methods for facial expression recognition. It also aims to recognize emotions using deep learning and to show the influence of data pre-processing on the further processing of images. The accuracy of each pre-processing method is compared, combinations between them are analyzed, and the appropriate preprocessing techniques are identified and implemented to examine the variability of accuracies in predicting facial expressions.
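The resize, grayscale, and normalization trio this work relies on can be sketched with NumPy alone (the luma weights are the standard ITU-R BT.601 coefficients; the block-mean resize and the 48x48 target are simplifying assumptions, not the paper's exact pipeline):

```python
import numpy as np

def preprocess_face(img_rgb, out_size=48):
    """Grayscale conversion, block-average resize, and [0, 1] normalization."""
    gray = img_rgb @ np.array([0.299, 0.587, 0.114])       # BT.601 luma weights
    h, w = gray.shape
    fh, fw = h // out_size, w // out_size
    # Simple block-mean resize (assumes dimensions divisible by out_size).
    small = gray[: fh * out_size, : fw * out_size]
    small = small.reshape(out_size, fh, out_size, fw).mean(axis=(1, 3))
    lo, hi = small.min(), small.max()
    return (small - lo) / (hi - lo + 1e-8)                 # min-max normalize

# A random stand-in for a 96x96 RGB face crop.
face = np.random.default_rng(3).uniform(0, 255, size=(96, 96, 3))
x = preprocess_face(face)
```

Applying the steps in this order means the normalization statistics come from the already-downsampled grayscale image, so each image lands in the same value range regardless of its original exposure.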
Funding: Supported by the National and International Scientific and Technological Cooperation Project "The Application of Microbial Agents on Mining Reclamation and Ecological Recovery" (2011DFR31230), the Key Project of the Shanxi Academy of Agricultural Sciences "The Research and Application of Bio-organic Fertilizer on Mining Reclamation and Soil Remediation" (2013zd12), and the Major Science and Technology Program of Shanxi Province "Key Technology Research and Demonstration of Mining Wasteland Ecosystem Restoration and Reconstruction" (20121101009).
Abstract: As one of the main methods for measuring microbial community functional diversity, the Biolog method is favored by many researchers for its simple operation, high sensitivity, strong resolution, and rich data output. However, the preprocessing methods reported in the literature are not the same. In order to screen for the best preprocessing method, this paper used three typical treatments to explore the effect of different preprocessing methods on soil microbial community functional diversity. The results showed that method B's overall trend of AWCD values was better than those of A and C. Method B's microbial utilization of the six carbon sources was higher, and its results were relatively stable. The Simpson index, Shannon richness index, and carbon source utilization richness index of the treatments ranked B > C > A, while the McIntosh index and Shannon evenness were not very stable; the differences in the analysis of variance were not significant, but method B always had the smallest variance. Method B's principal component analysis was also better than those of A and C. In a word, the method using 250 r/min shaking for 30 minutes and cultivating at 28 °C was the best one, because it was simple, convenient, and gave good repeatability.
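The diversity indices compared above can be computed directly from per-well carbon-source utilization values; a small sketch (the usage numbers are hypothetical, not the paper's measurements):

```python
import numpy as np

def diversity_indices(usage):
    """Shannon index, Simpson diversity (1 - sum p^2), and Shannon evenness
    from carbon-source usage values (e.g., background-corrected absorbances)."""
    p = usage / usage.sum()                 # relative utilization proportions
    shannon = -np.sum(p * np.log(p))        # assumes all usage values > 0
    simpson = 1.0 - np.sum(p ** 2)
    evenness = shannon / np.log(p.size)     # 1.0 when usage is perfectly even
    return shannon, simpson, evenness

# Hypothetical utilization of six carbon-source groups for one treatment.
usage = np.array([0.8, 0.6, 0.9, 0.4, 0.7, 0.5])
H, D, E = diversity_indices(usage)
```

Computing all three from the same proportion vector makes the rankings directly comparable across treatments, which is how the B > C > A ordering above is established.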