COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus. Among the sectors affected, schools, colleges, and universities were the foremost. The education systems of entire nations shifted to online education during this time. Many shortcomings of Learning Management Systems (LMSs) in supporting education in an online mode were detected, which spawned research into Artificial Intelligence (AI)-based tools that are being developed by the research community to improve the effectiveness of LMSs. This paper presents a detailed survey of the different enhancements to LMSs, led by key advances in AI, that improve the real-time and non-real-time user experience. The AI-based enhancements proposed for LMSs start at the Application and Presentation layers, in the form of flipped classroom models for an efficient learning environment and appropriately designed UI/UX for efficient utilization of LMS utilities and resources, including AI-based chatbots. Session layer enhancements are also required, such as AI-based online proctoring and user authentication using biometrics. These extend to the Transport layer to support real-time, rate-adaptive encrypted video transmission for user security/privacy and the satisfactory working of AI algorithms. Support is also needed at the Network layer for IP-based geolocation features, the Virtual Private Network (VPN) feature, and Software-Defined Networks (SDN) for optimum Quality of Service (QoS). Finally, the non-real-time user experience is enhanced by other AI-based enhancements such as plagiarism detection algorithms and data analytics.
Explainable Artificial Intelligence (XAI) enhances decision-making and improves rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
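The federated averaging aggregation mentioned above can be sketched as a weighted average of client model weights, with each client's contribution proportional to its local dataset size; the client weight vectors and sizes below are illustrative, not values from the paper.

```python
def federated_average(client_weights, client_sizes):
    """Aggregate client model weights by FedAvg: a weighted average in
    which each client's contribution is proportional to its local
    dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_weights = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            global_weights[i] += w * (size / total)
    return global_weights

# Three hypothetical clients, each holding a 2-parameter model.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]  # local dataset sizes
print(federated_average(clients, sizes))  # -> [3.5, 4.5]
```

In a real FL round the server would broadcast the averaged weights back to the clients before the next local training epoch.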
Electric vehicles use electric motors, which turn electrical energy into mechanical energy. As electric motors are conventionally used throughout industry, they are an established field of development: a mature technology with ideal power and torque curves for vehicular operation. Conventional vehicles use oil and gas as fuel or energy storage. Although these also have an excellent economic impact, their continuous use threatens the world's total oil and gas reserves. They also emit carbon dioxide and some toxic substances through the vehicle's tailpipe, which contribute to the greenhouse effect and seriously impact the environment. As an alternative, electric vehicles represent a green technology of decarbonization with zero tailpipe emissions of greenhouse gases. They can therefore remove the problem of greenhouse gas emissions and ease the depletion of the world's remaining non-renewable energy reserves. Pure electric vehicles (PEVs) can be applied in all spheres, and a notable implementation is in downhole operations, where they are used for their low noise and reduced pollution. In this study, the basic structure of the pure electric command vehicle is studied; the main components of the command vehicle power system, namely the drive motor and the power battery, are analyzed; and the main parameters of the drive motor and the power battery are designed and calculated. The checking calculation results show that the power and transmission system developed in this paper meets the design requirements, and the design scheme is feasible and reasonable.
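As a sketch of the kind of drive-motor parameter calculation described, the power needed to hold a steady cruising speed follows the standard vehicle-dynamics sizing formula (rolling resistance plus aerodynamic drag, divided by drivetrain efficiency); all vehicle parameters below are hypothetical, not the paper's design values.

```python
def required_motor_power_kw(mass_kg, speed_kmh, rolling_coeff, drag_coeff,
                            frontal_area_m2, drivetrain_eff):
    """Motor power needed to hold a steady speed on a flat road:
    (rolling resistance + aerodynamic drag) * speed / efficiency.
    Standard vehicle-dynamics sizing formula; inputs are example values."""
    g, rho = 9.81, 1.225           # gravity (m/s^2), air density (kg/m^3)
    v = speed_kmh / 3.6            # convert km/h to m/s
    rolling = mass_kg * g * rolling_coeff          # N
    drag = 0.5 * rho * drag_coeff * frontal_area_m2 * v ** 2  # N
    return (rolling + drag) * v / drivetrain_eff / 1000.0

# Hypothetical command-vehicle parameters: 3 t, 100 km/h cruise.
print(round(required_motor_power_kw(3000, 100, 0.012, 0.6, 4.0, 0.9), 1))  # -> 45.9
```

A real sizing exercise would also check acceleration and gradeability requirements, which usually dominate over steady cruising.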
In this work, we aim to introduce some modifications to the Anam-Net deep neural network (DNN) model for segmenting the optic cup (OC) and optic disc (OD) in retinal fundus images to estimate the cup-to-disc ratio (CDR). The CDR is a reliable measure for the early diagnosis of glaucoma. In this study, we developed a lightweight DNN model for OC and OD segmentation in retinal fundus images. Our DNN model is based on modifications to Anam-Net, incorporating an anamorphic depth embedding block. To reduce computational complexity, we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens. This modification significantly reduces the number of trainable parameters, making the model lightweight and suitable for resource-constrained applications. We evaluate the performance of the developed model using two publicly available retinal image databases, namely RIM-ONE (159 images) and Drishti-GS (101 images). The results demonstrate promising OC segmentation performance across most standard evaluation metrics while achieving analogous results for OD segmentation. For OD segmentation using RIM-ONE, we obtain an F1-score (F1), Jaccard coefficient (JC), and overlapping error (OE) of 0.950, 0.9219, and 0.0781, respectively. Similarly, for OC segmentation using the same database, we achieve scores of 0.8481 (F1), 0.7428 (JC), and 0.2572 (OE). Based on these experimental results and the significantly lower number of trainable parameters, we conclude that the developed model is highly suitable for the early diagnosis of glaucoma by accurately estimating the CDR.
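As a sketch of how the CDR might be computed from the predicted masks, using the common vertical-diameter convention (the mask format and toy values below are assumptions, not the paper's pipeline):

```python
def vertical_diameter(mask):
    """Vertical extent (in rows) of the foreground in a binary mask,
    given as a list of rows of 0/1 values."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    return (max(rows) - min(rows) + 1) if rows else 0

def cup_to_disc_ratio(cup_mask, disc_mask):
    """CDR = vertical cup diameter / vertical disc diameter.
    Values above roughly 0.6 are often treated as suspicious for glaucoma."""
    disc = vertical_diameter(disc_mask)
    return vertical_diameter(cup_mask) / disc if disc else 0.0

disc = [[0,0,0],[1,1,1],[1,1,1],[1,1,1],[1,1,1],[0,0,0]]  # 4 rows tall
cup  = [[0,0,0],[0,0,0],[0,1,0],[0,1,0],[0,0,0],[0,0,0]]  # 2 rows tall
print(cup_to_disc_ratio(cup, disc))  # -> 0.5
```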
The dominance of Android in the global mobile market and the open development characteristics of this platform have resulted in a significant increase in malware. These malicious applications have become a serious concern for the security of Android systems. To address this problem, researchers have proposed several machine-learning models to detect and classify Android malware based on analyzing features extracted from Android samples. However, most existing studies have focused on the classification task and overlooked the feature selection process, which is crucial to reduce the training time and maintain or improve the classification results. The current paper proposes a new Android malware detection and classification approach that identifies the most important features to improve classification performance and reduce training time. The proposed approach consists of two main steps. First, a feature selection method based on the Attention mechanism is used to select the most important features. Then, an optimized Light Gradient Boosting Machine (LightGBM) classifier is applied to classify the Android samples and identify the malware. The feature selection method proposed in this paper integrates an Attention layer into a multilayer perceptron neural network. The role of the Attention layer is to compute a weighted value for each feature based on its importance for the classification process. Experimental evaluation of the approach has shown that combining the Attention-based technique with an optimized classification algorithm for Android malware detection improved the accuracy from 98.64% to 98.71% while reducing the training time from 80 s to 28 s.
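A minimal sketch of attention-style feature weighting of the kind described above: softmax scores over per-feature importance values, applied multiplicatively to the input. The tiny scores and feature vector here are placeholders, not the paper's trained values.

```python
import math

def attention_weights(scores):
    """Softmax over per-feature importance scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def apply_attention(features, scores):
    """Scale each input feature by its attention weight; features with
    near-zero weight can then be dropped before classification."""
    w = attention_weights(scores)
    return [f * wi for f, wi in zip(features, w)]

scores = [2.0, 0.1, -1.0]    # hypothetical learned importances
features = [0.5, 0.8, 0.3]   # one Android sample's feature vector
weighted = apply_attention(features, scores)
print([round(x, 3) for x in weighted])
```

In the paper's setting the scores would be learned jointly with the MLP, and the surviving features would then feed the LightGBM classifier.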
The Internet of Things (IoT) is a growing technology that allows the sharing of data with other devices across wireless networks. IoT systems are particularly vulnerable to cyberattacks due to their openness. The proposed work intends to implement a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is used first to extract the features highly associated with IoT intrusions. Then, the Kernel Distributed Bayes Classifier (KDBC) is created to precisely forecast attacks based on the probability distribution value. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for the learning of the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed using several IoT cyber-attack datasets. The obtained results are then compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. Computational analysis of the CL2ES-KDBC system on IoT intrusion datasets is also performed, which provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Under the influence of anthropogenic activities and climate change, the problems caused by the urban heat island (UHI) have become increasingly prominent. In order to promote sustainable urban development and improve the quality of human settlements, it is important to explore the evolution characteristics of the urban thermal environment and analyze its driving forces. Taking the Landsat series images as the basic data sources, the winter land surface temperature (LST) of the rapid urbanization area of Fuzhou City in China was quantitatively retrieved from 2001 to 2021. Comprehensively combining the standard deviation ellipse model, profile analysis, and the GeoDetector model, the spatio-temporal evolution characteristics and influencing factors of the winter urban thermal environment were systematically analyzed. The results showed that the winter LST presented an increasing trend in the study area during 2001–2021, and the winter LST of the central urban regions was significantly higher than that of the suburbs. There was a strong UHI effect from 2001 to 2021, with a spatial expansion trend from the central urban regions to the suburbs and coastal areas. The LST of green lands and wetlands is significantly lower than that of croplands, artificial surfaces, and unvegetated lands. Vegetation and water bodies had a significant mitigation effect on the UHI, especially at the micro-scale. The winter UHI was jointly driven by underlying surface and socio-economic factors in a nonlinear or two-factor interactive enhancement mode, with socio-economic factors playing the leading role. This research could provide data support and decision-making references for rationally planning urban layout and promoting sustainable urban development.
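The factor-detector core of the GeoDetector model used above can be sketched with its q-statistic, q = 1 − SSW/SST, which measures how much of the LST variance a stratifying factor explains; the sample LST values and land-cover strata below are made up for illustration.

```python
def q_statistic(values, strata):
    """GeoDetector q = 1 - SSW/SST, where SSW is the within-stratum
    sum of squares and SST the total sum of squares. q lies in [0, 1];
    larger q means the factor explains more spatial variance."""
    mean = sum(values) / len(values)
    sst = sum((v - mean) ** 2 for v in values)
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    ssw = 0.0
    for vs in groups.values():
        m = sum(vs) / len(vs)
        ssw += sum((v - m) ** 2 for v in vs)
    return 1 - ssw / sst

# Hypothetical winter LST values stratified by land-cover class.
lst = [30.0, 31.0, 24.0, 25.0]
cover = ["urban", "urban", "green", "green"]
print(q_statistic(lst, cover))  # -> about 0.973
```

The full GeoDetector also tests interaction between pairs of factors by comparing q of the combined stratification against the individual q values.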
With the rapid development of digital information technology, images are increasingly used in various fields. To ensure the security of image data, prevent unauthorized tampering and leakage, maintain personal privacy, and protect intellectual property rights, this study proposes an innovative color image encryption algorithm. Initially, the Mersenne Twister algorithm is utilized to generate high-quality pseudo-random numbers, establishing a robust basis for subsequent operations. Subsequently, two distinct chaotic systems, the autonomous non-Hamiltonian chaotic system and the tent-logistic-cosine chaotic mapping, are employed to produce chaotic random sequences. These chaotic sequences are used to control the DNA encoding and decoding process, effectively scrambling the image pixels. Furthermore, the complexity of the encryption process is enhanced through improved Joseph block scrambling. Through experimental verification, research, and analysis, the average value of the information entropy test data reaches as high as 7.999. Additionally, the average value of the number of pixels change rate (NPCR) test data is 99.6101%, which closely approaches the ideal value of 99.6094%. This algorithm not only guarantees image quality but also substantially raises the difficulty of decryption.
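The NPCR metric reported above can be sketched as the percentage of pixel positions that differ between two cipher images; the 2×2 toy images below are illustrative only.

```python
def npcr(img1, img2):
    """Number of Pixels Change Rate between two equal-size images,
    given as lists of rows of pixel values, returned as a percentage.
    For ideal encryption of 8-bit images the expected value is ~99.6094%."""
    total = changed = 0
    for row1, row2 in zip(img1, img2):
        for p1, p2 in zip(row1, row2):
            total += 1
            changed += p1 != p2
    return 100.0 * changed / total

a = [[12, 200], [7, 55]]
b = [[12, 201], [9, 54]]   # 3 of 4 pixels differ
print(npcr(a, b))  # -> 75.0
```

In practice the two inputs are cipher images of plaintexts differing in a single pixel, so an NPCR near 99.61% indicates strong diffusion.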
Purpose: The dissemination of academic knowledge to nonacademic audiences partly relies on the transition of subsequent citing papers. This study aims to investigate the direct and indirect impact on technology and policy originating from transformative research, based on ego citation networks. Design/methodology/approach: Key Nobel Prize-winning publications (NPs) in the fields of gene engineering and astrophysics are regarded as a proxy for transformative research. In this contribution, we introduce a network-structural indicator of citing patents to measure the technological impact of a target article and use policy citations as a preliminary tool for policy impact. Findings: The results show that the impact on technology and policy of NPs is higher than that of their subsequent citation generations in gene engineering but not in astrophysics. Research limitations: The selection of Nobel Prizes is not balanced, and the database used in this study, Dimensions, suffers from incompleteness and inaccuracy of citation links. Practical implications: Our findings provide useful clues to better understand the characteristics of transformative research in terms of technological and policy impact. Originality/value: This study proposes a new framework to explore the direct and indirect impact on technology and policy originating from transformative research.
Recognizing human activity (HAR) from smartphone sensor data plays an important role in the field of health to prevent chronic diseases. Daily and weekly physical activities are recorded on the smartphone and tell the user whether he is moving well or not. Typically, smartphones and their associated sensing devices operate in distributed and unstable environments. Therefore, collecting their data and extracting useful information is a significant challenge. In this context, the aim of this paper is twofold: the first is to analyze human behavior based on the recognition of physical activities. Using the results of physical activity detection and classification, the second part aims to develop a health recommendation system to notify smartphone users about their healthy physical behavior related to their physical activities. This system is based on the calculation of calories burned by each user during physical activities. In this way, conclusions can be drawn about a person's physical behavior by estimating the number of calories burned after evaluating data collected daily or even weekly following a series of physical workouts. To identify and classify human behavior, our methodology is based on artificial intelligence models, specifically deep learning techniques such as Long Short-Term Memory (LSTM), stacked LSTM, and bidirectional LSTM. Since human activity data contains both spatial and temporal information, we propose an architecture that extracts the two types of information simultaneously. While Convolutional Neural Networks (CNNs) have an architecture designed for spatial information, our idea is to combine CNN with LSTM to increase classification accuracy by extracting both spatial and temporal data. The results obtained achieved an accuracy of 96%. On the other side, the data learned by these algorithms is prone to error and uncertainty. To overcome this constraint and improve on this performance (96%), we propose to use fusion mechanisms, which combine deep learning classifiers to model non-accurate and ambiguous data and obtain synthetic information to aid in decision-making. The Voting and Dempster-Shafer (DS) approaches are employed. The results showed that fused classifiers based on DS theory outperformed the individual classifiers (96%), with the highest accuracy level of 98%. Also, the findings disclosed that participants engaging in physical activities are healthy, showcasing a disparity in the distribution of physical activities between men and women.
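A minimal sketch of the Dempster-Shafer combination used in the fusion stage: Dempster's rule over two mass functions whose focal sets are frozensets of activity labels. The two example mass functions stand in for two classifiers' outputs and are not values from the paper.

```python
def combine_ds(m1, m2):
    """Dempster's rule of combination: multiply the masses of every pair
    of focal sets, assign each product to the sets' intersection, and
    normalize by 1 - K, where K is the total conflicting mass."""
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {s: v / (1 - conflict) for s, v in combined.items()}

walk, run = frozenset({"walk"}), frozenset({"run"})
either = walk | run
m1 = {walk: 0.7, either: 0.3}   # classifier 1: fairly sure it is walking
m2 = {walk: 0.6, either: 0.4}   # classifier 2 agrees, less confidently
fused = combine_ds(m1, m2)
print(round(fused[walk], 2))  # -> 0.88
```

Note how agreement between the two sources concentrates mass on the shared hypothesis, which is the effect the fusion stage exploits.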
Embracing software product lines (SPLs) is pivotal in the dynamic landscape of contemporary software development. However, the flexibility and global distribution inherent in modern systems pose significant challenges to managing SPL variability, underscoring the critical importance of robust cybersecurity measures. This paper advocates leveraging machine learning (ML) to address variability management issues and fortify the security of SPLs. In the context of the broader special issue theme on innovative cybersecurity approaches, our proposed ML-based framework offers an interdisciplinary perspective, blending insights from computing, social sciences, and business. Specifically, it employs ML for demand analysis, dynamic feature extraction, and enhanced feature selection in distributed settings, contributing to cyber-resilient ecosystems. Our experiments demonstrate the framework's superiority, emphasizing its potential to boost productivity and security in SPLs. As digital threats evolve, this research catalyzes interdisciplinary collaborations, aligning with the special issue's goal of breaking down academic barriers to strengthen digital ecosystems against sophisticated attacks while upholding ethics, privacy, and human values.
This study explores the impact of hyperparameter optimization on machine learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and Neural Networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms. Decision Trees and Random Forests exhibited stable performance throughout the evaluation. While enhancing accuracy, hyperparameter optimization also led to increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capabilities for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
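A hyperparameter sweep of the kind evaluated above can be sketched as an exhaustive grid search: enumerate every combination in the grid and keep the one with the best validation score. The grid and scoring function below are toy stand-ins for model training and cross-validated evaluation.

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Try every combination of hyperparameter values and return the
    best (params, score) pair according to score_fn (higher is better)."""
    names = list(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, combo))
        score = score_fn(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical SVM-style grid; the score function is a toy stand-in
# for cross-validated accuracy, peaking at C=1, gamma=0.1.
grid = {"C": [0.1, 1, 10], "gamma": [0.01, 0.1]}
score = lambda p: -abs(p["C"] - 1) - abs(p["gamma"] - 0.1)
print(grid_search(grid, score))  # -> ({'C': 1, 'gamma': 0.1}, 0.0)
```

The increased execution time noted in the study follows directly from this structure: the grid grows multiplicatively with each added hyperparameter.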
Strong impacts do serious harm in the military industries, so it is necessary to choose reasonable cushioning materials and design effective buffers to protect equipment from impact. Based on the capillary property of entangled porous metallic wire materials (EPMWM), this paper designed a composite buffer that uses EPMWM and viscous fluid as cushioning materials under the low-speed impact of the recoil force device of weapon equipment (such as artillery, mortars, etc.). Combined with the capillary model, porosity, hydraulic diameter, maximum pore diameter, and pore distribution were used to characterize the pore structure of EPMWM. A calculation model of the damping force of the composite buffer was established, and a low-speed impact test of the composite buffer was conducted. The parameters of the buffer under low-speed impact were identified according to the model, and a nonlinear model of the damping force was obtained. The test results show that the composite buffer with EPMWM and viscous fluid can effectively absorb the impact energy of the recoil movement, providing a new method for the buffer design of weapon equipment (such as artillery, mortars, etc.).
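Of the pore-structure characteristics listed, porosity is the simplest to compute: one minus the ratio of the sample's apparent density to the density of the wire material itself. A sketch with hypothetical stainless-steel numbers (not measurements from the paper):

```python
def porosity(sample_mass_g, sample_volume_cm3, wire_density_g_cm3):
    """Porosity of an EPMWM sample: 1 - (apparent density / wire
    material density). Higher porosity means more open pore volume
    available for the viscous fluid to occupy."""
    apparent_density = sample_mass_g / sample_volume_cm3
    return 1.0 - apparent_density / wire_density_g_cm3

# Hypothetical stainless-steel sample: 31.6 g in a 20 cm^3 envelope,
# wire density 7.9 g/cm^3.
print(round(porosity(31.6, 20.0, 7.9), 2))  # -> 0.8
```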
Software testing is a critical phase due to misconceptions about ambiguities in the requirements during specification, which affect the testing process. Therefore, it is difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing. Due to these challenges, fault detection capability decreases, and there arises a need to improve the testing process based on changes in the requirements specification. In this research, we developed a model to resolve testing challenges through requirement prioritization and prediction in an agile-based environment. The research objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis. We then compute the similarity of requirements through case-based reasoning, which predicts the requirements for reuse, restricted to error-prone requirements. Afterward, the apriori algorithm maps out requirement frequency to select relevant test cases, based on frequently reused or non-reused test cases, to increase the fault detection rate. Furthermore, the proposed model was evaluated by conducting experiments. The results showed that requirement redundancy and irrelevancy improved due to semantic analysis, which correctly predicted the requirements, increasing the fault detection rate and resulting in high user satisfaction. The predicted requirements are mapped into test cases, increasing the fault detection rate after changes to achieve higher user satisfaction. The model improves the redundancy and irrelevancy of requirements by more than 90% compared to other clustering methods and the analytical hierarchical process, achieving an 80% fault detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers in the modern era. In the future, we will provide a working prototype of this model as a proof of concept.
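The apriori frequency step can be reduced to its single-item core for illustration: count how often each test case was reused across past change cycles and keep those meeting a minimum support threshold. The history and threshold below are made up; the paper's full apriori pass would also mine frequent itemsets, not just single items.

```python
from collections import Counter

def select_test_cases(usage_history, min_support):
    """Keep test cases whose reuse frequency across past change cycles
    meets the minimum support threshold (apriori-style frequency pruning)."""
    counts = Counter(tc for cycle in usage_history for tc in cycle)
    return {tc for tc, c in counts.items() if c >= min_support}

# Three past change cycles and the test cases executed in each.
history = [{"TC1", "TC2"}, {"TC1", "TC3"}, {"TC1", "TC2"}]
print(sorted(select_test_cases(history, 2)))  # -> ['TC1', 'TC2']
```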
Colletotrichum kahawae (Coffee Berry Disease, CBD) spreads through spores that can be carried by wind, rain, and insects affecting coffee plantations, and causes 80% yield losses and poor-quality coffee beans. The deadly disease is hard to control because wind, rain, and insects carry its spores. Colombian researchers utilized a deep learning system to identify CBD in coffee cherries at three growth stages and classify photographs of infected and uninfected cherries with 93% accuracy using a random forest method. However, if the dataset is too small and noisy, an algorithm may not learn the data patterns and generate accurate predictions. To overcome this challenge, early detection of Colletotrichum kahawae disease in coffee cherries requires automated processes, prompt recognition, and accurate classification. The proposed methodology selects CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm is used to determine which features were essential for making predictions with the proposed model. Some of these characteristics were the cherry's colour, whether it had spots or other damage, and how big the lesions were. Visualization is important for classification, showing how the colour of the berry correlates with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed. This involves partitioning the dataset into ten subsets, training the model on nine subsets at a time, and evaluating its performance on the remaining one. In comparison to other contemporary methodologies, the proposed model achieved an accuracy of 98.56%.
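The 10-fold cross-validation described above partitions the sample indices once and rotates the held-out fold. A stdlib sketch of the index bookkeeping (fold count reduced only for the demo; the paper uses 10 folds):

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k folds; each round uses one fold for
    testing and the remaining k-1 folds for training."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((sorted(train), sorted(test)))
    return splits

# 6 samples, 3 folds for brevity.
for train, test in k_fold_indices(6, 3):
    print(test, "held out, trained on", train)
```

Averaging the per-fold scores gives the cross-validated accuracy that the paper reports.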
Breast cancer is one of the major health issues, with high mortality rates and a substantial impact on patients and healthcare systems worldwide. Various Computer-Aided Diagnosis (CAD) tools, based on breast thermograms, have been developed for early detection of this disease. However, accurately segmenting the Region of Interest (ROI) from thermograms remains challenging. This paper presents an approach that leverages image acquisition protocol parameters to identify the lateral breast region and estimate its bottom boundary using a second-degree polynomial. The proposed method demonstrated high efficacy, achieving an impressive Jaccard coefficient of 86% and a Dice index of 92% when evaluated against manually created ground truths. Textural features were extracted from each view's ROI, with significant features selected via Mutual Information for training Multi-Layer Perceptron (MLP) and K-Nearest Neighbors (KNN) classifiers. Our findings revealed that the MLP classifier outperformed the KNN, achieving an accuracy of 86%, a specificity of 100%, and an Area Under the Curve (AUC) of 0.85. The consistency of the method across both sides of the breast suggests its viability as an auto-segmentation tool. Furthermore, the classification results suggest that lateral views of breast thermograms harbor valuable features that can significantly aid in the early detection of breast cancer.
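Estimating the bottom boundary with a second-degree polynomial amounts to a least-squares fit of y = a + bx + cx². A stdlib sketch that solves the 3×3 normal equations by Gaussian elimination; the boundary points below are synthetic, not thermogram data.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 by solving the normal
    equations with Gaussian elimination (partial pivoting)."""
    sx = [sum(x ** p for x in xs) for p in range(5)]               # sum x^0..x^4
    sxy = [sum(y * x ** p for x, y in zip(xs, ys)) for p in range(3)]
    # Augmented normal-equation matrix [A | b].
    m = [[sx[0], sx[1], sx[2], sxy[0]],
         [sx[1], sx[2], sx[3], sxy[1]],
         [sx[2], sx[3], sx[4], sxy[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * u for v, u in zip(m[r], m[col])]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):   # back substitution
        coeffs[r] = (m[r][3] - sum(m[r][c] * coeffs[c] for c in range(r + 1, 3))) / m[r][r]
    return coeffs  # [a, b, c]

# Points sampled from y = 1 + 2x + 3x^2 recover its coefficients.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 6.0, 17.0, 34.0]
print([round(c, 6) for c in fit_quadratic(xs, ys)])  # -> [1.0, 2.0, 3.0]
```

With boundary pixels extracted per column of the thermogram, evaluating the fitted polynomial gives the estimated bottom edge of the lateral breast region.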
Many deep learning-based registration methods rely on a single-stream encoder-decoder network for computing deformation fields between 3D volumes. However, these methods often lack constraint information and overlook semantic consistency, limiting their performance. To address these issues, we present a novel approach for medical image registration called the Dual-VoxelMorph, featuring a dual-channel cross-constraint network. This innovative network utilizes both intensity and segmentation images, which share identical semantic information and feature representations. Two encoder-decoder structures calculate deformation fields for intensity and segmentation images, as generated by the dual-channel cross-constraint network. This design facilitates bidirectional communication between grayscale and segmentation information, enabling the model to better learn the corresponding grayscale and segmentation details of the same anatomical structures. To ensure semantic and directional consistency, we introduce constraints and apply the cosine similarity function to enhance semantic consistency. Evaluation on four public datasets demonstrates superior performance compared to the baseline method, achieving Dice scores of 79.9%, 64.5%, 69.9%, and 63.5% for OASIS-1, OASIS-3, LPBA40, and ADNI, respectively.
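The cosine similarity used for the semantic-consistency constraint can be sketched directly; the flattened vectors below stand in for the network's feature maps.

```python
import math

def cosine_similarity(u, v):
    """cos(theta) = (u . v) / (|u| |v|); 1 means the feature vectors
    point the same way, 0 means they are orthogonal."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # -> 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # -> 0.0
```

In training, a term such as 1 − cosine_similarity between the two channels' features would be minimized to push their semantics together.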
Genetic algorithms (GAs) are very good metaheuristic algorithms suitable for solving NP-hard combinatorial optimization problems. A simple GA begins with a set of solutions represented by a population of chromosomes and then uses the idea of survival of the fittest in the selection process to select some fitter chromosomes. It uses a crossover operator to create better offspring chromosomes and thus converges the population. It also uses a mutation operator to explore areas left unexplored by the crossover operator, and thus diversifies the GA search space. A combination of crossover and mutation operators makes the GA search strong enough to reach the optimal solution. However, an appropriate selection and combination of crossover and mutation operators can lead to a very good GA for solving an optimization problem. In this paper, we study the benchmark traveling salesman problem (TSP). We developed several genetic algorithms using seven crossover operators and six mutation operators for the TSP and then compared them on some benchmark TSPLIB instances. The experimental studies show the effectiveness of the combination of a comprehensive sequential constructive crossover operator and the insertion mutation operator for the problem. The GA using the comprehensive sequential constructive crossover with insertion mutation found average solutions whose average percentage excess over the best-known solutions is between 0.22 and 14.94 for our experimented problem instances.
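The insertion mutation highlighted above is simple to state: remove one randomly chosen city from the tour and reinsert it at another random position. A sketch, seeded for reproducibility (a full GA would apply this with some small probability per offspring):

```python
import random

def insertion_mutation(tour, rng=random):
    """Remove a random city and reinsert it at a random position,
    leaving all other cities in their original relative order."""
    mutant = tour[:]
    i = rng.randrange(len(mutant))
    city = mutant.pop(i)
    j = rng.randrange(len(mutant) + 1)
    mutant.insert(j, city)
    return mutant

random.seed(0)
tour = [0, 1, 2, 3, 4]
mutant = insertion_mutation(tour)
print(mutant)
print(sorted(mutant) == tour)  # still a valid permutation -> True
```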
BACKGROUND: Liver cancer is one of the deadliest malignant tumors worldwide. Immunotherapy has provided hope to patients with advanced liver cancer, but due to individual differences only a small fraction of patients benefit from this treatment. Identifying immune-related gene signatures in liver cancer patients not only aids physicians in cancer diagnosis but also offers personalized treatment strategies, thereby improving patient survival rates. Although several methods have been developed to predict prognosis and immunotherapeutic efficacy in patients with liver cancer, the impact of cell-cell interactions in the tumor microenvironment has not been adequately considered.
AIM: To identify immune-related gene signals for predicting liver cancer prognosis and immunotherapy efficacy.
METHODS: Cell grouping and cell-cell communication analysis were performed on single-cell RNA-sequencing data to identify cell groups highly active in immune-related pathways. Highly active immune cells were identified by intersecting the highly active cell groups with B cells and T cells. Genes significantly differentially expressed between highly active immune cells and other cells were then selected as features, and a least absolute shrinkage and selection operator (LASSO) regression model was constructed to screen for diagnostic features. Fourteen genes selected more than 5 times across 10 LASSO regression experiments were included in a multivariable Cox regression model. Finally, 3 genes significantly associated with survival (stathmin 1, cofilin 1, and C-C chemokine ligand 5) were identified and used to construct an immune-related gene signature.
RESULTS: The immune-related gene signature composed of stathmin 1, cofilin 1, and C-C chemokine ligand 5 was identified through cell-cell communication analysis. Its effectiveness was validated by predicted immunotherapy response, tumor mutation burden analysis, immune cell infiltration analysis, survival analysis, and expression analysis.
CONCLUSION: The findings suggest that the identified gene signature may contribute to a deeper understanding of the activity patterns of immune cells in the liver tumor microenvironment, providing insights for personalized treatment strategies.
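The stability-selection step described in the methods (keep genes chosen more than 5 times across 10 LASSO runs) reduces to counting how often each feature survives repeated selection. A minimal sketch of that counting step, with the LASSO fits themselves abstracted away as lists of selected feature names:

```python
from collections import Counter

def stable_features(runs, min_times=5):
    """Keep features selected more than `min_times` times across repeated
    selection runs (each run is an iterable of selected feature names)."""
    counts = Counter(f for selected in runs for f in set(selected))
    return sorted(f for f, c in counts.items() if c > min_times)
```

The surviving features would then be passed to a multivariable Cox regression, as in the paper.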
Author Profiling (AP) is a subsection of digital forensics that focuses on detecting an author's personal information, such as age, gender, occupation, and education, from various linguistic features, e.g., stylistic, semantic, and syntactic. AP matters in various fields, including forensics, security, medicine, and marketing. Previous studies have covered many languages, e.g., English, Arabic, and French; however, research on Roman Urdu is not up to the mark. Hence, this study focuses on detecting an author's age and gender from Roman Urdu text messages. The dataset used in this study is Fire'18-MaponSMS. This study proposes an ensemble model based on AdaBoostM1 and Random Forest (ABMRF) for AP using multiple linguistic features: stylistic, character-based, word-based, and sentence-based. The proposed model is compared with several well-known models from the literature, including J48 Decision Tree (J48), Naïve Bayes (NB), K-Nearest Neighbor (KNN), Composite Hypercube on Random Projection (CHIRP), NB-Updatable, RF, and AdaBoostM1. Overall, the proposed ABMRF performs best, with an accuracy of 54.2857% for age prediction and 71.1429% for gender prediction on stylistic features. On word-based features, age and gender prediction reached 50.5714% and 60%, respectively. KNN and CHIRP show the weakest performance across all the linguistic features for both age and gender prediction.
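The stylistic, character-based, word-based, and sentence-based features mentioned above are typically simple counts and ratios computed from the raw text. A small illustrative extractor (not the paper's exact feature set) might look like:

```python
def stylistic_features(text):
    """A few character-, word-, and sentence-level features of the kind
    used for author profiling. Illustrative only."""
    words = text.split()
    # Approximate sentence count from terminal punctuation marks.
    sentence_marks = text.count(".") + text.count("!") + text.count("?")
    return {
        "chars": len(text),
        "words": len(words),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "sentences": max(sentence_marks, 1),
    }
```

Feature vectors of this kind would then be fed to the ensemble classifier.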
Abstract: COVID-19 pandemic restrictions limited all social activities to curtail the spread of the virus. Among the sectors affected foremost were schools, colleges, and universities: the education systems of entire nations shifted to online education during this time. Many shortcomings of Learning Management Systems (LMSs) in supporting online education were detected, spawning research into Artificial Intelligence (AI) based tools that the research community is developing to improve the effectiveness of LMSs. This paper presents a detailed survey of enhancements to LMSs, led by key advances in AI, that improve the real-time and non-real-time user experience. The proposed AI-based enhancements start at the Application and Presentation layers, in the form of flipped-classroom models for an efficient learning environment and appropriately designed UI/UX for efficient use of LMS utilities and resources, including AI-based chatbots. Session-layer enhancements are also required, such as AI-based online proctoring and user authentication using biometrics. These extend to the Transport layer, to support real-time, rate-adaptive encrypted video transmission for user security/privacy and the satisfactory working of AI algorithms. Support is also needed at the Network layer for IP-based geolocation, Virtual Private Network (VPN) features, and Software-Defined Networking (SDN) for optimum Quality of Service (QoS). Finally, the non-real-time user experience is enhanced by other AI-based features such as plagiarism-detection algorithms and data analytics.
Abstract: Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we target e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, and laboratory workflows. Federated Machine Learning (FML) is a new and advanced technology that helps maintain the privacy of Personal Health Records (PHR) while handling large amounts of medical data effectively. In this context, XAI along with FML increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy achieved with an epoch count of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing existing gaps and future work in e-healthcare systems.
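The federated averaging step the experiments rely on (McMahan et al.'s FedAvg) aggregates client model weights, weighting each client by its local dataset size. A minimal sketch over flat weight vectors:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: weighted mean of client weight vectors,
    with each client weighted by its local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

In a real FL round, each client trains locally for a few epochs (e.g. 5, batch size 16) and only these aggregated weights, never the raw PHR data, leave the client.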
Abstract: Electric vehicles use electric motors, which turn electrical energy into mechanical energy. Because electric motors are used throughout industry, they are a well-established, mature technology with ideal power and torque curves for vehicular operation. Conventional vehicles use oil and gas as fuel or energy storage; although economically significant, their continuous use threatens the world's total oil and gas reserves. They also emit carbon dioxide and toxic compounds through the tailpipe, contributing to the greenhouse effect and seriously harming the environment. As an alternative, the electric car is a green, decarbonizing technology with zero tailpipe greenhouse-gas emissions, so it can eliminate tailpipe greenhouse-gas emissions while easing the drain on the world's remaining non-renewable energy reserves. Pure electric vehicles (PEVs) can be applied in all spheres, and they are particularly well suited to downhole operations, where they are valued for low noise and low pollution. In this study, the basic structure of a pure electric command vehicle is studied; the main components of the command vehicle's power system, namely the drive motor and the power battery, are selected and analyzed; and their main parameters are designed and calculated. The checking calculations show that the power and transmission system developed in this paper meets the design requirements and that the design scheme is feasible and reasonable.
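Sizing the drive motor, as described above, usually starts from the steady-state tractive power balance: rolling resistance plus aerodynamic drag plus grade resistance, divided by drivetrain efficiency. A sketch with standard physics and illustrative parameter values (the paper's actual vehicle parameters are not given here):

```python
def required_motor_power(mass_kg, speed_mps, crr=0.012, cda=0.6, rho=1.2,
                         grade=0.0, g=9.81, drivetrain_eff=0.9):
    """Steady-state motor power (W) from the tractive force balance.
    crr: rolling-resistance coefficient; cda: drag area Cd*A (m^2);
    rho: air density (kg/m^3); grade: road slope as a fraction.
    All default values are illustrative, not from the paper."""
    f_roll = crr * mass_kg * g                     # rolling resistance
    f_aero = 0.5 * rho * cda * speed_mps ** 2      # aerodynamic drag
    f_grade = mass_kg * g * grade                  # grade resistance
    return (f_roll + f_aero + f_grade) * speed_mps / drivetrain_eff
```

The motor's rated power is then chosen with margin above the worst-case operating point (top speed or maximum grade), and the battery is sized from this power and the required range.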
Funding: Funded by Researchers Supporting Project Number (RSPD2024R 553), King Saud University, Riyadh, Saudi Arabia.
Abstract: In this work, we introduce modifications to the Anam-Net deep neural network (DNN) model for segmenting the optic cup (OC) and optic disc (OD) in retinal fundus images to estimate the cup-to-disc ratio (CDR), a reliable measure for the early diagnosis of glaucoma. Our lightweight DNN model is based on Anam-Net, incorporating an anamorphic depth-embedding block. To reduce computational complexity, we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens. This modification significantly reduces the number of trainable parameters, making the model lightweight and suitable for resource-constrained applications. We evaluate the model using two publicly available retinal image databases, RIM-ONE and Drishti-GS, which contain 159 and 101 retinal images, respectively. The results demonstrate promising OC segmentation performance across most standard evaluation metrics while achieving analogous results for OD segmentation. For OD segmentation on RIM-ONE, we obtain an F1-score (F1), Jaccard coefficient (JC), and overlapping error (OE) of 0.950, 0.9219, and 0.0781, respectively. For OC segmentation on the same databases, we achieve 0.8481 (F1), 0.7428 (JC), and 0.2572 (OE). Based on these experimental results and the significantly lower number of trainable parameters, we conclude that the developed model is highly suitable for early glaucoma diagnosis by accurately estimating the CDR.
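Once the cup and disc masks are segmented, the CDR itself is a simple ratio of extents. A minimal sketch using the vertical extents of binary masks (row-major lists of 0/1 values); the clinical convention of the vertical CDR is assumed here:

```python
def cup_to_disc_ratio(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from two binary masks given as lists of
    rows: ratio of the cup's vertical extent to the disc's."""
    def vertical_extent(mask):
        rows = [r for r, row in enumerate(mask) if any(row)]
        return (rows[-1] - rows[0] + 1) if rows else 0
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else 0.0
```

A CDR above roughly 0.5-0.6 is commonly treated as a glaucoma risk indicator, which is why accurate OC/OD masks matter.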
Funding: This work was funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2023-02-02178.
Abstract: The dominance of Android in the global mobile market and the open development characteristics of this platform have resulted in a significant increase in malware. These malicious applications have become a serious concern for the security of Android systems. To address this problem, researchers have proposed several machine-learning models to detect and classify Android malware based on features extracted from Android samples. However, most existing studies focus on the classification task and overlook the feature-selection process, which is crucial for reducing training time and maintaining or improving classification results. The current paper proposes a new Android malware detection and classification approach that identifies the most important features to improve classification performance and reduce training time. The approach consists of two main steps. First, a feature-selection method based on the attention mechanism is used to select the most important features. Then, an optimized Light Gradient Boosting Machine (LightGBM) classifier is applied to classify the Android samples and identify the malware. The proposed feature-selection method integrates an attention layer into a multilayer perceptron neural network; the role of the attention layer is to compute a weighted value for each feature based on its importance to the classification process. Experimental evaluation shows that combining the attention-based technique with an optimized classification algorithm improves accuracy from 98.64% to 98.71% while reducing training time from 80 to 28 s.
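The core of attention-based feature selection is turning learned importance scores into normalized weights and keeping the top-ranked features. A minimal sketch of that ranking step (the scores are assumed to come from a trained attention layer; the layer itself is not shown):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_features(attention_scores, k):
    """Rank features by softmax attention weight and keep the k largest."""
    weights = softmax(attention_scores)
    order = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return order[:k], weights
```

The retained feature indices would then define the reduced input passed to the LightGBM classifier, which is where the reported training-time savings come from.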
Abstract: The Internet of Things (IoT) is a growing technology that allows devices to share data across wireless networks. IoT systems are particularly vulnerable to cyberattacks due to their openness. The proposed work implements a new security framework for detecting the most specific and harmful intrusions in IoT networks. In this framework, a Covariance Linear Learning Embedding Selection (CL2ES) methodology is first used to extract the features most highly associated with IoT intrusions. Then, a Kernel Distributed Bayes Classifier (KDBC) is created to forecast attacks precisely from probability-distribution values. In addition, a unique Mongolian Gazellas Optimization (MGO) algorithm is used to optimize the weight values for training the classifier. The effectiveness of the proposed CL2ES-KDBC framework has been assessed on several IoT cyber-attack datasets, and the results are compared with current classification methods regarding accuracy (97%), precision (96.5%), and other factors. A computational analysis of the CL2ES-KDBC system on IoT intrusion datasets provides valuable insight into its performance, efficiency, and suitability for securing IoT networks.
Funding: Under the auspices of the Social Science and Humanity Young Fund of the Ministry of Education of China (No. 21YJCZH100), the Scientific Research Project for Outstanding Young Scholars of Fujian Agriculture and Forestry University (No. XJQ201920), the Science and Technology Innovation Special Fund Project of Fujian Agriculture and Forestry University (No. CXZX2021032), and the Forestry Peak Discipline Construction Project of Fujian Agriculture and Forestry University (No. 72202200205).
Abstract: Under the influence of anthropogenic activity and climate change, the problems caused by the urban heat island (UHI) have become increasingly prominent. To promote sustainable urban development and improve the quality of human settlements, it is important to explore the evolution of the urban thermal environment and analyze its driving forces. Taking the Landsat series images as the basic data source, the winter land surface temperature (LST) of the rapidly urbanizing area of Fuzhou City, China was quantitatively retrieved from 2001 to 2021. Combining the standard deviational ellipse model, profile analysis, and the GeoDetector model, the spatio-temporal evolution characteristics and influencing factors of the winter urban thermal environment were systematically analyzed. The results showed that winter LST increased in the study area during 2001-2021, and the winter LST of the central urban regions was significantly higher than that of the suburbs. There was a strong UHI effect from 2001 to 2021, expanding spatially from the central urban regions toward the suburbs and coastal areas. The LST of green lands and wetlands is significantly lower than that of croplands, artificial surfaces, and unvegetated lands. Vegetation and water bodies had a significant mitigating effect on the UHI, especially at the micro-scale. The winter UHI was jointly driven by underlying-surface and socio-economic factors in a nonlinear or two-factor interactive enhancement mode, with socio-economic factors playing the leading role. This research can provide data support and decision-making references for rationally planning urban layouts and promoting sustainable urban development.
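The GeoDetector factor detector used above quantifies how much of the LST variance a categorical driving factor explains via the q-statistic, q = 1 - (sum over strata of N_h * sigma_h^2) / (N * sigma^2). A minimal stdlib sketch:

```python
from statistics import pvariance

def geodetector_q(values, strata):
    """GeoDetector factor-detector q-statistic: the share of the variance
    of `values` (e.g. LST) explained by the stratification `strata`
    (e.g. a land-cover class per pixel). q=1 means full explanation."""
    groups = {}
    for v, s in zip(values, strata):
        groups.setdefault(s, []).append(v)
    n, total_var = len(values), pvariance(values)
    within = sum(len(g) * pvariance(g) for g in groups.values())
    return 1 - within / (n * total_var)
```

Interaction detection in GeoDetector then compares q of two factors combined against their individual q values, which is how "two-factor interactive enhancement" is identified.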
Funding: Supported by the Open Fund of the Advanced Cryptography and System Security Key Laboratory of Sichuan Province (Grant No. SKLACSS-202208), the Natural Science Foundation of Chongqing (Grant No. CSTB2023NSCQLZX0139), and the National Natural Science Foundation of China (Grant No. 61772295).
Abstract: With the rapid development of digital information technology, images are increasingly used in various fields. To ensure the security of image data, prevent unauthorized tampering and leakage, maintain personal privacy, and protect intellectual property rights, this study proposes an innovative color-image encryption algorithm. Initially, the Mersenne Twister algorithm is utilized to generate high-quality pseudo-random numbers, establishing a robust basis for subsequent operations. Two distinct chaotic systems, an autonomous non-Hamiltonian chaotic system and a tent-logistic-cosine chaotic map, are then employed to produce chaotic random sequences. These chaotic sequences control the DNA encoding and decoding process, effectively scrambling the image pixels. The complexity of the encryption process is further enhanced through improved Joseph block scrambling. Through thorough experimental verification and analysis, the average information entropy of the test data reaches 7.999, and the average number of pixels change rate (NPCR) is 99.6101%, closely approaching the ideal value of 99.6094%. This algorithm not only guarantees image quality but also substantially raises the difficulty of decryption.
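The NPCR metric reported above is a standard diffusion test: the percentage of pixel positions whose values differ between two cipher images (ideally ~99.6094% for random 8-bit images). A minimal sketch over flat pixel lists:

```python
def npcr(img1, img2):
    """Number of Pixels Change Rate (percent) between two equally sized
    images given as flat lists of pixel values."""
    changed = sum(1 for a, b in zip(img1, img2) if a != b)
    return 100.0 * changed / len(img1)
```

In practice the two inputs are cipher images of two plain images differing in a single pixel; a value near 99.6094% indicates strong sensitivity to plaintext changes.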
Funding: Supported by the National Natural Science Foundation of China (Grant No. 71974167).
Abstract: Purpose: The dissemination of academic knowledge to nonacademic audiences partly relies on the transition of subsequent citing papers. This study investigates the direct and indirect impact on technology and policy originating from transformative research, based on ego citation networks. Design/methodology/approach: Key Nobel Prize-winning publications (NPs) in the fields of gene engineering and astrophysics are used as a proxy for transformative research. We introduce a network-structural indicator of citing patents to measure the technological impact of a target article, and use policy citations as a preliminary tool for measuring policy impact. Findings: The results show that the technological and policy impact of NPs is higher than that of their subsequent citation generations in gene engineering, but not in astrophysics. Research limitations: The selection of Nobel Prizes is not balanced, and the database used in this study, Dimensions, suffers from incomplete and inaccurate citation links. Practical implications: Our findings provide useful clues for better understanding the technological and policy impact of transformative research. Originality/value: This study proposes a new framework to explore the direct and indirect impact on technology and policy originating from transformative research.
Funding: This research work was funded by the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia, through Project Number 223202.
Abstract: Recognizing human activity (HAR) from smartphone sensor data plays an important role in health, helping to prevent chronic disease. Daily and weekly physical activities are recorded on the smartphone, telling users whether they are moving enough. Typically, smartphones and their associated sensing devices operate in distributed and unstable environments, so collecting their data and extracting useful information is a significant challenge. In this context, the aim of this paper is twofold. The first is to analyze human behavior based on the recognition of physical activities. Building on the activity detection and classification results, the second part develops a health recommendation system that notifies smartphone users about healthy physical behavior related to their activities. The system is based on calculating the calories each user burns during physical activity: by estimating calories burned from data collected daily or even weekly over a series of workouts, conclusions can be drawn about a person's physical behavior. To identify and classify human behavior, our methodology relies on artificial-intelligence models, specifically deep-learning techniques such as Long Short-Term Memory (LSTM), stacked LSTM, and bidirectional LSTM. Since human-activity data contains both spatial and temporal information, we propose an architecture that extracts both types of information simultaneously: while Convolutional Neural Networks (CNNs) are designed for spatial information, our idea is to combine CNN with LSTM to increase classification accuracy by extracting both spatial and temporal features. The results achieved an accuracy of 96%. On the other hand, the data learned by these algorithms is prone to error and uncertainty. To overcome this constraint and improve on the 96% performance, we propose fusion mechanisms, which combine deep-learning classifiers to model inaccurate and ambiguous data into synthetic information that aids decision-making. The Voting and Dempster-Shafer (DS) approaches are employed. The results show that fused classifiers based on DS theory outperform the individual classifiers (96%), reaching the highest accuracy level of 98%. The findings also disclose that participants engaging in physical activities are healthy, and reveal a disparity in the distribution of physical activities between men and women.
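The Dempster-Shafer fusion mentioned above combines the belief masses that each classifier assigns to the candidate activity labels, normalizing away the conflicting mass. A minimal sketch of Dempster's rule of combination over singleton hypotheses:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over
    singleton hypotheses (dicts: hypothesis -> mass). The conflicting
    mass is normalized away."""
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] = combined.get(h1, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    k = 1.0 - conflict  # total non-conflicting mass
    return {h: v / k for h, v in combined.items()}
```

Fusing, say, an LSTM's and a CNN-LSTM's per-class masses this way yields a combined mass function whose argmax is the fused decision; the Voting baseline instead just takes the majority label.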
Funding: Supported via funding from the Ministry of Defense, Government of Pakistan, under Project Number AHQ/95013/6/4/8/NASTP (ACP), titled: Development of ICT and Artificial Intelligence Based Precision Agriculture Systems Utilizing Dual-Use Aerospace Technologies - GREENAI.
Abstract: Embracing software product lines (SPLs) is pivotal in the dynamic landscape of contemporary software development. However, the flexibility and global distribution inherent in modern systems pose significant challenges to managing SPL variability, underscoring the critical importance of robust cybersecurity measures. This paper advocates leveraging machine learning (ML) to address variability-management issues and fortify the security of SPLs. In the context of the broader special-issue theme of innovative cybersecurity approaches, our proposed ML-based framework offers an interdisciplinary perspective, blending insights from computing, the social sciences, and business. Specifically, it employs ML for demand analysis, dynamic feature extraction, and enhanced feature selection in distributed settings, contributing to cyber-resilient ecosystems. Our experiments demonstrate the framework's superiority, emphasizing its potential to boost productivity and security in SPLs. As digital threats evolve, this research catalyzes interdisciplinary collaboration, aligning with the special issue's goal of breaking down academic barriers to strengthen digital ecosystems against sophisticated attacks while upholding ethics, privacy, and human values.
Funding: Supported and funded by the Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU), Grant Number IMSIU-RG23151.
Abstract: This study explores the impact of hyperparameter optimization on machine-learning models for predicting cardiovascular disease using data from an IoST (Internet of Sensing Things) device. Ten distinct machine-learning approaches were implemented and systematically evaluated before and after hyperparameter tuning. Significant improvements were observed across various models, with SVM and neural networks consistently showing enhanced performance metrics such as F1-score, recall, and precision. The study underscores the critical role of tailored hyperparameter tuning in optimizing these models, revealing diverse outcomes among algorithms: decision trees and random forests exhibited stable performance throughout the evaluation, and while hyperparameter optimization enhanced accuracy, it also increased execution time. Visual representations and comprehensive results support the findings, confirming the hypothesis that optimizing parameters can effectively enhance predictive capability for cardiovascular disease. This research contributes to advancing the understanding and application of machine learning in healthcare, particularly in improving predictive accuracy for cardiovascular disease management and intervention strategies.
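The tuning procedure evaluated here is, at its simplest, an exhaustive grid search: train and score the model for every hyperparameter combination and keep the best. A minimal framework-agnostic sketch (the `train_eval` callable stands in for any model's train-and-validate step):

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustive hyperparameter search. `grid` maps parameter names to
    candidate value lists; `train_eval(params) -> score` trains and scores
    a model. Returns the best parameter dict and its score."""
    best_params, best_score = None, float("-inf")
    for combo in product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        score = train_eval(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

The accuracy-versus-runtime trade-off the abstract reports follows directly: the loop's cost is the product of all grid sizes times one training run.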
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51805086).
Abstract: Strong impacts do serious harm to military equipment, so it is necessary to choose a reasonable cushioning material and design effective buffers to protect equipment from impact. Based on the capillary property of entangled porous metallic wire material (EPMWM), this paper designs a composite buffer that uses EPMWM and a viscous fluid as cushioning materials for the low-speed impact of the recoil device of weapon equipment (such as artillery and mortars). Combined with a capillary model, the porosity, hydraulic diameter, maximum pore diameter, and pore distribution were used to characterize the pore structure of the EPMWM, and a calculation model of the composite buffer's damping force was established. A low-speed impact test of the composite buffer was conducted, the buffer parameters under low-speed impact were identified from the model, and a nonlinear damping-force model was obtained. The test results show that the composite buffer with EPMWM and viscous fluid can effectively absorb the impact energy of the recoil movement, providing a new approach to buffer design for weapon equipment (such as artillery and mortars).
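The paper's identified damping-force model is not reproduced here, but nonlinear buffer models of this kind are commonly written as an elastic term plus a velocity-power viscous term. A purely illustrative sketch under that assumption, with hypothetical parameters k, c, and n that would be identified from impact-test data:

```python
def buffer_force(x, v, k=1.0e5, c=2.0e3, n=1.5):
    """Illustrative nonlinear buffer force: elastic term (stiffness k,
    displacement x) plus a viscous term proportional to |v|^n with the
    sign of the velocity v. All parameters are hypothetical, not the
    paper's identified values."""
    sign = 1.0 if v >= 0 else -1.0
    return k * x + c * (abs(v) ** n) * sign
```

Parameter identification then amounts to fitting k, c, and n so the model's force-time curve matches the measured low-speed impact response.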
Abstract: Software testing is a critical phase, and misconceptions about ambiguities in the requirements during specification affect the testing process, making it difficult to identify all faults in software. As requirements change continuously, irrelevancy and redundancy increase during testing; consequently, fault-detection capability decreases, and the testing process, which depends on changes in the requirements specification, needs improvement. In this research, we developed a model that resolves testing challenges through requirement prioritization and prediction in an agile environment. The objective is to identify the most relevant and meaningful requirements through semantic analysis for correct change analysis, then compute requirement similarity through case-based reasoning, which predicts requirements for reuse and restricts attention to error-prone requirements. Afterward, the Apriori algorithm maps requirement frequency to select relevant test cases, based on frequently reused versus unused test cases, to increase the fault-detection rate. The proposed model was evaluated experimentally. The results showed that semantic analysis reduced requirement redundancy and irrelevancy and correctly predicted requirements, increasing the fault-detection rate and yielding high user satisfaction. The predicted requirements are mapped to test cases, increasing the fault-detection rate after changes. The model improves requirement redundancy and irrelevancy by more than 90% compared with other clustering methods and the analytic hierarchy process, achieving an 80% fault-detection rate at an earlier stage. Hence, it provides guidelines for practitioners and researchers; in the future, we will provide a working prototype of this model as a proof of concept.
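The Apriori-style frequency mapping described above boils down to counting how often requirements co-occur across change requests and keeping the pairs above a support threshold. A minimal sketch of that first counting pass (full Apriori would extend it to larger itemsets):

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count co-occurring requirement pairs across transactions (e.g.
    change requests) and keep those meeting the support threshold --
    the first pair-level pass of Apriori."""
    counts = Counter()
    for t in transactions:
        counts.update(combinations(sorted(set(t)), 2))
    return {pair: c for pair, c in counts.items() if c >= min_support}
```

Frequently co-occurring requirements then point at frequently reused test cases, which are prioritized to raise the fault-detection rate.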
Funding: Support from the Deanship for Research & Innovation, Ministry of Education in Saudi Arabia, under the auspices of Project Number IFP22UQU4281768DSR122.
Abstract: Colletotrichum kahawae (Coffee Berry Disease, CBD) spreads through spores carried by wind, rain, and insects, affecting coffee plantations and causing 80% yield losses and poor-quality coffee beans. The disease is hard to control precisely because its spores travel on wind, rain, and insects. Colombian researchers utilized a deep-learning system to identify CBD in coffee cherries at three growth stages, classifying photographs of infected and uninfected cherries with 93% accuracy using a random forest method; however, if a dataset is too small and noisy, an algorithm may not learn the data patterns needed to generate accurate predictions. To overcome this challenge, early detection of Colletotrichum kahawae disease in coffee cherries requires automated processing, prompt recognition, and accurate classification. The proposed methodology assembles CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm identifies which features were essential to the model's predictions; these included the cherry's colour, the presence of spots or other damage, and the size of the lesions. Visualizing these relationships matters for classification, since berry colour correlates with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed: the dataset is partitioned into ten subsets, and the model is trained and evaluated across them. In comparison with other contemporary methodologies, the proposed model achieved an accuracy of 98.56%.
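The 10-fold cross-validation scheme described above partitions the sample indices into ten folds, each serving once as the held-out test set. A minimal index-splitting sketch (the model training itself is omitted):

```python
def k_fold_indices(n, k=10):
    """Partition n sample indices into k near-equal contiguous folds;
    returns (train_indices, test_indices) pairs, one per fold."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, test))
        start += size
    return folds
```

Averaging the per-fold accuracies gives the cross-validated estimate; shuffling indices before splitting (not shown) is usual when the data are ordered by class.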
Funding: Supported by the research grant (SEED-CCIS-2024-166), Prince Sultan University, Saudi Arabia.
Abstract: Breast cancer is one of the major health issues, with high mortality rates and a substantial impact on patients and healthcare systems worldwide. Various Computer-Aided Diagnosis (CAD) tools based on breast thermograms have been developed for early detection of this disease; however, accurately segmenting the Region of Interest (ROI) from thermograms remains challenging. This paper presents an approach that leverages image-acquisition protocol parameters to identify the lateral breast region and estimate its bottom boundary using a second-degree polynomial. The proposed method demonstrated high efficacy, achieving an impressive Jaccard coefficient of 86% and a Dice index of 92% when evaluated against manually created ground truths. Textural features were extracted from each view's ROI, and significant features were selected via mutual information for training Multi-Layer Perceptron (MLP) and K-Nearest Neighbors (KNN) classifiers. Our findings revealed that the MLP classifier outperformed the KNN, achieving an accuracy of 86%, a specificity of 100%, and an Area Under the Curve (AUC) of 0.85. The consistency of the method across both sides of the breast suggests its viability as an auto-segmentation tool, and the classification results suggest that lateral views of breast thermograms harbor valuable features that can significantly aid in the early detection of breast cancer.
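The Jaccard coefficient and Dice index used to validate the segmentation are both overlap ratios between the predicted mask and the ground truth. A minimal sketch over flat binary masks:

```python
def jaccard_dice(mask_a, mask_b):
    """Jaccard coefficient and Dice index between two binary masks
    (flat 0/1 sequences of equal length)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    size_sum = sum(mask_a) + sum(mask_b)
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / size_sum if size_sum else 1.0
    return jaccard, dice
```

The two metrics are related by Dice = 2J / (1 + J), which is why the reported 86% Jaccard and 92% Dice are mutually consistent.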
Funding: National Natural Science Foundation of China (Grant Nos. 62171130, 62172197, 61972093); the Natural Science Foundation of Fujian Province (Grant Nos. 2020J01573, 2022J01131257, 2022J01607); the Fujian University Industry-University Research Joint Innovation Project (No. 2022H6006); in part by the Fund of Cloud Computing and Big Data for Smart Agriculture (Grant No. 117-612014063); the National Natural Science Foundation of China (Grant No. 62301160); and the Natural Science Foundation of Fujian Province (Grant No. 2022J01607).
Abstract: Many deep learning-based registration methods rely on a single-stream encoder-decoder network to compute deformation fields between 3D volumes; however, these methods often lack constraint information and overlook semantic consistency, limiting their performance. To address these issues, we present a novel approach for medical image registration called Dual-VoxelMorph, featuring a dual-channel cross-constraint network. This network utilizes both intensity and segmentation images, which share identical semantic information and feature representations. Two encoder-decoder structures calculate deformation fields for the intensity and segmentation images, as generated by the dual-channel cross-constraint network. This design facilitates bidirectional communication between grayscale and segmentation information, enabling the model to better learn the corresponding grayscale and segmentation details of the same anatomical structures. To ensure semantic and directional consistency, we introduce constraints and apply the cosine similarity function to enhance semantic consistency. Evaluation on four public datasets demonstrates superior performance compared with the baseline method, achieving Dice scores of 79.9%, 64.5%, 69.9%, and 63.5% on OASIS-1, OASIS-3, LPBA40, and ADNI, respectively.
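The cosine-similarity consistency term mentioned above compares the direction of feature representations from the two branches; a value of 1.0 means perfectly aligned features. A minimal sketch over flattened feature vectors (how the loss is weighted in training is the paper's design, not shown here):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two flattened feature vectors; 1.0 for
    identical directions, 0.0 for orthogonal ones."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

A consistency loss would typically minimize 1 minus this value between the intensity-branch and segmentation-branch features of the same anatomy.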
Funding: The Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University (IMSIU) (Grant Number IMSIU-RP23030).
Funding: Supported by the Scientific and Technological Project of Henan Province, No. 212102210140.
Abstract: BACKGROUND: Liver cancer is one of the deadliest malignant tumors worldwide. Immunotherapy has provided hope to patients with advanced liver cancer, but only a small fraction of patients benefit from this treatment due to individual differences. Identifying immune-related gene signatures in liver cancer patients not only aids physicians in cancer diagnosis but also offers personalized treatment strategies, thereby improving patient survival rates. Although several methods have been developed to predict prognosis and immunotherapeutic efficacy in patients with liver cancer, the impact of cell-cell interactions in the tumor microenvironment has not been adequately considered. AIM: To identify immune-related gene signatures for predicting liver cancer prognosis and immunotherapy efficacy. METHODS: Cell grouping and cell-cell communication analysis were performed on single-cell RNA-sequencing data to identify cell groups highly active in immune-related pathways. Highly active immune cells were identified by intersecting the highly active cell groups with B cells and T cells. The genes significantly differentially expressed between highly active immune cells and other cells were then selected as features, and a least absolute shrinkage and selection operator (LASSO) regression model was constructed to screen for diagnosis-related features. Fourteen genes that were selected more than 5 times in 10 LASSO regression experiments were included in a multivariable Cox regression model. Finally, 3 genes (stathmin 1, cofilin 1, and C-C chemokine ligand 5) significantly associated with survival were identified and used to construct an immune-related gene signature. RESULTS: The immune-related gene signature composed of stathmin 1, cofilin 1, and C-C chemokine ligand 5 was identified through cell-cell communication analysis. Its effectiveness was validated by predicted immunotherapy response, tumor mutation burden analysis, immune cell infiltration analysis, survival analysis, and expression analysis. CONCLUSION: The findings suggest that the identified gene signature may contribute to a deeper understanding of the activity patterns of immune cells in the liver tumor microenvironment, providing insights for personalized treatment strategies.
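The stability-selection rule described above, keeping only genes chosen more than 5 times across 10 LASSO runs, amounts to a frequency count. A minimal sketch, using toy run data (the gene lists below are hypothetical and do not reproduce the paper's selections):

```python
from collections import Counter

def stable_features(lasso_runs, min_count=6):
    """Keep features selected in at least `min_count` runs
    (here, more than 5 of 10 LASSO regression experiments)."""
    counts = Counter(g for run in lasso_runs for g in set(run))
    return sorted(g for g, c in counts.items() if c >= min_count)

# Toy data: each inner list is the gene set one LASSO run selected.
runs = [["STMN1", "CFL1", "CCL5"]] * 7 + [["STMN1", "GENE_X"]] * 3
kept = stable_features(runs)   # GENE_X appears only 3 times and is dropped
```

The surviving genes would then feed the multivariable Cox regression step.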
Funding: The support of Prince Sultan University for the Article Processing Charges (APC) of this publication.
Abstract: Author Profiling (AP) is a subfield of digital forensics that focuses on detecting an author's personal information, such as age, gender, occupation, and education, from various linguistic features, e.g., stylistic, semantic, and syntactic. AP is important in fields including forensics, security, medicine, and marketing. Previous studies have addressed many languages, e.g., English, Arabic, and French; however, research on Roman Urdu remains limited. Hence, this study focuses on detecting an author's age and gender from Roman Urdu text messages, using the FIRE'18 MAPonSMS dataset. We propose an ensemble model based on AdaBoostM1 and Random Forest (ABMRF) for AP using multiple linguistic features: stylistic, character-based, word-based, and sentence-based. The proposed model is contrasted with several well-known models from the literature, including J48 Decision Tree (J48), Naïve Bayes (NB), K-Nearest Neighbor (KNN), Composite Hypercube on Random Projection (CHIRP), NB-Updatable, RF, and AdaBoostM1. The overall outcome shows the better performance of the proposed ABMRF, with an accuracy of 54.2857% for age prediction and 71.1429% for gender prediction on stylistic features. On word-based features, the age and gender prediction accuracies were 50.5714% and 60%, respectively. In contrast, KNN and CHIRP showed the weakest performance across all linguistic features for both age and gender prediction.
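One common way to combine two base classifiers, such as the boosting and random-forest models paired above, is soft voting over their predicted class probabilities. The sketch below is a generic stand-in with hypothetical inputs; the paper's actual ABMRF combination scheme and trained models are not reproduced here.

```python
def soft_vote(prob_a, prob_b, classes):
    """Average the class-probability vectors of two base models
    (e.g., a boosted ensemble and a random forest) and return the
    class with the highest averaged probability."""
    avg = [(pa + pb) / 2 for pa, pb in zip(prob_a, prob_b)]
    return classes[max(range(len(avg)), key=avg.__getitem__)]

# Model A is confident in "male", model B mildly prefers "female";
# averaging resolves the disagreement.
label = soft_vote([0.7, 0.3], [0.4, 0.6], ["male", "female"])
```

Weighted averaging (trusting the stronger base model more) is a natural refinement of this scheme.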