In the very beginning, the Computer Laboratory of the University of Cambridge was founded to provide a computing service for different disciplines across the university. As computer science developed into a discipline in its own right, boundaries necessarily arose between it and other disciplines, in a way that is now often detrimental to progress. It is therefore necessary to reinvigorate the relationship between computer science and other academic disciplines and to celebrate exploration and creativity in research. To do this, the structures of the academic department have to act as supporting scaffolding rather than as barriers. Some examples are given of the efforts being made at the University of Cambridge to approach this problem.
At the panel session of the 3rd Global Forum on the Development of Computer Science, attendees had an opportunity to deliberate recent issues affecting computer science departments as a result of the recent growth in the field. Six heads of university computer science departments participated in the discussions, including the moderator, Professor Andrew Yao. The first issue was how universities are managing the growing number of applicants in addition to swelling class sizes. Several approaches were suggested, including increasing faculty hiring, implementing scalable teaching tools, and working more closely with other departments through degree programs that integrate computer science with other fields. The second issue concerned the position and role of computer science within the broader sciences. Participants generally agreed that all fields are increasingly relying on computer science techniques, and that effectively disseminating these techniques to others is key to unlocking broader scientific progress.
Colletotrichum kahawae (Coffee Berry Disease, CBD) spreads through spores that can be carried by wind, rain, and insects, affecting coffee plantations and causing 80% yield losses and poor-quality coffee beans. The disease is hard to control because its spores are dispersed so readily. Colombian researchers used a deep learning system to identify CBD in coffee cherries at three growth stages and classified photographs of infected and uninfected cherries with 93% accuracy using a random forest method. If the dataset is too small and noisy, however, an algorithm may fail to learn the data patterns and generate accurate predictions. To overcome this challenge, early detection of Colletotrichum kahawae disease in coffee cherries requires automated processing, prompt recognition, and accurate classification. The proposed methodology selects CBD image datasets through four different stages for training and testing. XGBoost is used to train a model on datasets of coffee berries, with each image labeled as healthy or diseased. Once the model is trained, the SHAP algorithm is used to determine which features were essential to the model's predictions. These characteristics included the cherry's colour, whether it had spots or other damage, and how big the lesions were. Visualization is important for classification, making it possible to see how the colour of the berry correlates with the presence of disease. To evaluate the model's performance and mitigate overfitting, a 10-fold cross-validation approach is employed: the dataset is partitioned into ten subsets, and the model is repeatedly trained on nine subsets and evaluated on the remaining one. In comparison with other contemporary methodologies, the proposed model achieved an accuracy of 98.56%.
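As a rough illustration of the training, explanation, and validation steps described above (not the authors' exact pipeline — the feature matrix, labels, and hyperparameters here are synthetic placeholders), a minimal sketch in Python might look like this:

```python
import numpy as np
import xgboost as xgb
import shap
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix: one row per cherry image, columns such as
# mean colour channels, spot count, and lesion size (assumed features).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = diseased

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)

# 10-fold cross-validation: train on nine folds, evaluate on the held-out fold.
scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Fit on the full set and use SHAP to rank per-feature contributions.
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("Mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```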
The number of students demanding computer science (CS) education is rapidly rising, and while faculty sizes are also growing, the traditional pipeline consisting of a CS major, a CS master's, and then a move to industry or a Ph.D. program is simply not scalable. To address this problem, the Department of Computing at the University of Illinois has introduced a multidisciplinary approach to computing, a scalable and collaborative way to capitalize on the tremendous demand for computer science education. The key component of the approach is the blended major, also referred to as "CS+X", where CS denotes computer science and X denotes a non-computing field. These CS+X blended degrees enable win-win partnerships among multiple subject areas, distributing the educational responsibilities while growing the entire university. To meet the demand from non-CS majors, another pathway on offer is a graduate certificate program, in addition to the traditional minor program. To accommodate the large number of students, scalable teaching tools, such as automatic graders, have also been developed.
Aging is a natural process that leads to debility, disease, and dependency. Alzheimer's disease (AD) causes degeneration of the brain cells, leading to cognitive decline and memory loss, as well as dependence on others to fulfill basic daily needs. AD is the major cause of dementia. Computer-aided diagnosis (CADx) tools aid medical practitioners in accurately identifying diseases such as AD in patients. This study aimed to develop a CADx tool for the early detection of AD using the Intelligent Water Drop (IWD) algorithm and the Random Forest (RF) classifier. The IWD algorithm, an efficient feature selection method, was used to identify the most deterministic features of AD in the dataset. RF is an ensemble method that leverages multiple weak learners to classify a patient's disease as either demented (DN) or cognitively normal (CN). The proposed tool also classifies patients as mild cognitive impairment (MCI) or CN. The dataset on which the performance of the proposed CADx was evaluated was sourced from the Alzheimer's Disease Neuroimaging Initiative (ADNI). The RF ensemble method achieves 100% accuracy in distinguishing DN patients from CN patients. The classification accuracy for classifying patients as MCI or CN is 92%. This study emphasizes the significance of pre-processing prior to classification to improve the classification results of the proposed CADx tool.
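A condensed sketch of the classification stage follows. Note that the IWD feature-selection step is replaced here by a simple mutual-information filter as a stand-in, and the data are synthetic, so this only illustrates the overall flow, not the paper's CADx tool:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 30))                      # placeholder imaging/clinical features
y = (X[:, 2] - X[:, 7] + rng.normal(scale=0.8, size=400) > 0).astype(int)  # 1 = DN, 0 = CN

# Stand-in for IWD feature selection: keep the 10 most informative features.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("DN vs CN accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```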
In various fields, different networks are used, most of the time not of a single kind but rather a mix of at least two networks. These kinds of networks are called bridge networks, and they are used in PC interconnection networks, mobile networks, the Internet backbone, networks used in robotics, power-generation interconnection, bioinformatics, and chemical compound structures. Any number that can be computed entirely from a graph is called a graph invariant. Countless mathematical graph invariants have been described and used for correlation analysis during the last twenty years. Nevertheless, no trustworthy evaluation has been undertaken to decide how strongly these invariants are associated with a network graph or molecular graph. This paper discusses three distinct varieties of bridge networks with considerable predictive capacity in computer science, chemistry, physics, the drug industry, informatics, and mathematics, in the context of physical and chemical structures and networks, since contraharmonic-quadratic invariants (CQIs) have only recently been introduced and take different values for different varieties of bridge graphs or networks. The study settles the topology of three novel sorts of bridge graphs/networks with two kinds of CQIs and Quadratic-Contraharmonic Indices (QCIs). The deduced results can be used for modeling the above-mentioned networks.
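The CQI and QCI formulas are defined in the paper itself; purely for illustration, and assuming they are degree-based edge-sum indices built from the contraharmonic mean (a²+b²)/(a+b) and the quadratic mean √((a²+b²)/2) of endpoint degrees, a generic computation over an arbitrary graph could be sketched as follows (the exact index definitions should be taken from the paper):

```python
import networkx as nx

def contraharmonic(a, b):
    return (a * a + b * b) / (a + b)

def quadratic(a, b):
    return ((a * a + b * b) / 2) ** 0.5

def edge_sum_index(G, term):
    """Sum a degree-based quantity over all edges of G."""
    deg = dict(G.degree())
    return sum(term(deg[u], deg[v]) for u, v in G.edges())

# Example bridge-like graph: two cycles joined through a single bridging node
# (placeholder topology, not one of the paper's three bridge-network families).
G = nx.disjoint_union(nx.cycle_graph(6), nx.cycle_graph(6))
G.add_edges_from([(0, 12), (12, 6)])   # node 12 bridges the two cycles

cq = edge_sum_index(G, lambda a, b: contraharmonic(a, b) / quadratic(a, b))
qc = edge_sum_index(G, lambda a, b: quadratic(a, b) / contraharmonic(a, b))
print("assumed CQ-style index:", cq, " assumed QC-style index:", qc)
```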
In the current era of information technology, students need to learn modern programming languages efficiently. The art of teaching and learning programming requires many logical and conceptual skills, so it is a challenging task for instructors and learners to teach and learn these programming languages effectively and efficiently. Mind mapping is a useful visual tool for establishing ideas and connecting them to solve problems. This research proposes an effective way to teach programming languages through visual tools. This experimental study uses a mind mapping tool to teach two programming environments: text-based programming and blocks-based programming. We performed the experiments with one hundred and sixty undergraduate students from two public-sector universities in the Asia-Pacific region. Four different instructional approaches, namely block-based language (BBL), text-based language (TBL), mind mapping with text-based language (MMTBL), and mind mapping with block-based language (MMBBL), are used for this purpose. The results show that instructional approaches that use a mind mapping tool to help students solve the given tasks through critical thinking are more effective than the other instructional techniques.
Typically, a computer becomes infective as soon as it is infected. It is a reality that no antivirus program can identify and eliminate all kinds of viruses, suggesting that infections will persist on the Internet. To understand the dynamics of virus propagation in a better way, a computer virus spread model with fuzzy parameters is presented in this work. It is assumed that not all infected computers contribute equally to the virus transmission process and that each computer has a different degree of infectivity, which depends on the quantity of virus. Considering this, the parameters β and γ, being functions of the computer virus load, are treated as fuzzy numbers. Using fuzzy theory helps us understand the spread of computer viruses more realistically, as these parameters have fixed values in classical models. The essential features of the model, such as the reproduction number and equilibrium analysis, are discussed in the fuzzy sense. Moreover, with fuzziness, two numerical methods, the forward Euler technique and a nonstandard finite difference (NSFD) scheme, are developed and analyzed. As evidenced by the numerical simulations, the proposed NSFD method preserves the main features of the dynamic system and can be considered a reliable tool to predict such solutions.
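To make the contrast between the two schemes concrete, here is a minimal sketch for a basic SIR-type virus propagation model with crisp (non-fuzzy) β and γ; the fuzzy treatment and the paper's exact model are not reproduced, and the parameter values are illustrative only. The NSFD update uses a Mickens-style nonlocal discretization, which keeps the state variables non-negative even for large step sizes, whereas forward Euler can overshoot:

```python
import numpy as np

beta, gamma, mu = 0.8, 0.3, 0.1   # illustrative infection, cure, and replacement rates
h, steps = 0.5, 200               # step size deliberately large to stress both schemes

def euler(S0, I0):
    S, I = S0, I0
    for _ in range(steps):
        S, I = (S + h * (mu - beta * S * I - mu * S),
                I + h * (beta * S * I - (gamma + mu) * I))
    return S, I

def nsfd(S0, I0):
    # Nonstandard finite difference: new-state terms in the denominator
    # guarantee S, I >= 0 for any h > 0.
    S, I = S0, I0
    for _ in range(steps):
        S = (S + h * mu) / (1 + h * (beta * I + mu))
        I = (I + h * beta * S * I) / (1 + h * (gamma + mu))
    return S, I

print("Euler :", euler(0.9, 0.1))
print("NSFD  :", nsfd(0.9, 0.1))
```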
During the prodromal stage of Alzheimer's disease (AD), neurodegenerative changes can be identified by measuring volumetric loss in AD-prone brain regions on MRI. Cognitive assessments that are sensitive enough to measure the early brain-behavior manifestations of AD and that correlate with biomarkers of neurodegeneration are needed to identify and monitor individuals at risk for dementia. Weak sensitivity to early cognitive change has been a major limitation of traditional cognitive assessments. In this study, we focused on expanding our previous work by determining whether a digitized cognitive stress test, the Loewenstein-Acevedo Scales for Semantic Interference and Learning, Brief Computerized Version (LASSI-BC), could differentiate between Cognitively Unimpaired (CU) and amnestic Mild Cognitive Impairment (aMCI) groups. A second focus was to correlate LASSI-BC performance with volumetric reductions in AD-prone brain regions. Data were gathered from 111 older adults who were comprehensively evaluated and administered the LASSI-BC. Eighty-seven of these participants (51 CU; 36 aMCI) underwent MR imaging. The volumes of 12 AD-prone brain regions were related to the LASSI-BC and other memory tests, correcting for the False Discovery Rate (FDR). Results indicated that, even after adjusting for initial learning ability, the failure to recover from proactive semantic interference (frPSI) on the LASSI-BC differentiated between the CU and aMCI groups. An optimal combination of frPSI and initial learning strength on the LASSI-BC yielded an area under the ROC curve of 0.876 (76.1% sensitivity, 82.7% specificity). Further, frPSI on the LASSI-BC was associated with volumetric reductions in the hippocampus, amygdala, inferior temporal lobes, precuneus, and posterior cingulate.
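The discriminative value reported above comes from combining two LASSI-BC measures and summarizing the result as an ROC curve. A generic sketch of that kind of analysis on synthetic stand-in scores (not the study's data) could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)
y = np.r_[np.zeros(51), np.ones(36)]                     # 0 = CU, 1 = aMCI (group sizes from the study)
frpsi = rng.normal(loc=y * 1.2, scale=1.0)               # synthetic frPSI scores
initial_learning = rng.normal(loc=-y * 0.8, scale=1.0)   # synthetic initial learning strength
X = np.c_[frpsi, initial_learning]

# Combine the two measures and summarize discrimination with an ROC curve.
probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
fpr, tpr, thr = roc_curve(y, probs)
best = np.argmax(tpr - fpr)                              # Youden-style operating point
print("AUC: %.3f  sensitivity: %.3f  specificity: %.3f"
      % (roc_auc_score(y, probs), tpr[best], 1 - fpr[best]))
```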
The need for information systems in organizations and economic units is increasing, as a great deal of data arises from their many processes and must be addressed to provide information of interest to multiple users. New and distinctive management accounting systems can easily meet all the needs of institutions and individuals in financial business, accounting, and management, while taking into account the accuracy, speed, and confidentiality of the information for which the system is designed. The paper aims to describe a computerized system that can predict the budget for the new year based on past budgets by using time series analysis, which gives results with minimal error, and that can control the budget during the year through the ability to monitor expenditure, compare planned with actual figures, and calculate the deviation. The system also measures performance ratios and computes a number of indicators relating to budgets, such as the rate of capital intensity, the growth rate, and the profitability ratio, and gives a clear indication of whether these ratios are good or not. The system has a positive impact on information systems through its ability to accomplish complex calculations and process paperwork faster than before, and it offers high flexibility, since it can make any adjustments required to help the relevant parties control financial matters and take appropriate decisions thereon.
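As a toy illustration of the forecasting and deviation-monitoring ideas (the actual system, its time-series model, and its indicator formulas are not specified here, so a simple linear trend and hypothetical figures are used as stand-ins):

```python
import numpy as np

# Past annual budgets (hypothetical figures, in thousands).
years = np.arange(2016, 2024)
budgets = np.array([120, 128, 135, 150, 158, 171, 183, 196], dtype=float)

# Fit a linear trend and forecast the coming year's budget.
slope, intercept = np.polyfit(years, budgets, deg=1)
forecast_2024 = slope * 2024 + intercept
print("Forecast budget for 2024: %.1f" % forecast_2024)

# In-year control: compare planned with actual spending and report the deviation.
planned_q = np.array([50.0, 49.0, 48.0, 49.0])
actual_q = np.array([52.0, 47.0, 50.0, 51.0])
deviation = actual_q - planned_q
print("Quarterly deviation:", deviation,
      " total: %.1f (%.1f%% of plan)" % (deviation.sum(), 100 * deviation.sum() / planned_q.sum()))
```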
The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools. In this article, a hybrid approach, in terms of both datasets and methodology, is proposed for detecting COVID-19, pneumonia, and normal conditions in chest X-ray images (CXIs), utilizing a previously unexplored dataset obtained from a private hospital and coupled with Explainable Artificial Intelligence (XAI). Our study leverages minimal preprocessing together with pre-trained cutting-edge models such as InceptionV3, VGG16, and VGG19, which excel at feature extraction. The methodology is further enhanced by the t-SNE (t-Distributed Stochastic Neighbor Embedding) technique for visualizing the extracted image features and by Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve images before feature extraction. Additionally, an Attention Mechanism is utilized, which helps clarify how the model makes decisions and builds trust in artificial intelligence (AI) systems. To evaluate the effectiveness of the proposed approach, both benchmark datasets and a private dataset obtained with permission from Jinnah Postgraduate Medical Center (JPMC) in Karachi, Pakistan, are utilized. Across 12 experiments, VGG19 showcased remarkable performance in the hybrid dataset approach, achieving 100% accuracy in COVID-19 vs. pneumonia classification and 97% in distinguishing normal cases. Overall, across all classes, the approach achieved 98% accuracy, demonstrating its efficiency in detecting COVID-19 and differentiating it from other chest conditions (pneumonia and healthy) while also providing insights into the models' decision-making process.
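A compressed sketch of the preprocessing, feature-extraction, and visualization steps follows; the image batch, counts, and layer choices are placeholders, and the attention mechanism and classifier head are omitted:

```python
import cv2
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from sklearn.manifold import TSNE

def clahe_rgb(gray_uint8):
    """Apply CLAHE to a grayscale chest X-ray and replicate it to 3 channels."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray_uint8)
    return np.stack([eq] * 3, axis=-1)

# Placeholder batch of grayscale CXIs (real code would load the image files).
rng = np.random.default_rng(3)
images = rng.integers(0, 256, size=(32, 224, 224), dtype=np.uint8)
batch = preprocess_input(np.stack([clahe_rgb(im) for im in images]).astype("float32"))

# Pre-trained VGG19 as a frozen feature extractor (global-average-pooled features).
extractor = VGG19(weights="imagenet", include_top=False, pooling="avg",
                  input_shape=(224, 224, 3))
features = extractor.predict(batch, verbose=0)          # shape (32, 512)

# 2-D embedding of the extracted features for visual inspection.
embedding = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(features)
print(embedding.shape)
```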
Brain tumors pose a significant threat to human lives and have gained increasing attention as the tenth leading cause of global mortality. This study addresses the pressing issue of brain tumor classification using magnetic resonance imaging (MRI), focusing on distinguishing between Low-Grade Gliomas (LGG) and High-Grade Gliomas (HGG). LGGs are benign and typically manageable with surgical resection, while HGGs are malignant and more aggressive. The research introduces an innovative custom convolutional neural network (CNN) model, GliomaCNN, which stands out as a lightweight CNN model compared to its predecessors. The research utilized the BraTS 2020 dataset for its experiments. Integrated with a gradient-boosting algorithm, GliomaCNN has achieved an impressive accuracy of 99.1569%. The model's interpretability is ensured through SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM++), which provide insights into the critical decision-making regions for classification outcomes. Despite the challenge of identifying tumors in images without visible signs, the model demonstrates remarkable performance in this critical medical application, offering a promising tool for accurate brain tumor diagnosis and paving the way for enhanced early detection and treatment of brain tumors.
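For interpretability of CNN predictions, the paper relies on SHAP and Grad-CAM++. The sketch below shows plain Grad-CAM (a simpler relative of Grad-CAM++) on a small stand-in CNN, just to illustrate how class-discriminative heatmaps are obtained from the last convolutional layer; GliomaCNN itself and the BraTS images are not reproduced:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Tiny stand-in CNN (GliomaCNN's architecture is not reproduced here).
inputs = layers.Input(shape=(64, 64, 1))
x = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(32, 3, activation="relu", padding="same", name="last_conv")(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(2, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

def grad_cam(model, image, class_idx, conv_name="last_conv"):
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)                  # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))            # global-average-pool the gradients
    cam = tf.nn.relu(tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1))
    cam = cam / (tf.reduce_max(cam) + 1e-8)                 # normalize to [0, 1]
    return cam.numpy()[0]

heatmap = grad_cam(model, np.random.rand(64, 64, 1).astype("float32"), class_idx=1)
print(heatmap.shape)   # low-resolution map; upsample to overlay on the MRI slice
```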
The dominance of Android in the global mobile market and the open development characteristics of this platform have resulted in a significant increase in malware. These malicious applications have become a serious concern for the security of Android systems. To address this problem, researchers have proposed several machine-learning models to detect and classify Android malware based on analyzing features extracted from Android samples. However, most existing studies have focused on the classification task and overlooked the feature selection process, which is crucial for reducing training time and maintaining or improving the classification results. The current paper proposes a new Android malware detection and classification approach that identifies the most important features to improve classification performance and reduce training time. The proposed approach consists of two main steps. First, a feature selection method based on the Attention mechanism is used to select the most important features. Then, an optimized Light Gradient Boosting Machine (LightGBM) classifier is applied to classify the Android samples and identify the malware. The feature selection method proposed in this paper integrates an Attention layer into a multilayer perceptron neural network. The role of the Attention layer is to compute a weighted value for each feature based on its importance to the classification process. Experimental evaluation has shown that combining the Attention-based technique with an optimized classification algorithm for Android malware detection improves the accuracy from 98.64% to 98.71% while reducing the training time from 80 to 28 s.
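One plausible reading of this attention-based selector is sketched below on synthetic data; the Android feature set, network sizes, and LightGBM hyperparameters are placeholders, not the paper's:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import lightgbm as lgb

rng = np.random.default_rng(4)
n_features = 40
X = rng.normal(size=(2000, n_features)).astype("float32")
y = (X[:, 3] - X[:, 17] + 0.5 * X[:, 25] > 0).astype("int32")   # synthetic "malware" label

# MLP with an attention layer that produces one softmax weight per input feature.
inp = layers.Input(shape=(n_features,))
attn = layers.Dense(n_features, activation="softmax", name="attention")(inp)
weighted = layers.Multiply()([inp, attn])
hidden = layers.Dense(64, activation="relu")(weighted)
out = layers.Dense(1, activation="sigmoid")(hidden)
net = tf.keras.Model(inp, out)
net.compile(optimizer="adam", loss="binary_crossentropy")
net.fit(X, y, epochs=5, batch_size=64, verbose=0)

# Average attention weight per feature = learned importance; keep the top 15.
attn_model = tf.keras.Model(inp, attn)
importance = attn_model.predict(X, verbose=0).mean(axis=0)
top = np.argsort(importance)[-15:]

# Train the final LightGBM classifier on the selected features only.
clf = lgb.LGBMClassifier(n_estimators=200).fit(X[:, top], y)
print("training accuracy on selected features:", clf.score(X[:, top], y))
```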
This research proposes a highly effective soft computing paradigm for estimating the compressive strength (CS) of metakaolin-contained cemented materials. The proposed approach is a combination of an enhanced grey wolf optimizer (EGWO) and an extreme learning machine (ELM). EGWO is an augmented form of the classic grey wolf optimizer (GWO); compared to standard GWO, it has a better hunting mechanism and produces optimal performance. The EGWO was used to optimize the ELM structure, and a hybrid model, ELM-EGWO, was built. To train and validate the proposed ELM-EGWO model, a total of 361 experimental results featuring five influencing factors was collected. Based on sensitivity analysis, three distinct cases of influencing parameters were considered to investigate the effect of influencing factors on predictive precision. Experimental results show that the constructed ELM-EGWO achieved the highest precision in both the training (RMSE = 0.0959) and testing (RMSE = 0.0912) phases. The outcomes of the ELM-EGWO are significantly superior to those of deep neural networks (DNN), k-nearest neighbors (KNN), long short-term memory (LSTM), and other hybrid ELMs constructed with GWO, particle swarm optimization (PSO), Harris hawks optimization (HHO), the salp swarm algorithm (SSA), the marine predators algorithm (MPA), and the colony predation algorithm (CPA). The overall results demonstrate that the newly suggested ELM-EGWO has the potential to estimate the CS of metakaolin-contained cemented materials with a high degree of precision and robustness.
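The core learner here, the extreme learning machine, admits a very short implementation: hidden-layer weights are drawn at random and only the output weights are solved in closed form. A bare-bones regression sketch follows; the EGWO step, which would tune the hidden weights, and the metakaolin dataset are omitted, and the data below are synthetic:

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer ELM: random hidden weights, least-squares output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        # In ELM-EGWO, these random weights/biases would instead be optimized by EGWO.
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ y          # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Five influencing factors -> compressive strength (synthetic stand-in data).
rng = np.random.default_rng(5)
X = rng.uniform(size=(361, 5))
y = 20 + 30 * X[:, 0] - 10 * X[:, 1] ** 2 + rng.normal(scale=1.0, size=361)
model = ELMRegressor(n_hidden=40).fit(X[:280], y[:280])
rmse = np.sqrt(np.mean((model.predict(X[280:]) - y[280:]) ** 2))
print("test RMSE: %.3f" % rmse)
```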
Accurate detection and classification of artifacts within gastrointestinal (GI) tract frames remain a significant challenge in medical image processing. Medical science combined with artificial intelligence is advancing to automate the diagnosis and treatment of numerous diseases. Key to this is the development of robust algorithms for image classification and detection, which are crucial in designing sophisticated systems for diagnosis and treatment. This study makes a small contribution to endoscopic image classification. The proposed approach involves multiple operations, including extracting deep features from endoscopy images using pre-trained neural networks such as Darknet-53 and Xception. Additionally, feature optimization utilizes the binary dragonfly algorithm (BDA), followed by fusion of the obtained feature vectors. The fused feature set is input into the ensemble subspace k-nearest neighbors (ESKNN) classifier. The Kvasir-V2 benchmark dataset and the COMSATS University Islamabad (CUI) Wah private dataset, featuring three classes of endoscopic stomach images, were used. Performance assessments considered various feature selection techniques, including the genetic algorithm (GA), particle swarm optimization (PSO), the salp swarm algorithm (SSA), the sine cosine algorithm (SCA), and the grey wolf optimizer (GWO). The proposed model excels, achieving an overall classification accuracy of 98.25% on the Kvasir-V2 benchmark and 99.90% on the CUI Wah private dataset. This approach holds promise for developing an automated computer-aided system for classifying GI tract syndromes through endoscopy images.
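Feature fusion and the ensemble subspace KNN classifier can be approximated with standard tooling, as sketched below; the deep features are random stand-ins for Darknet-53/Xception outputs, the BDA feature-optimization step is omitted, and a random-subspace bagging of KNN is used as the ESKNN analogue:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 600
darknet_feats = rng.normal(size=(n, 256))     # stand-in for Darknet-53 features
xception_feats = rng.normal(size=(n, 256))    # stand-in for Xception features
y = rng.integers(0, 3, size=n)                # three endoscopic classes (placeholder labels)

# Serial fusion of the two feature vectors.
X = np.hstack([darknet_feats, xception_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Ensemble of KNN learners, each trained on a random subspace of the fused features.
esknn = BaggingClassifier(KNeighborsClassifier(n_neighbors=5),
                          n_estimators=30, max_features=0.5,
                          bootstrap=False, random_state=0)
esknn.fit(X_tr, y_tr)
print("hold-out accuracy:", esknn.score(X_te, y_te))
```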
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on optimizing task offloading when MEC resources are exhausted. In addition, existing solutions decide whether to accept an offloaded task request based only on the single decision result of the current time slot, with no support for multiple retries in subsequent time slots. As a result, TDs miss potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experience to increase the success rate of task offloading based on deep learning technology. Simulation results show that, compared to the benchmark method, TSODF reduces the TDs' total energy consumption and delay for task execution and improves the task offloading rate and system resource utilization.
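The prediction component can be sketched as a small binary sequence classifier; everything below (window length, feature layout, and the synthetic traces) is an assumption used only to show the shape of an LSTM-based acceptance predictor, not the TSODF implementation:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(7)
T, F = 8, 4          # assumed: 8 past time slots, 4 features per slot
# Features per slot might include offloaded load, queue length, released
# resources, and rejection count (placeholders).
X = rng.normal(size=(4000, T, F)).astype("float32")
# Synthetic target: request accepted in the next slot when recent load is low.
y = (X[:, -3:, 0].mean(axis=1) < 0).astype("float32")

model = tf.keras.Sequential([
    layers.Input(shape=(T, F)),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),   # probability the held request is accepted
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)

p_accept = model.predict(X[:1], verbose=0)[0, 0]
print("predicted acceptance probability for the next slot: %.2f" % p_accept)
```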
In traditional digital twin communication system testing, we can apply test cases as completely as possible in order to ensure the correctness of the system implementation, and even then, there is no guarantee that the digital twin communication system implementation is completely correct. Formal verification is currently recognized as a method to ensure the correctness of software systems for communication in digital twins, because it uses rigorous mathematical methods to verify the correctness of such systems and can effectively help system designers determine whether a system is designed and implemented correctly. In this paper, we use the interactive theorem proving tool Isabelle/HOL to construct a formal model of the X86 architecture and to model the related assembly instructions. The verification result shows that the system states obtained after executing the relevant assembly instructions are consistent with the expected states, indicating that the system meets the design expectations.
Bayesian networks are a powerful class of graphical decision models used to represent causal relationships among variables. However, the reliability and integrity of learned Bayesian network models are highly dependent on the quality of incoming data streams. One of the primary challenges with Bayesian networks is their vulnerability to adversarial data poisoning attacks, wherein malicious data is injected into the training dataset to negatively influence the Bayesian network models and impair their performance. In this research paper, we propose an efficient framework for detecting data poisoning attacks against Bayesian network structure learning algorithms. Our framework utilizes latent variables to quantify the amount of belief between every two nodes in each causal model over time. We use this methodology to tackle an important issue with data poisoning attacks in the context of Bayesian networks. With regard to four different forms of data poisoning attacks, we specifically aim to strengthen the security and dependability of Bayesian network structure learning techniques, such as the PC algorithm. In doing so, we explore the complexity of this area and offer workable methods for identifying and reducing these covert threats. Additionally, our research investigates one particular use case, the "Visit to Asia Network", exploring the practical consequences of using uncertainty as a way to spot cases of data poisoning. Our results demonstrate the promising efficacy of latent variables in detecting and mitigating the threat of data poisoning attacks. Additionally, our proposed latent-based framework proves to be sensitive in detecting malicious data poisoning attacks in the context of streaming data.
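As a much-simplified proxy for the idea of monitoring belief between node pairs over a data stream (not the paper's latent-variable framework, and not the PC algorithm itself), one can track a pairwise dependence score across incoming batches and flag abrupt shifts that may indicate poisoned data:

```python
import numpy as np

def pairwise_dependence(batch):
    """Absolute Pearson correlation for every variable pair in one batch."""
    return np.abs(np.corrcoef(batch, rowvar=False))

def detect_shifts(batches, threshold=0.35):
    """Flag variable pairs whose dependence jumps sharply between consecutive batches."""
    prev = pairwise_dependence(batches[0])
    alerts = []
    for t, batch in enumerate(batches[1:], start=1):
        cur = pairwise_dependence(batch)
        jump = np.abs(cur - prev)
        for i, j in zip(*np.where(np.triu(jump, k=1) > threshold)):
            alerts.append((t, int(i), int(j), float(jump[i, j])))
        prev = cur
    return alerts

rng = np.random.default_rng(8)
clean = [rng.normal(size=(300, 5)) for _ in range(3)]
poisoned = rng.normal(size=(300, 5))
poisoned[:, 1] = poisoned[:, 0] + rng.normal(scale=0.1, size=300)  # inject a spurious dependency
print("suspicious (batch, node_i, node_j, jump):", detect_shifts(clean + [poisoned]))
```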
The Internet of Health Things (IoHT) is a subset of Internet of Things (IoT) technology that includes interconnected medical devices and sensors used in medical and healthcare information systems. However, IoHT is susceptible to cybersecurity threats due to its reliance on low-power biomedical devices and its use of open wireless channels for communication. In this article, we address this shortcoming by proposing a new scheme called the certificateless anonymous authentication (CAA) scheme. The proposed scheme is based on hyperelliptic curve cryptography (HECC), an enhanced variant of elliptic curve cryptography (ECC) that employs a smaller key size of 80 bits compared to 160 bits. The proposed scheme is secure against various attacks in both formal and informal security analyses; the formal study makes use of the Real-or-Random (ROR) model. A thorough comparative study of the security and efficiency of the proposed scheme against the relevant existing schemes is conducted. The results demonstrate that the proposed scheme not only ensures high security for health-related data but also increases efficiency. The proposed scheme's computation cost is 2.88 ms and its communication cost is 1440 bits, which shows its better efficiency compared to its counterpart schemes.
The Internet of Things (IoT) provides better solutions in various fields, namely healthcare, smart transportation, the home, etc. Recognizing Denial of Service (DoS) outbreaks in IoT platforms is significant in certifying the accessibility and integrity of IoT systems. Deep learning (DL) models excel at detecting complex, non-linear relationships, allowing them to effectively discern slight deviations from normal IoT activities that may indicate a DoS outbreak. The uninterrupted observation and real-time detection capabilities of DL contribute to accurate and rapid detection, permitting proactive mitigation actions to be executed and hence securing the IoT network's safety and functionality. Accordingly, this study presents a pigeon-inspired optimization with DL-based attack detection and classification (PIODL-ADC) approach in an IoT environment. The PIODL-ADC approach implements a hyperparameter-tuned DL method for Distributed Denial-of-Service (DDoS) attack detection on an IoT platform. Initially, the PIODL-ADC model utilizes Z-score normalization to scale input data into a uniform format. To handle the dynamic and adaptive behaviors of IoT, the PIODL-ADC model employs the pigeon-inspired optimization (PIO) method for feature selection to detect the relevant features, considerably enhancing recognition accuracy. The Elman Recurrent Neural Network (ERNN) model is then utilized to recognize and classify DDoS attacks. Moreover, reptile search algorithm (RSA)-based hyperparameter tuning is employed to improve the precision and robustness of the ERNN method. A series of experimental validations was made to ensure the effectiveness of the PIODL-ADC method. The experimental outcomes exhibit that the PIODL-ADC method achieves greater performance than existing models, with a maximum accuracy of 99.81%.
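A skeletal version of the normalization and recurrent-classification stages is shown below on synthetic traffic windows; Keras's SimpleRNN layer is used as an Elman-style recurrent unit, and the PIO feature selection and RSA hyperparameter tuning are not reproduced:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

rng = np.random.default_rng(9)
T, F = 10, 6                                    # assumed window length and feature count
X = rng.normal(loc=2.0, scale=3.0, size=(3000, T, F)).astype("float32")
y = (X[:, :, 0].mean(axis=1) > 2.5).astype("float32")   # synthetic DDoS label

# Z-score normalization: scale every feature to zero mean and unit variance.
mean = X.mean(axis=(0, 1), keepdims=True)
std = X.std(axis=(0, 1), keepdims=True)
X = (X - mean) / (std + 1e-8)

# Elman-style recurrent classifier (SimpleRNN feeds its hidden state back each step).
model = tf.keras.Sequential([
    layers.Input(shape=(T, F)),
    layers.SimpleRNN(32, activation="tanh"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
print("accuracy on the full set:", model.evaluate(X, y, verbose=0)[1])
```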