In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information...In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming more and more mature, and as a result, its use across numerous industries is now standard. China is still in the early stages of developing its integration of emergency medical services with modern information technology;despite our progress, there are still numerous obstacles and constraints to overcome. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. Information may be communicated in a fast, multiple, and effective manner by utilizing modern information technology. This study aims to examine the current state of this field’s development, current issues, and the field’s future course of development.展开更多
Image steganography is a technique of concealing confidential information within an image without dramatically changing its outside look.Whereas vehicular ad hoc networks(VANETs),which enable vehicles to communicate w...Image steganography is a technique of concealing confidential information within an image without dramatically changing its outside look.Whereas vehicular ad hoc networks(VANETs),which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow provide a range of value-added services,as they are an essential component of modern smart transportation systems.VANETs steganography has been suggested by many authors for secure,reliable message transfer between terminal/hope to terminal/hope and also to secure it from attack for privacy protection.This paper aims to determine whether using steganography is possible to improve data security and secrecy in VANET applications and to analyze effective steganography techniques for incorporating data into images while minimizing visual quality loss.According to simulations in literature and real-world studies,Image steganography proved to be an effectivemethod for secure communication on VANETs,even in difficult network conditions.In this research,we also explore a variety of steganography approaches for vehicular ad-hoc network transportation systems like vector embedding,statistics,spatial domain(SD),transform domain(TD),distortion,masking,and filtering.This study possibly shall help researchers to improve vehicle networks’ability to communicate securely and lay the door for innovative steganography methods.展开更多
The Kingdom of Saudi Arabia(KSA)has achieved significant milestones in cybersecurity.KSA has maintained solid regulatorymechanisms to prevent,trace,and punish offenders to protect the interests of both individual user...The Kingdom of Saudi Arabia(KSA)has achieved significant milestones in cybersecurity.KSA has maintained solid regulatorymechanisms to prevent,trace,and punish offenders to protect the interests of both individual users and organizations from the online threats of data poaching and pilferage.The widespread usage of Information Technology(IT)and IT Enable Services(ITES)reinforces securitymeasures.The constantly evolving cyber threats are a topic that is generating a lot of discussion.In this league,the present article enlists a broad perspective on how cybercrime is developing in KSA at present and also takes a look at some of the most significant attacks that have taken place in the region.The existing legislative framework and measures in the KSA are geared toward deterring criminal activity online.Different competency models have been devised to address the necessary cybercrime competencies in this context.The research specialists in this domain can benefit more by developing a master competency level for achieving optimum security.To address this research query,the present assessment uses the Fuzzy Decision-Making Trial and Evaluation Laboratory(Fuzzy-DMTAEL),Fuzzy Analytic Hierarchy Process(F.AHP),and Fuzzy TOPSIS methodology to achieve segment-wise competency development in cyber security policy.The similarities and differences between the three methods are also discussed.This cybersecurity analysis determined that the National Cyber Security Centre got the highest priority.The study concludes by perusing the challenges that still need to be examined and resolved in effectuating more credible and efficacious online security mechanisms to offer amoreempowered ITES-driven economy for SaudiArabia.Moreover,cybersecurity specialists and policymakers need to collate their efforts to protect the country’s digital assets in the era of overt and covert cyber warfare.展开更多
The detection of software vulnerabilities written in C and C++languages takes a lot of attention and interest today.This paper proposes a new framework called DrCSE to improve software vulnerability detection.It uses ...The detection of software vulnerabilities written in C and C++languages takes a lot of attention and interest today.This paper proposes a new framework called DrCSE to improve software vulnerability detection.It uses an intelligent computation technique based on the combination of two methods:Rebalancing data and representation learning to analyze and evaluate the code property graph(CPG)of the source code for detecting abnormal behavior of software vulnerabilities.To do that,DrCSE performs a combination of 3 main processing techniques:(i)building the source code feature profiles,(ii)rebalancing data,and(iii)contrastive learning.In which,the method(i)extracts the source code’s features based on the vertices and edges of the CPG.The method of rebalancing data has the function of supporting the training process by balancing the experimental dataset.Finally,contrastive learning techniques learn the important features of the source code by finding and pulling similar ones together while pushing the outliers away.The experiment part of this paper demonstrates the superiority of the DrCSE Framework for detecting source code security vulnerabilities using the Verum dataset.As a result,the method proposed in the article has brought a pretty good performance in all metrics,especially the Precision and Recall scores of 39.35%and 69.07%,respectively,proving the efficiency of the DrCSE Framework.It performs better than other approaches,with a 5%boost in Precision and a 5%boost in Recall.Overall,this is considered the best research result for the software vulnerability detection problem using the Verum dataset according to our survey to date.展开更多
Internet Exchange Point(IXP)is a system that increases network bandwidth performance.Internet exchange points facilitate interconnection among network providers,including Internet Service Providers(ISPs)andContent Del...Internet Exchange Point(IXP)is a system that increases network bandwidth performance.Internet exchange points facilitate interconnection among network providers,including Internet Service Providers(ISPs)andContent Delivery Providers(CDNs).To improve service management,Internet exchange point providers have adopted the Software Defined Network(SDN)paradigm.This implementation is known as a Software-Defined Exchange Point(SDX).It improves network providers’operations and management.However,performance issues still exist,particularly with multi-hop topologies.These issues include switch memory costs,packet processing latency,and link failure recovery delays.The paper proposes Enhanced Link Failure Rerouting(ELFR),an improved mechanism for rerouting link failures in software-defined exchange point networks.The proposed mechanism aims to minimize packet processing time for fast link failure recovery and enhance path calculation efficiency while reducing switch storage overhead by exploiting the Programming Protocol-independent Packet Processors(P4)features.The paper presents the proposed mechanisms’efficiency by utilizing advanced algorithms and demonstrating improved performance in packet processing speed,path calculation effectiveness,and switch storage management compared to current mechanisms.The proposed mechanism shows significant improvements,leading to a 37.5%decrease in Recovery Time(RT)and a 33.33%decrease in both Calculation Time(CT)and Computational Overhead(CO)when compared to current mechanisms.The study highlights the effectiveness and resource efficiency of the proposed mechanism in effectively resolving crucial issues inmulti-hop software-defined exchange point networks.展开更多
Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item.From ancient times to the present,the security of secret or vital information has always been a s...Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item.From ancient times to the present,the security of secret or vital information has always been a significant problem.The development of secure communication methods that keep recipient-only data transmissions secret has always been an area of interest.Therefore,several approaches,including steganography,have been developed by researchers over time to enable safe data transit.In this review,we have discussed image steganography based on Discrete Cosine Transform(DCT)algorithm,etc.We have also discussed image steganography based on multiple hashing algorithms like the Rivest–Shamir–Adleman(RSA)method,the Blowfish technique,and the hash-least significant bit(LSB)approach.In this review,a novel method of hiding information in images has been developed with minimal variance in image bits,making our method secure and effective.A cryptography mechanism was also used in this strategy.Before encoding the data and embedding it into a carry image,this review verifies that it has been encrypted.Usually,embedded text in photos conveys crucial signals about the content.This review employs hash table encryption on the message before hiding it within the picture to provide a more secure method of data transport.If the message is ever intercepted by a third party,there are several ways to stop this operation.A second level of security process implementation involves encrypting and decrypting steganography images using different hashing algorithms.展开更多
BACKGROUND Radiomics is a promising tool that may increase the value of magnetic resonance imaging(MRI)for different tasks related to the management of patients with hepatocellular carcinoma(HCC).However,its implement...BACKGROUND Radiomics is a promising tool that may increase the value of magnetic resonance imaging(MRI)for different tasks related to the management of patients with hepatocellular carcinoma(HCC).However,its implementation in clinical practice is still far,with many issues related to the methodological quality of radiomic studies.AIM To systematically review the current status of MRI radiomic studies concerning HCC using the Radiomics Quality Score(RQS).METHODS A systematic literature search of PubMed,Google Scholar,and Web of Science databases was performed to identify original articles focusing on the use of MRI radiomics for HCC management published between 2017 and 2023.The methodological quality of radiomic studies was assessed using the RQS tool.Spearman’s correlation(ρ)analysis was performed to explore if RQS was correlated with journal metrics and characteristics of the studies.The level of statistical significance was set at P<0.05.RESULTS One hundred and twenty-seven articles were included,of which 43 focused on HCC prognosis,39 on prediction of pathological findings,16 on prediction of the expression of molecular markers outcomes,18 had a diagnostic purpose,and 11 had multiple purposes.The mean RQS was 8±6.22,and the corresponding percentage was 24.15%±15.25%(ranging from 0.0% to 58.33%).RQS was positively correlated with journal impact factor(IF;ρ=0.36,P=2.98×10^(-5)),5-years IF(ρ=0.33,P=1.56×10^(-4)),number of patients included in the study(ρ=0.51,P<9.37×10^(-10))and number of radiomics features extracted in the study(ρ=0.59,P<4.59×10^(-13)),and time of publication(ρ=-0.23,P<0.0072).CONCLUSION Although MRI radiomics in HCC represents a promising tool to develop adequate personalized treatment as a noninvasive approach in HCC patients,our study revealed that studies in this field still lack the quality required to allow its introduction into clinical practice.展开更多
The act of transmitting photos via the Internet has become a routine and significant activity.Enhancing the security measures to safeguard these images from counterfeiting and modifications is a critical domain that c...The act of transmitting photos via the Internet has become a routine and significant activity.Enhancing the security measures to safeguard these images from counterfeiting and modifications is a critical domain that can still be further enhanced.This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images.The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression.This paper introduces a content-based image authentication mechanism that is suitable for usage across an untrusted network and resistant to data loss during transmission.By employing scale attributes and a key-dependent parametric Long Short-Term Memory(LSTM),it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions.Furthermore,the successful implementation of transmitting biometric data in a compressed format over a wireless network has been accomplished.For applications involving the transmission and sharing of images across a network.The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer.An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing of responsibilities.This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss.This approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration.The proposed system attained a Gaussian noise value of 98%and a rotation accuracy surpassing 99%.展开更多
Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Ne...Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Networks(MANETs)based real-time prediction paradigm for urban traffic challenges.MANETs are wireless networks that are based on mobile devices and may self-organize.The distributed nature of MANETs and the power of AI approaches are leveraged in this framework to provide reliable and timely traffic congestion forecasts.This study suggests a unique Chaotic Spatial Fuzzy Polynomial Neural Network(CSFPNN)technique to assess real-time data acquired from various sources within theMANETs.The framework uses the proposed approach to learn from the data and create predictionmodels to detect possible traffic problems and their severity in real time.Real-time traffic prediction allows for proactive actions like resource allocation,dynamic route advice,and traffic signal optimization to reduce congestion.The framework supports effective decision-making,decreases travel time,lowers fuel use,and enhances overall urban mobility by giving timely information to pedestrians,drivers,and urban planners.Extensive simulations and real-world datasets are used to test the proposed framework’s prediction accuracy,responsiveness,and scalability.Experimental results show that the suggested framework successfully anticipates urban traffic issues in real-time,enables proactive traffic management,and aids in creating smarter,more sustainable cities.展开更多
This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost,a machine learning algorithm renowned for its efficiency and performance.The framework proposed...This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost,a machine learning algorithm renowned for its efficiency and performance.The framework proposed herein utilizes the fusion of diversified feature formats,specifically,metadata,textual,and pattern features.The goal is to enhance the system’s ability to discern and generalize transformation rules fromsource to destination formats in varied contexts.Firstly,the article delves into the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model.Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination(RFE)with linear regression,aiming to retain the most contributive features and eliminate redundant or less significant ones.The core of the research revolves around the deployment of the XGBoostmodel for training,using the prepared and optimized feature sets.The article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure.Finally,the process of rule discovery(prediction phase)by the trained XGBoost model is explained,underscoring its role in real-time,automated data transformations.By employingmachine learning and particularly,the XGBoost model in the context of Business Rule Engine(BRE)data transformation,the article underscores a paradigm shift towardsmore scalable,efficient,and less human-dependent data transformation systems.This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.展开更多
Explainable Artificial Intelligence(XAI)has an advanced feature to enhance the decision-making feature and improve the rule-based technique by using more advanced Machine Learning(ML)and Deep Learning(DL)based algorit...Explainable Artificial Intelligence(XAI)has an advanced feature to enhance the decision-making feature and improve the rule-based technique by using more advanced Machine Learning(ML)and Deep Learning(DL)based algorithms.In this paper,we chose e-healthcare systems for efficient decision-making and data classification,especially in data security,data handling,diagnostics,laboratories,and decision-making.Federated Machine Learning(FML)is a new and advanced technology that helps to maintain privacy for Personal Health Records(PHR)and handle a large amount of medical data effectively.In this context,XAI,along with FML,increases efficiency and improves the security of e-healthcare systems.The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning(FL)platform.The experimental evaluation demonstrates the accuracy rate by taking epochs size 5,batch size 16,and the number of clients 5,which shows a higher accuracy rate(19,104).We conclude the paper by discussing the existing gaps and future work in an e-healthcare system.展开更多
In today’s rapidly evolving landscape of communication technologies,ensuring the secure delivery of sensitive data has become an essential priority.To overcome these difficulties,different steganography and data encr...In today’s rapidly evolving landscape of communication technologies,ensuring the secure delivery of sensitive data has become an essential priority.To overcome these difficulties,different steganography and data encryption methods have been proposed by researchers to secure communications.Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility using LSB substitution.In this work,we have an approach that utilizes a combinationofMost SignificantBit(MSB)matching andLeast Significant Bit(LSB)substitution.The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching,enabling the storage of 6 bits in one pixel by modifying a maximum of three bits.The proposed technique is evaluated using embedding capacity and Peak Signal-to-Noise Ratio(PSNR)score,we compared our work with the Zakariya scheme the results showed a significant increase in data concealment capacity.The achieved results of ourwork showthat our algorithmdemonstrates an improvement in hiding capacity from11%to 22%for different data samples while maintaining a minimumPeak Signal-to-Noise Ratio(PSNR)of 37 dB.These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.展开更多
Accurate software cost estimation in Global Software Development(GSD)remains challenging due to reliance on historical data and expert judgments.Traditional models,such as the Constructive Cost Model(COCOMO II),rely h...Accurate software cost estimation in Global Software Development(GSD)remains challenging due to reliance on historical data and expert judgments.Traditional models,such as the Constructive Cost Model(COCOMO II),rely heavily on historical and accurate data.In addition,expert judgment is required to set many input parameters,which can introduce subjectivity and variability in the estimation process.Consequently,there is a need to improve the current GSD models to mitigate reliance on historical data,subjectivity in expert judgment,inadequate consideration of GSD-based cost drivers and limited integration of modern technologies with cost overruns.This study introduces a novel hybrid model that synergizes the COCOMO II with Artificial Neural Networks(ANN)to address these challenges.The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts.This article compares the effectiveness of the proposedmodelwith state-of-the-artmachine learning-basedmodels for software cost estimation.Evaluating the NASA 93 dataset by adopting twenty-six GSD-based cost drivers reveals that our hybrid model achieves superior accuracy,outperforming existing state-of-the-artmodels.The findings indicate the potential of combining COCOMO II,ANN,and additional GSD-based cost drivers to transform cost estimation in GSD.展开更多
Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentime...Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.展开更多
Nowadays, devices are connected across all areas, from intelligent buildings and smart cities to Industry 4.0 andsmart healthcare. With the exponential growth of Internet of Things usage in our world, IoT security is ...Nowadays, devices are connected across all areas, from intelligent buildings and smart cities to Industry 4.0 andsmart healthcare. With the exponential growth of Internet of Things usage in our world, IoT security is still thebiggest challenge for its deployment. The main goal of IoT security is to ensure the accessibility of services providedby an IoT environment, protect privacy, and confidentiality, and guarantee the safety of IoT users, infrastructures,data, and devices. Authentication, as the first line of defense against security threats, becomes the priority ofeveryone. It can either grant or deny users access to resources according to their legitimacy. As a result, studyingand researching authentication issues within IoT is extremely important. As a result, studying and researchingauthentication issues within IoT is extremely important. This article presents a comparative study of recent researchin IoT security;it provides an analysis of recent authentication protocols from2019 to 2023 that cover several areaswithin IoT (such as smart cities, healthcare, and industry). This survey sought to provide an IoT security researchsummary, the biggest susceptibilities, and attacks, the appropriate technologies, and the most used simulators. Itillustrates that the resistance of protocols against attacks, and their computational and communication cost arelinked directly to the cryptography technique used to build it. Furthermore, it discusses the gaps in recent schemesand provides some future research directions.展开更多
A new era of data access and management has begun with the use of cloud computing in the healthcare industry.Despite the efficiency and scalability that the cloud provides, the security of private patient data is stil...A new era of data access and management has begun with the use of cloud computing in the healthcare industry.Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a majorconcern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentialityand integrity of healthcare data in the cloud. The computational overhead of encryption technologies could leadto delays in data access and processing rates. To address these challenges, we introduced the Enhanced ParallelMulti-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the securestorage of critical patient records in the cloud. The data was gathered from two categories Authorization forHospital Admission (AIH) and Authorization for High Complexity Operations.We use Z-score normalization forpreprocessing. The primary goal of implementing encryption techniques is to secure and store massive amountsof data on the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become morewidely available if security issues can be successfully fixed. As a result of our analysis using specific parametersincluding Execution time (42%), Encryption time (45%), Decryption time (40%), Security level (97%), and Energyconsumption (53%), the system demonstrated favorable performance when compared to the traditional method.This suggests that by addressing these security concerns, there is the potential for broader accessibility to cloudstorage solutions for safeguarding healthcare data.展开更多
This paper contributes a sophisticated statistical method for the assessment of performance in routing protocols salient Mobile Ad Hoc Network(MANET)routing protocols:Destination Sequenced Distance Vector(DSDV),Ad hoc...This paper contributes a sophisticated statistical method for the assessment of performance in routing protocols salient Mobile Ad Hoc Network(MANET)routing protocols:Destination Sequenced Distance Vector(DSDV),Ad hoc On-Demand Distance Vector(AODV),Dynamic Source Routing(DSR),and Zone Routing Protocol(ZRP).In this paper,the evaluation will be carried out using complete sets of statistical tests such as Kruskal-Wallis,Mann-Whitney,and Friedman.It articulates a systematic evaluation of how the performance of the previous protocols varies with the number of nodes and the mobility patterns.The study is premised upon the Quality of Service(QoS)metrics of throughput,packet delivery ratio,and end-to-end delay to gain an adequate understanding of the operational efficiency of each protocol under different network scenarios.The findings explained significant differences in the performance of different routing protocols;as a result,decisions for the selection and optimization of routing protocols can be taken effectively according to different network requirements.This paper is a step forward in the general understanding of the routing dynamics of MANETs and contributes significantly to the strategic deployment of robust and efficient network infrastructures.展开更多
Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems.Its attributes as a non-toxic,low-carbon,and economical substitute for conventio...Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems.Its attributes as a non-toxic,low-carbon,and economical substitute for conventional cement concrete,coupled with its elevated compressive strength and reduced shrinkage properties,position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure.In this context,this study sets out the task of using machine learning(ML)algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field.To achieve this goal,a new approach using convolutional neural networks(CNNs)has been adopted.This study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes,all containing Class F fly ash.The selection of optimal input parameters is guided by two distinct criteria.The first criterion leverages insights garnered from previous research on the influence of individual features on compressive strength.The second criterion scrutinizes the impact of these features within the model’s predictive framework.Key to enhancing the CNN model’s performance is the meticulous determination of the optimal hyperparameters.Through a systematic trial-and-error process,the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation—a technique vital to the model’s robustness.The model’s predictive prowess is rigorously assessed via a suite of performance metrics and comprehensive score analyses.Furthermore,the model’s adaptability is gauged by integrating a secondary dataset into its predictive framework,facilitating a comparative evaluation against conventional prediction methods.To unravel the intricacies of the CNN model’s learning trajectory,a loss plot is deployed to elucidate its learning rate.The study culminates in compelling findings that underscore the CNN model’s accurate prediction of geopolymer concrete compressive strength.To maximize the dataset’s potential,the application of bivariate plots unveils nuanced trends and interactions among variables,fortifying the consistency with earlier research.Evidenced by promising prediction accuracy,the study’s outcomes hold significant promise in guiding the development of innovative geopolymer concrete formulations,thereby reinforcing its role as an eco-conscious and robust construction material.The findings prove that the CNN model accurately estimated geopolymer concrete’s compressive strength.The results show that the prediction accuracy is promising and can be used for the development of new geopolymer concrete mixes.The outcomes not only underscore the significance of leveraging technology for sustainable construction practices but also pave the way for innovation and efficiency in the field of civil engineering.展开更多
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications,including predicting early psychological disorders and stress in individuals or the general public.A major ...Predicting depression intensity from microblogs and social media posts has numerous benefits and applications,including predicting early psychological disorders and stress in individuals or the general public.A major challenge in predicting depression using social media posts is that the existing studies do not focus on predicting the intensity of depression in social media texts but rather only perform the binary classification of depression and moreover noisy data makes it difficult to predict the true depression in the social media text.This study intends to begin by collecting relevant Tweets and generating a corpus of 210000 public tweets using Twitter public application programming interfaces(APIs).A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus.Furthermore,an algorithm is developed to annotate the data into three depression classes:‘Mild,’‘Moderate,’and‘Severe,’based on International Classification of Diseases-10(ICD-10)depression diagnostic criteria.Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus.Further FastText-based model is applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model,which significantly increases the depression classification performance to an 84%F1 score and 90%accuracy compared to baselines.Finally,a FastText-based weighted soft voting ensemble(WSVE)is proposed to boost the model’s performance by combining several other classifiers and assigning weights to individual models according to their individual performances.The proposed WSVE outperformed all baselines as well as FastText alone,with an F1 of 89%,5%higher than FastText alone,and an accuracy of 93%,3%higher than FastText alone.The proposed model better captures the contextual features of the relatively small sample class and aids in the detection of early depression intensity prediction from tweets with impactful performances.展开更多
The widespread adoption of QR codes has revolutionized various industries, streamlined transactions and improved inventory management. However, this increased reliance on QR code technology also exposes it to potentia...The widespread adoption of QR codes has revolutionized various industries, streamlined transactions and improved inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code Phishing, or “Quishing”, is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity and trust in QR codes. This paper examines the importance of enhancing the security of QR codes through the utilization of artificial intelligence (AI). The abstract investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening QR code technology’s resilience. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.展开更多
文摘In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming more and more mature, and as a result, its use across numerous industries is now standard. China is still in the early stages of developing its integration of emergency medical services with modern information technology;despite our progress, there are still numerous obstacles and constraints to overcome. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. Information may be communicated in a fast, multiple, and effective manner by utilizing modern information technology. This study aims to examine the current state of this field’s development, current issues, and the field’s future course of development.
基金Dr.Arshiya Sajid Ansari would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project No.R-2023-910.
文摘Image steganography is a technique of concealing confidential information within an image without dramatically changing its outside look.Whereas vehicular ad hoc networks(VANETs),which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow provide a range of value-added services,as they are an essential component of modern smart transportation systems.VANETs steganography has been suggested by many authors for secure,reliable message transfer between terminal/hope to terminal/hope and also to secure it from attack for privacy protection.This paper aims to determine whether using steganography is possible to improve data security and secrecy in VANET applications and to analyze effective steganography techniques for incorporating data into images while minimizing visual quality loss.According to simulations in literature and real-world studies,Image steganography proved to be an effectivemethod for secure communication on VANETs,even in difficult network conditions.In this research,we also explore a variety of steganography approaches for vehicular ad-hoc network transportation systems like vector embedding,statistics,spatial domain(SD),transform domain(TD),distortion,masking,and filtering.This study possibly shall help researchers to improve vehicle networks’ability to communicate securely and lay the door for innovative steganography methods.
文摘The Kingdom of Saudi Arabia(KSA)has achieved significant milestones in cybersecurity.KSA has maintained solid regulatorymechanisms to prevent,trace,and punish offenders to protect the interests of both individual users and organizations from the online threats of data poaching and pilferage.The widespread usage of Information Technology(IT)and IT Enable Services(ITES)reinforces securitymeasures.The constantly evolving cyber threats are a topic that is generating a lot of discussion.In this league,the present article enlists a broad perspective on how cybercrime is developing in KSA at present and also takes a look at some of the most significant attacks that have taken place in the region.The existing legislative framework and measures in the KSA are geared toward deterring criminal activity online.Different competency models have been devised to address the necessary cybercrime competencies in this context.The research specialists in this domain can benefit more by developing a master competency level for achieving optimum security.To address this research query,the present assessment uses the Fuzzy Decision-Making Trial and Evaluation Laboratory(Fuzzy-DMTAEL),Fuzzy Analytic Hierarchy Process(F.AHP),and Fuzzy TOPSIS methodology to achieve segment-wise competency development in cyber security policy.The similarities and differences between the three methods are also discussed.This cybersecurity analysis determined that the National Cyber Security Centre got the highest priority.The study concludes by perusing the challenges that still need to be examined and resolved in effectuating more credible and efficacious online security mechanisms to offer amoreempowered ITES-driven economy for SaudiArabia.Moreover,cybersecurity specialists and policymakers need to collate their efforts to protect the country’s digital assets in the era of overt and covert cyber warfare.
文摘The detection of software vulnerabilities written in C and C++languages takes a lot of attention and interest today.This paper proposes a new framework called DrCSE to improve software vulnerability detection.It uses an intelligent computation technique based on the combination of two methods:Rebalancing data and representation learning to analyze and evaluate the code property graph(CPG)of the source code for detecting abnormal behavior of software vulnerabilities.To do that,DrCSE performs a combination of 3 main processing techniques:(i)building the source code feature profiles,(ii)rebalancing data,and(iii)contrastive learning.In which,the method(i)extracts the source code’s features based on the vertices and edges of the CPG.The method of rebalancing data has the function of supporting the training process by balancing the experimental dataset.Finally,contrastive learning techniques learn the important features of the source code by finding and pulling similar ones together while pushing the outliers away.The experiment part of this paper demonstrates the superiority of the DrCSE Framework for detecting source code security vulnerabilities using the Verum dataset.As a result,the method proposed in the article has brought a pretty good performance in all metrics,especially the Precision and Recall scores of 39.35%and 69.07%,respectively,proving the efficiency of the DrCSE Framework.It performs better than other approaches,with a 5%boost in Precision and a 5%boost in Recall.Overall,this is considered the best research result for the software vulnerability detection problem using the Verum dataset according to our survey to date.
文摘Internet Exchange Point(IXP)is a system that increases network bandwidth performance.Internet exchange points facilitate interconnection among network providers,including Internet Service Providers(ISPs)andContent Delivery Providers(CDNs).To improve service management,Internet exchange point providers have adopted the Software Defined Network(SDN)paradigm.This implementation is known as a Software-Defined Exchange Point(SDX).It improves network providers’operations and management.However,performance issues still exist,particularly with multi-hop topologies.These issues include switch memory costs,packet processing latency,and link failure recovery delays.The paper proposes Enhanced Link Failure Rerouting(ELFR),an improved mechanism for rerouting link failures in software-defined exchange point networks.The proposed mechanism aims to minimize packet processing time for fast link failure recovery and enhance path calculation efficiency while reducing switch storage overhead by exploiting the Programming Protocol-independent Packet Processors(P4)features.The paper presents the proposed mechanisms’efficiency by utilizing advanced algorithms and demonstrating improved performance in packet processing speed,path calculation effectiveness,and switch storage management compared to current mechanisms.The proposed mechanism shows significant improvements,leading to a 37.5%decrease in Recovery Time(RT)and a 33.33%decrease in both Calculation Time(CT)and Computational Overhead(CO)when compared to current mechanisms.The study highlights the effectiveness and resource efficiency of the proposed mechanism in effectively resolving crucial issues inmulti-hop software-defined exchange point networks.
文摘Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item.From ancient times to the present,the security of secret or vital information has always been a significant problem.The development of secure communication methods that keep recipient-only data transmissions secret has always been an area of interest.Therefore,several approaches,including steganography,have been developed by researchers over time to enable safe data transit.In this review,we have discussed image steganography based on Discrete Cosine Transform(DCT)algorithm,etc.We have also discussed image steganography based on multiple hashing algorithms like the Rivest–Shamir–Adleman(RSA)method,the Blowfish technique,and the hash-least significant bit(LSB)approach.In this review,a novel method of hiding information in images has been developed with minimal variance in image bits,making our method secure and effective.A cryptography mechanism was also used in this strategy.Before encoding the data and embedding it into a carry image,this review verifies that it has been encrypted.Usually,embedded text in photos conveys crucial signals about the content.This review employs hash table encryption on the message before hiding it within the picture to provide a more secure method of data transport.If the message is ever intercepted by a third party,there are several ways to stop this operation.A second level of security process implementation involves encrypting and decrypting steganography images using different hashing algorithms.
基金Supported by the“Ricerca Corrente”Grant from Italian Ministry of Health,No.IRCCS SYNLAB SDN.
文摘BACKGROUND Radiomics is a promising tool that may increase the value of magnetic resonance imaging(MRI)for different tasks related to the management of patients with hepatocellular carcinoma(HCC).However,its implementation in clinical practice is still far,with many issues related to the methodological quality of radiomic studies.AIM To systematically review the current status of MRI radiomic studies concerning HCC using the Radiomics Quality Score(RQS).METHODS A systematic literature search of PubMed,Google Scholar,and Web of Science databases was performed to identify original articles focusing on the use of MRI radiomics for HCC management published between 2017 and 2023.The methodological quality of radiomic studies was assessed using the RQS tool.Spearman’s correlation(ρ)analysis was performed to explore if RQS was correlated with journal metrics and characteristics of the studies.The level of statistical significance was set at P<0.05.RESULTS One hundred and twenty-seven articles were included,of which 43 focused on HCC prognosis,39 on prediction of pathological findings,16 on prediction of the expression of molecular markers outcomes,18 had a diagnostic purpose,and 11 had multiple purposes.The mean RQS was 8±6.22,and the corresponding percentage was 24.15%±15.25%(ranging from 0.0% to 58.33%).RQS was positively correlated with journal impact factor(IF;ρ=0.36,P=2.98×10^(-5)),5-years IF(ρ=0.33,P=1.56×10^(-4)),number of patients included in the study(ρ=0.51,P<9.37×10^(-10))and number of radiomics features extracted in the study(ρ=0.59,P<4.59×10^(-13)),and time of publication(ρ=-0.23,P<0.0072).CONCLUSION Although MRI radiomics in HCC represents a promising tool to develop adequate personalized treatment as a noninvasive approach in HCC patients,our study revealed that studies in this field still lack the quality required to allow its introduction into clinical practice.
文摘The act of transmitting photos via the Internet has become a routine and significant activity.Enhancing the security measures to safeguard these images from counterfeiting and modifications is a critical domain that can still be further enhanced.This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images.The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression.This paper introduces a content-based image authentication mechanism that is suitable for usage across an untrusted network and resistant to data loss during transmission.By employing scale attributes and a key-dependent parametric Long Short-Term Memory(LSTM),it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions.Furthermore,the successful implementation of transmitting biometric data in a compressed format over a wireless network has been accomplished.For applications involving the transmission and sharing of images across a network.The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer.An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing of responsibilities.This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss.This approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration.The proposed system attained a Gaussian noise value of 98%and a rotation accuracy surpassing 99%.
基金the Deanship of Scientific Research at Majmaah University for supporting this work under Project No.R-2024-1008.
文摘Traffic in today’s cities is a serious problem that increases travel times,negatively affects the environment,and drains financial resources.This study presents an Artificial Intelligence(AI)augmentedMobile Ad Hoc Networks(MANETs)based real-time prediction paradigm for urban traffic challenges.MANETs are wireless networks that are based on mobile devices and may self-organize.The distributed nature of MANETs and the power of AI approaches are leveraged in this framework to provide reliable and timely traffic congestion forecasts.This study suggests a unique Chaotic Spatial Fuzzy Polynomial Neural Network(CSFPNN)technique to assess real-time data acquired from various sources within theMANETs.The framework uses the proposed approach to learn from the data and create predictionmodels to detect possible traffic problems and their severity in real time.Real-time traffic prediction allows for proactive actions like resource allocation,dynamic route advice,and traffic signal optimization to reduce congestion.The framework supports effective decision-making,decreases travel time,lowers fuel use,and enhances overall urban mobility by giving timely information to pedestrians,drivers,and urban planners.Extensive simulations and real-world datasets are used to test the proposed framework’s prediction accuracy,responsiveness,and scalability.Experimental results show that the suggested framework successfully anticipates urban traffic issues in real-time,enables proactive traffic management,and aids in creating smarter,more sustainable cities.
文摘This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost,a machine learning algorithm renowned for its efficiency and performance.The framework proposed herein utilizes the fusion of diversified feature formats,specifically,metadata,textual,and pattern features.The goal is to enhance the system’s ability to discern and generalize transformation rules fromsource to destination formats in varied contexts.Firstly,the article delves into the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model.Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination(RFE)with linear regression,aiming to retain the most contributive features and eliminate redundant or less significant ones.The core of the research revolves around the deployment of the XGBoostmodel for training,using the prepared and optimized feature sets.The article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure.Finally,the process of rule discovery(prediction phase)by the trained XGBoost model is explained,underscoring its role in real-time,automated data transformations.By employingmachine learning and particularly,the XGBoost model in the context of Business Rule Engine(BRE)data transformation,the article underscores a paradigm shift towardsmore scalable,efficient,and less human-dependent data transformation systems.This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
文摘Explainable Artificial Intelligence(XAI)has an advanced feature to enhance the decision-making feature and improve the rule-based technique by using more advanced Machine Learning(ML)and Deep Learning(DL)based algorithms.In this paper,we chose e-healthcare systems for efficient decision-making and data classification,especially in data security,data handling,diagnostics,laboratories,and decision-making.Federated Machine Learning(FML)is a new and advanced technology that helps to maintain privacy for Personal Health Records(PHR)and handle a large amount of medical data effectively.In this context,XAI,along with FML,increases efficiency and improves the security of e-healthcare systems.The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning(FL)platform.The experimental evaluation demonstrates the accuracy rate by taking epochs size 5,batch size 16,and the number of clients 5,which shows a higher accuracy rate(19,104).We conclude the paper by discussing the existing gaps and future work in an e-healthcare system.
基金in part by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(NRF-2021R1A6A1A03039493)by the 2024 Yeungnam University Research Grant.
文摘In today’s rapidly evolving landscape of communication technologies,ensuring the secure delivery of sensitive data has become an essential priority.To overcome these difficulties,different steganography and data encryption methods have been proposed by researchers to secure communications.Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility using LSB substitution.In this work,we have an approach that utilizes a combinationofMost SignificantBit(MSB)matching andLeast Significant Bit(LSB)substitution.The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching,enabling the storage of 6 bits in one pixel by modifying a maximum of three bits.The proposed technique is evaluated using embedding capacity and Peak Signal-to-Noise Ratio(PSNR)score,we compared our work with the Zakariya scheme the results showed a significant increase in data concealment capacity.The achieved results of ourwork showthat our algorithmdemonstrates an improvement in hiding capacity from11%to 22%for different data samples while maintaining a minimumPeak Signal-to-Noise Ratio(PSNR)of 37 dB.These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.
文摘Accurate software cost estimation in Global Software Development(GSD)remains challenging due to reliance on historical data and expert judgments.Traditional models,such as the Constructive Cost Model(COCOMO II),rely heavily on historical and accurate data.In addition,expert judgment is required to set many input parameters,which can introduce subjectivity and variability in the estimation process.Consequently,there is a need to improve the current GSD models to mitigate reliance on historical data,subjectivity in expert judgment,inadequate consideration of GSD-based cost drivers and limited integration of modern technologies with cost overruns.This study introduces a novel hybrid model that synergizes the COCOMO II with Artificial Neural Networks(ANN)to address these challenges.The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts.This article compares the effectiveness of the proposedmodelwith state-of-the-artmachine learning-basedmodels for software cost estimation.Evaluating the NASA 93 dataset by adopting twenty-six GSD-based cost drivers reveals that our hybrid model achieves superior accuracy,outperforming existing state-of-the-artmodels.The findings indicate the potential of combining COCOMO II,ANN,and additional GSD-based cost drivers to transform cost estimation in GSD.
文摘Sentiment analysis, a crucial task in discerning emotional tones within the text, plays a pivotal role in understandingpublic opinion and user sentiment across diverse languages.While numerous scholars conduct sentiment analysisin widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, we come to grapplingwith resource-poor languages like Urdu literature which becomes a challenge. Urdu is a uniquely crafted language,characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu,Turkish, Punjabi, Saraiki, and more. As Urdu literature, characterized by distinct character sets and linguisticfeatures, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis aformidable undertaking. The limited availability of resources has fueled increased interest among researchers,prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu languagesentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into fivelabels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments andemotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, theinitial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such asnewspapers, articles, and socialmedia comments. Subsequent to this data collection, a thorough process of cleaningand preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deeplearningmodels, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for bothtraining and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning tooptimize the models’ efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assessthe effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis,gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN,solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
文摘Nowadays, devices are connected across all areas, from intelligent buildings and smart cities to Industry 4.0 andsmart healthcare. With the exponential growth of Internet of Things usage in our world, IoT security is still thebiggest challenge for its deployment. The main goal of IoT security is to ensure the accessibility of services providedby an IoT environment, protect privacy, and confidentiality, and guarantee the safety of IoT users, infrastructures,data, and devices. Authentication, as the first line of defense against security threats, becomes the priority ofeveryone. It can either grant or deny users access to resources according to their legitimacy. As a result, studyingand researching authentication issues within IoT is extremely important. As a result, studying and researchingauthentication issues within IoT is extremely important. This article presents a comparative study of recent researchin IoT security;it provides an analysis of recent authentication protocols from2019 to 2023 that cover several areaswithin IoT (such as smart cities, healthcare, and industry). This survey sought to provide an IoT security researchsummary, the biggest susceptibilities, and attacks, the appropriate technologies, and the most used simulators. Itillustrates that the resistance of protocols against attacks, and their computational and communication cost arelinked directly to the cryptography technique used to build it. Furthermore, it discusses the gaps in recent schemesand provides some future research directions.
文摘A new era of data access and management has begun with the use of cloud computing in the healthcare industry.Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a majorconcern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentialityand integrity of healthcare data in the cloud. The computational overhead of encryption technologies could leadto delays in data access and processing rates. To address these challenges, we introduced the Enhanced ParallelMulti-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the securestorage of critical patient records in the cloud. The data was gathered from two categories Authorization forHospital Admission (AIH) and Authorization for High Complexity Operations.We use Z-score normalization forpreprocessing. The primary goal of implementing encryption techniques is to secure and store massive amountsof data on the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become morewidely available if security issues can be successfully fixed. As a result of our analysis using specific parametersincluding Execution time (42%), Encryption time (45%), Decryption time (40%), Security level (97%), and Energyconsumption (53%), the system demonstrated favorable performance when compared to the traditional method.This suggests that by addressing these security concerns, there is the potential for broader accessibility to cloudstorage solutions for safeguarding healthcare data.
基金supported by Northern Border University,Arar,KSA,through the Project Number“NBU-FFR-2024-2248-02”.
文摘This paper contributes a sophisticated statistical method for the assessment of performance in routing protocols salient Mobile Ad Hoc Network(MANET)routing protocols:Destination Sequenced Distance Vector(DSDV),Ad hoc On-Demand Distance Vector(AODV),Dynamic Source Routing(DSR),and Zone Routing Protocol(ZRP).In this paper,the evaluation will be carried out using complete sets of statistical tests such as Kruskal-Wallis,Mann-Whitney,and Friedman.It articulates a systematic evaluation of how the performance of the previous protocols varies with the number of nodes and the mobility patterns.The study is premised upon the Quality of Service(QoS)metrics of throughput,packet delivery ratio,and end-to-end delay to gain an adequate understanding of the operational efficiency of each protocol under different network scenarios.The findings explained significant differences in the performance of different routing protocols;as a result,decisions for the selection and optimization of routing protocols can be taken effectively according to different network requirements.This paper is a step forward in the general understanding of the routing dynamics of MANETs and contributes significantly to the strategic deployment of robust and efficient network infrastructures.
基金funded by the Researchers Supporting Program at King Saud University(RSPD2023R809).
文摘Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems.Its attributes as a non-toxic,low-carbon,and economical substitute for conventional cement concrete,coupled with its elevated compressive strength and reduced shrinkage properties,position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure.In this context,this study sets out the task of using machine learning(ML)algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field.To achieve this goal,a new approach using convolutional neural networks(CNNs)has been adopted.This study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes,all containing Class F fly ash.The selection of optimal input parameters is guided by two distinct criteria.The first criterion leverages insights garnered from previous research on the influence of individual features on compressive strength.The second criterion scrutinizes the impact of these features within the model’s predictive framework.Key to enhancing the CNN model’s performance is the meticulous determination of the optimal hyperparameters.Through a systematic trial-and-error process,the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation—a technique vital to the model’s robustness.The model’s predictive prowess is rigorously assessed via a suite of performance metrics and comprehensive score analyses.Furthermore,the model’s adaptability is gauged by integrating a secondary dataset into its predictive framework,facilitating a comparative evaluation against conventional prediction methods.To unravel the intricacies of the CNN model’s learning trajectory,a loss plot is deployed to elucidate its learning rate.The study culminates in compelling findings that underscore the CNN model’s accurate prediction of geopolymer concrete compressive strength.To maximize the dataset’s potential,the application of bivariate plots unveils nuanced trends and interactions among variables,fortifying the consistency with earlier research.Evidenced by promising prediction accuracy,the study’s outcomes hold significant promise in guiding the development of innovative geopolymer concrete formulations,thereby reinforcing its role as an eco-conscious and robust construction material.The findings prove that the CNN model accurately estimated geopolymer concrete’s compressive strength.The results show that the prediction accuracy is promising and can be used for the development of new geopolymer concrete mixes.The outcomes not only underscore the significance of leveraging technology for sustainable construction practices but also pave the way for innovation and efficiency in the field of civil engineering.
文摘Predicting depression intensity from microblogs and social media posts has numerous benefits and applications,including predicting early psychological disorders and stress in individuals or the general public.A major challenge in predicting depression using social media posts is that the existing studies do not focus on predicting the intensity of depression in social media texts but rather only perform the binary classification of depression and moreover noisy data makes it difficult to predict the true depression in the social media text.This study intends to begin by collecting relevant Tweets and generating a corpus of 210000 public tweets using Twitter public application programming interfaces(APIs).A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus.Furthermore,an algorithm is developed to annotate the data into three depression classes:‘Mild,’‘Moderate,’and‘Severe,’based on International Classification of Diseases-10(ICD-10)depression diagnostic criteria.Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus.Further FastText-based model is applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model,which significantly increases the depression classification performance to an 84%F1 score and 90%accuracy compared to baselines.Finally,a FastText-based weighted soft voting ensemble(WSVE)is proposed to boost the model’s performance by combining several other classifiers and assigning weights to individual models according to their individual performances.The proposed WSVE outperformed all baselines as well as FastText alone,with an F1 of 89%,5%higher than FastText alone,and an accuracy of 93%,3%higher than FastText alone.The proposed model better captures the contextual features of the relatively small sample class and aids in the detection of early depression intensity prediction from tweets with impactful performances.
文摘The widespread adoption of QR codes has revolutionized various industries, streamlined transactions and improved inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code Phishing, or “Quishing”, is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity and trust in QR codes. This paper examines the importance of enhancing the security of QR codes through the utilization of artificial intelligence (AI). The abstract investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening QR code technology’s resilience. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.