Journal Articles
1,002 articles found
1. The Value of Modern Information Technology in Nursing Emergencies
Authors: Xushan Zhou, Ying Li, Henglong Xu, Chenyan Li, Hongling Sun. Health, 2024, Issue 9, pp. 849-855.
In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming increasingly mature, and as a result its use across numerous industries is now standard. China is still in the early stages of integrating emergency medical services with modern information technology; despite the progress made, numerous obstacles and constraints remain. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. Modern information technology allows information to be communicated quickly, through multiple channels, and effectively. This study examines the current state of the field's development, its outstanding issues, and its future course.
Keywords: modern information technology; nursing; first aid; critical care
2. A Review on the Recent Trends of Image Steganography for VANET Applications
Author: Arshiya S. Ansari. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2865-2892.
Image steganography is a technique for concealing confidential information within an image without dramatically changing its outward appearance. Vehicular ad hoc networks (VANETs), which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow, are an essential component of modern smart transportation systems and provide a range of value-added services. Many authors have suggested steganography in VANETs for secure, reliable hop-to-hop message transfer and for protection against attacks that threaten privacy. This paper aims to determine whether steganography can improve data security and secrecy in VANET applications, and to analyze effective steganography techniques for embedding data into images while minimizing visual quality loss. According to simulations in the literature and real-world studies, image steganography has proved to be an effective method for secure communication in VANETs, even under difficult network conditions. We also explore a variety of steganography approaches for vehicular ad hoc network transportation systems, including vector embedding, statistics, spatial domain (SD), transform domain (TD), distortion, masking, and filtering. This study may help researchers improve vehicle networks' ability to communicate securely and open the door to innovative steganography methods.
Keywords: steganography; image steganography; image steganography techniques; information exchange; data embedding and extracting; vehicular ad hoc network (VANET); transportation system
3. Enhancing Cybersecurity Competency in the Kingdom of Saudi Arabia: A Fuzzy Decision-Making Approach
Author: Wajdi Alhakami. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 3211-3237.
The Kingdom of Saudi Arabia (KSA) has achieved significant milestones in cybersecurity. KSA maintains solid regulatory mechanisms to prevent, trace, and punish offenders, protecting the interests of both individual users and organizations from the online threats of data poaching and pilferage. The widespread usage of Information Technology (IT) and IT-Enabled Services (ITES) reinforces security measures. Constantly evolving cyber threats are generating much discussion. Against this backdrop, the present article offers a broad perspective on how cybercrime is currently developing in KSA and reviews some of the most significant attacks that have taken place in the region. The existing legislative framework and measures in the KSA are geared toward deterring criminal activity online. Different competency models have been devised to address the necessary cybercrime competencies in this context. Research specialists in this domain can benefit from developing a master competency level for achieving optimum security. To address this research query, the present assessment uses the Fuzzy Decision-Making Trial and Evaluation Laboratory (Fuzzy-DMTAEL), Fuzzy Analytic Hierarchy Process (F-AHP), and Fuzzy TOPSIS methodologies to achieve segment-wise competency development in cybersecurity policy. The similarities and differences between the three methods are also discussed. This cybersecurity analysis determined that the National Cyber Security Centre received the highest priority. The study concludes by reviewing the challenges that still need to be examined and resolved to effectuate more credible and efficacious online security mechanisms and offer a more empowered ITES-driven economy for Saudi Arabia. Moreover, cybersecurity specialists and policymakers need to coordinate their efforts to protect the country's digital assets in the era of overt and covert cyber warfare.
Keywords: cybersecurity; fuzzy DMTAEL; security policy; cybercrime; MCDM
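The abstract above names three fuzzy MCDM techniques without showing their mechanics. As a rough illustration of the ranking step, here is a minimal crisp (non-fuzzy) TOPSIS sketch; fuzzy variants replace the crisp scores with fuzzy numbers before this step. The decision matrix, weights, and criteria below are invented for illustration and are not taken from the paper.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives with classic (crisp) TOPSIS.

    matrix  : alternatives x criteria scores
    weights : criterion weights summing to 1
    benefit : True where higher is better, False where lower is better
    """
    m = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply weights.
    norm = m / np.linalg.norm(m, axis=0)
    v = norm * np.asarray(weights, dtype=float)
    # Ideal best/worst depend on whether a criterion is benefit or cost.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    # Closeness coefficient: 1 = ideal, 0 = anti-ideal.
    return d_worst / (d_best + d_worst)

# Hypothetical example: three security-policy segments scored on three
# criteria (coverage, maturity, cost); cost is a "lower is better" criterion.
scores = topsis(
    [[7, 8, 3], [9, 6, 5], [6, 7, 2]],
    weights=[0.5, 0.3, 0.2],
    benefit=[True, True, False],
)
ranking = scores.argsort()[::-1]  # indices of alternatives, best first
```

The closeness coefficient always lies in [0, 1], so segment priorities can be read off directly from `ranking`.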
4. A New Framework for Software Vulnerability Detection Based on an Advanced Computing
Authors: Bui Van Cong, Cho Do Xuan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 3699-3723.
The detection of software vulnerabilities in code written in C and C++ attracts much attention and interest today. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique that combines two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of source code and detect the abnormal behavior of software vulnerabilities. To do so, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Method (i) extracts the source code's features from the vertices and edges of the CPG. The data rebalancing method supports the training process by balancing the experimental dataset. Finally, contrastive learning techniques learn the important features of the source code by finding and pulling similar samples together while pushing outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities on the Verum dataset. The proposed method achieves good performance on all metrics, notably Precision and Recall scores of 39.35% and 69.07%, respectively, proving the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in both Precision and Recall. Overall, according to our survey to date, this is the best result reported for the software vulnerability detection problem on the Verum dataset.
Keywords: source code vulnerability; source code vulnerability detection; code property graph; feature profile; contrastive learning; data rebalancing
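The abstract's description of contrastive learning ("pulling similar ones together while pushing the outliers away") maps onto the classic pairwise contrastive loss. The sketch below is a generic illustration of that idea, not the DrCSE objective itself; the embeddings and margin are made up.

```python
import numpy as np

def contrastive_loss(z1, z2, label, margin=1.0):
    """Pairwise contrastive loss over embedding pairs.

    label = 1 : pair is similar    -> pull embeddings together
    label = 0 : pair is dissimilar -> push apart up to `margin`
    """
    d = np.linalg.norm(z1 - z2, axis=1)                    # distance per pair
    pos = label * d ** 2                                   # similar: penalize distance
    neg = (1 - label) * np.maximum(margin - d, 0.0) ** 2   # dissimilar: penalize closeness
    return float(np.mean(pos + neg))

# Toy embeddings: one similar pair (close) and one dissimilar pair (far).
z_a = np.array([[0.0, 0.0], [0.0, 0.0]])
z_b = np.array([[0.1, 0.0], [3.0, 0.0]])
labels = np.array([1, 0])
loss = contrastive_loss(z_a, z_b, labels)  # small: both pairs already well placed
```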
5. Enhanced Mechanism for Link Failure Rerouting in Software-Defined Exchange Point Networks
Authors: Abdijalil Abdullahi, Selvakumar Manickam. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4361-4385.
An Internet Exchange Point (IXP) is a system that increases network bandwidth performance. Internet exchange points facilitate interconnection among network providers, including Internet Service Providers (ISPs) and Content Delivery Providers (CDNs). To improve service management, Internet exchange point providers have adopted the Software-Defined Network (SDN) paradigm. This implementation is known as a Software-Defined Exchange Point (SDX). It improves network providers' operations and management. However, performance issues still exist, particularly with multi-hop topologies, including switch memory costs, packet processing latency, and link failure recovery delays. This paper proposes Enhanced Link Failure Rerouting (ELFR), an improved mechanism for rerouting around link failures in software-defined exchange point networks. The proposed mechanism aims to minimize packet processing time for fast link failure recovery and to enhance path calculation efficiency while reducing switch storage overhead by exploiting Programming Protocol-independent Packet Processors (P4) features. The paper demonstrates the proposed mechanism's efficiency by utilizing advanced algorithms and showing improved performance in packet processing speed, path calculation effectiveness, and switch storage management compared to current mechanisms. The proposed mechanism shows significant improvements, leading to a 37.5% decrease in Recovery Time (RT) and a 33.33% decrease in both Calculation Time (CT) and Computational Overhead (CO) compared to current mechanisms. The study highlights the effectiveness and resource efficiency of the proposed mechanism in resolving crucial issues in multi-hop software-defined exchange point networks.
Keywords: link failure recovery; Internet exchange point; software-defined exchange point; software-defined network; multi-hop topologies
6. A Review of Image Steganography Based on Multiple Hashing Algorithm
Authors: Abdullah Alenizi, Mohammad Sajid Mohammadi, Ahmad A. Al-Hajji, Arshiya Sajid Ansari. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2463-2494.
Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item. From ancient times to the present, the security of secret or vital information has always been a significant problem, and the development of secure communication methods that keep data transmissions recipient-only has always been an area of interest. Therefore, researchers have developed several approaches over time, including steganography, to enable safe data transit. In this review, we discuss image steganography based on the Discrete Cosine Transform (DCT) algorithm, among others, as well as image steganography based on multiple hashing algorithms such as the Rivest-Shamir-Adleman (RSA) method, the Blowfish technique, and the hash-least significant bit (LSB) approach. A method of hiding information in images with minimal variance in image bits is developed, making the approach secure and effective. A cryptography mechanism is also employed in this strategy: the data is verified to be encrypted before being encoded and embedded into a carrier image. Embedded text in photos usually conveys crucial signals about the content. This review employs hash-table encryption on the message before hiding it within the picture to provide a more secure method of data transport; if the message is intercepted by a third party, there are several ways to stop the operation. A second level of security is implemented by encrypting and decrypting steganography images using different hashing algorithms.
Keywords: image steganography; multiple hashing algorithms; hash-LSB approach; RSA algorithm; discrete cosine transform (DCT) algorithm; Blowfish algorithm
7. Current status of magnetic resonance imaging radiomics in hepatocellular carcinoma: A quantitative review with Radiomics Quality Score
Authors: Valentina Brancato, Marco Cerrone, Nunzia Garbino, Marco Salvatore, Carlo Cavaliere. World Journal of Gastroenterology (SCIE, CAS), 2024, Issue 4, pp. 381-417.
BACKGROUND: Radiomics is a promising tool that may increase the value of magnetic resonance imaging (MRI) for different tasks related to the management of patients with hepatocellular carcinoma (HCC). However, its implementation in clinical practice is still far off, with many issues related to the methodological quality of radiomic studies.
AIM: To systematically review the current status of MRI radiomic studies concerning HCC using the Radiomics Quality Score (RQS).
METHODS: A systematic literature search of the PubMed, Google Scholar, and Web of Science databases was performed to identify original articles focusing on the use of MRI radiomics for HCC management published between 2017 and 2023. The methodological quality of radiomic studies was assessed using the RQS tool. Spearman's correlation (ρ) analysis was performed to explore whether RQS was correlated with journal metrics and study characteristics. The level of statistical significance was set at P < 0.05.
RESULTS: One hundred and twenty-seven articles were included, of which 43 focused on HCC prognosis, 39 on prediction of pathological findings, 16 on prediction of the expression of molecular markers, 18 had a diagnostic purpose, and 11 had multiple purposes. The mean RQS was 8 ± 6.22, and the corresponding percentage was 24.15% ± 15.25% (ranging from 0.0% to 58.33%). RQS was positively correlated with journal impact factor (IF; ρ = 0.36, P = 2.98 × 10^(-5)), 5-year IF (ρ = 0.33, P = 1.56 × 10^(-4)), number of patients included in the study (ρ = 0.51, P < 9.37 × 10^(-10)), and number of radiomics features extracted in the study (ρ = 0.59, P < 4.59 × 10^(-13)), and negatively correlated with time of publication (ρ = -0.23, P < 0.0072).
CONCLUSION: Although MRI radiomics in HCC represents a promising noninvasive tool for developing adequate personalized treatment, our study revealed that studies in this field still lack the quality required to allow its introduction into clinical practice.
Keywords: hepatocellular carcinoma; systematic review; magnetic resonance imaging; radiomics; Radiomics Quality Score
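The correlations reported in the RESULTS section are Spearman's ρ, which is simply the Pearson correlation computed on ranks. A small self-contained sketch follows; the RQS totals and impact factors below are hypothetical toy values, not the study's data.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def rank(a):
        # 1-based ranks, with tied values receiving their average rank.
        a = np.asarray(a, dtype=float)
        order = a.argsort()
        ranks = np.empty_like(a)
        ranks[order] = np.arange(1, len(a) + 1)
        for v in np.unique(a):
            mask = a == v
            ranks[mask] = ranks[mask].mean()
        return ranks
    rx, ry = rank(x), rank(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical toy data: RQS totals vs. journal impact factors.
rqs = [5, 12, 8, 20, 3]
impact = [1.2, 3.5, 2.1, 6.0, 0.9]
rho = spearman_rho(rqs, impact)  # perfectly monotone here, so rho = 1.0
```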
8. Improving the Transmission Security of Vein Images Using a Bezier Curve and Long Short-Term Memory
Authors: Ahmed H. Alhadethi, Ikram Smaoui, Ahmed Fakhfakh, Saad M. Darwish. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4825-4844.
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a domain that can still be improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs through image compression. The paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities; this scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures to image deterioration. The proposed system attained 98% accuracy under Gaussian noise and a rotation accuracy surpassing 99%.
Keywords: image transmission; image compression; text hiding; Bezier curve; Histogram of Oriented Gradients (HOG); LSTM; image enhancement; Gaussian noise; rotation
9. Real-Time Prediction of Urban Traffic Problems Based on Artificial Intelligence-Enhanced Mobile Ad Hoc Networks (MANETs)
Authors: Ahmed Alhussen, Arshiya S. Ansari. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 1903-1923.
Traffic in today's cities is a serious problem that increases travel times, negatively affects the environment, and drains financial resources. This study presents an Artificial Intelligence (AI)-augmented Mobile Ad Hoc Networks (MANETs) based real-time prediction paradigm for urban traffic challenges. MANETs are wireless networks of mobile devices that can self-organize. The framework leverages the distributed nature of MANETs and the power of AI approaches to provide reliable and timely traffic congestion forecasts. The study proposes a unique Chaotic Spatial Fuzzy Polynomial Neural Network (CSFPNN) technique to assess real-time data acquired from various sources within the MANETs. The framework uses the proposed approach to learn from the data and create prediction models that detect possible traffic problems and their severity in real time. Real-time traffic prediction allows proactive actions such as resource allocation, dynamic route advice, and traffic signal optimization to reduce congestion. By giving timely information to pedestrians, drivers, and urban planners, the framework supports effective decision-making, decreases travel time, lowers fuel use, and enhances overall urban mobility. Extensive simulations and real-world datasets are used to test the proposed framework's prediction accuracy, responsiveness, and scalability. Experimental results show that the suggested framework successfully anticipates urban traffic issues in real time, enables proactive traffic management, and aids in creating smarter, more sustainable cities.
Keywords: Mobile Ad Hoc Networks (MANET); urban traffic prediction; artificial intelligence (AI); traffic congestion; chaotic spatial fuzzy polynomial neural network (CSFPNN)
10. Automatic Rule Discovery for Data Transformation Using Fusion of Diversified Feature Formats
Authors: G. Sunil Santhosh Kumar, M. Rudra Kumar. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 695-713.
This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost, a machine learning algorithm renowned for its efficiency and performance. The proposed framework utilizes a fusion of diversified feature formats, specifically metadata, textual, and pattern features, with the goal of enhancing the system's ability to discern and generalize transformation rules from source to destination formats in varied contexts. The article first covers the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model. Subsequent sections expound on feature optimization using Recursive Feature Elimination (RFE) with linear regression, aiming to retain the most contributive features and eliminate redundant or less significant ones. The core of the research revolves around the deployment of the XGBoost model for training on the prepared and optimized feature sets; the article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure. Finally, the rule discovery (prediction) phase by the trained XGBoost model is explained, underscoring its role in real-time, automated data transformations. By employing machine learning, particularly the XGBoost model, in the context of Business Rule Engine (BRE) data transformation, the article underscores a paradigm shift toward more scalable, efficient, and less human-dependent data transformation systems. This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
Keywords: XGBoost; business rule engine; machine learning; categorical query language; humanitarian computing environment
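The RFE-with-linear-regression step described above can be sketched with ordinary least squares: fit a linear model, drop the feature whose coefficient has the smallest magnitude, and repeat. This is a generic illustration on synthetic data (comparable feature scales assumed), not the paper's feature set.

```python
import numpy as np

def rfe_linear(X, y, n_keep):
    """Recursive feature elimination with ordinary least squares.

    Repeatedly fits a linear model and drops the feature whose
    coefficient has the smallest magnitude, until n_keep remain.
    Assumes features are on comparable scales (standardize first
    otherwise, since raw coefficients are scale-dependent).
    """
    X = np.asarray(X, dtype=float)
    remaining = list(range(X.shape[1]))
    while len(remaining) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, remaining], y, rcond=None)
        weakest = int(np.argmin(np.abs(coef)))
        remaining.pop(weakest)  # eliminate the least contributive feature
    return remaining

# Toy data: y depends on features 0 and 2; feature 1 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 2.0 * X[:, 2] + 0.01 * rng.normal(size=200)
kept = rfe_linear(X, y, n_keep=2)  # the noise feature is eliminated
```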
11. Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems
Authors: Rabia Abid, Muhammad Rizwan, Abdulatif Alabdulatif, Abdullah Alnajim, Meznah Alamro, Mourade Azrour. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3413-3429.
Explainable Artificial Intelligence (XAI) enhances decision-making and improves on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we focus on e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision support. Federated Machine Learning (FML) is a new and advanced technology that helps maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI along with FML increases the efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy achieved with an epoch count of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
Keywords: artificial intelligence; data privacy; federated machine learning; healthcare system; security
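The federated averaging algorithm mentioned in the experiments combines client models by a dataset-size-weighted mean of their parameters. A minimal sketch with two hypothetical clients; the layer shapes and client sizes are invented for illustration.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server averages each client's parameters,
    weighted by that client's local dataset size."""
    total = float(sum(client_sizes))
    layers = zip(*client_weights)  # group the same layer across clients
    return [
        sum(w * (n / total) for w, n in zip(layer, client_sizes))
        for layer in layers
    ]

# Toy model: one weight matrix and one bias vector per client.
client_a = [np.array([[1.0, 1.0]]), np.array([0.0])]
client_b = [np.array([[3.0, 3.0]]), np.array([1.0])]
# Client B holds 3x as much data, so it gets 3x the weight (0.75 vs 0.25).
global_model = fed_avg([client_a, client_b], client_sizes=[1, 3])
```

In a real FL round the server would broadcast `global_model` back to the clients for the next local training pass.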
12. Elevating Image Steganography: A Fusion of MSB Matching and LSB Substitution for Enhanced Concealment Capabilities
Authors: Muhammad Zaman Ali, Omer Riaz, Hafiz Muhammad Hasnain, Waqas Sharif, Tenvir Ali, Gyu Sang Choi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2923-2943.
In today's rapidly evolving landscape of communication technologies, ensuring the secure delivery of sensitive data has become an essential priority. To overcome these difficulties, researchers have proposed different steganography and data encryption methods to secure communications. Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility using LSB substitution. In this work, we present an approach that utilizes a combination of Most Significant Bit (MSB) matching and Least Significant Bit (LSB) substitution. The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching, enabling the storage of 6 bits in one pixel while modifying at most three bits. The proposed technique is evaluated using embedding capacity and Peak Signal-to-Noise Ratio (PSNR); compared with the Zakariya scheme, the results show a significant increase in data concealment capacity. The results show that our algorithm improves hiding capacity by 11% to 22% for different data samples while maintaining a minimum PSNR of 37 dB. These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.
Keywords: steganography; most significant bit (MSB); least significant bit (LSB); peak signal-to-noise ratio (PSNR)
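The paper's exact MSB-matching scheme is not reproduced in the abstract; as a baseline for comparison, here is plain k-bit LSB substitution, the technique the abstract says most prior work builds on. The pixel values and secret bits are arbitrary examples.

```python
import numpy as np

def embed_lsb(pixels, bits, k=2):
    """Hide `bits` in the k least significant bits of successive pixels."""
    out = pixels.copy().ravel()
    bits = list(bits) + [0] * (-len(bits) % k)  # pad to a multiple of k
    for i in range(0, len(bits), k):
        value = int("".join(map(str, bits[i:i + k])), 2)
        idx = i // k
        out[idx] = ((int(out[idx]) >> k) << k) | value  # replace k LSBs
    return out.reshape(pixels.shape)

def extract_lsb(pixels, n_bits, k=2):
    """Read back the first n_bits hidden by embed_lsb."""
    flat = pixels.ravel()
    bits = []
    for idx in range(-(-n_bits // k)):  # ceil(n_bits / k) pixels
        value = int(flat[idx]) & ((1 << k) - 1)
        bits.extend(int(b) for b in format(value, f"0{k}b"))
    return bits[:n_bits]

cover = np.array([[200, 113], [54, 77]], dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 1]
stego = embed_lsb(cover, secret)              # each pixel changes by at most 3
recovered = extract_lsb(stego, len(secret))   # == secret
```

With k = 2 each pixel carries 2 payload bits and deviates from the cover by at most 2^k - 1 = 3 gray levels, which is why LSB schemes preserve PSNR so well.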
13. A Hybrid Model for Improving Software Cost Estimation in Global Software Development
Authors: Mehmood Ahmed, Noraini B. Ibrahim, Wasif Nisar, Adeel Ahmed, Muhammad Junaid, Emmanuel Soriano Flores, Divya Anand. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 1399-1422.
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data; in addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. The article compares the effectiveness of the proposed model with state-of-the-art machine-learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that the hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
Keywords: artificial neural networks; COCOMO II; cost drivers; global software development; linear regression; software cost estimation
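COCOMO II's post-architecture form, which the hybrid model builds on, estimates effort as Effort = A · Size^E · ΠEM with exponent E = B + 0.01 · ΣSF. The sketch below uses the published calibration constants (A = 2.94, B = 0.91) and the standard nominal scale-factor ratings; the 10 KSLOC project size is an arbitrary example, and the paper's additional GSD cost drivers are not modeled here.

```python
def cocomo2_effort(ksloc, scale_factors, effort_multipliers,
                   A=2.94, B=0.91):
    """COCOMO II post-architecture effort estimate in person-months.

    Effort = A * Size^E * prod(EM), where E = B + 0.01 * sum(SF).
    A = 2.94 and B = 0.91 are the published calibration constants.
    """
    E = B + 0.01 * sum(scale_factors)
    product = 1.0
    for em in effort_multipliers:
        product *= em
    return A * ksloc ** E * product

# A nominal 10 KSLOC project: nominal scale-factor ratings
# (PREC, FLEX, RESL, TEAM, PMAT) and all 17 effort multipliers at 1.0.
effort = cocomo2_effort(
    ksloc=10.0,
    scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
    effort_multipliers=[1.0] * 17,
)  # roughly 37 person-months
```

The hybrid model in the paper replaces parts of this hand-tuned pipeline with an ANN trained on the cost drivers, rather than relying on expert-set multipliers alone.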
14. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 713-733.
Sentiment analysis, a crucial task for discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, resource-poor languages like Urdu pose a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Persian, Pashto, Turkish, Punjabi, and Saraiki. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language despite the absence of well-curated datasets. To tackle this challenge, the first step is the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments, followed by a thorough cleaning and preprocessing process to ensure data quality. The study leverages two well-known deep learning models, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance, and explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that the RNN surpasses the CNN in Urdu sentiment analysis, attaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of the RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
Keywords: Urdu sentiment analysis; convolutional neural networks; recurrent neural network; deep learning; natural language processing; neural networks
15. Internet of Things Authentication Protocols: Comparative Study
Authors: Souhayla Dargaoui, Mourade Azrour, Ahmad ElAllaoui, Azidine Guezzaz, Abdulatif Alabdulatif, Abdullah Alnajim. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 65-91.
Nowadays, devices are connected across all areas, from intelligent buildings and smart cities to Industry 4.0 and smart healthcare. With the exponential growth of Internet of Things usage in our world, IoT security is still the biggest challenge for its deployment. The main goal of IoT security is to ensure the accessibility of services provided by an IoT environment, protect privacy and confidentiality, and guarantee the safety of IoT users, infrastructures, data, and devices. Authentication, as the first line of defense against security threats, is therefore a priority: it can either grant or deny users access to resources according to their legitimacy. As a result, studying and researching authentication issues within IoT is extremely important. This article presents a comparative study of recent research in IoT security; it provides an analysis of recent authentication protocols from 2019 to 2023 that cover several areas within IoT (such as smart cities, healthcare, and industry). The survey seeks to provide a summary of IoT security research, the biggest vulnerabilities and attacks, the appropriate technologies, and the most used simulators. It illustrates that a protocol's resistance against attacks, and its computational and communication cost, are linked directly to the cryptography technique used to build it. Furthermore, it discusses the gaps in recent schemes and provides some future research directions.
Keywords: attacks; cryptography; Internet of Things; security; authentication
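As a toy illustration of the kind of lightweight symmetric authentication building block such surveys compare, the sketch below shows a generic HMAC-based challenge-response exchange. This is not a protocol from the paper; the pre-shared-key handling is deliberately simplified for illustration.

```python
import hashlib
import hmac
import os

# Pre-shared key between an IoT device and the server (illustrative only;
# real deployments need secure key provisioning and storage)
KEY = os.urandom(32)

def server_challenge() -> bytes:
    """Issue a fresh random nonce; freshness prevents replay attacks."""
    return os.urandom(16)

def device_response(key: bytes, challenge: bytes) -> bytes:
    """Device proves knowledge of the key by MACing the challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    """Constant-time comparison avoids timing side channels."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = server_challenge()
resp = device_response(KEY, nonce)
```

Schemes like this are cheap in computation and communication, which is exactly the trade-off the survey links to the underlying cryptographic technique.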
Fortifying Healthcare Data Security in the Cloud: A Comprehensive Examination of the EPM-KEA Encryption Protocol
16
Authors: Umi Salma Basha, Shashi Kant Gupta, Wedad Alawad, SeongKi Kim, Salil Bharany 《Computers, Materials & Continua》 SCIE EI 2024, No. 5, pp. 3397-3416 (20 pages)
A new era of data access and management has begun with the use of cloud computing in the healthcare industry. Despite the efficiency and scalability that the cloud provides, the security of private patient data is still a major concern. Encryption, network security, and adherence to data protection laws are key to ensuring the confidentiality and integrity of healthcare data in the cloud. The computational overhead of encryption technologies could lead to delays in data access and processing rates. To address these challenges, we introduced the Enhanced Parallel Multi-Key Encryption Algorithm (EPM-KEA), aiming to bolster healthcare data security and facilitate the secure storage of critical patient records in the cloud. The data was gathered from two categories: Authorization for Hospital Admission (AIH) and Authorization for High Complexity Operations. We use Z-score normalization for preprocessing. The primary goal of implementing encryption techniques is to secure and store massive amounts of data on the cloud. It is feasible that cloud storage alternatives for protecting healthcare data will become more widely available if security issues can be successfully fixed. As a result of our analysis using specific parameters including execution time (42%), encryption time (45%), decryption time (40%), security level (97%), and energy consumption (53%), the system demonstrated favorable performance when compared to the traditional method. This suggests that by addressing these security concerns, there is the potential for broader accessibility to cloud storage solutions for safeguarding healthcare data.
Keywords: cloud computing; healthcare data security; Enhanced Parallel Multi-Key Encryption Algorithm (EPM-KEA)
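The Z-score normalization step mentioned in the abstract can be sketched as follows. This is a generic illustration with made-up feature values, not the authors' actual preprocessing pipeline.

```python
import numpy as np

def z_score_normalize(X):
    """Scale each column (feature) to zero mean and unit variance."""
    X = np.asarray(X, dtype=float)
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma[sigma == 0] = 1.0  # leave constant features unchanged, avoid 0-division
    return (X - mu) / sigma

# Hypothetical numeric fields from patient records (values are invented)
records = np.array([[120.0, 3.2],
                    [140.0, 4.8],
                    [100.0, 2.0]])
normalized = z_score_normalize(records)
```

Normalizing before encryption-pipeline experiments keeps timing and energy comparisons from being dominated by raw value magnitudes.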
Enhancing Network Design through Statistical Evaluation of MANET Routing Protocols
17
Authors: Ibrahim Alameri, Tawfik Al-Hadhrami, Anjum Nazir, Abdulsamad Ebrahim Yahya, Atef Gharbi 《Computers, Materials & Continua》 SCIE EI 2024, No. 7, pp. 319-339 (21 pages)
This paper contributes a sophisticated statistical method for assessing the performance of salient Mobile Ad Hoc Network (MANET) routing protocols: Destination Sequenced Distance Vector (DSDV), Ad hoc On-Demand Distance Vector (AODV), Dynamic Source Routing (DSR), and Zone Routing Protocol (ZRP). The evaluation is carried out using a complete set of statistical tests, including Kruskal-Wallis, Mann-Whitney, and Friedman. The paper articulates a systematic evaluation of how the performance of these protocols varies with the number of nodes and the mobility patterns. The study is premised upon the Quality of Service (QoS) metrics of throughput, packet delivery ratio, and end-to-end delay to gain an adequate understanding of the operational efficiency of each protocol under different network scenarios. The findings revealed significant differences in the performance of the routing protocols; as a result, decisions on the selection and optimization of routing protocols can be made effectively according to different network requirements. This paper is a step forward in the general understanding of the routing dynamics of MANETs and contributes significantly to the strategic deployment of robust and efficient network infrastructures.
Keywords: routing protocols; statistical approach; Friedman; MANET
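The three statistical tests named in the abstract are all available in SciPy. The sketch below applies them to hypothetical throughput samples; the protocol means, spreads, and sample sizes are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical throughput samples (kbps) for three protocols over 30 repeated runs
dsdv = rng.normal(410, 25, size=30)
aodv = rng.normal(455, 25, size=30)
dsr = rng.normal(430, 25, size=30)

# Kruskal-Wallis: non-parametric test of whether any protocol differs
h_stat, p_kw = stats.kruskal(dsdv, aodv, dsr)

# Mann-Whitney U: pairwise follow-up comparison between two protocols
u_stat, p_mw = stats.mannwhitneyu(dsdv, aodv, alternative="two-sided")

# Friedman: repeated-measures comparison across the same scenarios
chi2, p_fr = stats.friedmanchisquare(dsdv, aodv, dsr)
```

Non-parametric tests like these are a sound choice for simulation output, since throughput distributions are rarely normal enough to justify ANOVA.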
Prediction of Geopolymer Concrete Compressive Strength Using Convolutional Neural Networks
18
Authors: Kolli Ramujee, Pooja Sadula, Golla Madhu, Sandeep Kautish, Abdulaziz S. Almazyad, Guojiang Xiong, Ali Wagdy Mohamed 《Computer Modeling in Engineering & Sciences》 SCIE EI 2024, No. 5, pp. 1455-1486 (32 pages)
Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems. Its attributes as a non-toxic, low-carbon, and economical substitute for conventional cement concrete, coupled with its elevated compressive strength and reduced shrinkage properties, position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure. In this context, this study sets out the task of using machine learning (ML) algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field. To achieve this goal, a new approach using convolutional neural networks (CNNs) has been adopted. The study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes, all containing Class F fly ash. The selection of optimal input parameters is guided by two distinct criteria: the first leverages insights garnered from previous research on the influence of individual features on compressive strength, while the second scrutinizes the impact of these features within the model's predictive framework. Key to enhancing the CNN model's performance is the meticulous determination of the optimal hyperparameters. Through a systematic trial-and-error process, the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation, a technique vital to the model's robustness. The model's predictive prowess is rigorously assessed via a suite of performance metrics and comprehensive score analyses. Furthermore, the model's adaptability is gauged by integrating a secondary dataset into its predictive framework, facilitating a comparative evaluation against conventional prediction methods. To unravel the intricacies of the CNN model's learning trajectory, a loss plot is deployed to elucidate its learning rate. To maximize the dataset's potential, bivariate plots unveil nuanced trends and interactions among variables, fortifying consistency with earlier research. The findings show that the CNN model accurately estimated geopolymer concrete's compressive strength, and the promising prediction accuracy can guide the development of new geopolymer concrete mixes. The outcomes not only underscore the significance of leveraging technology for sustainable construction practices but also pave the way for innovation and efficiency in the field of civil engineering.
Keywords: Class F fly ash; compressive strength; geopolymer concrete; prediction; deep learning; convolutional neural network
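The k-fold cross-validation procedure the abstract relies on can be sketched as follows. A least-squares linear model stands in for the authors' CNN, and the mix-design features and strength target are synthetic; only the fold mechanics are the point.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 162 mix designs with 5 compositional features
X = rng.uniform(0, 1, size=(162, 5))
true_w = np.array([30.0, 12.0, 8.0, 5.0, 2.0])  # hypothetical feature effects
y = X @ true_w + rng.normal(0, 1.0, 162)        # "compressive strength" target

def k_fold_r2(X, y, k=5):
    """Manual k-fold CV: train on k-1 folds, score R^2 on the held-out fold."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.c_[X[train], np.ones(len(train))]  # design matrix with intercept
        w, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.c_[X[test], np.ones(len(test))] @ w
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1 - ss_res / ss_tot)
    return scores

scores = k_fold_r2(X, y, k=5)
```

With only 162 mixes, averaging the held-out score over k folds gives a far less noisy estimate of generalization than a single train/test split, which is why the study tunes k itself.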
Depression Intensity Classification from Tweets Using FastText-Based Weighted Soft Voting Ensemble
19
Authors: Muhammad Rizwan, Muhammad Faheem Mushtaq, Maryam Rafiq, Arif Mehmood, Isabel de la Torre Diez, Monica Gracia Villar, Helena Garay, Imran Ashraf 《Computers, Materials & Continua》 SCIE EI 2024, No. 2, pp. 2047-2066 (20 pages)
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge is that existing studies do not predict the intensity of depression in social media text but only perform binary classification of depression; moreover, noisy data makes it difficult to detect true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using the Twitter public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags, reducing noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild,' 'Moderate,' and 'Severe,' based on the International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning, which significantly increases depression classification performance to an 84% F1 score and 90% accuracy compared to the baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost performance further by combining several other classifiers and assigning weights to individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89% (5% higher than FastText alone) and an accuracy of 93% (3% higher than FastText alone). The proposed model better captures the contextual features of the relatively small sample class and aids in early depression intensity prediction from tweets.
Keywords: depression classification; deep learning; FastText; machine learning
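The weighted soft voting idea, combining each model's class-probability outputs with weights proportional to its validation score, can be sketched generically. All probabilities and F1 scores below are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical class-probability outputs of three classifiers for 4 tweets.
# Columns: P(Mild), P(Moderate), P(Severe); each row sums to 1.
probs = [
    np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.2, 0.7], [0.5, 0.4, 0.1]]),
    np.array([[0.5, 0.4, 0.1], [0.3, 0.4, 0.3], [0.2, 0.2, 0.6], [0.3, 0.5, 0.2]]),
    np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3], [0.2, 0.3, 0.5], [0.4, 0.4, 0.2]]),
]

# Weight each model by its (hypothetical) validation F1, normalized to sum to 1
f1 = np.array([0.84, 0.78, 0.80])
weights = f1 / f1.sum()

# Soft voting: weighted average of probabilities, then argmax per tweet
ensemble = sum(w * p for w, p in zip(weights, probs))
labels = np.array(["Mild", "Moderate", "Severe"])[ensemble.argmax(axis=1)]
```

Because the averaging happens over probabilities rather than hard labels, a confident minority model can overturn two lukewarm ones, which is where soft voting gains over majority voting.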
Enhancing Security in QR Code Technology Using AI: Exploration and Mitigation Strategies
20
Authors: Saranya Vaithilingam, Santhosh Aradhya Mohan Shankar 《International Journal of Intelligence Science》 2024, No. 2, pp. 49-57 (9 pages)
The widespread adoption of QR codes has revolutionized various industries, streamlining transactions and improving inventory management. However, this increased reliance on QR code technology also exposes it to potential security risks that malicious actors can exploit. QR code phishing, or "quishing," is a type of phishing attack that leverages QR codes to deceive individuals into visiting malicious websites or downloading harmful software. These attacks can be particularly effective due to the growing popularity of, and trust in, QR codes. This paper examines the importance of enhancing the security of QR codes through the utilization of artificial intelligence (AI). It investigates the integration of AI methods for identifying and mitigating security threats associated with QR code usage. By assessing the current state of QR code security and evaluating the effectiveness of AI-driven solutions, this research aims to propose comprehensive strategies for strengthening QR code technology's resilience. The study contributes to discussions on secure data encoding and retrieval, providing valuable insights into the evolving synergy between QR codes and AI for the advancement of secure digital communication.
Keywords: artificial intelligence; cyber security; QR codes; quishing; AI framework; machine learning; AI-enhanced security
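A mitigation layer of the kind the paper discusses often starts with simple heuristics on the URL decoded from a QR code, before any learned model is applied. The rule-of-thumb checks below are illustrative only and are not taken from the paper; real quishing defenses combine many more signals.

```python
from urllib.parse import urlparse

SUSPICIOUS_TLDS = {".zip", ".xyz", ".top"}  # illustrative list, not exhaustive

def quishing_risk_flags(url: str) -> list:
    """Return human-readable warnings for a URL decoded from a QR code."""
    flags = []
    parsed = urlparse(url)
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    host = parsed.hostname or ""
    if host.replace(".", "").isdigit():
        flags.append("raw IP address host")  # brands rarely link to bare IPs
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        flags.append("uncommon TLD")
    if "@" in parsed.netloc:
        flags.append("userinfo in URL")  # classic obfuscation trick
    return flags

print(quishing_risk_flags("http://192.168.0.1/login"))
```

Heuristics like these are cheap enough to run on-device at scan time; an AI classifier can then be reserved for the URLs that pass the obvious checks.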