Journal Articles
1,015 articles found.
1. The Value of Modern Information Technology in Nursing Emergencies
Authors: Xushan Zhou, Ying Li, Henglong Xu, Chenyan Li, Hongling Sun. Health, 2024(9): 849-855.
Abstract: In first aid, traditional information interchange has numerous shortcomings. For example, delayed information and disorganized departmental communication cause patients to miss out on critical rescue time. Information technology is becoming steadily more mature, and as a result its use across numerous industries is now standard. China is still in the early stages of integrating emergency medical services with modern information technology; despite this progress, numerous obstacles and constraints remain. Our goal is to integrate information technology into every aspect of emergency patient care, offering robust assistance for both patient rescue and the efforts of medical personnel. By utilizing modern information technology, information can be communicated in a fast, multi-channel, and effective manner. This study examines the current state of the field's development, its current issues, and its future course.
Keywords: Modern Information Technology; Nursing; First Aid; Critical Care

2. Information Technology in Domain of Electronic Intelligence and Cyberterrorism Combat
Authors: Robert Brumnik, Sergii Kavun. Computer Technology and Application, 2015(1): 45-56.
Abstract: In the 20th and 21st centuries, intelligence has come to mean the methods of automatic extraction, analysis, interpretation, and use of information. Intelligence services would thus build electronic databases of their classified intelligence products, from which users could select the information relevant to them. The EU (European Union) has carried out activities in this area since at least 1996; the terrorist attacks of 2001 only accelerated them. Proposals to increase surveillance and international cooperation in this field had been drawn up even before September 11, 2001. On the Web one can find a list of networks (Cryptome, 2011) that could be connected to, or are under the control of, the security service, the NSA (National Security Agency). In 1994, the United States enacted a law on telephone communication, the Digital Telephony Act, which would require manufacturers of telecommunications equipment to leave certain security holes open for monitoring. In addition, the Internet and large corporations are monitored. One example from the United States is the action brought by an electronic-freedoms organization against a telecom company, alleging that the NSA illegally gained access to data on information technology and Internet telephony users.
Keywords: intelligence and counterintelligence activities; electronic spy devices; detection methods; netwar; counterterrorism

3. Identification and Validation of Social Media Socio-Technical Information Security Factors with Respect to Usable-Security Principles (cited by 1)
Authors: Joe Mutebi, Margaret Kareyo, Umezuruike Chinecherem, Akampurira Paul. Journal of Computer and Communications, 2022(8): 41-63.
Abstract: The goal of this manuscript is to present a research finding, based on a study conducted to identify, examine, and validate Social Media (SM) socio-technical information security factors in line with usable-security principles. The study followed literature search techniques as well as theoretical and empirical methods of factor validation. The literature search strategy included Boolean keyword search and citation guides, using mainly Web of Science databases. As guided by the study objectives, 9 SM socio-technical factors were identified, verified, and validated through both theoretical and empirical validation processes. A theoretical validity test was conducted on 45 Likert-scale items, involving 10 subject experts. From the experts' score ratings, a Content Validity Index (CVI) was calculated to determine the degree to which the identified factors exhibit appropriate items for the construct being measured; 7 factors attained an adequate level of validity index. For the reliability test, 32 respondents and 45 Likert-scale items were used, and Cronbach's alpha coefficients (α-values) were generated using SPSS. Subsequently, 8 factors attained an adequate level of reliability. Overall, the validated factors include: 1) usability (visibility, learnability, and satisfaction); 2) education and training (help and documentation); 3) SM technology development (error handling and revocability); 4) information security (security, privacy, and expressiveness). The confirmed factors add knowledge by providing a theoretical basis for rationalizing information security requirements on SM usage.
Keywords: Social Media Usage; Information Security Factors; Cyber Security; Socio-Technical; Usable-Security

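As a side note for readers, the two validation statistics named in this abstract are straightforward to compute. The sketch below uses made-up rating matrices, not the study's data, to illustrate an item-level Content Validity Index and Cronbach's alpha:

```python
import numpy as np

# Hypothetical ratings: 10 experts score 5 Likert items on relevance (1-4).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(10, 5))

# Item-level CVI: the share of experts rating an item 3 or 4 ("relevant").
i_cvi = (ratings >= 3).mean(axis=0)

# Cronbach's alpha for internal consistency: 32 respondents x 5 items (1-5).
answers = rng.integers(1, 6, size=(32, 5))
k = answers.shape[1]
alpha = k / (k - 1) * (1 - answers.var(axis=0, ddof=1).sum()
                       / answers.sum(axis=1).var(ddof=1))
print(i_cvi, round(alpha, 3))
```
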
4. Current status of magnetic resonance imaging radiomics in hepatocellular carcinoma: A quantitative review with Radiomics Quality Score (cited by 1)
Authors: Valentina Brancato, Marco Cerrone, Nunzia Garbino, Marco Salvatore, Carlo Cavaliere. World Journal of Gastroenterology (SCIE, CAS), 2024(4): 381-417.
Abstract: BACKGROUND Radiomics is a promising tool that may increase the value of magnetic resonance imaging (MRI) for different tasks related to the management of patients with hepatocellular carcinoma (HCC). However, its implementation in clinical practice is still far off, with many issues related to the methodological quality of radiomic studies. AIM To systematically review the current status of MRI radiomic studies concerning HCC using the Radiomics Quality Score (RQS). METHODS A systematic literature search of the PubMed, Google Scholar, and Web of Science databases was performed to identify original articles focusing on the use of MRI radiomics for HCC management published between 2017 and 2023. The methodological quality of radiomic studies was assessed using the RQS tool. Spearman's correlation (ρ) analysis was performed to explore whether RQS was correlated with journal metrics and characteristics of the studies. The level of statistical significance was set at P < 0.05. RESULTS One hundred and twenty-seven articles were included, of which 43 focused on HCC prognosis, 39 on prediction of pathological findings, 16 on prediction of the expression of molecular markers, 18 had a diagnostic purpose, and 11 had multiple purposes. The mean RQS was 8 ± 6.22, and the corresponding percentage was 24.15% ± 15.25% (ranging from 0.0% to 58.33%). RQS was positively correlated with journal impact factor (IF; ρ = 0.36, P = 2.98×10^(-5)), 5-year IF (ρ = 0.33, P = 1.56×10^(-4)), number of patients included in the study (ρ = 0.51, P < 9.37×10^(-10)), and number of radiomics features extracted in the study (ρ = 0.59, P < 4.59×10^(-13)), and negatively with time of publication (ρ = -0.23, P < 0.0072). CONCLUSION Although MRI radiomics in HCC represents a promising tool for developing adequate personalized treatment as a noninvasive approach in HCC patients, our study revealed that studies in this field still lack the quality required to allow its introduction into clinical practice.
Keywords: Hepatocellular carcinoma; Systematic review; Magnetic resonance imaging; Radiomics; Radiomics quality score

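The correlation analysis this abstract reports can be outlined in a few lines with SciPy's spearmanr; the vectors below are placeholders, not the 127-study data:

```python
from scipy.stats import spearmanr

# Placeholder vectors: one RQS total and one journal impact factor per study.
rqs = [8, 12, 5, 21, 14, 9, 3, 17]
impact_factor = [2.1, 4.3, 1.8, 7.4, 5.0, 3.2, 1.1, 6.0]

rho, p_value = spearmanr(rqs, impact_factor)  # rank-based correlation
print(f"rho={rho:.2f}, p={p_value:.4f}")
```
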
5. Technology Landscape for Epidemiological Prediction and Diagnosis of COVID-19
Authors: Siddhant Banyal, Rinky Dwivedi, Koyel Datta Gupta, Deepak Kumar Sharma, Fadi Al-Turjman, Leonardo Mostarda. Computers, Materials & Continua (SCIE, EI), 2021(5): 1679-1696.
Abstract: The COVID-19 outbreak initiated from the Chinese city of Wuhan and eventually affected almost every nation around the globe. From China, the disease started spreading to the rest of the world. After China, Italy became the next epicentre of the virus and witnessed a very high death toll. Soon nations like the USA became severely hit by the SARS-CoV-2 virus. The World Health Organisation declared COVID-19 a pandemic on 11th March 2020. To combat the epidemic, nations from every corner of the world instituted various policies like physical distancing, isolation of the infected population, and research on potential SARS-CoV-2 vaccines. To identify the impact of the various policies implemented by the affected countries on the pandemic spread, a myriad of AI-based models have been presented to analyse and predict the epidemiological trends of COVID-19. In this work, the authors present a detailed study of different artificial intelligence frameworks applied for predictive analysis of COVID-19 patient records. The forecasting models acquire information from records to detect the pandemic spreading, enabling an opportunity to take immediate action to reduce the spread of the virus. This paper addresses the research issues and corresponding solutions associated with the prediction and detection of infectious diseases like COVID-19. It further focuses on the study of vaccinations to cope with the pandemic. Finally, the research challenges in terms of data availability, reliability, the accuracy of existing prediction models, and other open issues are discussed to outline the future course of this study.
Keywords: COVID-19; diagnosis; deep learning; forecasting models; machine learning; metaheuristics; prediction; big data; pandemic

6. Off-line Programming Technology Based on RPC Communication Method and its Application
Authors: 王宏杰, Ding Guoqing, Yan Guozheng, Zhu Honghai, Lin Liangming. High Technology Letters (EI, CAS), 2001(4): 63-68.
Abstract: This paper puts forward a communication programming method between a robot and an external computer based on RPC (Remote Procedure Call), which realizes a distributed robot-control network system model. A new robot off-line programming method is built on this communication method and network model. Furthermore, as an example, a robot system for automatic marking and cutting of shipbuilding profiles is developed, which validates the authors' off-line programming ideas and development methods for robot flexible automation systems. As a result, this paper presents a new method for developing robot flexible automation systems.
Keywords: Off-line programming; RPC communication; Robot flexible automation; Profile processing

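For readers unfamiliar with RPC, a minimal modern sketch of the idea, using Python's standard xmlrpc module rather than the 2001 system's actual stack, looks like this (the `move_to` method name is illustrative):

```python
# Server side: exposes a robot command over RPC, a stand-in for the
# robot-to-external-computer link described above.
from xmlrpc.server import SimpleXMLRPCServer

def move_to(x, y, z):
    # A real controller would drive the robot here; we just echo the target.
    return f"moving to ({x}, {y}, {z})"

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(move_to, "move_to")
server.serve_forever()
```

A client would then call `xmlrpc.client.ServerProxy("http://localhost:8000").move_to(1.0, 2.0, 0.5)` and receive the result string back, which is the essence of off-line programming over a network: the program runs on the external computer, and only calls cross the wire.
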
7. A Review on the Recent Trends of Image Steganography for VANET Applications
Authors: Arshiya S. Ansari. Computers, Materials & Continua (SCIE, EI), 2024(3): 2865-2892.
Abstract: Image steganography is a technique of concealing confidential information within an image without dramatically changing its outward appearance. Vehicular ad hoc networks (VANETs), which enable vehicles to communicate with one another and with roadside infrastructure to enhance safety and traffic flow, provide a range of value-added services and are an essential component of modern smart transportation systems. Steganography in VANETs has been suggested by many authors for secure, reliable message transfer from hop to hop and to protect privacy against attacks. This paper aims to determine whether steganography can improve data security and secrecy in VANET applications, and to analyze effective steganography techniques for incorporating data into images while minimizing visual quality loss. According to simulations in the literature and real-world studies, image steganography has proved to be an effective method for secure communication on VANETs, even in difficult network conditions. In this research, we also explore a variety of steganography approaches for vehicular ad hoc network transportation systems, including vector embedding, statistics, spatial domain (SD), transform domain (TD), distortion, masking, and filtering. This study may help researchers improve vehicle networks' ability to communicate securely and pave the way for innovative steganography methods.
Keywords: steganography; image steganography; image steganography techniques; information exchange; data embedding and extracting; vehicular ad hoc network (VANET); transportation system

8. Class Imbalanced Problem: Taxonomy, Open Challenges, Applications and State-of-the-Art Solutions
Authors: Khursheed Ahmad Bhat, Shabir Ahmad Sofi. China Communications (SCIE, CSCD), 2024(11): 216-242.
Abstract: The study of machine learning has revealed that it can unleash new applications in a variety of disciplines. However, many limitations restrict its expressiveness, and researchers are working to overcome them to fully exploit the power of data-driven machine learning (ML) and deep learning (DL) techniques. Data imbalance presents major hurdles for classification and prediction problems in machine learning, restricting data analytics and the acquisition of relevant insights in practically all real-world research domains. In visual learning, network information security, failure prediction, digital marketing, healthcare, and a variety of other domains, raw data suffers from a biased distribution of one class over the other. This article presents a taxonomy of approaches for handling imbalanced data problems and a comparative study of their classification metrics and application areas. We explore very recent techniques employed for solving class imbalance problems in datasets and discuss their limitations. The article also identifies open challenges for further research on class data imbalance.
Keywords: class imbalance; classification; deep learning; GANs; sampling

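One widely used family of solutions such surveys cover is oversampling. A minimal sketch with SMOTE from the imbalanced-learn library, on synthetic data rather than the article's benchmarks:

```python
from collections import Counter

from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Synthetic 9:1 imbalanced binary dataset standing in for a real-world one.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
print("before:", Counter(y))

# SMOTE synthesizes new minority-class samples by interpolating between
# nearest minority neighbors, one of the oversampling families surveyed.
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
print("after: ", Counter(y_res))
```
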
9. Enhancing Cybersecurity Competency in the Kingdom of Saudi Arabia: A Fuzzy Decision-Making Approach
Authors: Wajdi Alhakami. Computers, Materials & Continua (SCIE, EI), 2024(5): 3211-3237.
Abstract: The Kingdom of Saudi Arabia (KSA) has achieved significant milestones in cybersecurity. KSA has maintained solid regulatory mechanisms to prevent, trace, and punish offenders to protect the interests of both individual users and organizations from the online threats of data poaching and pilferage. The widespread usage of Information Technology (IT) and IT-Enabled Services (ITES) reinforces security measures. Constantly evolving cyber threats are a topic generating much discussion. In this league, the present article offers a broad perspective on how cybercrime is developing in KSA at present and looks at some of the most significant attacks that have taken place in the region. The existing legislative framework and measures in the KSA are geared toward deterring criminal activity online. Different competency models have been devised to address the necessary cybercrime competencies in this context; research specialists in this domain can benefit by developing a master competency level for achieving optimum security. To address this research query, the present assessment uses the Fuzzy Decision-Making Trial and Evaluation Laboratory (Fuzzy-DMTAEL), Fuzzy Analytic Hierarchy Process (F.AHP), and Fuzzy TOPSIS methodologies to achieve segment-wise competency development in cybersecurity policy. The similarities and differences between the three methods are also discussed. This cybersecurity analysis determined that the National Cyber Security Centre got the highest priority. The study concludes by reviewing the challenges that still need to be examined and resolved to effectuate more credible and efficacious online security mechanisms and offer a more empowered ITES-driven economy for Saudi Arabia. Moreover, cybersecurity specialists and policymakers need to collate their efforts to protect the country's digital assets in the era of overt and covert cyber warfare.
Keywords: cyber security; fuzzy DMTAEL; security policy; cyber crime; MCDM

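To illustrate the ranking step behind such MCDM pipelines, here is a sketch of classical (crisp) TOPSIS closeness coefficients; it omits the fuzzification the paper applies, and the ratings and weights are invented:

```python
import numpy as np

# Decision matrix: rows = alternatives (e.g., security centres), cols = criteria.
X = np.array([[7.0, 9.0, 8.0],
              [6.0, 7.0, 9.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])               # criteria weights (sum to 1)

V = X / np.linalg.norm(X, axis=0) * w       # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria treated as benefits
d_plus = np.linalg.norm(V - ideal, axis=1)  # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)  # distance to the anti-ideal
closeness = d_minus / (d_plus + d_minus)    # higher = closer to the ideal
print(closeness.round(3))
```
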
10. A New Framework for Software Vulnerability Detection Based on an Advanced Computing
Authors: Bui Van Cong, Cho Do Xuan. Computers, Materials & Continua (SCIE, EI), 2024(6): 3699-3723.
Abstract: The detection of software vulnerabilities in code written in the C and C++ languages attracts a lot of attention and interest today. This paper proposes a new framework called DrCSE to improve software vulnerability detection. It uses an intelligent computation technique based on the combination of two methods, data rebalancing and representation learning, to analyze and evaluate the code property graph (CPG) of source code for detecting the abnormal behavior of software vulnerabilities. To do so, DrCSE combines three main processing techniques: (i) building source code feature profiles, (ii) rebalancing data, and (iii) contrastive learning. Method (i) extracts the source code's features based on the vertices and edges of the CPG. The data rebalancing method supports the training process by balancing the experimental dataset. Finally, the contrastive learning technique learns the important features of the source code by finding and pulling similar ones together while pushing outliers away. The experimental part of this paper demonstrates the superiority of the DrCSE framework for detecting source code security vulnerabilities using the Verum dataset. The proposed method achieves good performance across all metrics, especially Precision and Recall scores of 39.35% and 69.07%, respectively, proving the efficiency of the DrCSE framework. It performs better than other approaches, with a 5% boost in both Precision and Recall. Overall, according to our survey to date, this is the best research result for the software vulnerability detection problem on the Verum dataset.
Keywords: source code vulnerability; source code vulnerability detection; code property graph; feature profile; contrastive learning; data rebalancing

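The "pull similar, push outliers" objective described above can be illustrated with a generic margin-based contrastive loss; this is a textbook formulation, not DrCSE's exact objective, and the embeddings are invented:

```python
import numpy as np

def contrastive_loss(z1, z2, same_class, margin=1.0):
    """Pull embeddings of similar code samples together, push outliers apart.

    A generic margin-based contrastive loss illustrating the idea above.
    """
    d = np.linalg.norm(z1 - z2)
    if same_class:
        return 0.5 * d ** 2                    # pull: penalize distance
    return 0.5 * max(0.0, margin - d) ** 2     # push: penalize closeness

z_a, z_b = np.array([0.2, 0.9]), np.array([0.25, 0.85])
print(contrastive_loss(z_a, z_b, same_class=True))
```
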
11. Experimental Verification of the Theory of Non-Force (Information) Interaction
Authors: Iurii Teslia. Journal of Modern Physics, 2022(4): 518-573.
Abstract: This work demonstrates the possibility of applying the information-handling formulas obtained in the theory of non-force interaction to natural language processing. These formulas were obtained in computer experiments modelling the movement and interaction of material objects by changing the amount of information that triggers this movement. The hypothesis, objective, and tasks of the experimental research were defined, and methods and software tools were developed to conduct the experiments. To compare different simulations of the processes in a human brain during speech production, a range of methods was proposed to estimate sequences of fragments of natural-language texts, including methods based on linear approximation. The experiments confirmed that the information-handling formulas obtained in the theory of non-force interaction reflect the processes of language formation. It is shown that the offered approach can successfully be used to create systems of reactive artificial intelligence machines. The experimental and practical results presented in this work indicate that the non-force (informational) interaction formulae are generally valid.
Keywords: Non-Force Interaction; Non-Force Interaction Method; Computer Linguistics; Mechanical Movement; Special Relativity

12. Improving the Transmission Security of Vein Images Using a Bezier Curve and Long Short-Term Memory
Authors: Ahmed H. Alhadethi, Ikram Smaoui, Ahmed Fakhfakh, Saad M. Darwish. Computers, Materials & Continua (SCIE, EI), 2024(6): 4825-4844.
Abstract: Transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be further improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. The paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented. For applications involving the transmission and sharing of images across a network, the suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained a robustness of 98% under Gaussian noise and a rotation accuracy surpassing 99%.
Keywords: image transmission; image compression; text hiding; Bezier curve; Histogram of Oriented Gradients (HOG); LSTM; image enhancement; Gaussian noise; rotation

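As background, a Bezier curve is evaluated by repeated linear interpolation (De Casteljau's algorithm). The sketch below is generic; how the paper maps curve points to its security scheme is paper-specific:

```python
import numpy as np

def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t via De Casteljau's algorithm."""
    pts = np.asarray(control_points, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]  # repeated linear interpolation
    return pts[0]

# Illustrative cubic curve with four invented control points.
ctrl = [(0, 0), (1, 3), (3, 3), (4, 0)]
print(bezier_point(ctrl, 0.5))
```
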
13. A Review of Image Steganography Based on Multiple Hashing Algorithm
Authors: Abdullah Alenizi, Mohammad Sajid Mohammadi, Ahmad A. Al-Hajji, Arshiya Sajid Ansari. Computers, Materials & Continua (SCIE, EI), 2024(8): 2463-2494.
Abstract: Steganography is a technique for hiding secret messages while sending and receiving communications through a cover item. From ancient times to the present, the security of secret or vital information has always been a significant problem, and the development of secure communication methods that keep recipient-only data transmissions secret has always been an area of interest. Therefore, several approaches, including steganography, have been developed by researchers over time to enable safe data transit. In this review, we discuss image steganography based on the Discrete Cosine Transform (DCT) algorithm, among others. We also discuss image steganography based on multiple hashing algorithms, such as the Rivest-Shamir-Adleman (RSA) method, the Blowfish technique, and the hash-least significant bit (LSB) approach. In this review, a novel method of hiding information in images has been developed with minimal variance in image bits, making the method secure and effective. A cryptography mechanism was also used in this strategy: before the data is encoded and embedded into a carrier image, the review verifies that it has been encrypted. Embedded text in photos usually conveys crucial signals about the content. This review employs hash-table encryption on the message before hiding it within the picture to provide a more secure method of data transport, so that if the message is ever intercepted by a third party, several safeguards stand in the way of its recovery. A second level of security is implemented by encrypting and decrypting steganography images using different hashing algorithms.
Keywords: image steganography; multiple hashing algorithms; hash-LSB approach; RSA algorithm; discrete cosine transform (DCT) algorithm; Blowfish algorithm

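A toy illustration of the hash-then-LSB idea discussed in this review, simplified to omit keys and the payload-encryption layer, with invented cover pixels:

```python
import hashlib

def embed_hash_lsb(pixels, message):
    """Toy hash-then-LSB embedding: hash the message with SHA-256, then hide
    the digest bits in the least significant bits of successive pixels.

    A simplified sketch of the hash-LSB idea; real schemes add keys,
    encryption of the payload itself, and capacity checks.
    """
    digest = hashlib.sha256(message.encode()).digest()         # 256 bits
    bits = [(byte >> i) & 1 for byte in digest for i in range(8)]
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit                       # overwrite LSB
    return stego

cover = [100 + (i % 50) for i in range(300)]  # fake 8-bit grayscale pixel run
print(embed_hash_lsb(cover, "secret")[:8])
```
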
14. Enhanced Mechanism for Link Failure Rerouting in Software-Defined Exchange Point Networks
Authors: Abdijalil Abdullahi, Selvakumar Manickam. Computers, Materials & Continua (SCIE, EI), 2024(9): 4361-4385.
Abstract: An Internet Exchange Point (IXP) is a system that increases network bandwidth performance. Internet exchange points facilitate interconnection among network providers, including Internet Service Providers (ISPs) and Content Delivery Providers (CDNs). To improve service management, Internet exchange point providers have adopted the Software-Defined Network (SDN) paradigm. This implementation is known as a Software-Defined Exchange Point (SDX). It improves network providers' operations and management. However, performance issues still exist, particularly with multi-hop topologies, including switch memory costs, packet processing latency, and link failure recovery delays. The paper proposes Enhanced Link Failure Rerouting (ELFR), an improved mechanism for rerouting around link failures in software-defined exchange point networks. The proposed mechanism aims to minimize packet processing time for fast link failure recovery and enhance path calculation efficiency, while reducing switch storage overhead by exploiting the features of Programming Protocol-independent Packet Processors (P4). The paper demonstrates the proposed mechanism's efficiency by utilizing advanced algorithms and showing improved performance in packet processing speed, path calculation effectiveness, and switch storage management compared to current mechanisms. The proposed mechanism shows significant improvements, leading to a 37.5% decrease in Recovery Time (RT) and a 33.33% decrease in both Calculation Time (CT) and Computational Overhead (CO) compared to current mechanisms. The study highlights the effectiveness and resource efficiency of the proposed mechanism in resolving crucial issues in multi-hop software-defined exchange point networks.
Keywords: link failure recovery; Internet exchange point; software-defined exchange point; software-defined network; multi-hop topologies

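The basic rerouting idea, recomputing a path after an edge failure, can be sketched with networkx on a toy topology; ELFR itself performs this in the P4 data plane, which this control-plane sketch does not model:

```python
import networkx as nx

# Toy multi-hop exchange-point topology (illustrative, not an SDX testbed).
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "E"), ("A", "D"), ("D", "C"), ("C", "E")])

primary = nx.shortest_path(G, "A", "E")   # e.g. A -> B -> E
G.remove_edge("B", "E")                   # simulate the link failure
backup = nx.shortest_path(G, "A", "E")    # reroute around the failed link
print(primary, backup)
```
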
15. Real-Time Prediction of Urban Traffic Problems Based on Artificial Intelligence-Enhanced Mobile Ad Hoc Networks (MANETs)
Authors: Ahmed Alhussen, Arshiya S. Ansari. Computers, Materials & Continua (SCIE, EI), 2024(5): 1903-1923.
Abstract: Traffic in today's cities is a serious problem that increases travel times, negatively affects the environment, and drains financial resources. This study presents an Artificial Intelligence (AI)-augmented Mobile Ad Hoc Networks (MANETs)-based real-time prediction paradigm for urban traffic challenges. MANETs are wireless networks based on mobile devices that can self-organize. The distributed nature of MANETs and the power of AI approaches are leveraged in this framework to provide reliable and timely traffic congestion forecasts. The study proposes a unique Chaotic Spatial Fuzzy Polynomial Neural Network (CSFPNN) technique to assess real-time data acquired from various sources within the MANETs. The framework uses the proposed approach to learn from the data and create prediction models to detect possible traffic problems and their severity in real time. Real-time traffic prediction allows for proactive actions like resource allocation, dynamic route advice, and traffic signal optimization to reduce congestion. The framework supports effective decision-making, decreases travel time, lowers fuel use, and enhances overall urban mobility by giving timely information to pedestrians, drivers, and urban planners. Extensive simulations and real-world datasets are used to test the proposed framework's prediction accuracy, responsiveness, and scalability. Experimental results show that the suggested framework successfully anticipates urban traffic issues in real time, enables proactive traffic management, and aids in creating smarter, more sustainable cities.
Keywords: Mobile Ad Hoc Networks (MANET); urban traffic prediction; artificial intelligence (AI); traffic congestion; chaotic spatial fuzzy polynomial neural network (CSFPNN)

16. Automatic Rule Discovery for Data Transformation Using Fusion of Diversified Feature Formats
Authors: G. Sunil Santhosh Kumar, M. Rudra Kumar. Computers, Materials & Continua (SCIE, EI), 2024(7): 695-713.
Abstract: This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost, a machine learning algorithm renowned for its efficiency and performance. The proposed framework utilizes a fusion of diversified feature formats, specifically metadata, textual, and pattern features, with the goal of enhancing the system's ability to discern and generalize transformation rules from source to destination formats in varied contexts. The article first describes the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model. Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination (RFE) with linear regression, aiming to retain the most contributive features and eliminate redundant or less significant ones. The core of the research revolves around deploying the XGBoost model for training on the prepared and optimized feature sets; the article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure. Finally, the process of rule discovery (the prediction phase) by the trained XGBoost model is explained, underscoring its role in real-time, automated data transformations. By employing machine learning, and particularly the XGBoost model, in the context of Business Rule Engine (BRE) data transformation, the article underscores a paradigm shift towards more scalable, efficient, and less human-dependent data transformation systems. This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
Keywords: XGBoost; business rule engine; machine learning; categorical query language; humanitarian computing environment

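The two-stage pipeline the abstract describes, RFE with linear regression followed by XGBoost training, can be outlined as follows on synthetic placeholder features:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from xgboost import XGBClassifier

# Placeholder features standing in for metadata/textual/pattern columns.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Step 1: RFE with linear regression keeps the most contributive features.
selector = RFE(LinearRegression(), n_features_to_select=10).fit(X, y)
X_sel = X[:, selector.support_]

# Step 2: train XGBoost on the reduced feature set to predict the rule class.
model = XGBClassifier(n_estimators=100).fit(X_sel, y)
print(model.predict(X_sel[:5]))
```
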
17. Adaptation of Federated Explainable Artificial Intelligence for Efficient and Secure E-Healthcare Systems
Authors: Rabia Abid, Muhammad Rizwan, Abdulatif Alabdulatif, Abdullah Alnajim, Meznah Alamro, Mourade Azrour. Computers, Materials & Continua (SCIE, EI), 2024(3): 3413-3429.
Abstract: Explainable Artificial Intelligence (XAI) offers advanced features to enhance decision-making and improve rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI along with FML increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
Keywords: artificial intelligence; data privacy; federated machine learning; healthcare system; security

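The federated averaging step mentioned above combines client models weighted by local dataset size. A minimal numpy sketch, with five fake clients and invented sizes rather than the paper's PHR data:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Federated averaging: combine client models weighted by local data size.

    A minimal sketch of the FedAvg aggregation step; real FL platforms add
    client sampling, secure aggregation, and communication rounds.
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * n / total for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# 5 clients (as in the experiment above), each with one fake weight matrix.
clients = [[np.full((2, 2), i, dtype=float)] for i in range(5)]
sizes = [100, 80, 120, 90, 110]
print(federated_average(clients, sizes)[0])
```
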
18. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini. Computers, Materials & Continua (SCIE, EI), 2024(4): 713-733.
Abstract: Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, resource-poor languages like Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, with its distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Subsequent to this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that the RNN surpasses the CNN in Urdu sentiment analysis, gaining a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of the RNN, solidifying its status as a compelling option for sentiment analysis tasks in the Urdu language.
Keywords: Urdu sentiment analysis; convolutional neural networks; recurrent neural network; deep learning; natural language processing; neural networks

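A minimal Keras sketch of an RNN classifier for five sentiment labels; the vocabulary size, sequence length, and layer widths are placeholders, not the paper's settings:

```python
import tensorflow as tf

# Sketch of an RNN classifier over padded token-id sequences.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(200,)),                    # padded sequence length
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),
    tf.keras.layers.LSTM(64),                        # recurrent encoder
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 sentiment classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
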
19. A Hybrid Model for Improving Software Cost Estimation in Global Software Development
Authors: Mehmood Ahmed, Noraini B. Ibrahim, Wasif Nisar, Adeel Ahmed, Muhammad Junaid, Emmanuel Soriano Flores, Divya Anand. Computers, Materials & Continua (SCIE, EI), 2024(1): 1399-1422.
Abstract: Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset, adopting twenty-six GSD-based cost drivers, reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
Keywords: artificial neural networks; COCOMO II; cost drivers; global software development; linear regression; software cost estimation

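For context, the COCOMO II post-architecture effort equation that the hybrid model builds on is PM = A * Size^E * product(EM), with E = B + 0.01 * sum(SF). A small sketch using the published A = 2.94, B = 0.91 calibration constants; all inputs here are invented:

```python
def cocomo_ii_effort(ksloc, effort_multipliers, scale_factors, a=2.94, b=0.91):
    """Estimate effort in person-months: PM = A * Size^E * product(EM),
    where E = B + 0.01 * sum(SF). A and B default to the published
    COCOMO II.2000 calibration; the hybrid model above augments the EM
    terms with ANN-learned GSD cost drivers."""
    e = b + 0.01 * sum(scale_factors)
    em = 1.0
    for multiplier in effort_multipliers:
        em *= multiplier
    return a * ksloc ** e * em

# Illustrative inputs: 50 KSLOC, two effort multipliers, five nominal
# scale-factor values (all invented for the example).
print(round(cocomo_ii_effort(50, [1.10, 0.95], [3.72, 3.04, 4.24, 3.29, 3.12]), 1))
```
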
20. Elevating Image Steganography: A Fusion of MSB Matching and LSB Substitution for Enhanced Concealment Capabilities
Authors: Muhammad Zaman Ali, Omer Riaz, Hafiz Muhammad Hasnain, Waqas Sharif, Tenvir Ali, Gyu Sang Choi. Computers, Materials & Continua (SCIE, EI), 2024(5): 2923-2943.
Abstract: In today's rapidly evolving landscape of communication technologies, ensuring the secure delivery of sensitive data has become an essential priority. To overcome these difficulties, different steganography and data encryption methods have been proposed by researchers to secure communications. Most of the proposed steganography techniques achieve higher embedding capacities without compromising visual imperceptibility using LSB substitution. In this work, we present an approach that utilizes a combination of Most Significant Bit (MSB) matching and Least Significant Bit (LSB) substitution. The proposed algorithm divides confidential messages into pairs of bits and connects them with the MSBs of individual pixels using pair matching, enabling the storage of 6 bits in one pixel by modifying a maximum of three bits. The proposed technique is evaluated using embedding capacity and the Peak Signal-to-Noise Ratio (PSNR) score; compared with the Zakariya scheme, the results show a significant increase in data concealment capacity. The achieved results show that our algorithm improves hiding capacity by 11% to 22% for different data samples while maintaining a minimum PSNR of 37 dB. These findings highlight the effectiveness and trustworthiness of the proposed algorithm in securing the communication process and maintaining visual integrity.
Keywords: steganography; most significant bit (MSB); least significant bit (LSB); peak signal-to-noise ratio (PSNR)

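The 37 dB PSNR floor quoted above is easy to sanity-check for plain multi-bit LSB substitution, which is not the paper's MSB-matching scheme but distorts the same low-order bits. A numpy sketch with an invented cover image:

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio between cover and stego images, in dB."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

# Generic 3-bits-per-pixel LSB substitution (the paper's pairing reaches
# 6 bits per pixel while still changing at most 3 bits).
payload = rng.integers(0, 8, size=cover.shape, dtype=np.uint8)
stego = (cover & 0b11111000) | payload
print(f"PSNR: {psnr(cover, stego):.1f} dB")  # lands in the high-30s dB
```
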