Journal Articles
6 articles found
Query Optimization Framework for Graph Database in Cloud Dew Environment
1
Authors: Tahir Alyas, Ali Alzahrani, Yazed Alsaawy, Khalid Alissa, Qaiser Abbas, Nadia Tabassum. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 2317-2330 (14 pages)
The query optimizer uses cost-based optimization to create an execution plan with the least cost, which also consumes the least amount of resources. The challenge of query optimization for relational database systems is a combinatorial optimization problem, which renders exhaustive search impossible as query sizes rise. Increases in CPU performance have surpassed main memory and disk access speeds in recent decades, allowing data compression to be used as a strategy for improving database system performance. For performance enhancement, compression and query optimization are the two most important factors: compression reduces the volume of data, whereas query optimization minimizes execution time. Compressing the database reduces the memory requirement, data takes less time to load into memory, fewer buffer misses occur, and intermediate results are smaller. This paper performed query optimization on a graph database in a cloud dew environment by considering which plan requires less time to execute a query. Together, compression and query optimization improve database performance. This research compares the performance of the MySQL and Neo4j databases, in terms of memory usage and execution time, running on cloud dew servers.
Keywords: query optimization; compression; cloud dew; decompression; graph database
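The abstract's core claim, that lossless compression shrinks the data that must move from disk into memory, can be illustrated with a minimal stdlib sketch (illustrative only; the paper's actual compression scheme for the graph database is not specified here):

```python
import zlib

# A repetitive payload standing in for redundant tabular/graph data.
raw = b"node,edge,weight\n" * 10_000

# Lossless compression shrinks the payload, so less data is loaded
# into memory and intermediate results stay smaller.
compressed = zlib.compress(raw, level=6)

ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} B, compressed: {len(compressed)} B, ratio: {ratio:.3f}")

# Decompression restores the original bytes exactly.
assert zlib.decompress(compressed) == raw
```

Highly redundant data like the line above compresses by orders of magnitude; real database pages compress less, but the memory and buffer-miss savings follow the same ratio.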
Reducing Dataset Specificity for Deepfakes Using Ensemble Learning
2
Authors: Qaiser Abbas, Turki Alghamdi, Yazed Alsaawy, Tahir Alyas, Ali Alzahrani, Khawar Iqbal Malik, Saira Bibi. Computers, Materials & Continua (SCIE, EI), 2023, No. 2, pp. 4261-4276 (16 pages)
The emergence of deepfake videos in recent years has made image falsification a real danger. A person's face and emotions are deepfaked in a video or speech and substituted with a different face or voice, employing deep learning to analyze speech or emotional content. Because these videos are frequently so convincing, manipulation is challenging to spot. Social media platforms are the most frequent and dangerous targets, since they are weak outlets open to extortion or slander. In earlier times, altering videos was not easy; it required domain expertise and time. Nowadays, generating fake videos has become easier, with a high level of realism. Deepfakes are forgeries and altered visual data that appear in still photos or video footage. Numerous automatic identification systems have been developed to address this issue; however, they are constrained to certain datasets and perform poorly when applied to different datasets. This study aims to develop an ensemble learning model utilizing a convolutional neural network (CNN) to handle deepfakes such as Face2Face. We employed ensemble learning, a technique that combines many classifiers to achieve higher prediction performance than a single classifier, boosting the model's accuracy. The performance of the generated model is evaluated on FaceForensics. This work builds a new, powerful model for automatically identifying deepfake videos with the DeepFake Detection Challenge (DFDC) dataset. We test our model on the DFDC, one of the most difficult datasets, and achieve an accuracy of 96%.
Keywords: deep machine learning; deepfake; CNN; DFDC; ensemble learning
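The combining step the abstract describes, where several classifiers vote and the majority label wins, can be sketched in a few lines of pure Python (a generic hard-voting sketch; the paper's exact fusion rule and CNN outputs are not given here, and the three classifier prediction lists below are hypothetical):

```python
from collections import Counter

def majority_vote(predictions_per_model):
    """Combine per-model label predictions by hard voting.

    predictions_per_model: list of prediction lists, one per classifier,
    all aligned to the same samples (0 = real, 1 = fake).
    """
    n_samples = len(predictions_per_model[0])
    combined = []
    for i in range(n_samples):
        votes = [preds[i] for preds in predictions_per_model]
        # The most common vote across classifiers becomes the ensemble label.
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Three hypothetical CNN classifiers disagreeing on five video frames.
cnn_a = [1, 0, 1, 1, 0]
cnn_b = [1, 1, 1, 0, 0]
cnn_c = [0, 0, 1, 1, 0]
print(majority_vote([cnn_a, cnn_b, cnn_c]))  # -> [1, 0, 1, 1, 0]
```

Hard voting lets an ensemble override a single classifier's dataset-specific mistakes, which is the mechanism the abstract credits for reducing dataset specificity.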
Optimizing Resource Allocation Framework for Multi-Cloud Environment
3
Authors: Tahir Alyas, Taher M. Ghazal, Badria Sulaiman Alfurhood, Ghassan F. Issa, Osama Ali Thawabeh, Qaiser Abbas. Computers, Materials & Continua (SCIE, EI), 2023, No. 5, pp. 4119-4136 (18 pages)
Cloud computing makes dynamic resource provisioning more accessible. Monitoring a functioning service is crucial, and changes are made when particular criteria are surpassed. This research explores the decentralized multi-cloud environment for allocating resources and ensuring the Quality of Service (QoS), estimating the required resources, and modifying allotted resources depending on workload and parallelism. Resource allocation is a complex challenge due to the diversity of service providers and resource providers; their engagement requires a cooperation strategy for a sustainable quality of service. The objective of a coherent and rational resource allocation is to attain the quality of service, which includes identifying the critical parameters needed to develop a resource allocation mechanism. Based on the specified parameters, a framework is proposed to formulate a resource allocation process in a decentralized multi-cloud environment. The three main parameters of the proposed framework are data accessibility, optimization, and collaboration. Using an optimization technique, these three segments are further divided into subsets for resource allocation and long-term service quality. The CloudSim simulator has been used to validate the suggested framework. Several experiments have been conducted to find the configurations best suited for enhancing collaboration and resource allocation to achieve sustained QoS. The results support the suggested structure for a decentralized multi-cloud environment and the parameters that were determined.
Keywords: multi-cloud; query optimization; cloud resource allocation; modelling; virtualization
Performance Framework for Virtual Machine Migration in Cloud Computing
4
Authors: Tahir Alyas, Taher M. Ghazal, Badria Sulaiman Alfurhood, Munir Ahmad, Ossma Ali Thawabeh, Khalid Alissa, Qaiser Abbas. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 6289-6305 (17 pages)
In the cloud environment, the transfer of data from one cloud server to another is called migration, and data can be delivered in various ways from one data centre to another. This research aims to increase the migration performance of the virtual machine (VM) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs in one data centre share resources such as central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In a multi-cloud setting, VM migration addresses the performance degradation caused by cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty rate of the running application, and the latency of migration iterations. As a result, evaluating VM migration performance while considering all of these factors becomes a difficult task. The main effort of this research is to assess the impact of these migration problems on performance. The simulation results in Matlab show that as the VM size grows, the migration time and the downtime can be impacted by three orders of magnitude. When the dirty page rate increases, the migration time and the downtime grow; latency decreases as network bandwidth increases, both during migration and in the post-migration overhead calculation once the VM transfer is completed. All the simulated cases of VM migration were performed in a fuzzy inference system with performance graphs.
Keywords: latency; cloud computing; dirty page ratio; storage; migration performance
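The dependencies the abstract names, VM size, dirty page rate, and bandwidth, fit the standard textbook model of pre-copy live migration, which can be sketched as follows (a generic model under assumed parameters, not the paper's Matlab or fuzzy-inference formulation; all numbers below are illustrative):

```python
def precopy_migration(vm_size_mb, dirty_rate_mbps, bandwidth_mbps,
                      stop_threshold_mb=50.0, max_rounds=30):
    """Estimate pre-copy live-migration total time and downtime.

    Each round copies the memory dirtied during the previous round;
    the VM pauses for a final stop-and-copy once the remaining dirty
    set falls below stop_threshold_mb (assumes dirty rate < bandwidth).
    """
    remaining = vm_size_mb
    total_time = 0.0
    for _ in range(max_rounds):
        round_time = remaining / bandwidth_mbps   # time to copy this round
        total_time += round_time
        remaining = dirty_rate_mbps * round_time  # pages dirtied meanwhile
        if remaining <= stop_threshold_mb:
            break
    downtime = remaining / bandwidth_mbps         # final stop-and-copy pause
    return total_time + downtime, downtime

# Larger VMs and higher dirty rates lengthen migration; more bandwidth shortens it.
t_small, d_small = precopy_migration(2_000, 100, 1_000)
t_big, d_big = precopy_migration(8_000, 100, 1_000)
print(f"2 GB VM: {t_small:.2f}s total, 8 GB VM: {t_big:.2f}s total")
```

The model reproduces the trends the abstract reports: total time scales with VM size, grows with the dirty page rate, and shrinks as bandwidth rises.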
Intrusion Detection in 5G Cellular Network Using Machine Learning
5
Authors: Ishtiaque Mahmood, Tahir Alyas, Sagheer Abbas, Tariq Shahzad, Qaiser Abbas, Khmaies Ouahada. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 11, pp. 2439-2453 (15 pages)
Attacks on fully integrated servers, apps, and communication networks via the Internet of Things (IoT) are growing exponentially. Compromised devices harm end users, increase cyber threats and identity theft, raise costs, and negatively impact income, as problems in an IoT network can go unnoticed for extended periods. Attacks on IoT interfaces must be closely monitored in real time for effective safety and security. Following the 1G, 2G, 3G, and 4G cellular networks, the fifth-generation wireless 5G network is a great invention of mankind and is known as the global advancement of cellular networks; even today, experts are working on its sixth-generation (6G) evolution. It offers remarkable capabilities for connecting everything, including gadgets and machines, with wavelengths ranging from 1 to 10 mm and frequencies ranging from 300 MHz to 3 GHz, and it delivers the most recent information. Many countries have already established this technology within their borders. Security is the most crucial aspect of using a 5G network: owing to the absence of study and limited network deployment, new technology initially introduces new gaps for attackers and hackers, and Internet Protocol (IP) attacks and intrusion will become more prevalent in this system. This research provides an efficient approach to detecting intrusion in the 5G network using a machine learning algorithm. It demonstrates a high accuracy rate by validating against unidentified and suspicious circumstances in the 5G network, such as intruding hackers/attackers. After applying different machine learning algorithms, the best result was obtained with the Linear Regression algorithm, which achieved 92.12% accuracy on test data and 92.13% on training data, with 92% precision.
Keywords: intrusion detection system; machine learning; confidentiality; integrity; availability
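The metrics the abstract quotes (accuracy on test/train data plus precision) can be computed from a confusion count, as in this pure-Python sketch (the ten labels below are hypothetical; the paper's dataset and model are not reproduced here):

```python
def accuracy_and_precision(y_true, y_pred, positive=1):
    """Compute accuracy and precision for binary intrusion labels
    (1 = intrusion, 0 = benign traffic)."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    accuracy = correct / len(y_true)
    # Precision: of the flows flagged as intrusions, how many really were.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

# Hypothetical model predictions over ten network flows.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
acc, prec = accuracy_and_precision(y_true, y_pred)
print(f"accuracy={acc:.2f}, precision={prec:.2f}")  # accuracy=0.80, precision=0.80
```

Reporting precision alongside accuracy matters for intrusion detection, where false alarms (false positives) carry their own operational cost.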
Acral melanoma detection using dermoscopic images and convolutional neural networks
6
Authors: Qaiser Abbas, Farheen Ramzan, Muhammad Usman Ghani. Visual Computing for Industry, Biomedicine, and Art (EI), 2021, No. 1, pp. 246-257 (12 pages)
Acral melanoma (AM) is a rare and lethal type of skin cancer. It can be diagnosed by expert dermatologists using dermoscopic imaging, yet it is challenging to diagnose because of the very minor differences between melanoma and non-melanoma lesions. Most research on skin cancer diagnosis concerns the binary classification of lesions into melanoma and non-melanoma; to date, limited research has been conducted on the classification of melanoma subtypes. The current study investigated the effectiveness of dermoscopy and deep learning in classifying melanoma subtypes such as AM. We present a novel deep learning model developed to classify skin cancer, using a dermoscopic image dataset from the Yonsei University Health System, South Korea, for the classification of skin lesions. Various image processing and data augmentation techniques were applied to develop a robust automated system for AM detection. Our custom-built model is a seven-layered deep convolutional network trained from scratch. Additionally, transfer learning was utilized to compare performance: AlexNet and ResNet-18 were modified, fine-tuned, and trained on the same dataset. Our proposed model achieved per-class accuracies of more than 90% for both AM and benign nevus. Using the transfer learning approach, we achieved an average accuracy of nearly 97%, which is comparable to that of state-of-the-art methods. From our analysis and results, we found that our model performed well and was able to effectively classify skin cancer. Our results show that the proposed system can be used by dermatologists in the clinical decision-making process for the early diagnosis of AM.
Keywords: deep learning; acral melanoma; skin cancer detection; convolutional networks; dermoscopic images; medical image analysis; computer-based diagnosis
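The data augmentation the abstract mentions typically means generating extra training images through simple geometric transforms. Two common ones can be sketched on a toy pixel grid (a generic illustration in pure Python; the paper's actual augmentation pipeline is not listed in the abstract):

```python
def horizontal_flip(image):
    """Mirror an image (a 2D list of pixel rows) left to right."""
    return [row[::-1] for row in image]

def rotate_90(image):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

# A toy 2x3 "lesion patch"; real dermoscopic images are large RGB arrays.
lesion = [[1, 2, 3],
          [4, 5, 6]]
print(horizontal_flip(lesion))  # -> [[3, 2, 1], [6, 5, 4]]
print(rotate_90(lesion))        # -> [[4, 1], [5, 2], [6, 3]]
```

Because flips and rotations preserve the diagnostic content of a lesion, each original image can yield several augmented variants, which helps when the dataset of a rare subtype like AM is small.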