Journal Articles: 20 articles found
1. Vertical Pod Autoscaling in Kubernetes for Elastic Container Collaborative Framework
Authors: Mushtaq Niazi, Sagheer Abbas, Abdel-Hamid Soliman, Tahir Alyas, Shazia Asif, Tauqeer Faiz. Computers, Materials & Continua (SCIE/EI), 2023, No. 1, pp. 591-606 (16 pages).
Kubernetes is an open-source container management tool which automates container deployment, container load balancing, and container (de)scaling, including the Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA). HPA enables flawless operation, interactively scaling the number of resource units, or pods, without downtime. Default resource metrics, such as CPU and memory use of host machines and pods, are monitored by Kubernetes. Cloud computing has emerged as a platform for individuals as well as the corporate sector. It provides cost-effective infrastructure, platform, and software services in a shared environment. On the other hand, the emergence of Industry 4.0 has brought new challenges for the adaptability and infusion of cloud computing. As the global work environment adopts constituents of Industry 4.0 such as robotics, artificial intelligence, and IoT devices, it is becoming evident that one emerging challenge is collaborative schematics: the provision of an autonomous mechanism that can develop, manage, and operationalize digital resources such as CoBots to perform tasks in a distributed, collaborative cloud environment, utilizing resources optimally while ensuring schedule completion. Collaborative schematics are also linked with the big data management produced by large-scale Industry 4.0 setups. Different use cases and simulation results showed a significant improvement in pod CPU utilization, latency, and throughput over the default Kubernetes environment.
Keywords: autoscaling; query optimization; pods; Kubernetes; container orchestration
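To make the mechanism concrete, here is a minimal sketch of a VerticalPodAutoscaler manifest built programmatically in Python; the target deployment name, namespace, and resource bounds are illustrative assumptions, not values from the paper.

```python
# Minimal VerticalPodAutoscaler manifest built as a Python dict. The
# deployment name, namespace, and CPU/memory bounds are illustrative
# assumptions; the VPA CRD itself comes from the Kubernetes autoscaler
# project (API group autoscaling.k8s.io/v1).
import yaml

vpa_manifest = {
    "apiVersion": "autoscaling.k8s.io/v1",
    "kind": "VerticalPodAutoscaler",
    "metadata": {"name": "web-app-vpa", "namespace": "default"},
    "spec": {
        "targetRef": {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "name": "web-app",  # assumed workload name
        },
        # "Auto" lets the VPA evict pods and restart them with new requests.
        "updatePolicy": {"updateMode": "Auto"},
        "resourcePolicy": {
            "containerPolicies": [{
                "containerName": "*",
                "minAllowed": {"cpu": "100m", "memory": "128Mi"},
                "maxAllowed": {"cpu": "2", "memory": "2Gi"},
            }]
        },
    },
}

print(yaml.safe_dump(vpa_manifest, sort_keys=False))
```

Applying a manifest like this (e.g., via kubectl apply) lets the VPA recommender resize pod CPU/memory requests from observed usage, which is the behaviour the paper evaluates against HPA-style replica scaling.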
2. Query Optimization Framework for Graph Database in Cloud Dew Environment
Authors: Tahir Alyas, Ali Alzahrani, Yazed Alsaawy, Khalid Alissa, Qaiser Abbas, Nadia Tabassum. Computers, Materials & Continua (SCIE/EI), 2023, No. 1, pp. 2317-2330 (14 pages).
The query optimizer uses cost-based optimization to create an execution plan with the least cost, which also consumes the least amount of resources. The challenge of query optimization for relational database systems is a combinatorial optimization problem, which renders exhaustive search impossible as query sizes rise. Increases in CPU performance have surpassed main memory and disk access speeds in recent decades, allowing data compression to be used as a strategy for improving database system performance. Compression and query optimization are the two most important factors for performance enhancement: compression reduces the volume of data, whereas query optimization minimizes execution time. Compressing the database reduces memory requirements, data takes less time to load into memory, fewer buffer misses occur, and intermediate results are smaller. This paper performs query optimization on a graph database in a cloud dew environment by considering which plan requires less time to execute a query. Together, compression and query optimization improve the performance of the databases. This research compares the performance of MySQL and Neo4j databases in terms of memory usage and execution time running on cloud dew servers.
Keywords: query optimization; compression; cloud dew; decompression; graph database
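A minimal sketch of the kind of MySQL-versus-Neo4j timing comparison the abstract describes, assuming local test instances; the connection settings, schema, and the two equivalent queries are illustrative assumptions.

```python
# Hedged sketch: time one equivalent lookup on each engine. Connection
# settings, schema, and the queries themselves are illustrative assumptions.
import time

import mysql.connector           # pip install mysql-connector-python
from neo4j import GraphDatabase  # pip install neo4j

def time_mysql(query: str) -> float:
    conn = mysql.connector.connect(
        host="localhost", user="root", password="secret", database="testdb")
    cur = conn.cursor()
    start = time.perf_counter()
    cur.execute(query)
    cur.fetchall()
    elapsed = time.perf_counter() - start
    cur.close()
    conn.close()
    return elapsed

def time_neo4j(cypher: str) -> float:
    driver = GraphDatabase.driver("bolt://localhost:7687",
                                  auth=("neo4j", "secret"))
    with driver.session() as session:
        start = time.perf_counter()
        session.run(cypher).consume()  # drain the result stream
        elapsed = time.perf_counter() - start
    driver.close()
    return elapsed

if __name__ == "__main__":
    # Equivalent two-hop friend-of-friend lookups over an assumed schema.
    sql = ("SELECT DISTINCT f2.friend_id FROM friends f1 "
           "JOIN friends f2 ON f1.friend_id = f2.person_id "
           "WHERE f1.person_id = 1")
    cypher = ("MATCH (p:Person {id: 1})-[:FRIEND]->()-[:FRIEND]->(fof) "
              "RETURN DISTINCT fof.id")
    print(f"MySQL: {time_mysql(sql):.4f} s, Neo4j: {time_neo4j(cypher):.4f} s")
```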
3. Reducing Dataset Specificity for Deepfakes Using Ensemble Learning
Authors: Qaiser Abbas, Turki Alghamdi, Yazed Alsaawy, Tahir Alyas, Ali Alzahrani, Khawar Iqbal Malik, Saira Bibi. Computers, Materials & Continua (SCIE/EI), 2023, No. 2, pp. 4261-4276 (16 pages).
The emergence of deepfake videos in recent years has made image falsification a real danger. A person's face and emotions are deep-faked in a video or speech and substituted with a different face or voice, employing deep learning to analyze speech or emotional content. Because of how clever these videos frequently are, manipulation is challenging to spot. Social media platforms are the most frequent and dangerous targets, since they are vulnerable outlets open to extortion or slander. In earlier times it was not so easy to alter videos, which required domain expertise and time; nowadays, generating fake videos has become easier, with a high level of realism. Deepfakes are forgeries and altered visual data that appear in still photos or video footage. Numerous automatic identification systems have been developed to solve this issue; however, they are constrained to certain datasets and perform poorly when applied to different datasets. This study aims to develop an ensemble learning model utilizing a convolutional neural network (CNN) to handle deepfakes or Face2Face. We employed ensemble learning, a technique combining many classifiers to achieve higher prediction performance than a single classifier, boosting the model's accuracy. The performance of the generated model is evaluated on FaceForensics. This work builds a new, powerful model for automatically identifying deepfake videos with the DeepFake Detection Challenge (DFDC) dataset. We test our model using the DFDC, one of the most difficult datasets, and get an accuracy of 96%.
Keywords: deep machine learning; deepfake; CNN; DFDC; ensemble learning
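As a hedged illustration of the ensemble idea, the sketch below soft-votes (averages) the real/fake probabilities of several CNN members; the member architectures, input shape, and class ordering are assumptions, with untrained stand-in models in place of the paper's trained classifiers.

```python
# Hedged sketch of soft-voting ensemble inference over several CNN members.
# Real members would be trained on different splits/augmentations; these are
# untrained stand-ins so the sketch runs end to end.
import numpy as np
import tensorflow as tf

def make_member(seed: int) -> tf.keras.Model:
    """Stand-in CNN member (assumed 64x64 RGB frames, 2 classes)."""
    tf.keras.utils.set_random_seed(seed)
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(64, 64, 3)),
        tf.keras.layers.Conv2D(8, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(2, activation="softmax"),  # real vs. fake
    ])

def ensemble_predict(members, frames: np.ndarray) -> np.ndarray:
    """Soft voting: average the per-model class probabilities."""
    probs = [m.predict(frames, verbose=0) for m in members]
    return np.mean(probs, axis=0)

if __name__ == "__main__":
    members = [make_member(s) for s in (0, 1, 2)]
    frames = np.random.rand(8, 64, 64, 3).astype("float32")  # stand-in frames
    avg = ensemble_predict(members, frames)
    print(avg.argmax(axis=1))  # 0 = real, 1 = fake (assumed ordering)
```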
4. Optimizing Resource Allocation Framework for Multi-Cloud Environment
Authors: Tahir Alyas, Taher M. Ghazal, Badria Sulaiman Alfurhood, Ghassan F. Issa, Osama Ali Thawabeh, Qaiser Abbas. Computers, Materials & Continua (SCIE/EI), 2023, No. 5, pp. 4119-4136 (18 pages).
Cloud computing makes dynamic resource provisioning more accessible. Monitoring a functioning service is crucial, and changes are made when particular criteria are surpassed. This research explores the decentralized multi-cloud environment for allocating resources and ensuring the quality of service (QoS), estimating the required resources, and modifying allotted resources depending on workload and parallelism. Resource allocation is a complex challenge due to the versatile service and resource providers; engaging them requires a cooperation strategy for sustainable quality of service. The objective of a coherent and rational resource allocation is to attain the quality of service; this also includes identifying critical parameters to develop a resource allocation mechanism. A framework is proposed based on the specified parameters to formulate a resource allocation process in a decentralized multi-cloud environment. The three main parameters of the proposed framework are data accessibility, optimization, and collaboration. Using an optimization technique, these three segments are further divided into subsets for resource allocation and long-term service quality. The CloudSim simulator has been used to validate the suggested framework. Several experiments have been conducted to find the configurations best suited for enhancing collaboration and resource allocation to achieve sustained QoS. The results support the suggested structure for a decentralized multi-cloud environment and the parameters that have been determined.
Keywords: multi-cloud; query optimization; cloud resource allocation; modelling; virtualization
5. Performance Framework for Virtual Machine Migration in Cloud Computing
Authors: Tahir Alyas, Taher M. Ghazal, Badria Sulaiman Alfurhood, Munir Ahmad, Ossma Ali Thawabeh, Khalid Alissa, Qaiser Abbas. Computers, Materials & Continua (SCIE/EI), 2023, No. 3, pp. 6289-6305 (17 pages).
In the cloud environment, the transfer of data from one cloud server to another is called migration. Data can be delivered in various ways from one data centre to another. This research aims to increase the migration performance of virtual machines (VMs) in the cloud environment. VMs allow cloud customers to store essential data and resources. However, server usage has grown dramatically due to the virtualization of computer systems, resulting in higher data centre power consumption, storage needs, and operating expenses. Multiple VMs on one data centre share resources like central processing unit (CPU) cache, network bandwidth, memory, and application bandwidth. In a multi-cloud setting, VM migration addresses the performance degradation caused by cloud server configuration, unbalanced traffic load, resource load management, and fault situations during data transfer. VM migration speed is influenced by the size of the VM, the dirty rate of the running application, and the latency of migration iterations; evaluating VM migration performance while considering all of these factors is therefore a difficult task. The main effort of this research is to assess the impact of these migration problems on performance. Simulation results in Matlab show that as the VM size grows, the migration time and downtime can be impacted by up to three orders of magnitude. The migration time and downtime grow with the dirty page rate, and latency decreases as network bandwidth increases during migration; post-migration overhead is calculated once the VM transfer is completed. All the simulated cases of VM migration were performed in a fuzzy inference system with performance graphs.
Keywords: latency; cloud computing; dirty page ratio; storage; migration performance
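The relationships the abstract measures (VM size, dirty rate, bandwidth, downtime) can be illustrated with the standard iterative pre-copy model, in which each round retransmits the pages dirtied during the previous round; this is a textbook sketch with assumed parameters, not the paper's Matlab simulation.

```python
# Hedged sketch of the iterative pre-copy live-migration model: each round
# resends the memory dirtied during the previous round, ending with a short
# stop-and-copy phase. Standard textbook model with assumed parameters.

def precopy_migration(mem_gb: float, bw_gbps: float, dirty_gbps: float,
                      stop_gb: float = 0.05, max_rounds: int = 30):
    """Return (total migration time s, downtime s, rounds)."""
    to_send = mem_gb            # round 0 copies the whole RAM image
    total_time = 0.0
    for rounds in range(1, max_rounds + 1):
        t = to_send / bw_gbps   # time for this round
        total_time += t
        dirtied = dirty_gbps * t  # pages dirtied while we were copying
        if dirtied <= stop_gb or dirty_gbps >= bw_gbps:
            break               # small enough (or not converging): stop & copy
        to_send = dirtied
    downtime = dirtied / bw_gbps  # final stop-and-copy of residual pages
    return total_time + downtime, downtime, rounds

if __name__ == "__main__":
    for mem in (4, 16, 64):     # VM sizes in GB (assumed)
        total, down, r = precopy_migration(mem, bw_gbps=1.25, dirty_gbps=0.2)
        print(f"{mem:3d} GB VM: {total:7.2f} s total, "
              f"{down * 1000:6.1f} ms downtime, {r} rounds")
```

With these assumed numbers the model reproduces the qualitative claims: larger VMs and higher dirty rates stretch total migration time, while more bandwidth shrinks both migration time and the final stop-and-copy downtime.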
6. Intrusion Detection in 5G Cellular Network Using Machine Learning
Authors: Ishtiaque Mahmood, Tahir Alyas, Sagheer Abbas, Tariq Shahzad, Qaiser Abbas, Khmaies Ouahada. Computer Systems Science & Engineering (SCIE/EI), 2023, No. 11, pp. 2439-2453 (15 pages).
Attacks on fully integrated servers, apps, and communication networks via the Internet of Things (IoT) are growing exponentially. Compromised devices harm end users, increase cyber threats and identity theft, and raise costs, and they negatively impact income as problems brought on by the IoT network go unnoticed for extended periods. Attacks on IoT interfaces must be closely monitored in real time for effective safety and security. Following the 1G, 2G, 3G, and 4G cellular networks, the fifth-generation wireless 5G network is a great innovation of mankind and is known as the global advancement of cellular networks; even today, experts are working on its sixth-generation (6G) evolution. 5G offers remarkable capabilities for connecting everything, including gadgets and machines, with wavelengths ranging from 1 to 10 mm and frequencies ranging from 300 MHz to 3 GHz, and it delivers the most recent information. Many countries have already deployed this technology within their borders. Security is the most crucial aspect of using a 5G network: because of the absence of study and network deployment, new technology at first introduces new gaps for attackers and hackers, and Internet Protocol (IP) attacks and intrusion will become more prevalent in this system. This research provides an efficient approach to detect intrusion in the 5G network using a machine learning algorithm, and validates its high accuracy rate for unidentified and suspicious circumstances in the 5G network, such as intruder hackers/attackers. After applying different machine learning algorithms, the best result was obtained with the linear regression implementation on the dataset: 92.12% on test data and 92.13% on training data, with 92% precision.
Keywords: intrusion detection system; machine learning; confidentiality; integrity; availability
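A hedged sketch of the reported approach: a linear regression model used as a binary intrusion detector by thresholding its continuous output, evaluated with the accuracy and precision metrics the abstract quotes; the features, the synthetic traffic, and the 0.5 threshold are assumptions.

```python
# Hedged sketch: a linear model used as a binary intrusion detector by
# thresholding its continuous output at 0.5. The traffic here is synthetic
# stand-in data, not the paper's 5G dataset.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import accuracy_score, precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed flow features: duration, bytes, packets, distinct ports.
X = rng.random((5000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(0, 0.1, 5000) > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
reg = LinearRegression().fit(X_tr, y_tr)
pred = (reg.predict(X_te) >= 0.5).astype(int)  # threshold to attack/benign

print(f"accuracy:  {accuracy_score(y_te, pred):.4f}")
print(f"precision: {precision_score(y_te, pred):.4f}")
```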
7. Resource Based Automatic Calibration System (RBACS) Using Kubernetes Framework
Authors: Tahir Alyas, Nadia Tabassum, Muhammad Waseem Iqbal, Abdullah S. Alshahrani, Ahmed Alghamdi, Syed Khuram Shahzad. Intelligent Automation & Soft Computing (SCIE), 2023, No. 1, pp. 1165-1179 (15 pages).
Kubernetes, a container orchestrator for cloud-deployed applications, allows the application provider to scale automatically to match the fluctuating intensity of processing demand. Container cluster technology is used to encapsulate, isolate, and deploy applications, addressing the issue of low system reliability due to interlocking failures. Cloud-based platforms usually require users to define application resource supplies for eco container virtualization. Over-provisioning is a constant problem in cloud service providers' data centers; higher operating costs and poor resource utilization result in wasted resources. Kubernetes revolutionized container orchestration in the cloud-native age: it can adaptively manage resources and schedule containers, providing the real-time status of the cluster at runtime without the user's contribution. Yet Kubernetes clusters face unpredictable traffic, and the cluster performs manual expansion configuration through the controller; due to operational delays, the system can become unstable and the service unavailable. This work proposes RBACS, which dynamically amends the distribution of containers operating across the entire Kubernetes cluster. The RBACS allocation pattern is analyzed against the Kubernetes VPA. To estimate the overall cost of RBACS, we use several scientific benchmarks comparing container migration to remote nodes against on-site relocation. Simulation experiments show the method's effectiveness, yielding high precision in the real-time deployment of resources in eco containers. Compared to the default Kubernetes baseline, RBACS results in far fewer dropped requests with only slightly more supplied resources.
Keywords: Docker; container; virtualization; cloud resource; Kubernetes
8. An Optimized Convolution Neural Network Architecture for Paddy Disease Classification (cited by 2)
Authors: Muhammad Asif Saleem, Muhammad Aamir, Rosziati Ibrahim, Norhalina Senan, Tahir Alyas. Computers, Materials & Continua (SCIE/EI), 2022, No. 6, pp. 6053-6067 (15 pages).
Plant disease classification based on digital pictures is challenging. Machine learning approaches and plant image categorization technologies such as deep learning have been utilized to recognize, identify, and diagnose plant diseases in the previous decade. Increasing the yield quantity and quality of rice farming is an important goal for paddy-producing countries; however, some diseases blocking the improvement in paddy production are considered an ominous threat. The convolutional neural network (CNN) has shown remarkable performance in the early detection of paddy leaf diseases from images in the fast-growing era of science and technology. Nevertheless, constructing significant CNN architectures depends on expertise in neural networks and domain knowledge; this approach is time-consuming, and high computational resources are mandatory. In this research, we propose a novel method based on the Mutant Particle Swarm Optimization (MUT-PSO) algorithm to search for an optimal CNN architecture for paddy leaf disease classification. Experimentation results show that MUT-PSO-CNN can find an optimal CNN architecture that offers better performance than existing hand-crafted CNN architectures in terms of accuracy, precision/recall, and execution time.
Keywords: deep learning; optimum CNN architecture; particle swarm optimization; convolutional neural network; parameter optimization
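The sketch below illustrates plain PSO searching a small CNN hyperparameter space, in the spirit of MUT-PSO but without the paper's mutation operator; the search dimensions, bounds, and the stand-in fitness function (which would normally train a candidate CNN briefly and return validation accuracy) are assumptions.

```python
# Hedged sketch of PSO over a CNN hyperparameter space. The fitness function
# is a synthetic stand-in for "train candidate CNN, return validation
# accuracy"; bounds and PSO coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(1)
# Dimensions: [num_conv_layers, filters_per_layer, kernel_size, log10(lr)]
LOW = np.array([1, 16, 3, -4.0])
HIGH = np.array([6, 128, 7, -1.0])

def fitness(x: np.ndarray) -> float:
    # Stand-in objective peaking at an assumed "sweet spot" architecture.
    target = np.array([3, 64, 5, -3.0])
    return -np.sum(((x - target) / (HIGH - LOW)) ** 2)

n, dims, iters = 12, 4, 40
pos = rng.uniform(LOW, HIGH, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, LOW, HIGH)
    f = np.array([fitness(p) for p in pos])
    improved = f > pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmax()].copy()

layers, filt, kern, lr = gbest
print(f"best: {round(layers)} conv layers, {round(filt)} filters, "
      f"kernel {round(kern)}, lr {10 ** lr:.1e}")
```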
9. Machine Learning Enabled Early Detection of Breast Cancer by Structural Analysis of Mammograms
Authors: Mavra Mehmood, Ember Ayub, Fahad Ahmad, Madallah Alruwaili, Ziyad A. Alrowaili, Saad Alanazi, Mamoona Humayun, Muhammad Rizwan, Shahid Naseem, Tahir Alyas. Computers, Materials & Continua (SCIE/EI), 2021, No. 4, pp. 641-657 (17 pages).
Clinical image processing plays a significant role in healthcare systems and is currently a widely used methodology. In carcinogenic diseases time is crucial, so an image's accurate analysis can help treat the disease at an early stage. Ductal carcinoma in situ (DCIS) and lobular carcinoma in situ (LCIS) are common types of malignancy that affect both women and men. The number of cases of DCIS and LCIS has increased every year since 2002, while it still takes a considerable amount of time to recommend a controlling technique. Image processing is a powerful technique to analyze preprocessed images and retrieve useful information through remarkable processing operations. In this paper, we used a dataset from the Mammographic Image Analysis Society and MATLAB 2019b software from MathWorks to simulate and extract our results. In this proposed study, mammograms are primarily used to diagnose, more precisely, the breast's tumor component. The detection of DCIS and LCIS on breast mammograms is done by preprocessing the images using contrast-limited adaptive histogram equalization. The resulting images' tumor portions are then isolated by a segmentation process such as threshold detection. Furthermore, morphological operations such as erosion and dilation are applied to the images, after which gray-level co-occurrence matrix texture features, Haralick texture features, and shape features are extracted from the regions of interest. For classification purposes, a support vector machine (SVM) classifier is used to categorize normal and abnormal patterns. Finally, the adaptive neuro-fuzzy inference system (ANFIS) is deployed to remove the fuzziness caused by overlapping features of patterns within the images, and the exact categorization of prior patterns is gained through the SVM. Early detection of DCIS and LCIS can save lives and help physicians and surgeons to diagnose and treat these diseases. Substantial results are obtained through the cubic support vector machine (CSVM), showing 98.95% and 98.01% accuracies for normal and abnormal mammograms, respectively. ANFIS gives promising mean square error (MSE) results of 0.01866, 0.18397, and 0.19640 for DCIS and LCIS differentiation during the training, testing, and checking phases.
Keywords: image processing; tumor segmentation; dilation; erosion; machine learning classification; support vector machine; adaptive neuro-fuzzy inference system
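A hedged sketch of the pipeline stages the abstract names: CLAHE enhancement, threshold segmentation, erosion/dilation clean-up, GLCM texture features, and an SVM classifier; all parameter values are illustrative assumptions rather than the paper's tuned settings.

```python
# Hedged sketch of the mammogram pipeline: CLAHE contrast enhancement,
# Otsu threshold segmentation, morphological clean-up, GLCM texture
# features, and a cubic-kernel SVM. Parameters are illustrative assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(roi: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity",
                               "energy", "correlation")])

def mammogram_features(path: str) -> np.ndarray:
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img)
    # Otsu threshold isolates the bright candidate tumor region.
    _, mask = cv2.threshold(enhanced, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)  # open: erode+dilate
    roi = cv2.bitwise_and(enhanced, enhanced, mask=mask)
    return glcm_features(roi)

# Usage sketch: features from labeled images feed a cubic-kernel SVM,
# matching the CSVM the abstract reports (file lists are hypothetical).
# X = np.stack([mammogram_features(p) for p in normal_paths + abnormal_paths])
# y = np.array([0] * len(normal_paths) + [1] * len(abnormal_paths))
# clf = SVC(kernel="poly", degree=3).fit(X, y)
```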
10. Roman Urdu News Headline Classification Empowered with Machine Learning
Authors: Rizwan Ali Naqvi, Muhammad Adnan Khan, Nauman Malik, Shazia Saqib, Tahir Alyas, Dildar Hussain. Computers, Materials & Continua (SCIE/EI), 2020, No. 11, pp. 1221-1236 (16 pages).
Roman Urdu has been used for text messaging over the Internet for years, especially in the Indo-Pak subcontinent. Persons from the subcontinent may speak the same Urdu language but use different scripts for writing. Communication using Roman characters for the Urdu language on social media is now considered the most typical standard of communication in the Indian landmass, which makes it a valuable information source. English text classification is a solved problem, but there have been only a few efforts to examine the rich information supply of Roman Urdu in the past, owing to the numerous complexities involved in processing Roman Urdu data. These complexities include the non-availability of a tagged corpus, the lack of a set of rules, and the lack of standardized spellings. A large amount of Roman Urdu news data is available on mainstream news websites and social media websites like Facebook and Twitter, but meaningful information can only be extracted if the data is in a structured format. We have developed a Roman Urdu news headline classifier, which will help classify news into relevant categories on which further analysis and modeling can be done. This research aims to develop a Roman Urdu news classifier that classifies news into five categories (health, business, technology, sports, international). First, we develop the news dataset using scraping tools; then, after preprocessing, we compare the results of different machine learning algorithms: logistic regression (LR), multinomial naïve Bayes (MNB), long short-term memory (LSTM), and convolutional neural network (CNN). After this, we use a phonetic algorithm to control lexical variation and test news from different websites. The preliminary results suggest that a more accurate classification can be accomplished by monitoring noise inside the data when classifying the news. After applying the machine learning algorithms mentioned above, the results show that the multinomial naïve Bayes classifier gives the best accuracy of 90.17%, owing to its handling of noisy lexical variation.
Keywords: Roman Urdu; news headline classification; long short-term memory; recurrent neural network; logistic regression; multinomial naïve Bayes; random forest; k-nearest neighbor; gradient boosting classifier
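A minimal sketch of the headline classifier described: TF-IDF features feeding a multinomial naïve Bayes model over the five categories; the sample Roman Urdu headlines are invented stand-ins for the scraped dataset.

```python
# Hedged sketch: TF-IDF + Multinomial Naive Bayes over the five categories
# the abstract lists. The headlines are invented stand-ins for the dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

headlines = [
    "pakistan cricket team ne series jeet li",  # sports
    "dollar ki qeemat mein izafa",              # business
    "naya smartphone launch ho gaya",           # technology
    "hospital mein naye ward ka iftitah",       # health
    "aqwam e muttahida ka ijlas",               # international
]
labels = ["sports", "business", "technology", "health", "international"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
clf.fit(headlines, labels)
print(clf.predict(["cricket match ka natija"]))  # expected: ['sports']
```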
11. Prediction of Cloud Ranking in a Hyperconverged Cloud Ecosystem Using Machine Learning
Authors: Nadia Tabassum, Allah Ditta, Tahir Alyas, Sagheer Abbas, Hani Alquhayz, Natash Ali Mian, Muhammad Adnan Khan. Computers, Materials & Continua (SCIE/EI), 2021, No. 6, pp. 3129-3141 (13 pages).
Cloud computing is becoming a popular technology due to its functional properties and variety of customer-oriented services over the Internet. The design of reliable and high-quality cloud applications requires a strong quality of service (QoS) parameter metric. In a hyperconverged cloud ecosystem environment, building high-reliability cloud applications is a challenging job. The selection of cloud services is based on the QoS parameters, which play essential roles in optimizing and improving cloud rankings. The emergence of cloud computing is significantly reshaping the digital ecosystem, and the numerous services offered by cloud service providers are playing a vital role in this transformation. Hyperconverged, software-based unified utilities combine storage virtualization, compute virtualization, and network virtualization; their availability has also raised the demand for QoS. Due to the diversity of services, the respective quality parameters are also in abundance and need a carefully designed mechanism to compare and identify the critical, common, and impactful parameters. It is also necessary to reconsider market needs in terms of service requirements and the QoS provided by various CSPs. This research provides a machine learning-based mechanism to monitor the QoS in a hyperconverged environment with three core service parameters: service quality, downtime of servers, and outage of cloud services.
Keywords: cloud computing; hyperconverged; neural network; QoS parameters; cloud service providers; ranking; prediction
12. Live Migration of Virtual Machines Using a Mamdani Fuzzy Inference System
Authors: Tahir Alyas, Iqra Javed, Abdallah Namoun, Ali Tufail, Sami Alshmrany, Nadia Tabassum. Computers, Materials & Continua (SCIE/EI), 2022, No. 5, pp. 3019-3033 (15 pages).
Efforts have been exerted to enhance live virtual machine (VM) migration, including performance improvements in the live migration of services to the cloud. VMs empower cloud users to store relevant data and resources. However, the utilization of servers has increased significantly because of the virtualization of computer systems, leading to a rise in power consumption and storage requirements at data centers, and thereby the running costs. Data center migration technologies are used to reduce risk, minimize downtime, and streamline and accelerate the data center move process. Indeed, several parameters, such as non-network overheads and downtime adjustment, may impact the live migration time and server downtime to a large extent. By virtualizing the network resources, infrastructure as a service (IaaS) can be used dynamically to allocate bandwidth to services and monitor network flow routing. Due to the large amount of dirty-page retransmission, existing live migration systems still suffer from extensive downtime and significant performance degradation in cross-data-center situations. This study aims to minimize energy consumption by restricting VM migration and switching off guests depending on a threshold, thereby boosting the residual network bandwidth in the data center with minimal breach of the service level agreement (SLA). In this research, we analyzed and evaluated the findings observed through simulating different parameters, like availability, downtime, and outage of VMs in data center processes. This new paradigm is composed of two forms of detection strategies in the live migration approach from the source host to the destination machine.
Keywords: cloud computing; IaaS; data centre; storage; performance analysis; live migration
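A minimal sketch of a Mamdani fuzzy controller deciding whether to live-migrate a VM, in the spirit of the abstract's approach (using scikit-fuzzy); the membership functions, the rule base, and the 0 to 100 scales are assumptions.

```python
# Hedged sketch of a Mamdani controller for the migrate/hold decision.
# Membership functions, rules, and scales are illustrative assumptions
# (pip install scikit-fuzzy).
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

cpu = ctrl.Antecedent(np.arange(0, 101, 1), "cpu_load")       # percent
bw = ctrl.Antecedent(np.arange(0, 101, 1), "free_bandwidth")  # percent
migrate = ctrl.Consequent(np.arange(0, 101, 1), "migrate_score")

cpu["low"] = fuzz.trimf(cpu.universe, [0, 0, 50])
cpu["high"] = fuzz.trimf(cpu.universe, [40, 100, 100])
bw["low"] = fuzz.trimf(bw.universe, [0, 0, 50])
bw["high"] = fuzz.trimf(bw.universe, [40, 100, 100])
migrate["hold"] = fuzz.trimf(migrate.universe, [0, 0, 50])
migrate["move"] = fuzz.trimf(migrate.universe, [50, 100, 100])

rules = [
    ctrl.Rule(cpu["high"] & bw["high"], migrate["move"]),  # hot host, cheap move
    ctrl.Rule(cpu["high"] & bw["low"], migrate["hold"]),   # move would choke net
    ctrl.Rule(cpu["low"], migrate["hold"]),                # no pressure: stay
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["cpu_load"] = 85
sim.input["free_bandwidth"] = 70
sim.compute()
print(f"migrate score: {sim.output['migrate_score']:.1f} / 100")
```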
13. Recognition of Urdu Handwritten Alphabet Using Convolutional Neural Network (CNN)
Authors: Gulzar Ahmed, Tahir Alyas, Muhammad Waseem Iqbal, Muhammad Usman Ashraf, Ahmed Mohammed Alghamdi, Adel A. Bahaddad, Khalid Ali Almarhabi. Computers, Materials & Continua (SCIE/EI), 2022, No. 11, pp. 2967-2984 (18 pages).
Handwritten character recognition systems are used in every field of life nowadays, including shopping malls, banks, educational institutes, etc. Urdu is the national language of Pakistan and the fourth most spoken language in the world; however, it is still challenging to recognize Urdu handwritten characters owing to their cursive nature. Our paper presents a convolutional neural network (CNN) model for Urdu handwritten alphabet recognition (UHAR) of offline and online characters. Our research contributes an Urdu handwritten dataset (UHDS) to empower future work in this field. For offline systems, optical readers are used for extracting the alphabets, while diagonal-based extraction methods are implemented in online systems. Moreover, our research tackles the lack of comprehensive, standard Urdu alphabet datasets, to empower research activities in Urdu text recognition. To this end, we collected 1000 handwritten samples for each alphabet, and a total of 38,000 samples from participants aged 12 to 25, to train our CNN model using online and offline mediums. Subsequently, we carried out detailed experiments for character recognition, as detailed in the results. The proposed CNN model outperformed previously published approaches.
Keywords: Urdu handwritten text recognition; handwritten dataset; convolutional neural network; artificial intelligence; machine learning; deep learning
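A hedged sketch of a small CNN classifier for the 38 Urdu alphabets implied by the dataset description (1000 samples per alphabet, 38,000 total); the input size and layer configuration are assumptions, not the paper's architecture.

```python
# Hedged sketch of a compact CNN for 38-class Urdu alphabet recognition.
# Input size (64x64 grayscale) and layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_uhar_cnn(num_classes: int = 38) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),  # grayscale character image
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),              # guard against overfitting
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

build_uhar_cnn().summary()
```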
14. Innovative Fungal Disease Diagnosis System Using Convolutional Neural Network
Authors: Tahir Alyas, Khalid Alissa, Abdul Salam Mohammad, Shazia Asif, Tauqeer Faiz, Gulzar Ahmed. Computers, Materials & Continua (SCIE/EI), 2022, No. 12, pp. 4869-4883 (15 pages).
Fungal disease affects more than a billion people worldwide, with different types of fungus causing life-threatening infections. The outer layer of your body is called the integumentary system; your skin, hair, nails, and glands are all part of it. These organs and tissues serve as your first line of defence against bacteria while protecting you from harm and the sun. The skin serves as a barrier between the outside world and the regulated environment inside our bodies, and it has a regulating effect, protecting against heat, light, damage, and illness. Fungi-caused infections are found in almost every part of the natural world; when an invasive fungus takes over a body region and overwhelms the immune system, it causes fungal infections in people. A primary goal of this study was to create a convolutional neural network (CNN)-based technique for detecting and classifying various types of fungal diseases. There are numerous fungal illnesses, but only two, Candidiasis and Tinea infections, have been identified and classified using the proposed Innovative Fungal Disease Diagnosis (IFDD) system. This paper aims to detect infected skin issues and provide treatment recommendations based on the proposed system's findings. To identify and categorize fungal infections, deep machine learning techniques are utilized. A CNN architecture was created, and it produced a promising outcome, improving the proposed system's accuracy. The collected findings demonstrate that a CNN can be used to identify and classify numerous species of fungal spores early and estimate all conceivable fungus hazards. Our CNN-based IFDD system detects fungal diseases from medical images, with a predictive performance of 99.6% accuracy.
Keywords: deep machine learning; CNN; ReLU; skin disease; fungal
15. Load Balancing Framework for Cross-Region Tasks in Cloud Computing
Authors: Jaleel Nazir, Muhammad Waseem Iqbal, Tahir Alyas, Muhammad Hamid, Muhammad Saleem, Saadia Malik, Nadia Tabassum. Computers, Materials & Continua (SCIE/EI), 2022, No. 1, pp. 1479-1490 (12 pages).
Load balancing is a technique for identifying overloaded and underloaded nodes and balancing the load between them. To maximize various performance parameters in cloud computing, researchers have suggested various load balancing approaches. Cloud computing is one of the latest technology systems for both end-users and service providers, used to store and access data and services provided by different service providers through the network over different regions. The volume of data is increasing due to the pandemic, and a significant increase in internet usage has also been experienced. Users of the cloud are looking for services that are intelligent and can balance the traffic load by service providers, resulting in seamless and uninterrupted services. Different types of algorithms and techniques are available to manage load balancing in cloud services. In this paper, a new method for load balancing in cloud computing at the database level is introduced. Database cloud services are frequently employed by companies of all sizes for application development and business processes. Load balancing for distributed applications can be used to maintain an efficient task scheduling process that meets user requirements and improves resource utilization. Load balancing is the process of distributing the load across various nodes to ensure that no single node is overloaded; to avoid nodes being overloaded, the load balancer divides an equal amount of computing time among all nodes. The results of two different scenarios showed cross-region traffic management and significant growth in the revenue of restaurants when using load balancer decisions on application traffic gateways.
Keywords: load balancing; performance; database; region; virtualization
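A minimal sketch of the core decision the abstract describes: route each incoming task to the least-loaded node so no single node is overloaded; the node names and region labels are illustrative.

```python
# Hedged sketch of least-load routing: a min-heap of (active tasks, node)
# always hands the next task to the least-loaded node.
import heapq
from collections import defaultdict

class LeastLoadBalancer:
    """Pick the node with the fewest active tasks."""

    def __init__(self, nodes):
        self.load = defaultdict(int)
        self.heap = [(0, n) for n in nodes]
        heapq.heapify(self.heap)

    def route(self) -> str:
        _, node = heapq.heappop(self.heap)
        self.load[node] += 1
        # Push the node back with its updated load; the invariant of one
        # heap entry per node keeps loads consistent.
        heapq.heappush(self.heap, (self.load[node], node))
        return node

if __name__ == "__main__":
    lb = LeastLoadBalancer(["db-eu-1", "db-us-1", "db-ap-1"])  # assumed regions
    for i in range(7):
        print(f"task-{i} -> {lb.route()}")
```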
16. Hyper-Convergence Storage Framework for EcoCloud Correlates
Authors: Nadia Tabassum, Tahir Alyas, Muhammad Hamid, Muhammad Saleem, Saadia Malik. Computers, Materials & Continua (SCIE/EI), 2022, No. 1, pp. 1573-1584 (12 pages).
Cloud computing is an emerging domain that is capturing global users from all walks of life: the corporate sector, the government sector, and the social arena as well. Various cloud providers offer multiple services and facilities to this audience, and the number of providers is increasing very swiftly. This enormous pace is generating the requirement for a comprehensive ecosystem that shall provide a seamless and customized user environment, not only to enhance the user experience but also to improve security, availability, accessibility, and latency. Emerging technology provides robust solutions to many of our problems; the cloud platform is one of them. It is worth mentioning that these solutions also amplify complexity and the need to sustain such rapid solutions. New entrants appear daily as cloud service providers, resellers, tech support, hardware manufacturers, and software developers, each playing a role in the growth and sustenance of the cloud ecosystem. Our objective is to use convergence for cloud services, software-defined networks and network function virtualization for infrastructure, and cognition for pattern development and the knowledge repository, and to gear up these processes with machine learning: inducing intelligence to maintain ecosystem growth, monitor performance, and make decisions for the sustenance of the ecosystem. Workloads may be programmed to "superficially" imitate most business applications and create large numbers using lightweight workload generators that merely stress the storage. In today's IT environment, where many enterprises use the cloud to service some of their application demands, a different performance testing technique that assesses more than the storage is necessary. With hyper-converged infrastructure (HCI), compute and storage are merged into a single building block, resulting in a huge pool of compute and storage resources when clustered with other building blocks. The novelty of this work is to design and test cloud storage using the measurement of availability, downtime, and outage parameters. Results showed that storage reliability in the hyper-converged system is above 92%.
Keywords: virtual cloud; software-defined network; network function virtualization; hyper-convergence; virtualization
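A worked example of the availability metric the abstract measures, availability = uptime / (uptime + downtime) aggregated over outage events; the outage log is invented for illustration.

```python
# Hedged worked example: availability over a reporting period as the
# fraction of time not lost to outages. The incident list is invented.
from datetime import timedelta

def availability(period: timedelta, outages: list[timedelta]) -> float:
    downtime = sum(outages, timedelta())
    return (period - downtime) / period

if __name__ == "__main__":
    month = timedelta(days=30)
    outages = [timedelta(minutes=42), timedelta(hours=3),
               timedelta(minutes=15)]  # assumed incidents
    a = availability(month, outages)
    print(f"availability: {a:.4%}, downtime: {sum(outages, timedelta())}")
```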
17. QoS Based Cloud Security Evaluation Using Neuro Fuzzy Model
Authors: Nadia Tabassum, Tahir Alyas, Muhammad Hamid, Muhammad Saleem, Saadia Malik, Syeda Binish Zahra. Computers, Materials & Continua (SCIE/EI), 2022, No. 1, pp. 1127-1140 (14 pages).
Cloud systems are tools and software for cloud computing that are deployed on the Internet or a cloud computing network, and users can use them at any time. After assessing and choosing cloud providers, however, customers confront the variety and difficulty of quality of service (QoS). To increase customer retention and engagement success rates, it is critical to research and develop an accurate and objective evaluation model. The cloud is the emerging environment for distributed services at various layers. Due to the benefits of this environment, the cloud is globally being taken as a standard environment for individuals as well as the corporate sector, as it reduces capital expenditure and provides secure, accessible, and manageable services to all stakeholders. However, cloud computing has security challenges, including vulnerability for clients and association acknowledgment, that delay the rapid adoption of computing models. Allocation of resources in the cloud is difficult because resources provide numerous measures of quality of service. In this paper, the proposed resource allocation approach is based on attribute QoS scoring that takes into account the reputation of the asset, task completion time, task completion ratio, and resource loading. This article focuses on the cloud service's security, cloud reliability, and cloud performance. The neuro-fuzzy machine learning algorithm has been used to address cloud security issues by measuring the security, privacy, and trust parameters. The findings reveal that the ANFIS-dependent parameters are primarily designed to discern anomalies in cloud security, and the feature output normally yields better results and guarantees data consistency and computational power.
Keywords: cloud computing; performance; quality of service; machine learning; neuro-fuzzy
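A hedged sketch of the attribute-based QoS scoring the abstract outlines, combining reputation, task completion time, completion ratio, and resource loading into one allocation score; the weights and normalization are assumptions, not the paper's model.

```python
# Hedged sketch: combine the four QoS attributes the abstract names into one
# allocation score. Weights and min-max style normalization are assumptions.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    reputation: float        # 0..1, higher is better
    completion_time: float   # seconds, lower is better
    completion_ratio: float  # 0..1, higher is better
    loading: float           # 0..1 utilization, lower is better

WEIGHTS = {"reputation": 0.3, "time": 0.2, "ratio": 0.3, "loading": 0.2}

def qos_score(r: Resource, worst_time: float) -> float:
    return (WEIGHTS["reputation"] * r.reputation
            + WEIGHTS["time"] * (1 - r.completion_time / worst_time)
            + WEIGHTS["ratio"] * r.completion_ratio
            + WEIGHTS["loading"] * (1 - r.loading))

if __name__ == "__main__":
    pool = [Resource("vm-a", 0.9, 12.0, 0.97, 0.80),
            Resource("vm-b", 0.7, 6.0, 0.92, 0.35),
            Resource("vm-c", 0.8, 9.0, 0.99, 0.55)]
    worst = max(r.completion_time for r in pool)
    best = max(pool, key=lambda r: qos_score(r, worst))
    print(f"allocate to {best.name} (score {qos_score(best, worst):.3f})")
```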
18. Bit Rate Reduction in Cloud Gaming Using Object Detection Technique
Authors: Daniyal Baig, Tahir Alyas, Muhammad Hamid, Muhammad Saleem, Saadia Malik, Nadia Tabassum, Natash Ali Mian. Computers, Materials & Continua (SCIE/EI), 2021, No. 9, pp. 3653-3669 (17 pages).
The past two decades witnessed a broad increase in web technology and online gaming. Enhancing broadband capacity is viewed as one of the most significant variables that prompted new gaming technology. The immense utilization of web applications and games has also prompted growth in handheld devices, moving the limited gaming experience from user devices to online cloud servers. As internet capabilities are enhanced, new ways of gaming are being used to improve the gaming experience. In cloud-based video gaming, game engines are hosted in cloud gaming data centers, and compressed gaming scenes are rendered to the players over the internet with updated controls. In such systems, the tasks of transferring games and video compression impose huge computational complexity on cloud servers. The basic problems in cloud gaming, in particular, are high encoding time, latency, and low frame rates, which require a new methodology for a better solution. To address the bandwidth issue in cloud games, the compression of video sequences requires an alternative mechanism that improves gaming adaptation without input delay. In this paper, the proposed improved methodology is used for automatic unnecessary-scene detection, scene removal, and bit rate reduction using an adaptive algorithm for object detection in a game scene. As a result, simulations showed that, without much impact on the players' quality of experience, the selective object encoding method and object adaptation technique decrease the network latency issue and reduce the game streaming bitrate at a remarkable scale on different games. The proposed algorithm was evaluated on three video game scenes, achieving a 14.6% decrease in encoding time and a 45.6% decrease in bit rate for the first scene.
Keywords: video encoding; object detection; bit rate reduction; game video; motion estimation; computational complexity
19. Forecast the Influenza Pandemic Using Machine Learning
Authors: Muhammad Adnan Khan, Wajhe Ul Husnain Abidi, Mohammed A. Al Ghamdi, Sultan H. Almotiri, Shazia Saqib, Tahir Alyas, Khalid Masood Khan, Nasir Mahmood. Computers, Materials & Continua (SCIE/EI), 2021, No. 1, pp. 331-340 (10 pages).
Forecasting future outbreaks can help in minimizing their spread. Influenza is a disease primarily found in animals but transferred to humans through pigs. In 1918, influenza became a pandemic and spread rapidly all over the world, becoming the cause of the deaths of one-third of the human population and one-fourth of the pig population. Afterwards, influenza became a pandemic several times at local and global levels. In 2009, influenza 'A' subtype H1N1 again took many human lives, and the disease again spread quickly, like a pandemic. This paper proposes a forecasting modeling system for the influenza pandemic using a feed-forward propagation neural network (MSDII-FFNN). This model helps predict an outbreak and determines which type of influenza may become a pandemic, as well as which geographical area is infected. Data collection for the model is done using IoT devices. The model is divided into two phases, a training phase and a validation phase, both connected through the cloud. In the training phase, the model is trained using the FFNN and is updated on the cloud. In the validation phase, whenever input is submitted through the IoT devices, the system model is updated through the cloud and predicts the pandemic alert. In our dataset, the data is divided into an 85% training ratio and a 15% validation ratio. By applying the proposed model to our dataset, the predicted output precision is 90%.
Keywords: influenza pandemic; machine learning; prediction; forecast; pandemic influenza
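A minimal sketch of a feed-forward network with the 85/15 training/validation split the abstract mentions, reporting precision as its headline metric; the surveillance features and synthetic labels are stand-ins for the IoT-collected dataset.

```python
# Hedged sketch: small feed-forward network for outbreak-alert prediction
# with an 85/15 train/validation split. Features and labels are synthetic
# stand-ins for the paper's IoT-collected data.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.random((2000, 6)).astype("float32")      # stand-in surveillance features
y = (X[:, 0] + X[:, 3] > 1.1).astype("float32")  # stand-in "pandemic alert" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # alert probability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(name="precision")])
# 85/15 split, as in the abstract.
model.fit(X, y, validation_split=0.15, epochs=10, batch_size=32, verbose=0)
_, precision = model.evaluate(X, y, verbose=0)
print(f"precision on the stand-in set: {precision:.2%}")
```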
20. Analysis of the Smart Player's Impact on the Success of a Team Empowered with Machine Learning
Authors: Muhammad Adnan Khan, Mubashar Habib, Shazia Saqib, Tahir Alyas, Khalid Masood Khan, Mohammed A. Al Ghamdi, Sultan H. Almotiri. Computers, Materials & Continua (SCIE/EI), 2021, No. 1, pp. 691-706 (16 pages).
The innovation and development in data science have an impact on all trades of life. The commercialization of sport has encouraged players, coaches, and other stakeholders to use technology to be in a better position than their opponents. In the past, the focus was on improved training techniques for better physical performance; these days, sports analytics identify patterns in performance and highlight the strengths and weaknesses of potential players. Sports analytics not only predict the performance of players in the near future, but also perform predictive modeling of a particular behavior of a player in the past. The impact of a smart player on the success of a team is always a big question mark before the start of a match. Fans always want performance analysis of these superstar players, are interested in getting to know more about their favorite player, and have high hopes for them. Machine learning (ML)-based techniques help in predicting the performance of an individual player as well as the whole team; the statistics are vital and useful for management, fans, and expert analysis. In our proposed framework, the adaptive back propagation neural network (ABPNN) model is used to predict a player's performance. The data is collected from football websites, and the results are stored in the cloud for fast fetching; they can be retrieved anywhere in the world through cloud storage. The results are computed with 94% accuracy, and the performance of the smart player is formulated for the success of a team.
Keywords: machine learning; adaptive feed-forward neural network; adaptive back propagation neural network; cloud computing; fetching