Journal Articles (4 results found)
Spatial Correlation Module for Classification of Multi-Label Ocular Diseases Using Color Fundus Images
Authors: Ali Haider Khan, Hassaan Malik, wajeeha khalil, Sayyid Kamran Hussain, Tayyaba Anees, Muzammil Hussain. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 133-150, 18 pages
Abstract: To prevent irreversible damage to one's eyesight, ocular diseases (ODs) need to be recognized and treated immediately. Color fundus imaging (CFI) is a screening technology that is both effective and economical. In CFIs, the early stages of disease are characterized by a paucity of observable symptoms, which necessitates the prompt creation of automated and robust diagnostic algorithms. Traditional research focuses on image-level diagnostics that attend to the left and right eyes in isolation, without making use of the pertinent correlation between the two eyes. In addition, such methods usually target only one or a few kinds of eye disease at a time. In this study, we design a patient-level multi-label OD (PLML_ODs) classification model based on a spatial correlation network (SCNet). The model considers the relevance of patient-level diagnosis combining bilateral eyes with multi-label ODs classification. PLML_ODs is made up of three parts: a backbone convolutional neural network (CNN) for feature extraction, i.e., DenseNet-169; an SCNet for feature correlation; and a classifier that produces classification scores. DenseNet-169 retrieves two separate sets of features, one from each of the left and right CFIs. The SCNet then records the correlations between the two feature sets on a pixel-by-pixel basis. After the features have been correlated, they are integrated to provide a patient-level representation, which is used throughout the ODs classification process. The efficacy of PLML_ODs is examined using a soft margin loss on a publicly available dataset, and the results reveal that classification performance is significantly improved compared to several baseline approaches.
Keywords: ocular disease; multi-label; spatial correlation; CNN; eye disease
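The abstract describes the SCNet correlating left-eye and right-eye feature maps "on a pixel-by-pixel basis". A minimal pure-Python sketch of that idea, comparing the two feature vectors at each spatial location with cosine similarity, is shown below; the function name, the [H][W][C] layout, and the choice of cosine similarity are illustrative assumptions, not the authors' exact SCNet.

```python
import math

def pixelwise_correlation(left, right):
    """Correlate two feature maps of shape [H][W][C] position by position.

    Returns an H x W map of cosine similarities between the left-eye and
    right-eye feature vectors at each spatial location (a sketch of the
    paper's pixel-by-pixel correlation step, not its implementation).
    """
    corr = []
    for row_l, row_r in zip(left, right):
        corr_row = []
        for vec_l, vec_r in zip(row_l, row_r):
            dot = sum(a * b for a, b in zip(vec_l, vec_r))
            norm = (math.sqrt(sum(a * a for a in vec_l))
                    * math.sqrt(sum(b * b for b in vec_r)))
            corr_row.append(dot / norm if norm else 0.0)
        corr.append(corr_row)
    return corr

# Toy 1x2 feature maps with 2 channels each: identical vectors correlate
# at 1.0, orthogonal vectors at 0.0.
left = [[[1.0, 0.0], [1.0, 1.0]]]
right = [[[1.0, 0.0], [1.0, -1.0]]]
print(pixelwise_correlation(left, right))  # [[1.0, 0.0]]
```

In the actual model, the resulting correlation map would be fused with the two feature sets to build the patient-level representation the abstract mentions.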
A Service Level Agreement Aware Online Algorithm for Virtual Machine Migration
Authors: Iftikhar Ahmad, Ambreen Shahnaz, Muhammad Asfand-e-Yar, wajeeha khalil, Yasmin Bano. Computers, Materials & Continua (SCIE, EI), 2023, Issue 1, pp. 279-291, 13 pages
Abstract: The demand for cloud computing has increased manifold in the recent past. More specifically, on-demand computing has seen a rapid rise as organizations rely mostly on cloud service providers for their day-to-day computing needs. The cloud service provider fulfills different user requirements using virtualization, where a single physical machine can host multiple virtual machines. Each virtual machine potentially represents a different user environment, such as operating system, programming environment, and applications. However, these cloud services consume a large amount of electrical energy and produce greenhouse gases. To reduce electricity costs and greenhouse gas emissions, energy-efficient algorithms must be designed. One specific area where energy-efficient algorithms are required is virtual machine consolidation, whose objective is to utilize the minimum possible number of hosts to accommodate the required virtual machines while keeping the service level agreement requirements in mind. This research work formulates virtual machine migration as an online problem and develops optimal offline and online algorithms for the single-host virtual machine migration problem under a service level agreement constraint for an over-utilized host. The online algorithm is analyzed using a competitive analysis approach. In addition, an experimental analysis of the proposed algorithm on real-world data showcases its improved performance against the benchmark algorithms: the proposed online algorithm consumed 25% less energy and performed 43% fewer migrations.
Keywords: cloud computing; green computing; online algorithms; virtual machine migration
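The abstract formulates migration away from an over-utilized host as an online problem. A minimal sketch of one plausible greedy policy is below: when utilization exceeds a threshold, evict the heaviest VM first so that fewer migrations are needed. The function name, the threshold value, and the heaviest-first rule are illustrative assumptions; the paper's optimal online algorithm and its competitive analysis are not reproduced here.

```python
def migrate_until_safe(vm_loads, capacity, threshold=0.8):
    """Greedy policy for an over-utilized host: repeatedly migrate the
    largest-load VM until utilization drops to the threshold or below.

    Returns (migrated_vm_loads, remaining_vm_loads). Migrating the
    heaviest VM first tends to minimize the number of migrations, at the
    cost of moving more state per migration.
    """
    remaining = sorted(vm_loads, reverse=True)
    migrated = []
    while remaining and sum(remaining) / capacity > threshold:
        migrated.append(remaining.pop(0))  # evict the heaviest VM first
    return migrated, remaining

# Host of capacity 100 at 95% utilization: one migration (the 50-unit VM)
# brings it down to 45%.
print(migrate_until_safe([50, 30, 15], 100))  # ([50], [30, 15])
```

A real consolidation system would also have to pick a destination host for each migrated VM and account for the SLA penalty of the migration itself, which is where the competitive analysis in the paper comes in.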
Classifying Misinformation of User Credibility in Social Media Using Supervised Learning
Authors: Muhammad Asfand-e-Yar, Qadeer Hashir, Syed Hassan Tanvir, wajeeha khalil. Computers, Materials & Continua (SCIE, EI), 2023, Issue 5, pp. 2921-2938, 18 pages
Abstract: The growth of the internet and technology has had a significant effect on social interactions. False information has become an important research topic due to the massive amount of misinformed content on social networks. It is very easy for any user to spread misinformation through the media, so misinformation is a problem for professionals, organizers, and societies. Hence, it is essential to observe the credibility and validity of news articles being shared on social media. The core challenge is to distinguish between accurate and false information. Recent studies focus on news article content, such as titles and descriptions, which has limited their achievements. However, there are two ordinarily agreed-upon features of misinformation: first, the title and text of an article, and second, user engagement. For the news context, we extracted different user engagements with articles, for example, tweets (read-only), retweets, likes, and shares. We calculate user credibility and combine it with the article content and the user's context. After combining both features, we used three natural language processing (NLP) feature extraction techniques, i.e., Term Frequency-Inverse Document Frequency (TF-IDF), Count-Vectorizer (CV), and Hashing-Vectorizer (HV). Then, we applied different machine learning classifiers to classify misinformation as real or fake: Support Vector Machine (SVM), Naive Bayes (NB), Random Forest (RF), Decision Tree (DT), Gradient Boosting (GB), and K-Nearest Neighbors (KNN). The proposed method has been tested on a real-world dataset, i.e., "fakenewsnet", whose repository we refined according to our required features; the dataset contains more than 23,000 articles with millions of user engagements. The highest accuracy score is 93.4%, achieved using count-vector features with a random forest classifier. Our findings confirm that the proposed classifier effectively classifies misinformation in social networks.
Keywords: misinformation; user credibility; fake news; machine learning
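Of the three feature-extraction techniques the abstract lists, TF-IDF is the most self-explanatory, and a minimal pure-Python sketch is given below. It uses raw term counts for TF and log(N/df) for IDF; the paper most likely uses a library implementation (e.g. scikit-learn, whose default smoothing differs), so the exact weighting here is an assumption.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the raw term count within a document; IDF is log(N / df), where
    N is the number of documents and df is the number of documents that
    contain the term. Returns one {term: weight} dict per document.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: c * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [["fake", "news", "spreads"], ["real", "news"]]
w = tf_idf(docs)
# "news" occurs in both documents, so its IDF is log(2/2) = 0 and it
# carries no discriminative weight; "fake" and "real" get log(2) each.
```

The resulting per-document weight vectors are what a downstream classifier such as the random forest in the paper would consume.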
Fault Tolerant Suffix Trees
Authors: Iftikhar Ahmad, Syed Zulfiqar Ali Shah, Ambreen Shahnaz, Sadeeq Jan, Salma Noor, wajeeha khalil, Fazal Qudus Khan, Muhammad Iftikhar Khan. Computers, Materials & Continua (SCIE, EI), 2021, Issue 1, pp. 157-164, 8 pages
Abstract: Classical algorithms and data structures assume that the underlying memory is reliable and that data remain safe during and after processing. However, this assumption is perilous, as several studies have shown that large and inexpensive memories are vulnerable to bit flips. Thus, the correctness of the output of a classical algorithm can be threatened by a few memory faults. Fault tolerant data structures and resilient algorithms are developed to tolerate a limited number of faults and provide a correct output based on the uncorrupted part of the data. The suffix tree is an important data structure with widespread applications, including substring search, the superstring problem, and data compression. The fault tolerant version of the suffix tree presented in the literature uses complex techniques: encodable and decodable error-correcting codes, blocked data structures, and fault-resistant tries. In this work, we use the natural approach of data replication to develop a fault tolerant suffix tree based on the faulty-memory random access machine model. The proposed data structure stores copies of the indices to sustain memory faults injected by an adversary. We develop a resilient version of Ukkonen's algorithm for constructing the fault tolerant suffix tree and derive an upper bound on the number of corrupt suffixes.
Keywords: resilient data structures; fault tolerant data structures; suffix tree
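The core of the data-replication approach the abstract describes is storing several copies of each index and outvoting adversarial corruptions on read. A minimal sketch of that mechanism is below; the class name, the choice of 2k+1 copies, and the majority-vote read are the standard faulty-RAM replication idea, not the paper's full suffix-tree construction.

```python
from collections import Counter

class ResilientIndex:
    """Store 2k+1 copies of each index so that up to k adversarial
    corruptions per value can be outvoted on read (a sketch of the
    data-replication idea; the paper embeds replicated indices inside
    a suffix tree built with a resilient Ukkonen's algorithm).
    """

    def __init__(self, k=1):
        self.copies = 2 * k + 1  # a strict majority survives k faults

    def write(self, value):
        return [value] * self.copies

    def read(self, cells):
        # Majority vote recovers the value when at most k cells are corrupt.
        return Counter(cells).most_common(1)[0][0]

idx = ResilientIndex(k=1)
cells = idx.write(42)
cells[1] = 7            # simulate one adversarial bit-flip fault
print(idx.read(cells))  # 42
```

With 2k+1 copies the structure pays a constant-factor space overhead per index, which is the trade-off the paper accepts in exchange for avoiding error-correcting codes and blocked layouts.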