Journal Articles
27 articles found
1. Improving Prediction of Chronic Kidney Disease Using KNN Imputed SMOTE Features and TrioNet Model
Authors: Nazik Alturki, Abdulaziz Altamimi, Muhammad Umer, Oumaima Saidani, Amal Alshardan, Shtwai Alsubai, Marwan Omar, Imran Ashraf. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3513-3534 (22 pages)
Chronic kidney disease (CKD) is a major health concern today, requiring early and accurate diagnosis. Machine learning has emerged as a powerful tool for disease detection, and medical professionals are increasingly using ML classifier algorithms to identify CKD early. This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California Irvine (UCI) Machine Learning Repository. The research introduces TrioNet, an ensemble model combining extreme gradient boosting, random forest, and extra tree classifier, which excels in providing highly accurate predictions for CKD. Furthermore, a K nearest neighbor (KNN) imputer is utilized to deal with missing values, while synthetic minority oversampling (SMOTE) is used for class-imbalance problems. To ascertain the efficacy of the proposed model, a comprehensive comparative analysis is conducted with various machine learning models. The proposed TrioNet using the KNN imputer and SMOTE outperformed other models with 98.97% accuracy for detecting CKD. This in-depth analysis demonstrates the model's capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
Keywords: precision medicine, chronic kidney disease detection, SMOTE, missing values, healthcare, KNN imputer, ensemble learning
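The KNN-based imputation step this abstract relies on can be sketched in pure Python. This is a minimal stand-in for a library imputer such as scikit-learn's `KNNImputer`, not the paper's implementation; the mean-of-nearest-donors rule and the function name are illustrative:

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of that column over the k rows
    closest in Euclidean distance on the jointly observed columns."""
    filled = [list(r) for r in rows]
    for i, row in enumerate(rows):
        for j, val in enumerate(row):
            if val is not None:
                continue
            donors = []  # rows where column j is observed
            for m, other in enumerate(rows):
                if m == i or other[j] is None:
                    continue
                shared = [(a, b) for a, b in zip(row, other)
                          if a is not None and b is not None]
                if not shared:
                    continue
                dist = math.sqrt(sum((a - b) ** 2 for a, b in shared))
                donors.append((dist, other[j]))
            donors.sort(key=lambda t: t[0])
            nearest = [v for _, v in donors[:k]]
            filled[i][j] = sum(nearest) / len(nearest)
    return filled
```

With `k=1`, a row `[1.0, None]` takes its missing value from the single nearest complete row; the paper then applies SMOTE and an ensemble on top of the completed feature matrix.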
2. Depression Intensity Classification from Tweets Using FastText Based Weighted Soft Voting Ensemble
Authors: Muhammad Rizwan, Muhammad Faheem Mushtaq, Maryam Rafiq, Arif Mehmood, Isabel de la Torre Diez, Monica Gracia Villar, Helena Garay, Imran Ashraf. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2047-2066 (20 pages)
Predicting depression intensity from microblogs and social media posts has numerous benefits and applications, including predicting early psychological disorders and stress in individuals or the general public. A major challenge in predicting depression using social media posts is that existing studies do not focus on predicting the intensity of depression in social media texts but rather only perform a binary classification of depression; moreover, noisy data makes it difficult to predict true depression in social media text. This study begins by collecting relevant tweets and generating a corpus of 210,000 public tweets using Twitter public application programming interfaces (APIs). A strategy is devised to filter out only depression-related tweets by creating a list of relevant hashtags to reduce noise in the corpus. Furthermore, an algorithm is developed to annotate the data into three depression classes, 'Mild,' 'Moderate,' and 'Severe,' based on International Classification of Diseases-10 (ICD-10) depression diagnostic criteria. Different baseline classifiers are applied to the annotated dataset to get a preliminary idea of classification performance on the corpus. A FastText-based model is then applied and fine-tuned with different preprocessing techniques and hyperparameter tuning to produce the tuned model, which significantly increases the depression classification performance to an 84% F1 score and 90% accuracy compared to baselines. Finally, a FastText-based weighted soft voting ensemble (WSVE) is proposed to boost performance by combining several other classifiers and assigning weights to individual models according to their individual performances. The proposed WSVE outperformed all baselines as well as FastText alone, with an F1 of 89% (5% higher than FastText alone) and an accuracy of 93% (3% higher than FastText alone). The proposed model better captures the contextual features of the relatively small sample class and aids in early detection of depression intensity from tweets.
Keywords: depression classification, deep learning, FastText, machine learning
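The weighted soft voting the abstract describes — averaging per-class probabilities from several classifiers, with each model weighted by its individual performance (e.g. validation F1) — can be sketched as follows; the function name and the normalization are illustrative, not taken from the paper:

```python
def weighted_soft_vote(prob_lists, weights):
    """Combine per-class probability vectors from several classifiers.
    prob_lists[m][c] is model m's probability for class c; weights[m]
    is that model's vote weight. Returns (predicted class, combined probs)."""
    total = sum(weights)
    n_classes = len(prob_lists[0])
    combined = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights)) / total
        for c in range(n_classes)
    ]
    return combined.index(max(combined)), combined
```

A model with weight 3 pulls the combined distribution three times harder than a weight-1 model, which is how a strong FastText model can dominate weaker baseline voters without silencing them.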
3. Offshore Software Maintenance Outsourcing Process Model Validation: A Case Study Approach
Authors: Atif Ikram, Masita Abdul Jalil, Amir Bin Ngah, Adel Sulaiman, Muhammad Akram, Ahmad Salman Khan. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 5035-5048 (14 pages)
The successful execution and management of Offshore Software Maintenance Outsourcing (OSMO) can be very beneficial for OSMO vendors and the OSMO client. Although a lot of research on software outsourcing is going on, most of the existing literature on offshore outsourcing deals with the outsourcing of software development only. Several frameworks have been developed to guide software system managers concerning offshore software outsourcing. However, none of these studies delivered comprehensive guidelines for managing the whole process of OSMO, and there is a considerable lack of research on managing OSMO from a vendor's perspective. Therefore, to find the best practices for managing an OSMO process, it is necessary to further investigate such complex and multifaceted phenomena from the vendor's perspective. This study validated the preliminary OSMO process model via a case study research approach. The results showed that the OSMO process model is applicable in an industrial setting with few changes. The industrial data collected during the case study enabled this paper to extend the preliminary OSMO process model. The refined version of the OSMO process model has four major phases: (i) Project Assessment, (ii) SLA, (iii) Execution, and (iv) Risk.
Keywords: offshore outsourcing, process model, model validation, vendor challenges, case study
4. Spatial Correlation Module for Classification of Multi-Label Ocular Diseases Using Color Fundus Images
Authors: Ali Haider Khan, Hassaan Malik, Wajeeha Khalil, Sayyid Kamran Hussain, Tayyaba Anees, Muzammil Hussain. Computers, Materials & Continua (SCIE, EI), 2023, No. 7, pp. 133-150 (18 pages)
To prevent irreversible damage to one's eyesight, ocular diseases (ODs) need to be recognized and treated immediately. Color fundus imaging (CFI) is a screening technology that is both effective and economical. In CFIs, the early stages of disease are characterized by a paucity of observable symptoms, which necessitates the prompt creation of automated and robust diagnostic algorithms. Traditional research focuses on image-level diagnostics that attend to the left and right eyes in isolation, without making use of pertinent correlation data between the two eyes, and usually targets only one or a few kinds of eye disease at a time. In this study, we design a patient-level multi-label OD (PLML_ODs) classification model based on a spatial correlation network (SCNet). This model takes into consideration the relevance of patient-level diagnosis combining bilateral eyes and multi-label ODs classification. PLML_ODs is made up of three parts: a backbone convolutional neural network (CNN) for feature extraction, i.e., DenseNet-169; a SCNet for feature correlation; and a classifier for producing classification scores. The DenseNet-169 retrieves two separate sets of attributes, one from each of the left and right CFIs. The SCNet then records the correlations between the two feature sets on a pixel-by-pixel basis. After the attributes have been analyzed, they are integrated to provide a patient-level representation, which is used throughout the ODs classification process. The efficacy of PLML_ODs is examined using a soft margin loss on a publicly accessible dataset, and the results reveal that the classification performance is significantly improved when compared to several baseline approaches.
Keywords: ocular disease, multi-label, spatial correlation, CNN, eye disease
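The pixel-by-pixel correlation of two feature maps that SCNet performs can be illustrated with a toy, framework-free sketch. The abstract does not specify the correlation measure, so cosine similarity across channels is an assumption here, and the indexing convention `feat[channel][y][x]` is illustrative:

```python
import math

def pixelwise_correlation(feat_a, feat_b):
    """Cosine similarity between two C-channel feature maps at every
    spatial position. Each map is indexed feat[channel][y][x]."""
    channels = len(feat_a)
    height, width = len(feat_a[0]), len(feat_a[0][0])
    corr = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            a = [feat_a[c][y][x] for c in range(channels)]
            b = [feat_b[c][y][x] for c in range(channels)]
            dot = sum(u * v for u, v in zip(a, b))
            na = math.sqrt(sum(u * u for u in a))
            nb = math.sqrt(sum(v * v for v in b))
            corr[y][x] = dot / (na * nb) if na and nb else 0.0
    return corr
```

In the paper's setting the two inputs would be the DenseNet-169 feature maps of the left and right fundus images, and the resulting correlation map feeds the patient-level representation.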
5. Application of Recursive Query on Structured Query Language Server
Authors: 荀雪莲, ABHIJIT Sen, 姚志强. Journal of Donghua University (English Edition) (CAS), 2023, No. 1, pp. 68-73 (6 pages)
The advantage of recursive programming is that it is very easy to write and requires very few lines of code if done correctly. Structured query language (SQL) is a database language used to manipulate data. In Microsoft SQL Server 2000, recursive queries are implemented to retrieve data presented in a hierarchical format, but this approach has its disadvantages. The common table expression (CTE) construction introduced in Microsoft SQL Server 2005 provides the significant advantage of being able to reference itself to create a recursive CTE. Hierarchical data structures, organizational charts and other parent-child table relationship reports can easily benefit from the use of recursive CTEs. The recursive query is illustrated and implemented on some simple hierarchical data. In addition, one business case study is brought forward and the solution using a recursive query based on a CTE is shown. At the same time, stored procedures are programmed to do the recursion in SQL. Test results show that recursive queries based on CTEs provide the chance to create much more complex queries while retaining a much simpler syntax.
Keywords: structured query language (SQL) server, common table expression (CTE), recursive query, stored procedure, hierarchical data
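The recursive-CTE pattern the abstract describes can be demonstrated end to end with Python's stdlib `sqlite3` (SQLite also supports `WITH RECURSIVE`; the syntax shown here matches SQL Server's recursive CTE apart from the `RECURSIVE` keyword). The `employee` table and names are illustrative, not from the paper's case study:

```python
import sqlite3

# Walk an organizational chart with a recursive CTE: the anchor member
# selects the root (no manager), the recursive member joins children.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO employee VALUES
        (1, 'Ada', NULL),
        (2, 'Ben', 1),
        (3, 'Cai', 1),
        (4, 'Dee', 2);
""")
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employee e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth, name
""").fetchall()
# rows -> [('Ada', 0), ('Ben', 1), ('Cai', 1), ('Dee', 2)]
```

The depth column falls out of the recursion for free, which is exactly the kind of hierarchical report the paper contrasts against pre-2005 workarounds.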
6. Project Assessment in Offshore Software Maintenance Outsourcing Using Deep Extreme Learning Machines
Authors: Atif Ikram, Masita Abdul Jalil, Amir Bin Ngah, Saqib Raza, Ahmad Salman Khan, Yasir Mahmood, Nazri Kama, Azri Azmi, Assad Alzayed. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 1871-1886 (16 pages)
Software maintenance is the process of fixing, modifying, and improving software deliverables after they are delivered to the client. Clients can benefit from offshore software maintenance outsourcing (OSMO) in different ways, including time savings, cost savings, and improved software quality and value. One of the hardest challenges for an OSMO vendor is to choose a suitable project from among several clients' projects. The goal of the current study is to recommend a machine learning-based decision support system that OSMO vendors can utilize to forecast or assess the projects of OSMO clients. The projects belong to OSMO vendors having offices in developing countries while providing services to developed countries. The current study uses a variant of the Extreme Learning Machine (ELM) called the Deep Extreme Learning Machine (DELM). A novel dataset consisting of 195 projects is proposed to train the model and to evaluate its overall efficiency. With five DELM hidden layers, the proposed DELM-based model achieved 90.017% training accuracy with a Root Mean Square Error (RMSE) of 1.412×10^(-3) and 85.772% testing accuracy with an RMSE of 1.569×10^(-3). The results show that the suggested model achieves a notable recognition rate in comparison to previous studies. The current study also concludes that DELMs are a highly applicable and useful technique for assessing OSMO clients' projects.
Keywords: software outsourcing, deep extreme learning machine (DELM), machine learning (ML), extreme learning machine, assessment
7. Efficient and Secure IoT Based Smart Home Automation Using Multi-Model Learning and Blockchain Technology
Authors: Nazik Alturki, Raed Alharthi, Muhammad Umer, Oumaima Saidani, Amal Alshardan, Reemah M. Alhebshi, Shtwai Alsubai, Ali Kashif Bashir. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3387-3415 (29 pages)
The concept of smart houses has grown in prominence in recent years. Major challenges linked to smart homes are identity theft, data safety, automated decision-making for IoT-based devices, and the security of the devices themselves. Current home automation systems try to address these issues, but there is still an urgent need for a dependable and secure smart home solution that includes automatic decision-making systems and methodical features. This paper proposes a smart home system based on ensemble learning of random forest (RF) and convolutional neural networks (CNN) for programmed decision-making tasks, such as categorizing gadgets as "OFF" or "ON" based on their normal routine in homes. We have integrated emerging blockchain technology to provide secure, decentralized, and trustworthy authentication and recognition of IoT devices. Our system consists of a 5V relay circuit, various sensors, and a Raspberry Pi server and database for managing devices. We have also developed an Android app that communicates with the server interface through an HTTP web interface and an Apache server. The feasibility and efficacy of the proposed smart home automation system have been evaluated in both laboratory and real-time settings. It is essential to use inexpensive, scalable, and readily available components and technologies in smart home automation systems, and to incorporate a comprehensive security- and privacy-centric design that emphasizes risk assessments covering cyberattacks, hardware security, and other cyber threats. The trial results support the proposed system and demonstrate its potential for use in everyday life.
Keywords: blockchain, Internet of Things (IoT), smart home automation, cybersecurity
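The tamper-evidence property that blockchain brings to IoT device records can be shown with a toy append-only hash chain in stdlib Python. This is a minimal stand-in for the paper's blockchain layer, not its implementation; class and field names are illustrative:

```python
import hashlib
import json

class DeviceChain:
    """Append-only SHA-256 hash chain for recording IoT device events."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]
        self._seal(self.blocks[0])

    def _seal(self, block):
        payload = json.dumps({k: block[k] for k in ("index", "prev", "data")},
                             sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()

    def add(self, data):
        block = {"index": len(self.blocks),
                 "prev": self.blocks[-1]["hash"], "data": data}
        self._seal(block)
        self.blocks.append(block)

    def valid(self):
        # Recompute every hash and check each block points at its parent.
        for prev, cur in zip(self.blocks, self.blocks[1:]):
            payload = json.dumps({k: cur[k] for k in ("index", "prev", "data")},
                                 sort_keys=True).encode()
            if (cur["prev"] != prev["hash"]
                    or cur["hash"] != hashlib.sha256(payload).hexdigest()):
                return False
        return True
```

Editing any recorded event ("lamp:ON", say) invalidates every later link, which is the basis for the trustworthy device history the abstract claims.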
8. Data Analytics for the Identification of Fake Reviews Using Supervised Learning (cited 5 times)
Authors: Saleh Nagi Alsubari, Sachin N. Deshmukh, Ahmed Abdullah Alqarni, Nizar Alsharif, Theyazn H. H. Aldhyani, Fawaz Waselallah Alsaade, Osamah I. Khalaf. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 3189-3204 (16 pages)
Fake reviews, also known as deceptive opinions, are used to mislead people and have gained more importance recently. This is due to the rapid increase in online marketing transactions, such as selling and purchasing. E-commerce provides a facility for customers to post reviews and comment about a product or service when purchased. New customers usually go through the posted reviews or comments on the website before making a purchase decision. However, the current challenge is how new individuals can distinguish truthful reviews from fake ones, which later deceive customers, inflict losses, and tarnish the reputation of companies. The present paper attempts to develop an intelligent system that can detect fake reviews on e-commerce platforms using n-grams of the review text and sentiment scores given by the reviewer. The proposed methodology used a standard fake hotel review dataset for experimentation, along with data preprocessing methods and a term frequency-inverse document frequency (TF-IDF) approach for extracting and representing features. For detection and classification, n-grams of review texts were input to the constructed models to be classified as fake or truthful. The experiments were carried out using four different supervised machine-learning techniques, trained and tested on a dataset collected from the TripAdvisor website. The classification results showed that naïve Bayes (NB), support vector machine (SVM), adaptive boosting (AB), and random forest (RF) achieved 88%, 93%, 94%, and 95%, respectively, based on testing accuracy and the F1-score. The obtained results were compared with existing works that used the same dataset, and the proposed methods outperformed the comparable methods in terms of accuracy.
Keywords: e-commerce, fake reviews detection, methodologies, machine learning, hotel reviews
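The TF-IDF-over-n-grams feature extraction the study uses can be sketched with the stdlib. This is a bare-bones version of what a library vectorizer (e.g. scikit-learn's `TfidfVectorizer` with `ngram_range=(2, 2)`) does; the smoothing-free IDF and function names here are illustrative:

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Word n-grams of a lowercased, whitespace-tokenized string."""
    words = text.lower().split()
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def tfidf(docs, n=2):
    """TF-IDF weights over word n-grams: tf = term frequency within the
    document, idf = log(num docs / num docs containing the term)."""
    counts = [Counter(ngrams(d, n)) for d in docs]
    df = Counter(term for c in counts for term in c)
    vectors = []
    for c in counts:
        total = sum(c.values())
        vectors.append({t: (f / total) * math.log(len(docs) / df[t])
                        for t, f in c.items()})
    return vectors
```

A bigram shared by every review (for hotel reviews, something like "the room") gets weight zero, while bigrams peculiar to one review dominate its vector, which is what makes the features discriminative for the fake/truthful classifiers.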
9. Supervised Machine Learning-Based Prediction of COVID-19 (cited 2 times)
Authors: Atta-ur-Rahman, Kiran Sultan, Iftikhar Naseer, Rizwan Majeed, Dhiaa Musleh, Mohammed Abdul Salam Gollapalli, Sghaier Chabani, Nehad Ibrahim, Shahan Yamin Siddiqui, Muhammad Adnan Khan. Computers, Materials & Continua (SCIE, EI), 2021, No. 10, pp. 21-34 (14 pages)
COVID-19 turned out to be an infectious and life-threatening viral disease, and its swift and overwhelming spread has become one of the greatest challenges for the world. As yet, no satisfactory vaccine or medication has been developed that could guarantee its mitigation, though several efforts and trials are underway. Countries around the globe are striving to overcome the spread of COVID-19 while finding ways for early detection and timely treatment. In this regard, healthcare experts, researchers and scientists have delved into the investigation of existing as well as new technologies. The situation demands the development of a clinical decision support system to equip medical staff with ways to detect this disease in time. State-of-the-art research in artificial intelligence (AI), machine learning (ML) and cloud computing has encouraged healthcare experts to find effective detection schemes. This study aims to provide a comprehensive review of the role of AI and ML in investigating prediction techniques for COVID-19. A mathematical model has been formulated to analyze and detect its potential threat. The proposed model is a cloud-based smart detection algorithm using a support vector machine (CSDC-SVM) with cross-fold validation testing. The experimental results achieved an accuracy of 98.4% with a 15-fold cross-validation strategy. The comparison with similar state-of-the-art methods reveals that the proposed CSDC-SVM model possesses better accuracy and efficiency.
Keywords: COVID-19, CSDC-SVM, artificial intelligence, machine learning, cloud computing, support vector machine
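The 15-fold cross-validation strategy reported above works by partitioning the sample indices into k disjoint test folds, training on the rest each time. A stdlib sketch of the fold construction (the function name is illustrative; a library equivalent would be scikit-learn's `KFold`):

```python
def kfold_indices(n_samples, k=15):
    """Split indices 0..n_samples-1 into k contiguous folds; returns a
    list of (train_indices, test_indices) pairs, one per fold."""
    folds = []
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        test = list(range(start, start + size))
        train = [j for j in range(n_samples) if j < start or j >= start + size]
        folds.append((train, test))
        start += size
    return folds
```

Each sample lands in exactly one test fold, so the reported 98.4% accuracy would be the mean over 15 held-out evaluations rather than a single lucky split.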
10. An Event Alarm System Based on Single and Group Human Behavior Analysis (cited 1 time)
Authors: Hung-Yu Yeh, I-Cheng Chang, Yung-Hsin Chen. Journal of Electronic Science and Technology (CAS, CSCD), 2017, No. 2, pp. 123-132 (10 pages)
Due to the increasing demand for security, the development of intelligent surveillance systems has attracted considerable attention in recent years. This study aims to develop a system that is able to identify whether or not people need help in a public place. Different from previous work, our work considers not only the behaviors of the target person but also the interaction between that person and nearby people. In this paper, we propose an event alarm system which can detect human behaviors and recognize the happening event by integrating the results generated from single and group behavior analysis. Several new effective features are proposed in the study. Besides, a mechanism capable of extracting one-to-one and multiple-to-one relations is also developed. Experimental results show that the proposed approach can correctly detect human behaviors and provide alarm messages when emergency events occur.
Keywords: event alarm system, group behavior analysis, human behavior recognition, single behavior analysis, stooping curve
11. Intelligent Decision Support System for COVID-19 Empowered with Deep Learning (cited 1 time)
Authors: Shahan Yamin Siddiqui, Sagheer Abbas, Muhammad Adnan Khan, Iftikhar Naseer, Tehreem Masood, Khalid Masood Khan, Mohammed A. Al Ghamdi, Sultan H. Almotiri. Computers, Materials & Continua (SCIE, EI), 2021, No. 2, pp. 1719-1732 (14 pages)
The rapid spread of coronavirus (COVID-19) poses a major threat to people around the globe, and its evolving, continual diagnosis has become a critical challenge for the healthcare sector. The drastic increase of COVID-19 has made it necessary to detect the people who are most likely to get infected. Testing kits for COVID-19 are not available in sufficient quantity, and many countries have been hit hard by the disruption, so an automatic diagnosis system for early detection of COVID-19 is needed. According to clinical research, most COVID-19 cases involve a lung infection; chest X-ray and computed tomography (CT) scan images can identify such early signs of COVID-19 and have proved helpful to radiologists and medical practitioners. To help flatten the curve, a quick and highly responsive automatic system based on artificial intelligence (AI) is required to aid the masses prone to COVID-19. The proposed intelligent decision support system for COVID-19 empowered with deep learning (ID2S-COVID19-DL) uses deep-learning-based convolutional neural network (CNN) approaches for effective and accurate detection of coronavirus from X-ray and CT-scan images. Preliminary experimental results show a maximum training accuracy of about 98.11 percent and a validation accuracy of approximately 95.5 percent, with training sensitivity and specificity of 98.03 and 98.20 percent, and validation sensitivity and specificity of 94.38 and 97.06 percent, respectively. The proposed deep-learning-based CNN model performs comparably to medical experts, can enhance the working productivity of radiologists, and supports rapid detection of COVID-19 in the effort to overcome the current pandemic.
Keywords: COVID-19, deep learning, convolutional neural network, CT scan, X-ray, decision support system, ID2S-COVID19-DL
12. Performance Analysis of DEBT Routing Protocols for Pocket Switch Networks
Authors: Khairol Amali bin Ahmad, Mohammad Nazmul Hasan, Md. Sharif Hossen, Khaleel Ahmad. Computers, Materials & Continua (SCIE, EI), 2021, No. 3, pp. 3075-3087 (13 pages)
Pocket Switched Networks (PSN) represent a particular intermittent network for direct communication between handheld mobile devices. Compared to traditional networks, there is no stable topology structure for PSN, where the nodes follow the mobility patterns of human society. It is a kind of Delay Tolerant Network (DTN) that circulates information among the network nodes by taking advantage of nodes moving from one area to another. Since its inception, there have been several schemes for message routing in the infrastructure-less environment, in which human mobility is the only way to exchange information. For routing messages, PSN uses different techniques such as Distributed Expectation-Based Spatio-Temporal (DEBT) Epidemic (DEBTE), DEBT Cluster (DEBTC), and DEBT Tree (DEBTT). Understanding how the network environment is affected by these routing strategies is the main motivation of this research. In this paper, we have investigated the impact of the number of network nodes and of message copies per transmission on the overall performance of these routing protocols. The ONE simulator was used to analyze those techniques on the basis of delivery, overhead, and latency. The results demonstrate that, for this particular simulation setting, DEBTE is the best of the three PSN routing techniques, outperforming DEBTC and DEBTT.
Keywords: pocket switched networks, routing, distributed cluster detection, delay tolerant networks, mobility in networks
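The epidemic family of DTN protocols these DEBT variants build on can be illustrated with a tiny contact-trace simulation: every encounter copies the message to any node that lacks it. This is a conceptual sketch, not the DEBTE algorithm itself, and the trace format `(time, node_a, node_b)` is an assumption:

```python
def epidemic_delivery(contacts, source, dest):
    """Simulate epidemic forwarding over a timed contact trace.
    contacts: iterable of (time, node_a, node_b) encounters.
    Returns the delivery time of the message, or None if undelivered."""
    carriers = {source}
    for t, a, b in sorted(contacts):
        if a in carriers or b in carriers:
            carriers |= {a, b}  # the contact replicates the message
            if dest in carriers:
                return t
    return None
```

Flooding like this maximizes delivery probability at the cost of overhead, which is precisely the delivery/overhead/latency trade-off the paper measures in the ONE simulator.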
13. Encryption Algorithm for Securing Non-Disclosure Agreements in Outsourcing Offshore Software Maintenance
Authors: Atif Ikram, Masita Abdul Jalil, Amir Bin Ngah, Nadeem Iqbal, Nazri Kama, Azri Azmi, Ahmad Salman Khan, Yasir Mahmood, Assad Alzayed. Computers, Materials & Continua (SCIE, EI), 2022, No. 11, pp. 3827-3845 (19 pages)
Properly created and securely communicated, a non-disclosure agreement (NDA) can resolve most of the common disputes related to outsourcing of offshore software maintenance (OSMO). Occasionally, these NDAs are in the form of images. Since the work is done offshore, these agreements or images must be shared through the Internet or stored over the cloud. The breach of privacy, on the other hand, is a potential threat for the image owners, as both the Internet and cloud servers are not void of danger. This article proposes a novel algorithm for securing NDAs in the form of images. As an agreement is signed between the two parties, it will be encrypted before being sent to the cloud server or travelling through the public network, the Internet. As the image is input to the algorithm, its pixels are scrambled through a set of randomly generated rectangles for an arbitrary amount of time. The confusion effects are realized through an XOR operation between the confused image and chaotic data. Besides, a 5D multi-wing hyperchaotic system is employed to spawn the chaotic vectors due to its good chaotic properties. The machine experimentation and the security analysis through a comprehensive set of validation metrics vividly demonstrate the robustness of the proposed encryption algorithm for NDA images, its defiance of multifarious threats, and its prospects for real-world application.
Keywords: non-disclosure agreement, encryption, decryption, secret key, chaotic map, confusion, diffusion
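The XOR diffusion step at the heart of the scheme is self-inverse: XOR-ing the ciphertext with the same keystream recovers the plaintext. A stdlib sketch, where SHA-256 in counter mode stands in for the paper's 5D hyperchaotic keystream generator (that substitution and all names here are illustrative):

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Deterministic byte stream derived from the secret key; a stand-in
    for the chaotic vectors the paper generates."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR diffusion: applying it twice with the same key decrypts."""
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

In the paper this XOR is applied after the pixel-scrambling (confusion) pass over the NDA image, so decryption undoes the XOR first and then reverses the scramble.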
14. A Bio-Inspired Routing Optimization in UAV-Enabled Internet of Everything
Authors: Masood Ahmad, Fasee Ullah, Ishtiaq Wahid, Atif Khan, M. Irfan Uddin, Abdullah Alharbi, Wael Alosaimi. Computers, Materials & Continua (SCIE, EI), 2021, No. 4, pp. 321-336 (16 pages)
The Internet of Everything (IoE) indicates a fantastic vision of the future, where everything is connected to the Internet, providing intelligent services and facilitating decision making. IoE is the collection of static and moving objects able to coordinate and communicate with each other. The moving objects may consist of ground segments and flying segments. The speed of flying-segment objects, e.g., Unmanned Aerial Vehicles (UAVs), may be high compared to ground-segment objects. Topology changes occur very frequently due to the high-speed nature of objects in UAV-enabled IoE (Ue-IoE). The routing maintenance overhead may increase when scaling up the Ue-IoE (as the number of objects increases). A single change in topology can force all the objects of the Ue-IoE to update their routing tables. Similarly, frequent updating of routing table entries results in more energy dissipation, and the lifetime of the Ue-IoE may decrease, since objects consume more energy on routing computations. To prevent the frequent updating of the routing tables associated with each object, the computation of routes from source to destination may be limited to an optimal number of objects in the Ue-IoE. In this article, we propose a routing scheme in which the responsibility of route computation (from neighbor objects to destination) is assigned to selected IoE objects in the Ue-IoE. These route computation objects (RCO) are selected on the basis of certain parameters like remaining energy and mobility. The RCO send the routing information of destination objects to their neighbors once those neighbors want to communicate with other objects. The proposed protocol is simulated, and the results show that it outperforms state-of-the-art protocols in terms of average energy consumption, message overhead, throughput, delay, etc.
Keywords: UAV, flying sensor network, IoT, IoE, optimization, routing
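Selecting the route computation objects (RCO) by remaining energy and mobility can be sketched as a simple scoring pass. The abstract names the parameters but not the scoring function, so the linear score, the weights, and the node tuple layout here are all assumptions for illustration:

```python
def select_rco(nodes, count=2, w_energy=0.7, w_mobility=0.3):
    """Pick `count` route-computation objects: prefer nodes with high
    remaining energy and low mobility. Each node is a tuple
    (name, energy in [0, 1], mobility in [0, 1])."""
    def score(node):
        _, energy, mobility = node
        return w_energy * energy - w_mobility * mobility
    return [n[0] for n in sorted(nodes, key=score, reverse=True)[:count]]
```

Stable, well-charged nodes take on the route computation, so the fast-moving UAVs are spared the table churn that the paper identifies as the main energy drain.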
15. K-Banhatti Invariants Empowered Topological Investigation of Bridge Networks
Authors: Khalid Hamid, Muhammad Waseem Iqbal, Erssa Arif, Yasir Mahmood, Ahmad Salman Khan, Nazri Kama, Azri Azmi, Atif Ikram. Computers, Materials & Continua (SCIE, EI), 2022, No. 12, pp. 5423-5440 (18 pages)
Any number that can be uniquely determined from a graph is called a graph invariant. During the most recent twenty years, innumerable numerical graph invariants have been described and used for correlation analysis. In the fast and advanced environment of manufacturing networks and other products that use different networks, no dependable assessment has been adopted to determine how closely these invariants are connected with a network graph or molecular graph. This paper discusses three distinct variations of bridge networks with great predictive capability in the fields of computer science, chemistry, physics, the drug industry, informatics, and mathematics, in the context of physical and chemical constructions and networks, since K-Banhatti invariants are newly introduced and have various forecasting characteristics for different variations of bridge graphs or networks. The review settles the topology of three unique sorts of bridge graphs/networks with three types of K-Banhatti indices. These outcomes can be utilized for the modeling of interconnection networks of personal computers (PC), networks like Local Area Networks (LAN), Metropolitan Area Networks (MAN) and Wide Area Networks (WAN), the backbone of the Internet and other networks/designs of PCs, power generation interconnections, bio-informatics and chemical structures.
Keywords: bridge networks, invariants, K-Banhatti indices, Maple, network graph, molecular graph
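The paper derives closed-form K-Banhatti expressions for bridge networks; for an arbitrary graph the first and second K-Banhatti indices can be computed directly from the standard definitions, B1(G) = Σ_{ue} [d(u) + d(e)] and B2(G) = Σ_{ue} [d(u)·d(e)], where the sum runs over incident vertex-edge pairs and d(e) = d(u) + d(v) − 2 for an edge e = uv. A minimal sketch:

```python
def k_banhatti_indices(edges):
    """Compute the first and second K-Banhatti indices of a simple graph
    given as a list of edges. For an edge e = uv, d(e) = d(u) + d(v) - 2;
    both sums run over all incident vertex-edge pairs (u, e)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    b1 = b2 = 0
    for u, v in edges:
        de = deg[u] + deg[v] - 2          # edge degree
        b1 += (deg[u] + de) + (deg[v] + de)
        b2 += deg[u] * de + deg[v] * de
    return b1, b2

# Path graph P3 (a-b-c): degrees 1, 2, 1; both edge degrees are 1.
print(k_banhatti_indices([("a", "b"), ("b", "c")]))  # → (10, 6)
```

Such a direct computation is useful for checking the paper's closed-form bridge-network formulas on small instances.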
Deep Learning and Machine Learning for Early Detection of Stroke and Haemorrhage
16
Authors: Zeyad Ghaleb Al-Mekhlafi, Ebrahim Mohammed Senan, Taha H. Rassem, Badiea Abdulkarem Mohammed, Nasrin M. Makbol, Adwan Alownie Alanazi, Tariq S. Almurayziq, Fuad A. Ghaleb. 《Computers, Materials & Continua》 SCIE EI, 2022, Issue 7, pp. 775-796 (22 pages)
Stroke and cerebral haemorrhage are the second leading causes of death in the world after ischaemic heart disease. In this work, a dataset containing medical, physiological, and environmental tests for stroke was used to evaluate the efficacy of machine learning, deep learning, and a hybrid technique combining the two on a Magnetic Resonance Imaging (MRI) dataset for cerebral haemorrhage. In the first dataset (medical records), two features, namely diabetes and obesity, were derived from the values of the corresponding features. The t-Distributed Stochastic Neighbour Embedding (t-SNE) algorithm was applied to represent the high-dimensional dataset in a low-dimensional space, while the Recursive Feature Elimination (RFE) algorithm was applied to rank the features by priority and by their correlation with the target feature, and to remove the unimportant ones. The features were fed into various classification algorithms: Support Vector Machine (SVM), K-Nearest Neighbours (KNN), Decision Tree, Random Forest, and Multilayer Perceptron. All algorithms achieved strong results, with Random Forest performing best at an overall accuracy of 99%; it classified stroke cases with Precision, Recall, and F1 score of 98%, 100%, and 99%, respectively. In the second dataset, the MRI images were evaluated using the AlexNet model and a hybrid AlexNet+SVM technique. The hybrid AlexNet+SVM model performed better than AlexNet alone, reaching accuracy, sensitivity, specificity, and Area Under the Curve (AUC) of 99.9%, 100%, 99.80%, and 99.86%, respectively.
Keywords: stroke, cerebral haemorrhage, deep learning, machine learning, t-SNE and RFE algorithms
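The RFE step ranks features and repeatedly drops the weakest until the desired number remains. A toy stand-in is sketched below: it uses absolute Pearson correlation with the target as the importance score instead of the model-based importance real RFE uses, and the miniature data is invented for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def eliminate_features(rows, target, keep):
    """Toy recursive elimination: repeatedly drop the feature whose
    |correlation| with the target is weakest until `keep` remain."""
    features = {name: [r[name] for r in rows] for name in rows[0]}
    while len(features) > keep:
        weakest = min(features, key=lambda f: abs(pearson(features[f], target)))
        del features[weakest]
    return sorted(features)

data = [{"bmi": 31, "age": 60, "noise": 5},
        {"bmi": 22, "age": 35, "noise": 9},
        {"bmi": 28, "age": 55, "noise": 1},
        {"bmi": 20, "age": 30, "noise": 7}]
stroke = [1, 0, 1, 0]
print(eliminate_features(data, stroke, keep=2))  # → ['age', 'bmi']
```

The surviving features would then be fed to the classifiers, as in the study's pipeline.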
Reverse Engineering of Mobile Banking Applications
17
Authors: Syeda Warda Asher, Sadeeq Jan, George Tsaramirsis, Fazal Qudus Khan, Abdullah Khalil, Muhammad Obaidullah. 《Computer Systems Science & Engineering》 SCIE EI, 2021, Issue 9, pp. 265-278 (14 pages)
Software reverse engineering is the process of analyzing a software system to extract its design and implementation details. Reverse engineering yields the source code of an application, an inside view of its architecture, and its third-party dependencies. From a security perspective, it is mostly used for finding vulnerabilities and for attacking or cracking an application. The process is carried out either by obtaining the code in plaintext or by reading it through binaries or mnemonics. Nowadays, reverse engineering is widely applied to mobile applications and is considered a security risk: the Open Web Application Security Project (OWASP), a leading security research forum, includes reverse engineering in its top-10 list of mobile application vulnerabilities. Mobile applications are used in many sectors, e.g., banking, education, and health. Banking applications in particular are security-critical, as they are used for financial transactions; a security breach of such applications can result in huge financial losses for customers as well as banks. Various tools exist for reverse engineering mobile applications, but they have deficiencies, e.g., complex configurations and a lack of detailed analysis reports. In this research work, we analyze the available tools for reverse engineering mobile applications. Our dataset consists of the mobile banking applications of banks providing services in Pakistan. Our results indicate that none of the existing tools can carry out the complete reverse engineering process as a standalone tool. In addition, we observe significant differences in execution time and in the number of files generated by each tool for the same input file.
Keywords: reverse engineering, mobile banking applications, security analysis
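A first step shared by most reverse engineering tools for Android apps is unpacking the APK, which is simply a ZIP archive; listing its entries (manifest, dex bytecode, resources) needs nothing beyond Python's standard library. The in-memory "APK" below is a constructed stand-in, not a real banking app:

```python
import io
import zipfile

def list_apk_entries(apk_bytes):
    """Open an APK (a ZIP archive) from raw bytes and return its
    file entries sorted by name, the kind of inventory the compared
    tools produce as a first analysis step."""
    with zipfile.ZipFile(io.BytesIO(apk_bytes)) as apk:
        return sorted(apk.namelist())

# Build a minimal stand-in "APK" in memory (real apps bundle far more).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("AndroidManifest.xml", "<manifest/>")
    z.writestr("classes.dex", b"dex\n035")
    z.writestr("res/layout/login.xml", "<LinearLayout/>")

print(list_apk_entries(buf.getvalue()))
# → ['AndroidManifest.xml', 'classes.dex', 'res/layout/login.xml']
```

Comparing the entry counts and timing of this step across tools mirrors the "number of files generated" metric the study reports.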
A Hierarchal Clustered Based Proactive Caching in NDN-Based Vehicular Network (Cited by 1)
18
Authors: Muhammad Yasir Khan, Muhammad Adnan, Jawaid Iqbal, Noor ul Amin, Byeong-Hee Roh, Jehad Ali. 《Computer Systems Science & Engineering》 SCIE EI, 2023, Issue 10, pp. 1185-1208 (24 pages)
An Information-Centric Network (ICN) provides a promising paradigm for the upcoming internet architecture, which will struggle with steady growth in data and changes in access models. Various ICN architectures have been designed, including Named Data Networking (NDN), which is built around content delivery instead of hosts: since data is the central part of the network, NDN was developed to remove the dependency on IP addresses and deliver content effectively. Mobility is one of the major research dimensions for this upcoming internet architecture. Some research has addressed mobility issues, but problems such as handover delay and packet loss during real-time video streaming remain for both consumer and producer mobility. To solve this, an efficient hierarchical Cluster-Based Proactive Caching for Device Mobility Management (CB-PC-DMM) in NDN Vehicular Networks (NDN-VN) is proposed, through which the consumer receives content proactively after handover. When a consumer moves to the next destination, a handover interest is sent to the connected router, which then multicasts the consumer's desired data packet to the next hop of neighbouring routers; once the handover process is completed, the consumer can easily obtain the content from the newly connected router. The proposed CB-PC-DMM improves the packet delivery ratio and reduces handover delay as well as cluster overhead. Moreover, the intra- and inter-domain handover handling procedures in CB-PC-DMM for NDN-VN are described. For validation, MATLAB simulations are conducted; the results show that the proposed scheme reduces handover delay and increases the consumer's interest satisfaction ratio. Compared with existing state-of-the-art schemes, the total handover delay is decreased by up to 0.1632%, 0.3267%, 2.3437%, 2.3255%, and 3.7313% at mobility speeds of 5 m/s, 10 m/s, 15 m/s, 20 m/s, and 25 m/s, and the packet delivery ratio is improved by up to 1.2048%, 5.0632%, 6.4935%, 6.943%, and 8.4507%. Furthermore, the simulation results show better efficiency in terms of Packet Delivery Ratio (PDR), from 0.071 to 0.077, and a decrease in handover delay from 0.1334 to 0.129.
Keywords: vehicular network, named data networking, caching, hierarchical architecture
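The core proactive-caching idea above, multicasting the consumer's pending content to the neighbouring next-hop routers when a handover interest arrives, can be sketched as a toy simulation. The topology, class names, and API here are our invention for illustration, not the paper's MATLAB model:

```python
class Router:
    """Minimal NDN-style router: a name, a content store, and a list
    of next-hop neighbour routers."""
    def __init__(self, name):
        self.name = name
        self.cache = {}
        self.neighbours = []

def on_handover_interest(router, content_name, data):
    """On receiving a handover interest, cache the consumer's pending
    data locally and proactively push it to all next-hop neighbours,
    so the content is already cached wherever the consumer reattaches."""
    router.cache[content_name] = data
    for nb in router.neighbours:
        nb.cache[content_name] = data  # proactive multicast push

r1, r2, r3 = Router("r1"), Router("r2"), Router("r3")
r1.neighbours = [r2, r3]

on_handover_interest(r1, "/video/seg7", b"...")
print(sorted(r.name for r in (r2, r3) if "/video/seg7" in r.cache))
# → ['r2', 'r3']
```

After reattachment, the consumer's interest for `/video/seg7` is satisfied from the new router's cache, which is what cuts the handover delay in the reported results.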
Baseline Isolated Printed Text Image Database for Pashto Script Recognition
19
Authors: Arfa Siddiqu, Abdul Basit, Waheed Noor, Muhammad Asfandyar Khan, M. Saeed H. Kakar, Azam Khan. 《Intelligent Automation & Soft Computing》 SCIE, 2023, Issue 7, pp. 875-885 (11 pages)
Optical character recognition for right-to-left, cursive languages such as Arabic is challenging and has received little attention from researchers compared with Latin languages. Moreover, the absence of a standard, publicly available dataset for several low-resource languages, including Pashto, has remained a hurdle to the advancement of language processing. Since a clean dataset is the fundamental, core requirement of character recognition, this research begins with dataset generation and aims at a system capable of complete language understanding, keeping in view fully autonomous recognition of the cursive Pashto script. The first achievement of this research is a clean, standard dataset for the isolated characters of the Pashto script. This paper introduces a database of isolated Pashto characters covering forty-four alphabets in various font styles. To overcome the shortage of font styles, the graphics software Inkscape was used to generate sufficient image samples for each character. The dataset has been pre-processed, reduced to 32×32 pixels, and converted into binary format with a black background and white text so that it resembles the Modified National Institute of Standards and Technology (MNIST) database. The benchmark database is publicly available for further research on the standard GitHub and Kaggle database servers, in both pixel and Comma-Separated Values (CSV) formats.
Keywords: text-image database, optical character recognition (OCR), Pashto isolated characters, visual recognition, autonomous language understanding, deep learning, convolutional neural network (CNN)
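The described pre-processing (binarise each 32×32 glyph to white text on a black background, then flatten it for the CSV distribution) can be sketched without any imaging library. The glyph below is a synthetic stand-in for a rendered Pashto character, and the threshold value is our assumption:

```python
def to_mnist_style_row(pixels, threshold=128):
    """Binarise a 32x32 grayscale glyph to white text (1) on a black
    background (0) and flatten it row-major, ready to be written out
    as one line of the CSV distribution."""
    return [1 if p >= threshold else 0 for row in pixels for p in row]

# Synthetic 32x32 glyph: black canvas with one white vertical stroke.
glyph = [[0] * 32 for _ in range(32)]
for i in range(8, 24):
    glyph[i][15] = glyph[i][16] = 255

row = to_mnist_style_row(glyph)
csv_line = ",".join(map(str, row))
print(len(row), sum(row))  # → 1024 32  (1024 pixels, 32 foreground)
```

Keeping the layout MNIST-compatible, as the paper does, means existing CNN tutorials and loaders work on the Pashto data with no changes.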
Fake News Classification: Past, Current, and Future
20
Authors: Muhammad Usman Ghani Khan, Abid Mehmood, Mourad Elhadef, Shehzad Ashraf Chaudhry. 《Computers, Materials & Continua》 SCIE EI, 2023, Issue 11, pp. 2225-2249 (25 pages)
The proliferation of misleading content such as fake news and phony reviews on news blogs, online publications, and e-commerce apps has been aided by the availability of the web, cell phones, and social media. Individuals can quickly fabricate comments and news on social media, and the most difficult challenge is determining which news is real and which is fake. Accordingly, finding automatic techniques to recognize fake news online is imperative. With an emphasis on false news, this study presents the evolution of artificial intelligence techniques for detecting spurious social media content, covering past, current, and possible future methods for fake news classification. Two publicly available datasets containing political news are used for the experiments. Sixteen supervised learning algorithms are evaluated; the results show that the conventional Machine Learning (ML) algorithms used in the past perform better on shorter text, whereas the currently used Recurrent Neural Network (RNN) and transformer-based algorithms perform better on longer text. Additionally, a brief comparison of all these techniques is provided, and it is concluded that transformers have the potential to revolutionize Natural Language Processing (NLP) methods in the near future.
Keywords: supervised learning algorithms, fake news classification, online disinformation, transformers, recurrent neural network (RNN)
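A conventional ML baseline of the kind the survey finds strongest on short text is a bag-of-words multinomial Naive Bayes classifier with Laplace smoothing. The tiny corpus below is invented for illustration, and this is a generic sketch, not any model from the surveyed papers:

```python
from collections import Counter
from math import log

def train_nb(texts, labels):
    """Count word occurrences per class for multinomial Naive Bayes."""
    word_counts = {c: Counter() for c in set(labels)}
    class_counts = Counter(labels)
    vocab = set()
    for text, label in zip(texts, labels):
        words = text.lower().split()
        word_counts[label].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Pick the class maximising log prior + smoothed log likelihoods."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for c in class_counts:
        lp = log(class_counts[c] / total)
        denom = sum(word_counts[c].values()) + len(vocab)
        for w in text.lower().split():
            lp += log((word_counts[c][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = c, lp
    return best

model = train_nb(
    ["shocking secret cure doctors hate", "miracle cure they hide",
     "senate passes budget bill", "court upholds election result"],
    ["fake", "fake", "real", "real"])
print(predict_nb(model, "shocking miracle cure"))  # → fake
```

On tweet-length inputs a word-count model like this has little context to lose, which is one intuition behind the survey's short-text finding; transformers pull ahead once documents are long enough for word order and long-range context to matter.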